> OpenGL 2.0+ for Android
usnavii
Message#1
10.01.13, 22:39
Old resident

Group: Friends (savagemessiahzine.com)
Messages: 967
Check in: 07.06.12

Reputation: 115

What is this topic for?
Many people have built up an illusion that learning "OpenGL" is hard, without realizing how simple this library actually is for a programmer.
And even when using an "engine" you need to understand how it interacts with the OS and what specific devices can and cannot do.

In this article I will try to go beyond the standard examples: I will try to explain why things are done and what for.
(all the more so since I promised it long ago)
Readers need at least a superficial knowledge of some programming language.

Corrections are welcome.

Everything that follows is devoted to the OpenGL ES 2.0 library for Android and to subsequent versions.

OpenGL Description
Introduction
What is the OpenGL ES 2.0 library?
At a basic level, OpenGL ES 2.0 is just a specification, that is, a document describing a set of functions and their exact behavior. Based on this specification, hardware manufacturers create implementations: libraries of functions matching the set defined by the specification.

OpenGL focuses on two tasks:
Hide the complexity of adapting to various 3D accelerators, and provide the developer with a single API.

For the programmer, OpenGL provides a low-level library for accessing the GPU (graphics processing unit).

Scheme of options for the implementation of the library (from the point of view of the programmer + for comparison DirectX):
Attached Image

Android uses option B in 99.99% of cases.
That is, the OpenGL ES implementation is part of the driver,
unlike DirectX, which is rather a layer between the application and the driver.
There are also standalone OpenGL implementations, for example Mesa3D, but they mostly develop quite slowly and often lag several generations behind the chip makers' solutions.

Which is better, DirectX or OpenGL?
The question is not well posed. For example, if you need cross-platform support, you can forget about DirectX.
And in the author's opinion DirectX has grown too many "tails"... (but that is very subjective)
Also, the comparison is not entirely fair, since DirectX implements many interfaces besides graphics (and quite decent ones: sound, input, networking, etc.)

Which is faster, DirectX or OpenGL?

Also not a well-posed question; it all depends on the experience of the programmer.
But again, in the author's (my) opinion, non-standard features are easier to implement on modern versions of OpenGL, and moreover
this does not require moving to a new operating system (unlike DirectX 10).
Learning it also takes much less time. Plus portability.

Now a little about the GPU:
At the moment (December 2012), Android devices carry two generations of GPUs: those that support OpenGL ES 2.0 (almost 95%) and those that support only versions 1.0 and 1.1.
There is NO hardware backward compatibility.
Therefore, in the author's opinion, considering standard versions below 2.0 makes no sense except for archaeologists.
(standard version 3.0 is backward compatible with 2.0)

The structure of the pipeline OpenGL 1.x:
Attached Image


The structure of the pipeline OpenGL 2.x +:
Attached Image


That is, some of the fixed-function ("hard-wired") blocks were replaced by programmable processors.
Question: why?

The thing is that the hardware implemented only a handful of functions; this created significant restrictions for further development, and flexibility was zero.
Some history:
The first attempts to offload calculations from the CPU (central processor) were implemented in the first GeForce (and not in Voodoo, as many people think); the technology was called T&L.
It allowed transformations and lighting to be calculated in hardware on the GPU.
It turned out "fast", but not even minimal flexibility was left. If a lighting method was implemented in hardware, you could use it. If not, you never would.
The next milestone was GeForce 3, which already had fully programmable logic, but its processing units were not yet universal.
That is, the units were split into vertex processing and fragment (pixel) processing.
Some could be overloaded while others sat idle...
What is the point of increasing the number of processors (computing units) in a GPU?
The point is that graphics calculations scale almost linearly: increasing the processor count from 100 to 200, for example, gives an almost 100% performance gain, since in computer graphics the current calculation usually does not depend on the previous one, i.e. it is easy to parallelize.
But there are some limitations, which will be described below.
Now about OpenGL ES itself:

What can OpenGL ES do?
The basic principle of OpenGL is to receive sets of vector graphics primitives in the form of points, lines and polygons, process the data mathematically, and build a raster image on the screen and/or in memory. Vector transformations and rasterization are performed by the graphics pipeline, which is essentially a discrete automaton. The absolute majority of OpenGL commands fall into one of two groups: they either submit graphic primitives to the pipeline input, or they configure the pipeline for various transformations.
The key feature is that the CPU and GPU do not work synchronously: the CPU does not wait for the GPU to finish executing commands, but continues working (unless instructed otherwise).
There is a queue of OpenGL commands (instructions).
(Such buffers come in two types, FIFO and LIFO. FIFO is the acronym for "First In, First Out": the first element added is the first to leave. LIFO, "Last In, First Out", is the opposite: the last one in is the first one out, i.e. a stack. OpenGL uses a FIFO, that is, a queue.)
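The FIFO behaviour can be pictured with a plain Java queue (a sketch, no GL involved; the command names below are just labels): whatever is submitted first is executed first.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class CommandQueueDemo {
    // Drain a FIFO queue and return the execution order as one string.
    public static String drain(Queue<String> commands) {
        StringBuilder order = new StringBuilder();
        while (!commands.isEmpty()) {
            if (order.length() > 0) order.append(' ');
            order.append(commands.poll()); // FIFO: oldest command first
        }
        return order.toString();
    }

    public static void main(String[] args) {
        Queue<String> gl = new ArrayDeque<>();
        gl.add("glClear");        // submitted first...
        gl.add("glDrawArrays");
        gl.add("eglSwapBuffers"); // ...submitted last
        System.out.println(drain(gl)); // glClear glDrawArrays eglSwapBuffers
    }
}
```

The driver's real command buffer is of course more involved, but the ordering guarantee is the same: commands are consumed in submission order.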

End of lesson one.
OpenGL is a finite state machine.
What does that mean?

Imagine a toy conveyor run by Father Frost =)

You throw blanks in on one side, and finished products come out on the other.
But you stand at a console with several levers, and depending on how those levers are set, tanks, dolls or crackers come out.
A tank-doll can never be born; that is, at any given moment only one type of product is possible.
The line is always in exactly one state and can only produce one kind of product at a time.

That is a finite state machine. I cannot explain it more simply. Whoever did not understand, look here.
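The conveyor above can be written out as a minimal finite state machine (plain Java; the product names come from the analogy, not from any real API): the output depends only on the current state, and flipping a "lever" changes the state.

```java
public class ToyFactory {
    // The machine is always in exactly one state.
    public enum State { TANKS, DOLLS, CRACKERS }

    private State state = State.TANKS;

    // Flipping a "lever" moves the line to another state.
    public void setState(State s) { state = s; }

    // The same blank produces different products,
    // depending solely on the current state.
    public String process(String blank) {
        switch (state) {
            case TANKS: return "tank(" + blank + ")";
            case DOLLS: return "doll(" + blank + ")";
            default:    return "cracker(" + blank + ")";
        }
    }

    public static void main(String[] args) {
        ToyFactory line = new ToyFactory();
        System.out.println(line.process("wood")); // tank(wood)
        line.setState(State.DOLLS);
        System.out.println(line.process("wood")); // doll(wood)
    }
}
```

OpenGL works the same way: calls like glEnable or glBindTexture flip "levers", and every subsequent draw call is interpreted in whatever state the machine is in at that moment.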

To be continued after a little sleep...
(Next topic: what is behind GLSurfaceView, why it is bad, and what EGL is.)
OpenGL 2.0+ for Android (Post Leopotam # 18544726)
What is behind GLSurfaceView, or the EGL library. A detailed walkthrough of OpenGL ES initialization.
GLSurfaceView. An improved application framework using GLSurfaceView.
About engines, optimization and general rules. + specifically about Java for Android.
OpenGL ES 2.0 primitives. Matrices, vectors and transformations in OpenGL. The coordinate system on Android.
Textures in OpenGL ES 2.0.
Shaders in OpenGL ES 2.0 and GLSL.
Particle systems. Billboards. Point sprites.

All articles in PDF, thanks to who-e: Attached file oges2.zip (2.76 MB). I hope it will be useful. Thank you, who-e.




Post has been edited by vaalf - 23.11.17, 12:12
Suvitruf
Message#22
28.01.13, 12:11
Alea iacta est

Group: Developers
Messages: 1295
Check in: 08.09.12
Sony Xperia X Compact

Reputation: 174

> And on Android, what is better to use: OpenGL or OpenGL ES? And actually, in short, what is the difference?

Uh... on Android there is only OpenGL ES.
OpenGL ES is a slightly trimmed-down OpenGL specification for mobile devices, if I am not confusing anything.

usnavii, can you write a tutorial on how to render everything from C++ into a Java view?

We have a whole engine in C++ here. I compiled everything, made a view, a renderer.

It seems all the methods work, the native methods are called without errors, but I see only a gray screen.

Post has been edited by Suvitruf - 28.01.13, 12:11


--------------------
If someone does not remember, then it does not exist.
usnavii
Message#23
28.01.13, 22:29

Suvitruf @ 01/28/2013, 13:11
usnavii, can you write a tutorial on how to render everything from C++ into a Java view?


It is better to leave the view on the Java side and use four calls to native functions, following the example of gl2jni (from the NDK samples folder).

In the example two native calls are used:
init(width, height) in onSurfaceChanged, and step() in onDrawFrame of the Renderer class.

init(width, height) is where resources are initialized, and so on.
step() renders a frame. The buffer swap happens after onDrawFrame finishes.

I would add two more calls: one for onPause() and a destructor for the native library. Input/sensors can also be forwarded from here.

The GL2JNIView class is a modified GLSurfaceView.
The changes affect only the EGL initialization.
I described this topic in detail above, so I think there will be no problem sorting it out.

If you need GLES 1.0-1.1, then
replace EGL_OPENGL_ES2_BIT (0x0004) with EGL_OPENGL_ES_BIT (0x0001)
and
EGL_CONTEXT_CLIENT_VERSION, 2 with EGL_CONTEXT_CLIENT_VERSION, 1

--------------------------------------

Also, if OpenGL methods are called and do not raise errors, this does not mean they work: you will only get loud complaints if the video driver crashes; otherwise things can fail silently.
Check ALL called functions via glGetError(), at least in a debug build.
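The codes returned by glGetError() are plain GL constants, so a small helper can turn them into readable names for the log (a sketch: the constant values are the standard OpenGL ES ones, but the class and method names here are hypothetical; on Android you would pass in the result of GLES20.glGetError()):

```java
public class GlErrors {
    // Standard OpenGL ES error codes (values from the GL headers).
    public static final int GL_NO_ERROR = 0;
    public static final int GL_INVALID_ENUM = 0x0500;
    public static final int GL_INVALID_VALUE = 0x0501;
    public static final int GL_INVALID_OPERATION = 0x0502;
    public static final int GL_OUT_OF_MEMORY = 0x0505;

    // Map an error code to a readable name for logging.
    public static String name(int code) {
        switch (code) {
            case GL_NO_ERROR:          return "GL_NO_ERROR";
            case GL_INVALID_ENUM:      return "GL_INVALID_ENUM";
            case GL_INVALID_VALUE:     return "GL_INVALID_VALUE";
            case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION";
            case GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";
            default:                   return "unknown 0x" + Integer.toHexString(code);
        }
    }

    // In a debug build, call this after every GL call,
    // passing the value returned by GLES20.glGetError().
    public static void checkGl(int errorCode, String where) {
        if (errorCode != GL_NO_ERROR) {
            throw new RuntimeException(where + ": " + name(errorCode));
        }
    }
}
```

One GL call can leave several queued errors, so in a thorough debug wrapper you would loop until GL_NO_ERROR comes back.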
Suvitruf
Message#24
29.01.13, 01:08

I have a problem with this:
glGenVertexArrays(1, &verticesVAO);
glBindVertexArray(verticesVAO);

The methods run, but there is some garbage inside.

The addresses seem to be obtained correctly:
glGenVertexArrays = (PFNGLGENVERTEXARRAYSOESPROC) eglGetProcAddress("glGenVertexArraysOES");
glBindVertexArray = (PFNGLBINDVERTEXARRAYOESPROC) eglGetProcAddress("glBindVertexArrayOES");
glDeleteVertexArrays = (PFNGLDELETEVERTEXARRAYSOESPROC) eglGetProcAddress("glDeleteVertexArraysOES");

In general, no ideas so far why it does not work.


--------------------
If someone does not remember, then it does not exist.
usnavii
Message#25
29.01.13, 07:39

Suvitruf @ 01/29/2013, 02:08
In general, no ideas so far why it does not work.


eglGetProcAddress()
does not check whether the extension is actually supported: it can return a non-NULL address for a function that cannot be used at the moment, or at all.

That is, even a non-zero return value does not guarantee that the function is supported at runtime.
Check with glGetString(GL_EXTENSIONS) or eglQueryString(display, EGL_EXTENSIONS).
Note that eglQueryString() deals with EGL extensions, not GLES ones; to get the GLES list you need glGetString().

Is GL_OES_vertex_array_object in the glGetString(GL_EXTENSIONS) list?

On real hardware these libraries are made by the chip makers, so which extensions are available depends only on them.

Also, which version of OpenGL ES are you using?
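Checking the extension list is just string work, since glGetString(GL_EXTENSIONS) returns one space-separated string. A sketch in plain Java (on device the string would come from GLES20.glGetString(GLES20.GL_EXTENSIONS); the helper name is made up):

```java
public class ExtensionCheck {
    // The extension list is a single space-separated string;
    // membership must be tested on whole tokens, not substrings.
    public static boolean hasExtension(String extensionList, String name) {
        if (extensionList == null) return false;
        for (String ext : extensionList.split(" ")) {
            if (ext.equals(name)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // On device this would be GLES20.glGetString(GLES20.GL_EXTENSIONS).
        String exts = "GL_OES_texture_npot GL_OES_vertex_array_object GL_OES_depth24";
        System.out.println(hasExtension(exts, "GL_OES_vertex_array_object")); // true
        System.out.println(hasExtension(exts, "GL_OES_vertex_array"));        // false
    }
}
```

Token-wise comparison matters: a naive contains() check would report GL_OES_vertex_array as present because it is a prefix of GL_OES_vertex_array_object.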

Post has been edited by usnavii - 29.01.13, 08:24
Suvitruf
Message#26
29.01.13, 09:20

Nevermind.

Removed VAO, now everything is on VBO. Works)


--------------------
If someone does not remember, then it does not exist.
usnavii
Message#27
29.01.13, 09:51

Suvitruf @ 01/29/2013, 10:20
Removed VAO, now everything is on VBO. Works)


Well, that's right.

For the future, in case it comes in handy:

There is a program called GLview (Windows, iOS, Mac, Android).
It has a rather large database of extensions by device (and operating system).

It makes it quite easy to see which devices support a specific extension.

For example, that same GL_OES_vertex_array_object under Android is supported only by the PowerVR SGX 540.
Suvitruf
Message#28
29.01.13, 18:18

> Well, that's right.

I am not really well versed in this.

But an acquaintance who wrote an engine says that without VBO it works several times slower.


--------------------
If someone does not remember, then it does not exist.
usnavii
Message#29
30.01.13, 15:05

4. GLSurfaceView. An improved application framework using GLSurfaceView.

To initialize OpenGL ES, and as the basis of the application from here on, I will use GLSurfaceView.

The reasons for this, or why not write it yourself:
1. The code in GLSurfaceView is already debugged and verified.
2. Some vendors make their own changes to the standard GLSurfaceView to improve behavior or performance on particular hardware.
3. GLSurfaceView can be used conveniently, like any other View/Widget.
4. We will not reinvent the wheel.

GLSurfaceView includes the following interfaces:

EGLConfigChooser - interface for selecting an EGLConfig from the list of possible configurations.
EGLContextFactory - interface for your own implementation of the eglCreateContext and eglDestroyContext calls.
EGLWindowSurfaceFactory - interface for your own implementation of the eglCreateWindowSurface and eglDestroySurface calls.
GLWrapper - GL wrapper.
Renderer - rendering interface.

In this framework we will need the EGLConfigChooser and Renderer interfaces, implementing EGLConfigChooser ourselves.

GLSurfaceView methods:

void onPause() - inform GLSurfaceView about the onPause event in the Activity.

void onResume() - inform GLSurfaceView about the onResume event in the Activity.

void queueEvent(Runnable r) - put a task into the rendering thread's queue. More on this below.

void requestRender() - request rendering of a frame.

void setEGLConfigChooser(GLSurfaceView.EGLConfigChooser configChooser) - use a custom EGLConfigChooser.

void setEGLConfigChooser(boolean needDepth) - set a 16-bit RGB configuration with a 16-bit depth buffer (or as close to that as possible), depending on needDepth.

void setEGLConfigChooser(int redSize, int greenSize, int blueSize, int alphaSize, int depthSize, int stencilSize) - set a configuration with specific bit depths per color channel, depth buffer and stencil buffer.

void setEGLContextClientVersion(int version) - set the OpenGL ES context version: version = 1 for OpenGL ES 1.0-1.1 and version = 2 for OpenGL ES 2.0.

void setRenderMode(int renderMode) - set the frame update mode.
Two options are possible:
RENDERMODE_CONTINUOUSLY - update the screen automatically.
RENDERMODE_WHEN_DIRTY - update on request (by calling requestRender()).

Now, how many frames per second do you get with automatic updates?
It all depends on the device: for example, the maximum possible frame rate, or stabilization at the hardware screen refresh rate. Which is not always good. We are writing for mobile devices, and where possible it is better to use 30 frames per second instead of 60 and let the device run an hour and a half longer...
So we will stabilize the frame rate manually.


void setRenderer(GLSurfaceView.Renderer renderer) - set your Renderer implementation. This method is required to initialize GLSurfaceView.

And the last method protects the GL context when the onPause event arrives:
setPreserveEGLContextOnPause(boolean preserveOnPause)

The default value is false. Only available on API 11 and higher.
The thing is, not all GPUs can work with multiple contexts simultaneously. Because of this, the old context had to be unloaded (destroyed) and a new one created in its place for the new task.
GPUs that support ES 2.0 can handle multiple contexts.

For GLSurfaceView not to destroy the context on an onPause event on API 11 and higher, you must call setPreserveEGLContextOnPause(true);
On API below 11 the mode turns on automatically if GLES is 2.0 or higher.

Those are all the GLSurfaceView methods we may need.

Implementation of the EGLConfigChooser interface:

In EGLConfigChooser we need to implement one method, public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display), which returns the selected config to GLSurfaceView.


An example implementation of EGLConfigChooser that selects RGB888 without a depth buffer and with MSAA anti-aliasing (something the stock GLSurfaceView methods cannot do):

File: Config2D888MSAA.java
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;


public class Config2D888MSAA implements GLSurfaceView.EGLConfigChooser {
    private int[] value;

    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        value = new int[1];
        int[] configSpec = { // specification template: RGB888, ES2, 2x MSAA
            EGL10.EGL_RED_SIZE, 8,
            EGL10.EGL_GREEN_SIZE, 8,
            EGL10.EGL_BLUE_SIZE, 8,
            EGL10.EGL_RENDERABLE_TYPE, 4, // EGL_OPENGL_ES2_BIT
            EGL10.EGL_SAMPLE_BUFFERS, 1,
            EGL10.EGL_SAMPLES, 2,
            EGL10.EGL_NONE
        };
        if (!egl.eglChooseConfig(display, configSpec, null, 0, value)) {
            throw new IllegalArgumentException("RGB888 eglChooseConfig failed");
        }
        int numConfigs = value[0];
        if (numConfigs <= 0) {
            // No suitable RGB888+MSAA configurations: fall back, at worst, to RGB565 without anti-aliasing
            configSpec = new int[] {
                EGL10.EGL_RED_SIZE, 5,
                EGL10.EGL_GREEN_SIZE, 6,
                EGL10.EGL_BLUE_SIZE, 5,
                EGL10.EGL_RENDERABLE_TYPE, 4,
                EGL10.EGL_NONE
            };
            if (!egl.eglChooseConfig(display, configSpec, null, 0, value)) {
                throw new IllegalArgumentException("RGB565 eglChooseConfig failed");
            }

            numConfigs = value[0];

            if (numConfigs <= 0) {
                throw new IllegalArgumentException("No configs match configSpec RGB565");
            }
        }
        EGLConfig[] configs = new EGLConfig[numConfigs];
        egl.eglChooseConfig(display, configSpec, configs, numConfigs, value); // get the array of configurations
        return configs[0]; // return the first matching config
    }
}

A more detailed description of configuration selection was given in the previous lesson.


Now we need to make our own implementation of the Renderer interface.

We need to implement three methods:
onDrawFrame() - the frame drawing itself.

onSurfaceChanged(GL10 glUnused, int width, int height) - creation of the GLSurface.
It is called, for example, on screen orientation changes and on initial loading.
The parameters we need are int width and int height: the width (x) and height (y) respectively.

onSurfaceCreated(GL10 glUnused, EGLConfig config) - called before the GLSurface is created, but after OpenGL is initialized.
The parameter we need is EGLConfig config,
that is, the GLSurface configuration, on the basis of which we can decide which resources to load, etc.
If this method fires more than once, consider that the system lost the GL context and with it all loaded textures, arrays and settings. All resources must be reloaded.

Resource initialization can be done in onSurfaceChanged (if resources depend on the screen geometry) or in onSurfaceCreated if the project is small.
It is highly desirable to load ALL resources at once (if memory consumption etc. allows, of course); otherwise it is better to perform loading in a separate thread, since blocking the render thread is not a good idea.

When OpenGL is initialized and running, these methods are called in the following sequence: onSurfaceCreated => onSurfaceChanged => onDrawFrame => onDrawFrame => ... onDrawFrame ...

If a screen rotation happens without losing the GL context, the onSurfaceCreated method is not called;
onSurfaceCreated fires on initial loading and when the GL context is restored.

I have added an onTouchEvent method to the GLSurfaceView.Renderer implementation as an example of handling screen input.
In a real application an onPause method is also needed at minimum, to save the current state, since there is no guarantee the system will not kill the application between onPause and onResume.
I also added a constructor for passing in the application context.

An example implementation of the Renderer interface:
File: GLRender.java
import android.content.Context;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;


public class GLRender implements GLSurfaceView.Renderer {

    private Context cnx;

    public GLRender(Context context) {
        this.cnx = context;
    }

    public void onDrawFrame(GL10 glUnused) {
        // Perform all rendering here.
    }

    public void onSurfaceChanged(GL10 glUnused, int width, int height) {
        GLES20.glViewport(0, 0, width, height); // set the viewport to the size of the GLSurface
    }

    public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
    }

    public void onTouchEvent(final MotionEvent event) {
    }
}



IMPORTANT: THREADS.

The Renderer class runs in a separate thread, not in the UI/main thread, for better performance.

GLSurfaceView itself executes in the UI thread, since it is a View.

To synchronize the Renderer with GLSurfaceView and other threads, use the GLSurfaceView.queueEvent() method.

Example:
public boolean onTouchEvent(final MotionEvent event) {
    glSurfaceView.queueEvent(new Runnable() {
        public void run() {
            render.onTouchEvent(event);
        }
    });
    return true;
}



FPS stabilization.

You can obtain the (hardware) screen refresh rate this way:

Display display = ((WindowManager) getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
float refreshRate = display.getRefreshRate();


I think it is clear that pushing the FPS above the screen refresh rate makes no sense.

If you want to lower the FPS (to save battery, or because the device no longer sustains the full refresh rate and jumps from 35 to 60 look worse than stabilization at 30), it is better to choose a simple ratio of the refresh rate, for example 1/2 or 2/3.
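For instance, picking the largest "nice" fraction of the refresh rate that the device can still sustain can be sketched like this (plain Java; the candidate ratios 1/2 and 2/3 are the ones suggested above, the class and the sustainableFps input are made up for illustration):

```java
public class FpsPicker {
    // Candidate fractions of the hardware refresh rate, best first.
    private static final double[] RATIOS = { 1.0, 2.0 / 3.0, 0.5 };

    // Pick the highest target FPS (refreshRate * ratio) that is
    // not above what the device can actually sustain.
    public static int pickTargetFps(float refreshRate, float sustainableFps) {
        for (double r : RATIOS) {
            int candidate = (int) Math.round(refreshRate * r);
            if (candidate <= sustainableFps) return candidate;
        }
        // Even half the refresh rate is too much: settle for what we can do.
        return (int) sustainableFps;
    }

    public static void main(String[] args) {
        // A 60 Hz screen on a device that only sustains ~45 FPS:
        System.out.println(pickTargetFps(60f, 45f)); // 40 (2/3 of 60)
        System.out.println(pickTargetFps(60f, 35f)); // 30 (1/2 of 60)
    }
}
```

A multiple of the refresh rate keeps frame pacing even: at exactly 1/2 every second vsync shows a new frame, whereas an arbitrary rate like 45 on a 60 Hz panel alternates between one and two vsyncs per frame and looks jerky.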

An example implementation of FPS stabilization:

private int FPS = 30; // 30 frames per second
private Boolean RPause = false; // pause flag
...
void reqRend() {
    mHandler.removeCallbacks(mDrawRa); // just in case, kill all deferred mDrawRa calls
    if (!RPause) {
        mHandler.postDelayed(mDrawRa, 1000 / FPS); // run the Runnable with a delay
        glSurfaceView.requestRender(); // render a frame
    }
}
...
private final Runnable mDrawRa = new Runnable() {
    public void run() {
        reqRend();
    }
};
...
@Override
protected void onResume() {
    super.onResume();
    glSurfaceView.onResume();
    RPause = false;
    reqRend(); // start rendering
}
...


And now the full Activity code (built against the Android 4.0.4 platform (API 15), works from version 2.2 (API 8)):

import android.app.Activity;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.view.MotionEvent;
import android.view.Window;
import android.view.WindowManager;

public class MyActivity extends Activity {
    private GLSurfaceView glSurfaceView;
    private GLRender render;
    private Config2D888MSAA ConfigChooser;
    private Handler mHandler = new Handler();
    private Boolean RPause = false; // pause flag
    private int FPS = 30; // frames per second

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        requestWindowFeature(Window.FEATURE_NO_TITLE); // remove the title
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN); // set full-screen mode

        glSurfaceView = new GLSurfaceView(this);

        if (Build.VERSION.SDK_INT > 10) glSurfaceView.setPreserveEGLContextOnPause(true); // on API 11 and higher, preserve the GL context
        // If you build against an API below 11, this line will not compile.

        glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
        glSurfaceView.setEGLContextClientVersion(2);
        glSurfaceView.setEGLConfigChooser(ConfigChooser = new Config2D888MSAA()); // use our own EGLConfigChooser implementation

        render = new GLRender(this); // initialize our Renderer implementation

        glSurfaceView.setRenderer(render);

        glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); // render frames on request

        setContentView(glSurfaceView); // set our glSurfaceView as the root View of the Activity
    }

    void reqRend() {
        mHandler.removeCallbacks(mDrawRa);
        if (!RPause) {
            mHandler.postDelayed(mDrawRa, 1000 / FPS); // deferred call to mDrawRa
            glSurfaceView.requestRender();
        }
    }

    private final Runnable mDrawRa = new Runnable() {
        public void run() {
            reqRend();
        }
    };

    @Override
    public boolean onTouchEvent(final MotionEvent event) { // pass the MotionEvent to the Renderer thread
        glSurfaceView.queueEvent(new Runnable() {
            public void run() {
                render.onTouchEvent(event);
            }
        });
        return true;
    }

    @Override
    protected void onPause() {
        super.onPause();
        glSurfaceView.onPause();
        RPause = true; // pause flag
    }

    @Override
    protected void onResume() {
        super.onResume();
        glSurfaceView.onResume();
        RPause = false; // pause flag
        reqRend(); // start rendering
    }

    @Override
    protected void onStop() {
        super.onStop();
        RPause = true;
        this.finish();
    }
}


That's all.

Sources: Attached file ogl3.rar (77.08 KB),
for IntelliJ IDEA (a free IDE (Community Edition); I like it more than Eclipse, and it works faster). In any case, I think it will be easy to attach the 3 files to an Eclipse project.

Eclipse sources: Attached file ogl3_eclipse.zip (938.06 KB). Thank you, lightes.


An example (.APK) of animation (fast, medium and slow) on this framework with different FPS:
Attached Image; Attached file ogl4.apk (665.88 KB)

Three modes: 30 FPS, 40-60 FPS (variable, for comparison) and 60 FPS.

Coming next:
A little bit about engines and libraries:
internal organization,
how to write and how not to.

and finally

Textures. Textures are not only valuable fur...
+ OpenGL ES 2.0 coordinates, primitives, sprites.
(+ examples where something is drawn and moves =))


Post has been edited by usnavii - 02.02.13, 08:05
Suvitruf
Message#30
31.01.13, 07:58

Hmm, I am not quite following something.
I defined onTouchEvent in my own view inherited from GLSurfaceView, and it works.

Why push this event through the Activity?

--------------------
If someone does not remember, then it does not exist.
usnavii
Message#31
31.01.13, 10:01

> Hmm, I am not quite following something. I defined onTouchEvent in my view inherited from GLSurfaceView and everything works. Why should I push this event through the Activity?

You do not have to push it through the Activity.
It is just that event dispatch starts from the Activity and only then reaches the view.
So in principle there is no difference, unless of course something intercepts the event before your view (marks the event as handled).

onTouchEvent here is used to show how to connect the rendering thread with the others, since I have repeatedly seen code where people changed the Renderer's data directly from another thread, which sooner or later crashed the application.

Post has been edited by usnavii - 31.01.13, 10:04
lightes
Message#32
01.02.13, 13:08
User

Group: Friends (savagemessiahzine.com)
Messages: 70
Check in: 06.11.11

Reputation: 7

> Although if someone is not too lazy to build the project under Eclipse, I will be grateful and will post an additional Eclipse project.

Built the project under Eclipse. Do I understand correctly that there should be just a dark screen? I inserted log output into onTouchEvent: touches work (nothing else was added to the project).

Attached files

Attached file ogl3_eclipse.zip (938.06 KB)
usnavii
Message#33
02.02.13, 08:00

> Built the project under Eclipse. Do I understand correctly that there should be just a dark screen? I inserted log output into onTouchEvent: touches work (nothing else was added to the project).

Thanks, I will attach it to the post.
usnavii
Message#34
02.02.13, 16:39

Digression A.
About engines, optimization and general rules.
+ specifically about Java for Android.


1. The best is the enemy of the good, or: keep it simple.

All beginners getting acquainted with OpenGL at first try to make a universal wrapper, engine or set of classes.
This is a completely dead-end branch.
Remember: YOU WILL NEVER FORESEE EVERYTHING. It is a waste of time.
Work from actual needs. Implement only what is really needed in the specific project.
There will always be some case you did not foresee anyway.
An "extensible" approach always beats "everything in one".

An abundance of unnecessary functions has already "killed" many large projects. Remember ACDSee or Nero,
which were loved for being reliable and fast.
Now they can do everything, including searching the hard drive, playing videos and fetching your slippers.
Only they no longer open every image format or burn every disc...

In general, remember: if there were "one most correct way to do everything", it would already be in the standard API.
Of course, you can make the most universal thing possible, but it will be neither fish nor fowl.

In this field there are only three ways to do something:

1. Fast. An implementation for a specific task and specific hardware.
2. Simple. Do not bother with optimizations. Done quickly, average speed.
3. Universal. There is no real universality (since it is impossible to actually cover all the options); it works slowly and takes ages to write.

About optimization:

You only need to optimize what really affects the performance of the specific task/application.

Suppose you sped up loading and initialization twofold, spending three months on it.
The application used to load in 0.8 seconds, and now it loads in 0.4.
That is, instead of making the application faster where it matters, you spent time on heaven knows what, and no user will ever notice.

It is another matter if loading and initialization used to take half a minute. Users get nervous.
In other words, we additionally optimize only what is "critical", and do not immediately try to write the fastest code everywhere.
Even large companies do not have enough time to optimize everything, to say nothing of indies or small studios.
Everyone will now say this is obvious even to a fool, but I have actually seen many cases where exactly this happened:
instead of moving on to the next task, the programmer kept playing with the previous one under the pretext of optimization it did not need at all.

2. For those who like to shove OOP where necessary and not very. (mainly about Java, but mostly true for C #, lua, etc.)

Do not try to build a harmonious class hierarchy before (!) all of the basic functionality is finished (!).
It is pointless; it will all be rewritten a hundred times anyway.

Do not wrap simple types in classes.
That is acceptable in C++ and disgusting in languages with a garbage collector.

Use primitive types!

For example:

class Vertex {
    Float x, y, z;
}

Vertex[] model = new Vertex[10000];

For code like this you deserve a kick.

Float is an object, so it carries a reference and, for the garbage collector, a counter of references to it.
The reference is at least 32 bits, the counter at least 32 bits.
That is, to store 32 bits of data we spent 96. (o_0)
The Vertex[] model array adds another 64 bits on top.
So Float x, y, z ends up weighing 352 bits instead of 96.


Another example:

Vertex[3][10000]
Vertex[10000][3]

No difference, it seems?

In the first case we have three arrays of ten thousand elements each.
In the second, ten thousand references to arrays of three elements.

All this brings on the rage of the garbage collector, quite noticeable stuttering and wild memory consumption (a jab in U.'s direction).
In real applications you end up writing your own resource manager, because the reference to a ten-megabyte texture takes only 32 bits,
and you cannot afford to wait for the native collector to deign to run (it kills large objects first, and there is no guarantee it will ever get around to the small ones).
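To avoid that overhead, the same 10000 vertices can be kept in one flat array of primitives. A minimal sketch of the idea (the `FlatVertices` class and its accessors are illustrative, not from the original post); the only price is a little index arithmetic:

```java
// Instead of Vertex[10000] with boxed Float fields: one flat primitive
// array, 3 floats per vertex, laid out contiguously with no per-element
// object headers or references for the garbage collector to chase.
public final class FlatVertices {
    public static final int STRIDE = 3; // x, y, z

    private final float[] data;

    public FlatVertices(int count) {
        data = new float[count * STRIDE];
    }

    public void set(int i, float x, float y, float z) {
        int base = i * STRIDE;
        data[base] = x;
        data[base + 1] = y;
        data[base + 2] = z;
    }

    public float x(int i) { return data[i * STRIDE]; }
    public float y(int i) { return data[i * STRIDE + 1]; }
    public float z(int i) { return data[i * STRIDE + 2]; }
}
```

A side benefit: a flat float[] can be copied into a native FloatBuffer for OpenGL in one put() call, with no per-vertex traversal.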

3. Safety factor and performance.

When designing, always budget memory and performance with at least a 1.5x reserve. Better 2x.

From an old book (I no longer remember the title, circa the 1970s), about bridges in the USA:

The Tacoma Narrows Bridge collapsed. It was designed by all the canons.
The Golden Gate still stands, like most others.

The answer is simple: one was designed for the maximum theoretical load, the other for the maximum theoretical load * 2.
I think there is no need to say which was which...

Of course you cannot foresee everything, but you should try =).
Assume that demands will grow at least 1.5x beyond the initial design.

So do not assume that if a device has 512 MB of memory, you can count on all of it.
Always set yourself a cost ceiling, and remember that you are not alone in the system: other applications and processes will not politely step aside while you grab all the memory.

4. Unnecessary code.

Code for no one.

Take, for example, ShaderProgram.java from libgdx.

To use it (922 lines) you need to know exactly and well what uniforms and attributes are.
But anyone who knows that can write the same thing in 10 lines. So who is this code for?
Those who do not know will not write it anyway; those who know will write it better themselves.

This code is for no one.

5. Engine or what is meant by it.

There are NO universal fast engines. There cannot be.
Every fast engine is tuned for one thing.
If the creators aim for universality, you get complexity, poor speed, poor extensibility and minimal portability at the output.
Even titans like UE have not managed to cover every case of even a simple "universal" sprite renderer.
In 15 years...

Which tells you there is no general solution, and that the rules in computer graphics change too quickly.
An algorithm considered the fastest two years ago is yesterday's news and will only slow your program/game down on a current GPU.

Well, a general look at the engines for Android:

1. Titans.
UE, Frostbite, Gamebryo, id Tech 3/4/5~, etc.
Pros:
100500 games built on them. Real multiplatform. Speed, stability.
Polished by the work of large teams.
Very expensive, both the engine itself and its support.
But they cut development time dramatically, and development time often costs more than the engines do.
Small studios and indies usually do not need that many bells and whistles at such a price.
And even AAA-class engines are usually not without pathologies, since they drag a lot of legacy behind them.

2. Free 2D engines.
Free: in a fishless pond, even a crayfish is a fish. Some features are implemented through the back door; it often looks like a student project someone decided to polish up for no clear reason.
But it is not all bad. Some are quite suitable for specific tasks.
Usually open source, with 1-3 developers.
And when the enthusiasm runs out, so does the open source, usually within half a year...

3. Paid 2D engines.
Usually more features; sometimes they will even implement extra functionality on request, for money or without.
Often a company writes its own games and sells the "engine" on the side.
In general, pointless if the source code is not included.

4. Advertised engines. They attract users and programmers not through quality or utility, but through marketing.

DarkBasic, Unity3D, etc.


DarkBasic is dead. It was utterly wretched.

Unity is at the peak of its popularity.
But there are still few finished projects on it (compared with other large engines).
It is not about to die; the community is big.
Plus a completely usable, extensible IDE.
Among the shortcomings: the large size of an empty project, and the memory requirements.

5. Construction kits.
What is there to say about them?
If the developers implemented a feature, you can use it.
If they did not, you cannot.
The advanced ones offer some extensibility.
The choice for those who are weak at programming.


Choosing an engine:

Have you seen the pigs game (Bad Piggies) from the creators of Angry Birds?
Managing to make dual-core phones stutter on the simplest scenes is not something everyone can pull off...
About its memory consumption it is better to stay silent...
A concrete example of how NOT to choose a platform.

Choose the engine for the task, not the task for the engine.

When you repair a car, do you pick the wrench to fit the nut, or take your favorite wrench and walk around the car wondering what to unscrew?
Developers usually describe their engines' capabilities in detail, so it is easy to find what you need.

-------------

Next time: textures and what they are eaten with.

Post has been edited by usnavii - 03.02.13, 12:01
Suvitruf
Message#35
03.02.13, 01:30
Alea iacta est
*******
[offline]

Group: Developers
Messages 1295
Check in: 08.09.12
Sony Xperia X Compact

Reputation:-  174  +

Usnavii @ 02.02.2013, 17:39*
class Vertex {
    Float x, y, z;
}

Vertex[] model = new Vertex[10000];

For code like this you deserve a kick.

Float is an object, so it carries a reference and, for the garbage collector, a counter of references to it.
The reference is at least 32 bits, the counter at least 32 bits.
That is, to store 32 bits of data we spent 96. (o_0)
The Vertex[] model array adds another 64 bits on top.
So Float x, y, z ends up weighing 352 bits instead of 96.


Um, does anyone actually use the Float object instead of the primitive float type? Or did I misunderstand something? o.O

Usnavii @ 02.02.2013, 17:39*
All beginners familiar with OpenGL at first trying to make a universal wrapper, engine / set of classes.


Heh, it's like that everywhere.
Alongside games I also develop enterprise information systems, and there people love writing wrappers for everything.
Often whole layers, e.g. for data access.
Yet there are specs that state explicitly that, say, the database is Oracle. So all those wrappers are unnecessary padding = /

Post has been edited by Suvitruf - 03.02.13, 01:33


--------------------
If someone does not remember, then it does not exist.
usnavii
Message#36
03.02.13, 11:48
Old resident
*******
[offline]

Group: Friendssavagemessiahzine.com
Messages 967
Check in: 07.06.12

Reputation:-  115  +

Suvitruf @ 02/03/2013 02:30*
Um, does anyone actually use the Float object instead of the primitive float type? Or did I misunderstand something?


They do, they do =)

I have come across a fairly popular engine with exactly this implementation.
There are ideological fighters for code purity: as the Bible says, everything is an object =)))
And despite all the losses, that is the only way they will go.
And if you mention static methods to them, they have fits =)
Suvitruf
Message#37
03.02.13, 12:56
Alea iacta est
*******
[offline]

Group: Developers
Messages 1295
Check in: 08.09.12
Sony Xperia X Compact

Reputation:-  174  +

They do, they do =)

I have come across a fairly popular engine with exactly this implementation.
There are ideological fighters for code purity: as the Bible says, everything is an object =)))
And despite all the losses, that is the only way they will go.
And if you mention static methods to them, they have fits =)


Yeah. As far as I remember, all sensible people advise always using primitives. Easier to maintain, easier to port, and nothing extra.)


--------------------
If someone does not remember, then it does not exist.
Prospekt2
Message#38
08.02.13, 22:48
Local
*****
[offline]

Group: Friendssavagemessiahzine.com
Messages 113
Check in: 06.02.13
Samsung Galaxy Spica GT-I5700

Reputation:-  10  +

If it's not too much trouble:
How do you draw a 2D image?

There is a universal option: map a texture onto 2 triangles that form the desired rectangle. It has many advantages: you can scale, rotate, and display part of the picture (especially when the picture is not square and its dimensions are not powers of two).
But surely there is something simpler, less universal and therefore faster. Are there functions that map a texture pixel 1-to-1 to a screen pixel, i.e. for when the drawing area and the texture have the same dimensions?
How can I speed up / simplify drawing pictures in 2D? Using a 3D machine for 2D games feels like a hydrogen bomb against sparrows. Or am I wrong?
Mmx ice
Message#39
09.02.13, 13:30
Local
*****
[offline]

Group: Friendssavagemessiahzine.com
Messages 106
Check in: 16.02.09
360 N4s

Reputation:-  2  +

Next time: textures and what they are eaten with.

Textures still not ready? ;)
And it would be nice to read about loading 3D models too.
rock88
Message#40
09.02.13, 17:42
Local
*****
[offline]

Group: Friendssavagemessiahzine.com
Messages 260
Check in: 27.12.09
Apple iPhone 4S

Reputation:-  55  +

Prospekt2 @ 02.09.2013, 01:48*
2 triangles texture mapping

You can also use glDrawTex (the GL_OES_draw_texture extension), but drawing a quad is still better, because you get rotation/scaling as a bonus. If you want it faster, combine bitmaps into atlases; quads can also be batched and sealed into a VBO with GL_STATIC_DRAW. But even without that, the speed will be sufficient.

Prospekt2 @ 02.09.2013, 01:48*
Are there any functions that allow converting a texture pixel to a 1 in 1 screen pixel

Set the projection matrix via glOrtho?
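A sketch of that glOrtho idea for GL ES (glOrthof is the ES float variant): make one world unit equal one screen pixel, so a 64×64 quad textured with a 64×64 bitmap maps texture pixels to screen pixels 1-to-1. This assumes a GLSurfaceView.Renderer; width/height are what onSurfaceChanged receives:

```java
import javax.microedition.khronos.opengles.GL10;

// Pixel-perfect 2D projection: one GL unit == one screen pixel,
// origin in the bottom-left corner of the viewport.
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // left, right, bottom, top, near, far
    gl.glOrthof(0, width, 0, height, -1, 1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}
```

After this, a quad with corners (0, 0) and (64, 64) occupies exactly 64×64 screen pixels, which answers the "1 in 1" question without giving up rotation or scaling.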

Prospekt2 @ 02.09.2013, 01:48*
3d machine for 2d toys

And who says OpenGL is purely for 3D, and that using it for 2D is a waste of resources?
Prospekt2
Message#41
09.02.13, 19:28
Local
*****
[offline]

Group: Friendssavagemessiahzine.com
Messages 113
Check in: 06.02.13
Samsung Galaxy Spica GT-I5700

Reputation:-  10  +

While the distinguished author of this topic is temporarily not treating us to lessons, I decided to show what I have learned so far. On the one hand, it systematizes my own knowledge; on the other, it may help someone understand the process. I will say right away that I am still delighted by the elegance of OpenGL: everything is done beautifully and properly, not through the fifth point. I am studying the library with the book "OpenGL SuperBible." It is written quite sensibly, but no one masters it all at once.

1. How it all works
Internally there is a three-dimensional space, and it is three-dimensional even for 2D graphics. The programmer places various objects in this space and tells OpenGL to draw them. The space also contains an observer, called the camera. The part of the 3D world that falls into the camera's view is shown on the display, as if it were the camera's signal. Here, in my opinion, everything is extremely simple. The space has a coordinate system; usually the observer is positioned so that the X axis goes from left to right and the Y axis from bottom to top (as we were taught at school, not as is customary in most languages), with the Z axis accordingly pointing toward the observer, i.e. the smaller the z coordinate, the farther the object is from the observer. We will assume the programmer knows how to position the observer in space, choose the desired projection (there are 2 kinds: perspective and orthographic), and load the identity matrix to return to the original coordinate system.

You should get comfortable with this coordinate system, because everything is drawn relative to it. Recomputing all the coordinates of the virtual world's objects every time would be very inconvenient, so instead the coordinate system itself is transformed.
The coordinate system is changed with the basic operations of rotation, translation (shift) and scaling (stretching). There are also other ways to change it, but those are subtler; for now we can do without them.

Suppose someone (it does not matter who) wrote functions for drawing the individual parts of an object, say a tank, each part drawn relative to the origin. Then drawing goes like this:
  • Translate the coordinate system to the tank's current position.
  • Draw the hull.
  • Shift the origin under the turret.
  • Rotate the system around the Y axis by the desired angle (the turret can rotate relative to the hull).
  • Raise the coordinate system to turret level and draw the turret.
  • Move a little forward, rotate the system around the X axis by the required angle, and draw the gun there.


We do the same with the tank's wheels. For that we return to the hull's coordinate system, shift it to the edge of the hull, rotate 90 degrees and draw a wheel, then shift again and draw the next wheel, and so on.
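Those steps can be sketched roughly as follows. Note an assumption: this uses glPushMatrix/glPopMatrix (which save and restore the current matrix) to return to the hull's system, which is not covered above; drawHull, drawTurret, drawGun and drawWheel are hypothetical helpers that draw relative to the origin:

```java
import javax.microedition.khronos.opengles.GL10;

// Draws a tank by transforming the coordinate system, not the vertices.
// All draw* helpers (assumed to exist) render relative to the origin.
public void drawTank(GL10 gl, float tankX, float tankZ,
                     float turretAngle, float gunAngle) {
    gl.glPushMatrix();                       // remember the world system
    gl.glTranslatef(tankX, 0f, tankZ);       // move to the tank's position
    drawHull(gl);

    gl.glPushMatrix();                       // remember the hull system
    gl.glTranslatef(0f, 1.0f, 0f);           // up to turret level
    gl.glRotatef(turretAngle, 0f, 1f, 0f);   // turret spins around Y
    drawTurret(gl);
    gl.glTranslatef(0f, 0f, 0.5f);           // forward to the gun mount
    gl.glRotatef(gunAngle, 1f, 0f, 0f);      // gun tilts around X
    drawGun(gl);
    gl.glPopMatrix();                        // back to the hull system

    gl.glTranslatef(0.8f, 0f, 0f);           // to the edge of the hull
    gl.glRotatef(90f, 0f, 1f, 0f);
    drawWheel(gl);                           // ...repeat per wheel
    gl.glPopMatrix();                        // back to the world system
}
```

The push/pop pairing is what lets the gun inherit the turret's rotation while the wheels stay fixed to the hull.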
I did not get around to making a nice picture, and I do not want to post just anything.
Rotation uses the method glRotatef(angle, x, y, z), where the last 3 parameters define the vector around which the rotation happens. The rotation axis always passes through the origin, so if you need to rotate around some other axis, first move the coordinate system to a point on that axis and then rotate. The rotation appears clockwise when looking along the rotation vector, i.e. if the vector points away from the observer, he will see a clockwise rotation.
The method glTranslatef(x, y, z) shifts the coordinate axes: the new coordinate system is moved to the place given by the method's arguments.
An example:
glRotatef(20, 0.0f, 1f, 0.0f) is a twenty-degree rotation around the vertical axis, i.e. the system turns a little to the left. And glTranslatef(0f, 0f, 1.5f) is a shift of one and a half units along the Z axis. Note that the space uses conditional units: there are no meters, centimeters or kilometers in it, only abstract units, and what they mean is entirely up to you.

OpenGL has many modes that are switched on and off as the program runs, using glEnable(value) and glDisable(value), where value is the code of the mode being toggled. For example, glEnable(GL10.GL_TEXTURE_2D) enables texturing.

2. Drawing primitives
To get to drawing textures, you first need to understand how simple primitives are drawn. In OpenGL ES the primitives are triangles and lines, plus sets built from these two. For example, there is the triangle fan, GL10.GL_TRIANGLE_FAN, which consists of triangles all sharing one common vertex. For now we can leave it at that.

As I understand it, there are several approaches to drawing primitives, but the usual one is to supply vertex coordinates through an array. And this array is not quite a Java array: it is a native array; remember that OpenGL is not written in Java. Only two element types are accepted: float and short. The first holds coordinates, the second indices. The code for creating such a native (direct) float buffer is given below.
public final FloatBuffer createAndFillFloat(float[] _massive) {
    int len = _massive.length;
    ByteBuffer bb = ByteBuffer.allocateDirect(len * 4); // 4 bytes per float
    bb.order(ByteOrder.nativeOrder());
    FloatBuffer result = bb.asFloatBuffer();
    result.put(_massive);
    result.position(0);
    return result;
}

The classes for such buffers live in the java.nio.* package.
First a byte buffer is allocated (4 bytes per float), then it is viewed as a buffer of floating-point numbers. The put() method accepts either a single value or a whole array as an argument; this is how the buffer is filled with data.
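The index arrays later in this post use a createAndFillShort helper that is not shown. By analogy with the float version, a sketch might look like this (2 bytes per short instead of 4 per float):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

public final class BufferUtil {
    // Wraps a Java short[] in a direct, native-order buffer that
    // OpenGL ES can read directly: 2 bytes per short element.
    public static ShortBuffer createAndFillShort(short[] _massive) {
        ByteBuffer bb = ByteBuffer.allocateDirect(_massive.length * 2);
        bb.order(ByteOrder.nativeOrder());
        ShortBuffer result = bb.asShortBuffer();
        result.put(_massive);
        result.position(0); // rewind so GL reads from the start
        return result;
    }
}
```

The rewind via position(0) matters: without it, GL would start reading from the end of what was just written.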
Here, for example, is how a rectangle is drawn with triangles.

public void draw(GL10 _gl) {
    _gl.glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    _gl.glVertexPointer(3, GL10.GL_FLOAT, 0, points);
    _gl.glDrawElements(GL10.GL_TRIANGLES, 6, GL10.GL_UNSIGNED_SHORT, indexes);
}

The first line sets the current color (red) in RGBA, each channel from 0 to 1. This is very convenient: no need to worry about how many bits are allocated per channel. The second line registers the points array as the coordinate array; coordinates for drawing will be taken from it until another array is made active. Parameters: 3 (can be 2) is the number of coordinates per point (with 2, the z coordinate is taken as 0); GL10.GL_FLOAT says the array holds floating-point values; 0 means the values are packed without gaps (each value immediately follows the previous one; as I understand it, gaps are used for alignment inside the array). Finally, the drawing of the two triangles. Parameters: GL10.GL_TRIANGLES means triangles are the primitive; 6 is the number of vertices involved (2 triangles with 3 vertices each); GL10.GL_UNSIGNED_SHORT you already understand; indexes is the array of vertex indices. Drawing proceeds like this: the first 3 indices are taken from indexes, the 3 corresponding points are fetched from points, and a triangle is built from them according to the current coordinate system and observer position. Then the next 3 indices are taken, and so on, until 6 vertices in total have been consumed.
The arrays themselves are created like this:

private FloatBuffer points;
private ShortBuffer indexes;

points = createAndFillFloat(new float[] {
    0.0f, 0.0f, 0.0f,
    3.0f, 0.0f, 0.0f,
    3.0f, 2.0f, 0.0f,
    0.0f, 2.0f, 0.0f
}); // 4 points in total
indexes = createAndFillShort(new short[] {0, 1, 2, 0, 2, 3});


Remember how in 3D games the camera would sometimes slip inside a wall, and the wall would then seem to disappear from the other side? The trick is that the engine draws only the faces that are turned toward the observer. Naturally: why draw the back side of a wall the observer cannot see? Time is saved. Such is the optimization. So how does OpenGL decide a primitive faces the observer? Very simply: by the winding order of its vertices. By default a triangle is visible when its vertices go counterclockwise (as in our example). Two caveats. First, this face-culling check must be enabled with glEnable(GL10.GL_CULL_FACE); if it is off, no winding checks are performed. Second, you can change which winding counts as front-facing with glFrontFace(GL10.GL_CW) (the other option is GL10.GL_CCW).
On top of that, OpenGL has a terrific depth test. While a primitive is drawn, the distance to the observer is computed for each pixel. If the new fragment is closer than the one previously stored, the pixel is drawn and the value in the depth buffer is updated. This is JUST a SUPER feature. It alone is worth switching to OpenGL and abandoning standard Android drawing for. There is no longer any need to sort objects by distance to the observer before drawing: you draw them in any order, specify the z coordinate, and the library itself keeps whatever is closest. This is especially useful when two planes intersect, so that neither is fully in front of the other; the depth test solves that case too. Naturally, depth testing must be enabled with glEnable(GL10.GL_DEPTH_TEST). Also, when initializing the graphics surface, choose a configuration that allocates at least a few bits to depth, preferably no fewer than 16.
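A minimal setup sketch tying the culling and depth pieces together, assuming a GLSurfaceView.Renderer (the 16 in setEGLConfigChooser requests a 16-bit depth buffer, and the depth buffer must be cleared every frame or stale depths will reject new pixels):

```java
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// In the Activity, before setRenderer(): request RGBA8888 + 16-bit depth.
// glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);

public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    gl.glEnable(GL10.GL_DEPTH_TEST);  // per-pixel depth comparison
    gl.glEnable(GL10.GL_CULL_FACE);   // skip back-facing triangles
    gl.glFrontFace(GL10.GL_CCW);      // counterclockwise = front (the default)
}

public void onDrawFrame(GL10 gl) {
    // Clear depth together with color, otherwise last frame's depth
    // values would incorrectly hide this frame's geometry.
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    // ... draw the scene in any order; the depth test sorts it out ...
}
```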
So, with this kind of everything.

3. His majesty the texture
So we have come right up to textures. I will say at once that textures are drawn the same way as primitives, except that beforehand an active texture is selected and texture coordinates are supplied. First of all, textures must be loaded into OpenGL. The loading process itself we leave for later; there is nothing interesting there. Assume we have loaded a 64 by 64 pixel texture under the identifier IdTex. Let us try to draw it as in the previous example. First enable texturing with glEnable(GL10.GL_TEXTURE_2D), and after finishing with the texture, disable the mode to return to drawing plain colored primitives. Next, make the texture active with glBindTexture(GL10.GL_TEXTURE_2D, IdTex). A few explanations are needed here. A texture's dimensions should be powers of two (1, 2, 4, 8, 16, 32, ...) and preferably square. Inside OpenGL every texture is normalized to unit size: {0f, 0f} is one corner of the texture and {1.0f, 1.0f} is the opposite one. The next step is to set the texture's binding points by calling glTexCoordPointer(2, GL10.GL_FLOAT, 0, tpoints); the parameters are the same as in glVertexPointer, except there are only 2 coordinates, which is understandable (although there can be 3 in the case of three-dimensional textures; yes, OpenGL has those). After that the familiar glDrawElements is called, but now the triangles are filled with texture data instead of the current color. It works like this: first 3 indices are taken, the corresponding 3 points in space are fetched from points, and a triangle is drawn; for the same 3 indices, 3 two-dimensional points are taken from tpoints, and the texture is stretched over the triangle based on them.
Moreover, it is stretched so that those 3 points of the texture land on the corresponding points from points. And that is really all: an indication of where to take the texture from, which part of it to use, and onto which points in space to stretch it. Nothing more is needed.

In total, this code works for me:

FloatBuffer tpoints;

tpoints = createAndFillFloat(new float[] {
    0.0f, 0.0f,
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f
});

public void drawTex(GL10 _gl) {
    _gl.glEnable(GL10.GL_TEXTURE_2D);
    //_gl.glActiveTexture(GL10.GL_TEXTURE0);
    _gl.glBindTexture(GL10.GL_TEXTURE_2D, IdTex);
    _gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, tpoints);
    _gl.glVertexPointer(3, GL10.GL_FLOAT, 0, points);
    _gl.glDrawElements(GL10.GL_TRIANGLES, 6, GL10.GL_UNSIGNED_SHORT, indexes);
    _gl.glDisable(GL10.GL_TEXTURE_2D);
}

I do not know exactly why glActiveTexture(GL10.GL_TEXTURE0) is there; it works for me without it. (It selects the active texture unit for multitexturing, and GL_TEXTURE0 is the default, which is why omitting it changes nothing.) The points in points and tpoints do not have to form similar triangles; the library will work out how to map one triangle onto the other.

For this to work properly, these lines must be called somewhere in the code, for example after initialization:
_gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
_gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

They tell OpenGL to take vertex and texture coordinates from the client-side arrays set with glVertexPointer and glTexCoordPointer.

And that is basically all. It remains to load the texture. That is done roughly like this:
public int loadTexture(GL10 _gl, int _id) {
    int[] a = new int[1];
    _gl.glGenTextures(1, a, 0); // generate 1 texture id into a, starting at offset 0
    int tid = a[0];
    Log.i("TEX", "S1" + tid);
    _gl.glBindTexture(GL10.GL_TEXTURE_2D, tid);
    _gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
    _gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    _gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
    _gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
    _gl.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
    Bitmap bit;
    InputStream is = mContext.getResources().openRawResource(_id);
    try {
        bit = BitmapFactory.decodeStream(is);
    } finally {
        try {
            is.close();
        } catch (IOException ex) {
        }
    }
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bit, 0);
    bit.recycle();
    return tid;
}

Loading works like this: first the system generates an id for the new texture (or several at once), then the texture with that id is bound as active, 5 parameters are set for it, and finally the image is loaded from resources and handed to OpenGL, which converts it into its own color format. It remains to explain those 5 parameter lines. The first two set the interpolation for the minification and magnification filters respectively: texture pixels rarely coincide exactly with screen pixels, so these filters fill in the intermediate data. If you know image processing, it will be no news that GL10.GL_NEAREST is the nearest-neighbor algorithm and GL10.GL_LINEAR is linear interpolation. The next 2 lines say what should happen when texture coordinates outside the texture's bounds are given. And the last line sets how the texture mixes with the material it is drawn on: a texture applied to a green wall would generally differ from the same texture applied to red material. GL10.GL_REPLACE means the texture ignores the material, i.e. replaces it completely.

And finally, it seemed strange to me that when the texture was loaded, the axes came out inverted: the top of the picture had coordinate 0f and the bottom 1f; the same thing happened with the X axis.

If there is demand, I will clean up the code and post what I have.
