Over 10 years ago I used OpenGL a bit, and then OGRE, but I haven't done anything like that since.
I've been trying to get to grips with OpenGL on the RPi 2/3 and write some classes
to help, but there is one aspect that I don't understand - as per the title.
(I have been looking at peepo's tutorials.)
So say I use the same pair of vertex and fragment shaders for all meshes,
and the vertex shader has a variable: uniform mat4 MVP;
with a call in the mesh init section, something like:
GLuint MatrixID = glGetUniformLocation( programID, "MVP" );
Each pass through the rendering loop would then set it:
glUniformMatrix4fv( MatrixID, 1, GL_FALSE, &MVP[0][0] );
So if I'm drawing N meshes one after the other, and the uniforms were all set
by the time each mesh's job was started on the GPU, there would be no problem.
But as far as I can see, OpenGL can't render the meshes in parallel?
It feels like I am misunderstanding something.
I had expected that modern graphics cards, with their ~2k 'stream' processors,
could achieve parallel mesh rendering. Is that true for them?
As a completely different question: why is glVertexAttribPointer called at each
rendering call for each attribute? That data seems to belong on the GPU - it is
never going to change?