Arkanong part 6: batching draw calls

Published March 21, 2014
Once again I must apologize for how long it took to post this - there are many things competing for my time, so this journal doesn't always get the tender love and care it should. I've managed to find a few spare hours, so I'm going to show you how I batched the draw calls for my game objects.


Up to this point every game object has been in charge of drawing itself - that is, each object has its own texture and draw() function which loads and then renders the sprite. The object manager loops over all the objects and calls Object->draw() for every one of them.
This approach is fine if your game doesn't have that many objects, but ideally you should try to batch your draw calls because they are computationally expensive. Batching draw calls means that you only call draw() once for every texture rather than once for every object. So if you have 50 objects which use 2 textures (e.g. 25 blue balls and 25 red balls) you only have to call draw() twice instead of 50 times.

Since we're only calling draw() once for every texture, we need a way to tell the renderer where the texture is to be applied (that is, which objects will be using this texture). Luckily SFML has a class which does exactly this: the vertex array.

A vertex is a (graphical) point which has the following data members (a small example follows the list):
- a position (x and y coordinates)
- a color (I won't be using this since I'm drawing textures)
- a pair of texture coordinates (which determine the part of the texture you want to use)
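In SFML these map directly onto the sf::Vertex class. Here's a minimal sketch of what a single vertex looks like; the values are just placeholders of mine:

#include <SFML/Graphics.hpp>

//a single textured vertex: a position in the window, a color and a texture coordinate
sf::Vertex vertex(sf::Vector2f(10.f, 50.f), //where the point sits in the window
                  sf::Color::White,         //white leaves the texture unmodified
                  sf::Vector2f(0.f, 0.f));  //which pixel of the texture it samples from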


Every game object consists of a number of these vertices - a triangle would have 3, a rectangle 4 (which is why it is commonly referred to as a 'quad'). So really they are just the "corners" of your object with a bit of extra information attached.
The texture coordinates in particular are handy because you can use them to make draw calls more efficient. Let's say that I put the "red ball" and "blue ball" textures in a single texture called "balls". By specifying the texture coordinates so that only the left or right half of the texture is drawn I can "cut out" a piece of the image and apply it to my game object. If I did this I would be able to render all 50 blue/red balls with only one draw call since they are drawn from the same texture. You can see an example of this technique in the link I provided above.
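Here's a small sketch of what such a quad could look like. The sizes are made up for the example: assume a 64x32 "balls" atlas where the red ball occupies the left 32x32 half.

#include <SFML/Graphics.hpp>

//build a quad (4 vertices) that shows the left half of a hypothetical
//64x32 "balls" texture at position (100, 100) in the window
sf::VertexArray makeRedBallQuad()
{
    sf::VertexArray quad(sf::Quads, 4);

    //positions: the corners of the object in the window
    quad[0].position = sf::Vector2f(100.f, 100.f);
    quad[1].position = sf::Vector2f(132.f, 100.f);
    quad[2].position = sf::Vector2f(132.f, 132.f);
    quad[3].position = sf::Vector2f(100.f, 132.f);

    //texture coordinates: the part of the atlas to "cut out" (the red ball)
    quad[0].texCoords = sf::Vector2f(0.f, 0.f);
    quad[1].texCoords = sf::Vector2f(32.f, 0.f);
    quad[2].texCoords = sf::Vector2f(32.f, 32.f);
    quad[3].texCoords = sf::Vector2f(0.f, 32.f);

    return quad;
}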

Even when you specify texture coordinates, the renderer will not automatically know how to display the vertices. 4 points could be a 'filled in' rectangle or 4 lines or really just 4 single points - you might even draw the objects with a texture one minute and press a button later to display everything as lines to produce a 'wireframe' effect.

So in order to draw a texture using vertex arrays we will need to provide the following to the renderer:

- The texture (duh)
- The collection of points that use this texture
(i.e. all the corners that make up the objects that use this texture)
- The way that they should be drawn
(quads, lines or points, respectively meaning 'filled in', 'wireframe' and 'single points'.
There are other so-called primitive types, but for now just knowing about quads will suffice - see the snippet below)
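In SFML the "way they should be drawn" is the primitive type of the vertex array. A quick sketch (the enum values are the SFML 2.x names):

#include <SFML/Graphics.hpp>

//the same four vertices can be interpreted in different ways,
//depending on the primitive type given to the vertex array
sf::VertexArray shape(sf::Quads, 4);       //one filled (textured) rectangle

//shape.setPrimitiveType(sf::LinesStrip);  //connected lines: a 'wireframe' look
//shape.setPrimitiveType(sf::Points);      //just the four corner points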


The textures are kept in a map inside a new managing class called TextureManager. If you're not familiar with the map data type, it's a way to store key/value pairs, similar to the way primary keys work in a database. Each texture is associated with a unique string value (in this case, the name of the texture). If you'd like to know more, check out the Wikipedia articles on associative arrays and associative containers.
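The class isn't shown in full in this post, but the relevant part looks roughly like the sketch below. Apart from m_textures, m_pRenderWindow and the two draw methods that appear in the real code further down, the names are assumptions of mine:

#include <map>
#include <string>
#include <vector>
#include <SFML/Graphics.hpp>

class GameObject; //defined elsewhere in the project

class TextureManager
{
public:
    //store a texture under a unique name ("blue ball", "red ball", ...)
    void addTexture(const std::string& name, sf::Texture* texture)
    {
        m_textures[name] = texture;
    }

    void drawAllTextures(std::vector<GameObject*>& objectList);
    void drawTexture(std::string name, sf::VertexArray* vertices);

private:
    std::map<std::string, sf::Texture*> m_textures; //texture name -> texture
    sf::RenderWindow* m_pRenderWindow;              //the window everything is drawn to
};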

The game objects store their own vertices and a string that corresponds to one of the keys from the texture map. For example: the texture map might contain a texture with key-string "blue ball". Every game object that uses this texture would have its data member textureName set to "blue ball". When the renderer wants to draw the blue ball texture it loops over all the objects and collects the vertices that belong to objects with textureName set to "blue ball".
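On the object side that boils down to something like the following sketch. getTextureName() and getTransformedVertices() appear in the real code below; the member names are assumptions of mine:

#include <string>
#include <SFML/Graphics.hpp>

class GameObject
{
public:
    const std::string& getTextureName() const { return m_textureName; }

    //append this object's vertices (moved into world space) to the given array
    void getTransformedVertices(sf::VertexArray* vertexArray) const;

private:
    std::string m_textureName;  //key into the texture map, e.g. "blue ball"
    sf::VertexArray m_vertices; //the object's corners in its own local space
};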



In pseudocode, the steps taken to render all the textures look like this:
- Loop over the different textures
  - Create a vertex array to store the points of all the objects that use this texture
  - Loop over all the game objects
    - If object::textureName equals the texture key-string: collect the vertices for this object and add them to the vertex array
  - end of game object loop
  - call draw() with the current texture and the vertex array
- end of texture loop

Here's what that looks like in code. Though the syntax might be a little confusing, it's really just following the steps I outlined above.

void TextureManager::drawAllTextures(std::vector<GameObject*>& objectList)
{
    std::map<std::string, sf::Texture*>::const_iterator textureIterator = m_textures.begin();

    //loop over all textures
    while (textureIterator != m_textures.end())
    {
        //vertex array for the current texture
        sf::VertexArray* vertexArray = new sf::VertexArray(sf::Quads);

        //collect all vertices which use this texture
        std::vector<GameObject*>::const_iterator objectIterator = objectList.begin();
        while (objectIterator != objectList.end())
        {
            //if the object uses this texture,
            //retrieve the object's transformed vertices into vertexArray
            if ((*objectIterator)->getTextureName() == textureIterator->first)
            {
                (*objectIterator)->getTransformedVertices(vertexArray);
            }
            ++objectIterator;
        }

        //draw the current texture and move on
        drawTexture(textureIterator->first, vertexArray);
        delete vertexArray; //clean up so the vertex array doesn't leak
        ++textureIterator;
    }
}
The method 'getTransformedVertices()' retrieves the (transformed) points that make up the object in question. If you're not familiar with the word 'transformed', it refers to the way an object is 'positioned' in a certain place.
The object knows its own points in a local frame of reference (my toes are 1.7 meters below my head and 0 meters in front of my nose) - transforming these points places them in the 'global' space of the game (my toes are 10 meters away from the nearest crosswalk and 100 meters below the top of that building).

It's outside the scope of this article to explain this properly, so if you don't understand what I'm talking about I strongly recommend that you learn more about the use of matrices in game development. You could start here. The benefit of working this way is that I don't have to recalculate all the points every time an object moves - I simply have to adjust the transformation matrix which 'places' the vertices in the game world. The same goes for scaling and rotation, so it's a pretty neat trick to have in your toolbox.
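For completeness, here's one way getTransformedVertices() could be implemented, assuming the object keeps its local-space corners in m_vertices and its placement in an sf::Transform member called m_transform (both names are assumptions of mine, extending the GameObject sketch above):

void GameObject::getTransformedVertices(sf::VertexArray* vertexArray) const
{
    for (std::size_t i = 0; i < m_vertices.getVertexCount(); ++i)
    {
        sf::Vertex vertex = m_vertices[i];

        //move the corner from the object's local space into world space
        vertex.position = m_transform.transformPoint(vertex.position);

        vertexArray->append(vertex);
    }
}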



Once all the points for the current texture have been collected, the texture is rendered in the following piece of code:

void TextureManager::drawTexture(std::string name, sf::VertexArray* vertices)
{
    sf::Texture* texture = m_textures.at(name);
    if (texture != 0)
    {
        sf::RenderStates renderstate;
        renderstate.texture = texture;
        m_pRenderWindow->draw(*vertices, renderstate);
    }
}
First I attempt to retrieve the texture to verify that it exists (drawing textures that don't exist is not a good idea). Then I create a renderstate which will hold the rendering information. A renderstate is just a way to pass along information to the renderer - you might want to manipulate blending modes, for example. Since I'm only interested in drawing rudimentary textures for now, I simply set its texture data member. Finally I pass the vertex array and renderstate to SFML, which will take care of the drawing for me.
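To tie it all together, the per-frame rendering then boils down to something like this fragment (window, textureManager and gameObjects are placeholder names of mine, not names from the actual project):

//somewhere in the game's render loop
window.clear();
textureManager.drawAllTextures(gameObjects); //gameObjects is a std::vector<GameObject*>
window.display();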



Ok guys, that's it for now. Keep in mind that I am a relatively inexperienced game programmer, which means that all the code in this journal should be taken with a grain of salt. If you think you know a better way to batch draw calls - you probably do, and I'd love to hear about it in the comments section!


Thanks for reading and as always you can download the code in its current state here.