June 2022
Randomizable Character System in Red Dust
One way to make Red Dust feel more alive was to add a way to randomize the characters so they don't feel so repetitive.

Games that use only a fixed set of character models tend to feel repetitive, and adding more and more complete models quickly becomes costly, both in storage and in development time. With complete models, the number of possible characters grows only linearly, but by splitting the model into different parts and making variations for each part, the number of combinations grows exponentially. For example, with 5 types of parts and 3 variations for each part, the total number of possibilities is 3^5, which is 243. Adding one single extra variation to just one of the parts makes the number shoot up to 324.
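The arithmetic above is easy to check with a few lines of Python (the part and variation counts are just the example numbers from the text):

```python
parts = 5       # part types, e.g. head, arms, pants, shirt, shoes
variations = 3  # variations authored per part

# With complete models each new model adds exactly one possibility;
# with split parts the possibilities are the product of the per-part
# variation counts.
total = variations ** parts
print(total)  # → 243

# Give a single part one extra variation: 4 * 3^4
total_plus_one = (variations + 1) * variations ** (parts - 1)
print(total_plus_one)  # → 324
```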

The problem with splitting models into different parts is that a naive approach results in multiple deformation calculations and multiple draw calls. Needless to say, games need to keep these numbers as low as possible to improve framerate and processing times. If we take the part meshes and bake them into a single mesh, our runtime cost remains the same as if we had used hand-made complete characters.

Doing this in Unity was not without hurdles, especially since the documentation isn't particularly good. The export settings in Blender were also a bit of a head-scratcher.

Blender Setup

This section covers how the models are structured in Blender. At the root of the scene there's an armature, also known as a skeleton or bone system, used for mesh deformation. The exact hierarchy can be set up in a number of ways, depending on the amount of detail needed. I highly recommend Imphenzia's video on how to do this in Blender: https://www.youtube.com/watch?v=XkiWBSSuxLw

In Red Dust's case, the hierarchy looks like this. Note that there are some extra bones that serve as placeholders for animations very specific to this game. These can be safely ignored.

Next, there are multiple models that are deformed by the armature: in Red Dust's case the head, arms, pants, shirt, shoes and hair. Some of these also have separate male and female variations.

When putting the armature and the models together, we can deform all the meshes at the same time.

After adding animations, it's time to export our models as fbx files. For this we have a little script written in Python that exports all the models individually. It is run from the Scripting tab in Blender.

First we import our dependencies and then we define some paths and names. Note that the blend files are saved in "<UnityProjectRoot>/GraphicsSources". The relative paths are based on this.

After this we have a helper selection function that sets the selection and the active object as defined in the parameters.

Finally, we export the animations and the models:
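The three pieces described above fit together roughly as in the sketch below. This is a minimal reconstruction rather than the actual script: the armature and folder names, the export flags, and the decision to put animations in their own fbx are assumptions layered on top of the bpy API.

```python
import os
import bpy

# The blend files live in <UnityProjectRoot>/GraphicsSources, so the
# export directory is resolved relative to the open .blend file.
# (The "Assets/Models" target is illustrative.)
EXPORT_DIR = os.path.join(os.path.dirname(bpy.data.filepath), "..", "Assets", "Models")

def select_only(objects, active):
    """Deselect everything, then select `objects` and make `active` active."""
    bpy.ops.object.select_all(action='DESELECT')
    for obj in objects:
        obj.select_set(True)
    bpy.context.view_layer.objects.active = active

armature = bpy.data.objects["Armature"]  # assumed object name

# Export each part mesh together with the armature as its own fbx.
for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    select_only([armature, obj], armature)
    bpy.ops.export_scene.fbx(
        filepath=os.path.join(EXPORT_DIR, obj.name + ".fbx"),
        use_selection=True,
        bake_anim=False,  # the animations go into their own file
    )

# The animations are exported once, from the armature alone.
select_only([armature], armature)
bpy.ops.export_scene.fbx(
    filepath=os.path.join(EXPORT_DIR, "Animations.fbx"),
    use_selection=True,
    bake_anim=True,
)
```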

Unity Setup

Once the models are exported as fbx files, the hierarchy stays similar, but note that there's a difference between the model files and the animation files. Blender keeps mesh objects as children of the armature, but the fbx format treats this differently, so in fbx files containing mesh parts there's an extra object in the hierarchy acting as the root:

With these files exported and good to use, we use the atlasing system to add extra texture variations. For more information about how this was implemented check out this earlier article: https://www.lorinatzberger.com/articles/texture-atlasing-in-red-dust

After running the atlasing process and generating the final fbx files, we store references to them neatly in a scriptable object.

The code for this is nothing special because the Odin Unity package takes care of the fancy display:

Putting it all together

Now that we have the objects exported and referenced in our scriptable object we can go ahead and build our characters.

For this, we first need a way to generically combine skinned meshes, so we declare a function that does just that. An important thing to note is that this only works if all the skinned meshes use the same texture, but since all our character parts went through the atlasing step, this is not an issue.

Unity has an API that takes care of most of the mesh merging, but it does not handle bone deformations well. First, we allocate the mesh that will hold the result and a list of CombineInstance structs (part of the Unity API). Note that this list is allocated inside the function because it's easier to understand that way, but like any reusable allocation it should go through a cache system.

The next thing we need is the number of bones our meshes are bound to. Each vertex is deformed by at most 4 bones, but the bone indices need to be common across our input meshes, so we also make sure this is the case:
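The actual check lives in the C# combine function, but the idea is simple enough to sketch in Python. Here each input mesh is a hypothetical dict with a "bones" list, standing in for a SkinnedMeshRenderer's bone array:

```python
def shared_bone_count(meshes):
    # Every part mesh is skinned to the same armature, so the bone lists
    # must be identical; refuse to combine meshes from different skeletons.
    bones = meshes[0]["bones"]
    for mesh in meshes[1:]:
        if mesh["bones"] != bones:
            raise ValueError("input meshes are bound to different skeletons")
    return len(bones)

parts = [
    {"bones": ["hips", "spine", "head"]},
    {"bones": ["hips", "spine", "head"]},
]
print(shared_bone_count(parts))  # → 3
```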

After this we run the regular Unity function to combine the meshes together:

The problem with CombineMeshes is that it acts as if every mesh uses completely different bones, so the resulting geometry uses a range of bone indices equal to the number of bones times the number of meshes combined. To get around this, we go through the bone weights and fix the indices:
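In the C# version this is a loop over the combined mesh's boneWeights; the fix itself is just a modulo, sketched here in Python:

```python
def fix_bone_indices(indices, bone_count):
    # CombineMeshes offsets the n-th input mesh's bone indices by
    # n * bone_count; since every part shares one skeleton, folding the
    # indices back into [0, bone_count) points them at the correct bones.
    return [i % bone_count for i in indices]

# Two meshes bound to the same 3 bones: the second mesh's indices
# came out of CombineMeshes offset by 3.
combined = [0, 2, 1, 3, 5, 4]
print(fix_bone_indices(combined, 3))  # → [0, 2, 1, 0, 2, 1]
```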

Because of the way Unity handles bones in skinned meshes, the resulting bind poses are also incorrect. Bind poses are per-bone transformation matrices that define the exact transform each bone has in the rest pose; this is the same as the bone transforms at the time of vertex weight painting in Blender. Fortunately this is really easy to fix: since the bones are all shared, we just take the bind poses of one of the input prefabs and use those:

Finally, we need to create the final object. For this we again instantiate one of the prefabs and then edit it to use the combined mesh. We do not create an object from scratch because we also need all the bone objects to exist.

With this function out of the way, we can go ahead and build our animated character. First we take a structure that specifies which of the parts to use and a parent that will contain the final object:

Next, we assemble a list of the objects that we need to use. Again, 'objects' should be cached, but it is presented this way to be easier to understand.

Next, we create our final object and add the Animator component to the first child of the root. This is because that object is the Blender armature, so all animations are relative to it.

This is it. Our meshes are combined into a single one with all the bone deformations still working as intended. This way we only have one skinning computation to do and only one draw call to dispatch to the renderer for each character.

Final Words

It's worth mentioning at this point that this is not necessarily the best way to do things or the most optimised code, but unless characters are created and destroyed over the course of a game it should be good enough. In Red Dust the player starts with 3 characters and will typically have under 20 at any time. For a different scenario, like an RPG in which characters are created throughout the session, the code may need to be optimized, but only if profiling data shows this to be an issue.

The process described here seems fairly straightforward, but the fbx export settings in Blender and Unity's mesh-combining behaviour for skinned meshes are not exactly well documented. I also failed to find any detailed article on the subject that doesn't either describe the process at a purely theoretical level or point to something on the asset store, and I'm not a fan of using existing packages and ending up with a franken-code-base.

I'd be excited to know if anyone has been helped by this.

Or maybe you need advice for your game?