Category Archives: C++

Game Localization and UTF-8

After some research I have decided a pretty good way to localize your game is to store all text in text files, in UTF-8 format. UTF-8 is very widely used and 100% backwards compatible with ASCII. The clever design of the UTF-8 encoding also lets programmers treat UTF-8 buffers as plain byte arrays; for example, UTF-8 buffers can be sorted byte-wise and the result still comes out in valid code point order!


https://en.wikipedia.org/wiki/UTF-8

However, sometimes UTF-16 is needed to deal with Windows APIs, so conversions from UTF-8 to UTF-16 can be pretty useful. To make things worse, there isn't much good information on encoding/decoding UTF-8 and UTF-16 other than the original RFC documents. So in swift fashion I hooked up a tiny header with the help of Mitton's UTF-8 encoder/decoder from tigr.

The result is tinyutf.h, yet another single-file header library. It's perfect for doing any string processing stuff, and not overly complicated in any way. Just do a quick google search for utf8 string libraries and every single one of them is absolutely nuts. They are all over-engineered and heavyweight, probably because they all attempt to be general-purpose string libraries, which are debatably dumb to begin with.
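
Decoding a code point out of a UTF-8 stream only takes a handful of lines. Here's the gist of it (a sketch that assumes well-formed input; this is not tinyutf's actual API, just the idea):

    // Decode one code point from a UTF-8 string and return a pointer just
    // past it. No validation is done -- the input is assumed to be valid UTF-8.
    const char* decode_utf8(const char* s, int* codepoint)
    {
        unsigned char c = (unsigned char)*s++;
        int cp, extra;
        if      (c < 0x80) { cp = c;        extra = 0; }
        else if (c < 0xE0) { cp = c & 0x1F; extra = 1; }
        else if (c < 0xF0) { cp = c & 0x0F; extra = 2; }
        else               { cp = c & 0x07; extra = 3; }
        while (extra--) cp = (cp << 6) | ((unsigned char)*s++ & 0x3F);
        *codepoint = cp;
        return s;
    }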

The hard part of localization is probably the rendering of all kinds of glyphs. Rendering aside, localization can be as simple as storing the original English text in text files. A localizer (translator) can read the English and swap out all English phrases for UTF-8 encoded text by typing the phrases in their native language. As long as localization is planned from project start, it can be very easy to support!


Texture Atlases

Texture atlases for 2D games are a great optimization for batching together tons of different sprites (especially quads) with very few draw calls. Making them is a pain. For pixel art and other 2D art, DXT compression is often not a great choice due to its lossiness. One good way to make atlases for 2D games is with the PNG format; single-file loaders and savers can be used to load and save PNGs with a single function call each.

Here’s an example with the popular stb_image, along with stb_image_write:
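
Something along these lines (a minimal sketch; the file names are placeholders):

    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"
    #define STB_IMAGE_WRITE_IMPLEMENTATION
    #include "stb_image_write.h"

    // Load a PNG into a forced 4-channel RGBA buffer, then write it back out.
    int w, h, n;
    unsigned char* pixels = stbi_load("sprite.png", &w, &h, &n, 4);
    stbi_write_png("sprite_copy.png", w, h, 4, pixels, w * 4);
    stbi_image_free(pixels);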

This requires two different headers, one for loading and one for saving. Additionally the author of these headers also has a bin packing header, which could be used to make a texture atlas compiler.

However I have created a single-file header called tinydeflate. It does PNG loading, saving, and can create a texture atlas given an array of images. Internally it contains its own bin packing algorithm for sorting textures and creating atlases. Here’s an example:
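
Roughly like this (the function names below are placeholders just to show the shape of the thing; check the header itself for the real API):

    // Load a handful of PNGs, then pack them into one atlas image plus a
    // text file of uv coordinates. All names and sizes here are illustrative.
    td_image images[3];
    images[0] = td_load_png("hero.png");
    images[1] = td_load_png("tree.png");
    images[2] = td_load_png("rock.png");

    td_make_atlas("atlas.png", "atlas.txt", 256, 256, images, 3);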

The above example (if given a few more images) could output an atlas like so:

[image: example texture atlas output]

Followed by a very easy to parse text file containing uv information for each image in the atlas:

The nice thing about tinydeflate is that the function to create atlases has a little bit of anti-UV-bleed code inside of it. Most places on the internet will tell you to solve texture atlas bleeding by adding padding pixels around your textures. That may matter for 3D applications or when mip-mapping is involved, but I don't have that kind of 3D experience. For 2D pixel art games it seems squeezing UVs ever so gently inside the borders of a texture works quite well, and so tinydeflate takes this approach. No complicated padding necessary.
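
The gist of the anti-bleed trick is just nudging each image's uvs inward by a tiny amount, something like half a texel (a sketch of the idea, not the exact code from the header):

    // Pull the uv rectangle of an atlas entry inward by half a texel so
    // bilinear filtering never samples a neighboring image in the atlas.
    void shrink_uvs(float* min_u, float* min_v, float* max_u, float* max_v,
                    int atlas_w, int atlas_h)
    {
        float nudge_u = 0.5f / (float)atlas_w;
        float nudge_v = 0.5f / (float)atlas_h;
        *min_u += nudge_u; *min_v += nudge_v;
        *max_u -= nudge_u; *max_v -= nudge_v;
    }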

And finally here’s an older video I did with some more information on writing the code yourself to do some of the above tasks.

#ifdef vs #if and if

Compile-time settings and whatnot often make use of #define, #ifdef, and #if. I would like to make the case that using none of these is the best option, and that plain old if statements ought to be used most of the time.

Here’s a typical example:

Some debug code runs inside of a paged allocator to try to detect and report double frees. This code can be quite slow, so we only want it to run in debug mode.
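
It usually ends up looking something like this (a sketch; the allocator internals are made up):

    #include <assert.h>

    typedef struct page { struct page* next; } page;
    typedef struct { page* free_list; } paged_allocator;

    void paged_allocator_free(paged_allocator* alloc, void* ptr)
    {
    #ifdef DEBUG
        // slow sanity check: walk the free list and catch double frees
        for (page* p = alloc->free_list; p; p = p->next)
            assert(p != (page*)ptr && "double free detected");
    #endif
        page* p = (page*)ptr;
        p->next = alloc->free_list;
        alloc->free_list = p;
    }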

How would the use of an if statement be better in this situation compared to #ifdef DEBUG? The problem is that the DEBUG define is assumed to only exist in debug builds. Once the DEBUG define is run through the preprocessor it usually just disappears, as it's often defined as nothing. Code cannot safely assume the define exists, so the # preprocessor syntax must be used everywhere DEBUG is referenced.

If instead we redefine DEBUG like so:
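
For example, keying off whatever mechanism the build already uses to mark debug builds (MSVC's _DEBUG shown here, but any one-time check works):

    // DEBUG is now *always* defined, just as 1 or 0.
    #ifdef _DEBUG
    #   define DEBUG 1
    #else
    #   define DEBUG 0
    #endif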

Any code can use an integer literal as part of an ordinary expression. This includes if statements, function calls, templates, or what have you. We can re-write the first example to:
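
Which might look like this (same made-up allocator as before):

    void paged_allocator_free(paged_allocator* alloc, void* ptr)
    {
        if (DEBUG)
        {
            // the compiler deletes this whole block when DEBUG is 0
            for (page* p = alloc->free_list; p; p = p->next)
                assert(p != (page*)ptr && "double free detected");
        }
        page* p = (page*)ptr;
        p->next = alloc->free_list;
        alloc->free_list = p;
    }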

Compilers will strip away the first if statement, resulting in the same run-time code as the first example. The else keyword can also be used after this if statement. If in the future, for any reason, it becomes necessary to turn debug features on or off at run-time, the DEBUG macro can be swapped out for some kind of global watch variable, and suddenly the DEBUG macro becomes more immune to code maturation. An example:
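
For instance (a sketch; the variable and function names are arbitrary):

    // DEBUG used to be a compile-time literal; now it's a plain global that a
    // debugger watch window or an in-game console can flip at run-time.
    int DEBUG = 1;

    // Call sites written as ordinary if statements don't change at all:
    if (DEBUG)
        report_double_frees(alloc);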

Since the macro becomes a literal, a mixture of both compile-time and run-time checks can be used freely as desired. An if statement can check the compile-time literal and also check run-time pointers at the same time. No #ifdef necessary. The #ifdef takes up an entire line in the source file, and is often (in some code editors) automatically shifted all the way to the left of the text file. These quirks can be avoided by not using #ifdef at all.

Here's an example of a single line for each #directive, vs using plain C code:
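
Something like this, with made-up function names (note how each directive burns a whole line and gets slammed against the left margin):

    void update_entity(entity* e)
    {
        move_entity(e);
    #ifdef USE_FANCY_COLLISION
        fancy_collide(e);
    #else
        simple_collide(e);
    #endif
        animate_entity(e);
    }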

In the above example the difference in indentation can be visually jarring, and it’s fairly annoying to re-type the same thing multiple times in multiple places. Instead we can do something more like:
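
Along the lines of (same made-up functions as above, with USE_FANCY_COLLISION defined as 1 or 0):

    void update_entity(entity* e)
    {
        move_entity(e);
        if (USE_FANCY_COLLISION) fancy_collide(e);
        else simple_collide(e);
        animate_entity(e);
    }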

Using a literal can also play a bit nicer with MSVC’s intellisense. This can be useful for inspecting compile-time constants and debugging in general.

tinysound Release

I've released tinysound.h, a C header implementing an API over DirectSound for game sounds! Details found here :)

Demonstration code:
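
It goes roughly like this (a sketch from memory, so the exact names and signatures may differ from the header):

    // Make a DirectSound context, load a wav, and mix it each frame.
    tsContext* ctx = tsMakeContext(hwnd, 44100, 15, 5, 0);
    tsLoadedSound jump = tsLoadWAV("jump.wav");
    tsPlaySoundDef def = tsMakeDef(&jump);
    tsPlaySound(ctx, def);

    while (game_is_running)
    {
        tsMix(ctx);
    }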

 

Faster vcvars32.bat

Recently I noticed vcvars32.bat was taking up a significant portion of my compile time. Usually this isn’t a problem if the console that ran vcvars32.bat only needs to run the command once. However in some cases the environment variables setup by vcvars32.bat need to be set each time a batch file build script is run.

After looking around inside of vcvars32 I noticed it was calling some regexes to see if various folders existed. This is almost certainly the source of slowness (about 0.3 seconds on my machine). I’ve been wanting to keep my compile times under a second, so freeing up 1/3rd of a second would be nice!

In the end I extracted just the pieces I needed to run cl.exe, with some extra include and lib directories. Running the following batch file shouldn’t introduce any measurable hit on compile times (for VS 2013):
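
It boils down to setting PATH, INCLUDE, and LIB by hand. The paths below are the usual VS 2013 + Windows 8.1 SDK locations, so adjust them to your install:

    @echo off
    :: stripped-down stand-in for vcvars32.bat (x86, VS 2013)
    set "VS=C:\Program Files (x86)\Microsoft Visual Studio 12.0"
    set "SDK=C:\Program Files (x86)\Windows Kits\8.1"
    set "PATH=%VS%\VC\bin;%VS%\Common7\IDE;%PATH%"
    set "INCLUDE=%VS%\VC\include;%SDK%\Include\um;%SDK%\Include\shared"
    set "LIB=%VS%\VC\lib;%SDK%\Lib\winv6.3\um\x86"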

memfile.h – Reading files from memory with fscanf

Oftentimes it is extremely convenient to compile certain assets, or data, straight into C code. This can be nice when creating code for someone else to use. For example in the tigr graphics library by Richard Mitton various shaders and font files are directly included as character arrays in the C source. Another example is in dear imgui where some font files (and probably other things) are embedded straight into the source.

The reason this is useful is that embedding things in source removes dependencies from code. The fewer dependencies the better. The easier it is for someone to grab your code and solve problems, the better your code is.

I’ve created a single header called memfile.h that implements a function for opening a file in memory, and also implements a bunch of fscanf overloads. Here is a link to the github repository.

Here’s the readme:

memfile is a C header for opening files in memory by mimicking the functionality of fopen and fscanf. memfile is not compatible with standard C FILE pointers, but works pretty much the same way. Here's a quick example:

Just include memfile.h and you’re good to go. I’ve created an example to demonstrate usage in main.c. As extra goodies I’ve included the incbin script I used to generate the poem symbol from main.c (incbin.pl stolen from Richard Mitton).

Here’s example usage:
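
Roughly like this (names here are illustrative and may not match the header exactly; the poem symbol is the incbin'd data from main.c):

    #include <stdio.h>
    #include "memfile.h"

    // Open the embedded poem as an in-memory "file" and scan words out of it.
    MEMFILE mf = mfopen(poem, poem_size);

    char word[128];
    while (mfscanf(&mf, "%127s", word) == 1)
        printf("%s\n", word);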

 

Essentials of Software Engineering – With a Game Programming Focus

I've been commissioned to write a big document about what I think it means, and what is necessary, to become a good software engineer! If anyone finds the document interesting please do email me (email on my resume) with any questions or comments, even if you hate it and deeply disagree with me! This is the first release, so some sections may be a bit skimpy and there may be some incorrect info here and there.

I’m a fairly young engineer so please do take all my opinions with a healthy dose of skepticism. Nonetheless I hope the document is useful for someone! Many of my opinions come from my interpretation of much more experienced engineers I’ve come in direct contact with, and this PDF is sort of a creation of what others taught me.

As time goes on I may make edits or add new chapters, when I do I’ll modify the version number (just above the table of contents). Here is the PDF:

Download (PDF)

Preprocessed Strings for Asset IDs

Mick West posted up on his site a really good overview of some different methods for hashing string ids and gave good motivation for optimizing this area early on in a project. Please do review his article as it’s a prerequisite to this post, and his article is just really good.

I’ve been primarily concerned with memory management of strings as they are extremely hairy to work with. For me specifically I’ve ruled out the option of a string class — they hide important details and it’s too easy to write poor (but functional) code with them. This is just my opinion. Based on that opinion I’d like to achieve these points:

  • Avoid all dynamic memory allocation, or de-allocation
  • Avoid complicated string data structures
  • Little to no run-time string traversals (like strcmp)
  • No annoying APIs that clutter the thought-space while writing/reading code
  • Allow assets to refer to one another while on-disk or in-memory without complicated pointer translations

There's a lot I wanted to avoid here, and for good reason. If code makes use of dynamic memory, complicated data structures, etc. that code is likely to suck in terms of both performance and maintenance. I'd like fewer features, less code, and some specific features to solve my specific problem: strings suck. The solution is to not use strings whenever possible, and when forced to use strings, hide them under the rug. Following the example Jason Gregory outlined about Naughty Dog's code base in his book "Game Engine Architecture 2nd Ed", I implemented the following solution, best shown via gif:

[animated gif: the SID macro being rewritten in-place by the preprocessor]

The SID (string id) is a macro that looks like (along with a typedef):
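
Roughly along these lines (a sketch; hash_string is a stand-in, and falling back to a run-time hash is just one way to set it up):

    typedef unsigned long long sid_t;

    // Before preprocessing, SID hashes its string at run-time as a fallback.
    // The custom preprocessor finds each SID("...") instance, hashes the
    // string offline, and rewrites the call site as the literal with the
    // original string kept in a trailing comment, e.g.
    //     sid_t id = SID(<precomputed hash> /* "player_walk" */);
    #define SID(str) hash_string(str)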

I've implemented a preprocessor in C that takes an input file, finds the SID macro instances, reads the string, hashes it, and then inserts the hash along with the string stored as a comment. Pretty much exactly what Mick talked about in his article.

Preprocessing files is fairly easy, though supporting this function might be pretty difficult, especially if there are a lot of source files that need to be pre-processed. Modifying build steps can be risky and sink a ton of time. This is another reason to hammer in this sort of optimization early on in a project in order to reap the benefits for a longer period of time, and not have to adjust heavy laid-in-stone systems after the fact.

Bundle Away the Woes

I’ve “stolen” Mitton’s bundle.pl program for use in my current project to recursively grab source files and create a single unified cpp file. This large cpp can then be fed to the compiler and compiled as a “unity build”. Since the bundle script looks for instances of the “#include” directive code can be written in almost the exact same manner as normal C++ development. Just make CPP files, include headers, and don’t worry about it.

The only real gotcha is if someone tries to do fancy inclusions by defining macros outside of files that affect the file inclusion. Since the bundle script is only looking for the #include directive (by the way it also comments out unnecessary code inclusions in the output bundled code) and isn’t running a full-blown C preprocessor, this can sometimes cause confusion.

It seems like a large relief on the linker and leaves me to thinking that C/C++ really ought to be used as single-translation unit languages, while leaving the linker mostly for hooking together separate libraries/code bases…

Compiling code can now look more or less like this:
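
Roughly (the name of the SID preprocessing step is made up here):

    :: collect every #include'd source file into one big translation unit
    perl bundle.pl main.cpp > bundle.cpp

    :: swap SID("...") instances for precomputed hash literals
    sid_preprocess.exe bundle.cpp

    :: hand the single file to the compiler
    cl /O2 bundle.cpp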

First collect all source into a single CPP, then preprocess the hash macros, and finally send the rest off to the compiler. Compile times should shrink, and I’ve even caught wind that modern compilers have an easier time with certain optimizations when fed only a single file (rumors! I can’t confirm this myself, at least not for a while).

Sweep it Under the Rug

Once some sort of string id is implemented, in-game strings themselves don't really need to be used all too often from a programmer's perspective. However, for visualization, tools, and editors, strings are essential.

One good option I've adopted is to place strings for these purposes into a global table in designated debug memory. This table can then be turned off or compiled away whenever the product is released. The idea is to allow tools and debug visualization to use strings fairly liberally, albeit stored inside the debug table. The game code itself, along with the assets, refers to identifiers in hash-form. This lets product code perform tiny translations from fully hashed values to asset indices, which is much faster and easier to manage compared to strings.
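
A sketch of what that global table can look like (names and sizes are illustrative):

    // Debug-only side table mapping hashed ids back to readable strings.
    // Compiled away (or simply left unused) in release builds.
    typedef struct { sid_t id; const char* str; } debug_string;

    debug_string g_debug_strings[4096];
    int g_debug_string_count;

    const char* debug_string_lookup(sid_t id)
    {
        for (int i = 0; i < g_debug_string_count; ++i)
            if (g_debug_strings[i].id == id)
                return g_debug_strings[i].str;
        return "<unknown sid>";
    }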

This can even be taken a step further; if all tools and debug visualizations are turned off and all that remains is a bunch of integer hash IDs, assets can then be “locked” for release. All hashed values can be translated directly into asset IDs such that no run-time translation is ever needed. For me specifically I haven’t quite thought how to implement such a system, and decided this level of optimization does not really give me a significant benefit.

Parting Thoughts

There are a couple of downsides to doing this style of compile-time preprocessing:

  • Additional complexity in the build-step
  • Layer of code opacity via SID macro

Some benefits:

  • Huge optimization in terms of memory usage and CPU efficiency
  • Can run a switch statement on SID strings
  • Uniquely identify assets in-code and on-disk without costly or complicated translation

If the costs can be mitigated through implementing some kind of code pre-processor/bundler early on then it’s possible to be left with just a bunch of benefits :)

Finally, I thought it was super cool how hashes like djb2 and FNV-1a use an initializer value to start the hashing, typically a carefully chosen prime. This allows hashing a prefix string, then feeding the result in as the seed when hashing the suffix. Mick explains in his article how this idea of combining hashed values is a useful feature for supporting tools and assets. This can be implemented at both compile-time and run-time (though I haven't quite thought of a need to do this at compile-time yet):
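
For example, 64-bit FNV-1a with an explicit seed parameter (the constants are the standard FNV ones):

    typedef unsigned long long sid_t;

    #define FNV1A_BASIS 14695981039346656037ULL
    #define FNV1A_PRIME 1099511628211ULL

    // Passing a previous hash in as the seed chains the two hashes together:
    //   fnv1a("file.png", fnv1a("textures/", FNV1A_BASIS))
    // gives the same result as hashing "textures/file.png" in one go.
    sid_t fnv1a(const char* str, sid_t seed)
    {
        sid_t h = seed;
        while (*str)
        {
            h ^= (unsigned char)*str++;
            h *= FNV1A_PRIME;
        }
        return h;
    }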

 

I hate the C++ keyword auto

Warning: rant post incoming!

I'm not really sure what other programmers are thinking about or focusing on when they read code, but personally I'm thinking mostly about the data being operated on. One of the first things I like to figure out when reading new code is whether any C++ constructors, destructors, or other code-generating features are being used on the data types at hand.

If a piece of code is operating on only integers usually it becomes easy to have a grounded concept in what kind of assembly is to be generated. The same goes for code operating on just floats, or in general just POD data. The moment C++ features are used that generate large amounts of code at compile-time “surprises” can pop out in the form of constructor code, destructor code, implicit conversions, or entire overloaded operators.

This bugs the crap out of me. When I look at code I rarely want to know only what the algorithm does; I also need to know how the algorithm operates on the given data. If I can't immediately know what the hell is going on inside of a piece of code, the entire thing is meaningless to me. Here's an example:
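
Something along these lines (SortByName and StoreResults are whatever functions the surrounding code provides; foos is just some array of data):

    auto results = SortByName(foos, foo_count);

    for (auto i = results.begin(); i != results.end(); ++i)
    {
        StoreResults(*i);
    }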

Please, tell me what the hell this for loop is doing. What does that = operator do? Is there a possible implicit cast when we pass results to the StoreResults function? What is .begin, what is i, and how might the ++ operator behave? None of these questions can be answered [to my specific and cynical way of thinking about C++] since this code is operating on purely opaque data. What benefit is the auto keyword giving here? In my opinion, absolutely none.

Take a non-auto example:
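
Something like this (the exact layout of Results doesn't matter; the point is that it's spelled out and can be looked up):

    Results results = SortByName(foos, foo_count);

    for (int i = 0; i < results.count; ++i)
    {
        StoreResults(results.entries + i);
    }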

In the above code it is painfully obvious that the only opaque pieces of code are functions, and the functions are easy to locate at a glance. No unanticipated code can be generated anywhere, and whenever the user wants to look up the Results struct to make sure there are no "C++ loose ends" in terms of code-flow, a simple header lookup to find the Results declaration is all that's needed. I enjoy when code is generated in a predictable way, not in a hidden or hard to reason about way. The faster I can reason about what this code will do at run-time, the faster I can forget about it and move onto something else more important.

Code with unnecessary abstractions bugs me since it takes extra time to understand (if an understanding can even be reached), and that cost had better come with a great benefit, otherwise the abstraction nets a negative impact. In the first example it's not even possible to look up what kind of data the results variable represents without first looking up the SortByName function and checking its return type.

However, using auto inside of (or alongside) a macro or template makes perfect sense. Both macros and templates aim at providing the user with some means of code generation, and are by-design opaque constructs. Using auto inside of an opaque construct can result in an increase in the effectiveness of the overall abstraction, without creating unnecessary abstraction-cost. For example, hiding a giant templated typename, or creating more versatile debugging macros are good use-cases for the auto keyword. In these scenarios we can think of the auto keyword as an abstraction with an abstraction-cost, but this cost is side-stepped by the previously assumed abstraction-costs of templates/macros.
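
For example (Print is just a stand-in for whatever the code does with each element):

    // Inside an already-generic construct, auto merely hides an ugly dependent
    // typename (something like typename Map::const_iterator) without hiding
    // anything the reader didn't already expect to be generic.
    template <typename Map>
    void DumpKeys(const Map& m)
    {
        for (auto it = m.begin(); it != m.end(); ++it)
            Print(it->first);
    }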

Please, stop placing auto in the middle of random every-day code. Or better yet, just stop using it at all.

Compute Basis with SIMD

A while back Erin Catto made a cool post about his short function to compute a basis given a vector. The cool thing was that the basis is axis aligned if the input vector is axis aligned.

Catto noted that the function can be converted to branchless SIMD by using a select operation. So, a while ago I sat down and wrote the function, and only used it once. It didn't really seem all too helpful to have a SIMD version, but it was fun to write anyway (I made use of Microsoft's __vectorcall):
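
A sketch of the idea looks like this (not the exact function from back then; it assumes a normalized input with a zeroed w lane, and uses the approximate reciprocal square root):

    #include <xmmintrin.h>

    // Branchless basis computation in the spirit of Catto's ComputeBasis.
    // Input a = (x, y, z, 0), assumed normalized. Outputs b and c so that
    // {a, b, c} forms an orthonormal basis, using an SSE select, no branches.
    void __vectorcall ComputeBasis(__m128 a, __m128* b_out, __m128* c_out)
    {
        // |x| by clearing the sign bit, splatted across all lanes
        __m128 abs_a = _mm_andnot_ps(_mm_set1_ps(-0.0f), a);
        __m128 ax = _mm_shuffle_ps(abs_a, abs_a, _MM_SHUFFLE(0, 0, 0, 0));

        // candidate0 = (y, -x, 0, 0), used when |x| >= 0.57735 (~1/sqrt(3))
        __m128 c0 = _mm_mul_ps(_mm_shuffle_ps(a, a, _MM_SHUFFLE(3, 3, 0, 1)),
                               _mm_set_ps(0.0f, 0.0f, -1.0f, 1.0f));

        // candidate1 = (0, z, -y, 0), used otherwise
        __m128 c1 = _mm_mul_ps(_mm_shuffle_ps(a, a, _MM_SHUFFLE(3, 1, 2, 3)),
                               _mm_set_ps(0.0f, -1.0f, 1.0f, 0.0f));

        // select: b = mask ? candidate0 : candidate1
        __m128 mask = _mm_cmpge_ps(ax, _mm_set1_ps(0.57735027f));
        __m128 b = _mm_or_ps(_mm_and_ps(mask, c0), _mm_andnot_ps(mask, c1));

        // normalize b (lane 3 is zero by construction); rsqrt is approximate
        __m128 b2 = _mm_mul_ps(b, b);
        __m128 len2 = _mm_add_ps(_mm_add_ps(
            _mm_shuffle_ps(b2, b2, _MM_SHUFFLE(0, 0, 0, 0)),
            _mm_shuffle_ps(b2, b2, _MM_SHUFFLE(1, 1, 1, 1))),
            _mm_shuffle_ps(b2, b2, _MM_SHUFFLE(2, 2, 2, 2)));
        b = _mm_mul_ps(b, _mm_rsqrt_ps(len2));

        // c = cross(a, b)
        __m128 a_yzx = _mm_shuffle_ps(a, a, _MM_SHUFFLE(3, 0, 2, 1));
        __m128 b_yzx = _mm_shuffle_ps(b, b, _MM_SHUFFLE(3, 0, 2, 1));
        __m128 c = _mm_sub_ps(_mm_mul_ps(a, b_yzx), _mm_mul_ps(a_yzx, b));
        c = _mm_shuffle_ps(c, c, _MM_SHUFFLE(3, 0, 2, 1));

        *b_out = b;
        *c_out = c;
    }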