Category Archives: Reflection

Sane Usage of Components and Entity Systems

With some discussion going on in a previous article about how to actually implement some sort of component system for a game engine, without vague theory or dogma, a need arose for some higher level perspective, and so this article was born.

In general an aggregation model is often useful when piecing together bits of functionality or data to create something new. The ability to do so is very useful for writing game-specific gameplay code due to the flexibility granted by aggregation. However as of late there’s been tremendous talk about OOP, Entity Systems, Inheritance, and blah blah blah within the online indie development community. More and more buzzwords get tossed around by big name writers, and the audience really just looks for some guidelines to follow in hopes of writing good code.

Sadly there isn’t going to be a set of step by step rules for writing a game engine or coming up with a good architecture. Like many have said before me, writing a game is a specific task requiring specific solutions. Why do you think game engine developers such as Epic or the Unity guys have so many people working on the product? Because a generic game engine is a huge piece of software that requires a lot of features. Some features exist simply to let users add in custom features easily.

Components, aggregation, Entity Component Systems, Entity systems, these are just words and have various definitions (depending on who you ask).

Some Definitions

To hopefully avoid silly arguments and confusion let’s define some terms. If you don’t like the definitions here feel free to say so; I’m all up for criticism and debate.

    • Component Based Architecture
      • A preference for aggregation over inheritance. Is just a concept and does not lead to a single specific implementation. A game object is a collection of components. A component defines data and/or functionality for a concept.
    • Entity Component System (ECS)
      • A specific implementation of Component Based Architecture. A game object would be an ID (an integer). The ID is used to form an aggregate. Usually an ECS implies an implementation similar to a database, where components are entries into a database that are looked up through some identifier. The main goals of this implementation are efficiency and simplicity. Often times the term “ECS” is used just to describe a Component Based Architecture, often leading to confusion.
    • Aggregation
      • I like to think of this as a “has-a” relationship over an “is-a” relationship. Aggregation refers to one object “having” another object, which implies an aggregate is a collection (data structure) of other objects.

Some Truth and History

Aggregation is useful from a game design perspective. It frees functionality from arbitrary classification (classes and inheritance). Classes were originally created in C++ to let a programmer tie together a piece of data and some functionality to represent some sort of real-life concept. This is in simplest terms the essence of Object Oriented Programming (OOP). Over time more features were added to help engineer relationships between classes, one such feature came in the form of inheritance.

There’s nothing inherently wrong with OOP and it makes sense in a lot of code. Problems can arise when OOP is misapplied in ways whose implications aren’t fully understood at the time of implementation, causing negative effects down the road. I’m sure we’ve all seen the code migration and mega-class example so commonly thrown around in articles arguing against OOP and inheritance abuse.

In response to such an abuse a new paradigm became popularized which focused on aggregation of functionality to form an object. This might be called a “component based architecture”. In general aggregation can be considered an appropriate alternative to inheritance.

OOP Diatribe

Usually when an article spews forth caustic attacks against OOP it’s directed at naive implementations that disregard implications of how memory is accessed. Perhaps in the past the bottleneck of most everything was processor speed, so a lot of literature focuses on this. Nowadays CPUs on the PC have an architecture with ridiculous computational power but comparatively limited memory access. In general one might consider accessing memory from RAM to be roughly 300 times slower than multiplying two floats together. Of course this last statement is extremely anecdotal and offered without evidence, but it exists just to give a rough perspective of reality in many current (2014) cases.

If objects with associated code (classes) are just allocated and deallocated on the heap at will then a performance bottleneck of memory access is going to rear its ugly face, likely long before other performance issues are even on the radar. This is where much of the diatribe comes from.

It should be noted that pretty much all code bases that make use of the C++ language use classes and structures in some form or another. As long as a programmer has an understanding of memory, how it’s accessed, and what implications arise from given implementations, nothing will go wrong. Alas, actually doing these things and writing good code is super hard. It doesn’t matter if a class has some implementation code within it, so long as that bit of code makes sense for the purposes it is serving.

Implementing Components, a First Draft

The most immediate implementation would be to make use of multiple inheritance. This has a clear definition of where the data goes, and it all goes in one class: the derived class. Multiple inheritance itself can get a bit tricky when dealing with pointer typecasting between derived and base types, though the C++ language itself handles the details much of the time.

Inheritance alone doesn’t provide a good mechanism to query whether a base class is a part of a specific derived aggregation, and so the dynamic cast operator is born. Since the dynamic cast is a branching operation, usually implemented (afaik) by inspecting the vtable, it is avoided in general.

Multiple inheritance also does all sorts of work to member function pointers, and is just a sad part of C++. Additionally there isn’t any language feature that allows for dynamic dispatch for combinations of base classes, so if the need arises a custom solution will need to be implemented anyway.

Memory accessing, although defined, isn’t ideal. Multiple inheritance forms a blob of different data, and usually only a single piece of the blob is needed at any given time, meaning locality of reference will be poor in general. This leads to the idea of inheriting from multiple interfaces in order to decouple memory aggregation from functionality aggregation, which leads to the next draft.

Second Draft – Run-Time Aggregation

Instead of using multiple inheritance on interfaces, which is a compile-time feature, run-time support can be added. Object aggregates can be formed during run-time, and modified thereafter. This is appealing for data driven applications, and game-design friendly development iteration speed.

So let’s assume that some programmer wants to implement components, but doesn’t think much about memory access patterns or the implications therein. Using a vector of pointers an implementation of components becomes super simple. Each pointer can point to an interface exposing a few functions like Update, Init and Shutdown.

Searching for a particular component is as simple as linearly looping over each pointer until a matching type is found. If these pointers are ordered in some way a search can be performed, perhaps a binary search could suffice. If the identifier of a component is hashable a hash table lookup can be used.
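
A minimal sketch of this draft might look like the following (Component, GameObject and the name-based lookup are illustrative placeholders, not code from any particular engine):

#include <cstring>
#include <vector>

// Hypothetical interface every component derives from.
struct Component
{
  virtual ~Component( ) { }
  virtual void Init( ) { }
  virtual void Update( float dt ) = 0;
  virtual void Shutdown( ) { }
  virtual const char *GetName( ) const = 0; // identifier used for lookups
};

// A game object is just an aggregate of heap-allocated components.
struct GameObject
{
  std::vector<Component *> components;

  void AddComponent( Component *c ) { components.push_back( c ); }

  // Linear search by identifier; a sorted vector or hash table could
  // replace this if lookups ever become a bottleneck.
  Component *GetComponent( const char *name )
  {
    for ( size_t i = 0; i < components.size( ); ++i )
      if ( std::strcmp( components[ i ]->GetName( ), name ) == 0 )
        return components[ i ];
    return NULL;
  }

  void Update( float dt )
  {
    for ( size_t i = 0; i < components.size( ); ++i )
      components[ i ]->Update( dt );
  }
};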

The implementation so far is an excellent one except that there is no definition of how memory is allocated and accessed! In the most naive of implementations each game object and each component will be allocated on the heap with separate calls to malloc.

Despite having no clear memory definition there are some nice benefits that have arisen. Data driving the composition of an aggregate becomes quite trivial as each component of an aggregation can have an entirely isolated lifetime. Adding, removing, modifying, or even creating new components at run-time are all now possibilities. This dynamic aggregate architecture is great for improving game development and design iteration time!

Aggregation and Components and the Entity System Paradigm (ES/ECS)

As stated in the definitions section, an ECS is just a specific implementation of a component based architecture. A component based game engine architecture, by contrast, might just be a custom run-time implementation of multiple inheritance, as in the drafts above. A clearly defined ECS can impose restrictions on how a component architecture is implemented and used in hopes of avoiding poor memory access patterns, or in hopes of keeping code simple and orderly.

If a component is designed as a piece of memory without any code, and a game object defined as an integer ID then performance specifications can be easily imposed. Rules about where in memory components lay, and how components are actually accessed can be clearly defined in simple terms. Code can be written that operates upon arrays of components, transforming arrays linearly. This idea is actually a type of Data Oriented Design (DOD), which makes sense as DOD is just an idea! ECS is an application of the idea of DOD.
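
Here is a minimal sketch of that idea (EntityID, Transform and Velocity are made-up names; a production ECS normally adds indirection so the arrays stay dense):

#include <vector>

typedef unsigned EntityID; // a game object is just an integer

// Components are plain pieces of memory, no code attached.
struct Transform { float x, y; };
struct Velocity  { float vx, vy; };

// Parallel arrays indexed by entity ID.
std::vector<Transform> transforms;
std::vector<Velocity> velocities;

// A "system" transforms whole arrays linearly: cache friendly, easy to
// prefetch, and straightforward to split across threads.
void IntegratePositions( float dt )
{
  for ( size_t i = 0; i < transforms.size( ); ++i )
  {
    transforms[ i ].x += velocities[ i ].vx * dt;
    transforms[ i ].y += velocities[ i ].vy * dt;
  }
}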

So with this type of implementation the benefits of dynamic composition can be paired with well-defined memory layout and access patterns. Suddenly prefetching and parallelism become much simpler to support.

Aggregatize all the Things!

There’s a problem. Blindly shoving the idea of an ECS implementation into every nook and cranny of an engine during development is just silly (or any complex system, not just game engines or libraries). Often times a particular system is not best implemented with a component or aggregate paradigm in mind.

An obvious case is that of a physics engine. Often times a physics engine developer is worried about collision detection, solving systems of linear equations, rigid body mechanics and allowing the engine to easily be integrated into existing code bases. These details involve a lot of math and good API design. A developer of a physics engine is going to have their focus employed in full force in solving problems specific to physics engines. This means that the engineer’s focus is finite, so the implementation that is best is one that the engineer can actually bring to completion. An implementation that can come to completion is one that makes sense for the specific details of whatever is going on inside the physics engine. The specific paradigms used are often not aggregation or component based!

In order for a physics engine to run fast it needs to have efficient memory access patterns and memory usage, which on modern PC hardware requires some form of DOD. Since this complex (often black boxed) physics engine will have its own specific implementation and optimization it doesn’t make sense to force a component based model to its very core with some sort of idealistic zeal. It gets really bad when strict rules are imposed (like banning all code from classes and structures that define components) on the component model (like with an ECS) and the rules start permeating the deep recesses of the entire code base.

The same thing goes for any sort of complex system. The core facilities of a game engine often times just don’t really care about components or aggregation. This means that an engine architecture that implements components will usually have to deal with middleware graphics/physics engines/libraries that don’t subscribe to a component based model (simply because it’s easier to use a library than to write your own custom things, especially if those custom things religiously follow some silly methodology like ECS or even OOP). In practice light wrapper components can be created to let the functionality of such systems be presented in a component format, ready to be used in an aggregate object.

What does this all mean? What should we all do?

Use components where it makes sense in code. Use inheritance where it makes sense in code. Use databases where they make sense. Use all the things where they should. This is a pretty sad answer but it’s the right one. There is no silver bullet paradigm that solves all the problems in the game engine architecture world, and there are no steps to follow to achieve a result that works in all cases. Specific problems require specific solutions. Good code is hard to write, and will require a lot of judgement calls. In order to make good judgement calls a lot of experience and perspective is required.

I recommend using aggregation where it really matters. Dynamic aggregation is important for gameplay specific code. Gameplay specific code, in this article, would refer to code that would not easily apply or work at all in a different game. It’s code that is your game and doesn’t define an isolated system or functionality.

Dynamic aggregation and the component based model are extremely important for game and object editors. Game design flourishes best when iteration times are driven to zero, and the ability to create new things from a composition of fundamentals is very valuable! Clearly composition is useful, but how it’s to be used is the hard part.

What Components to Make?

I recommend making components concerned with providing access to game-independent functionality to be quite large. Every 3D game engine has a concept of a mesh, and will usually have some sort of file format to associate with, like FBX. Every 2D game engine will have the concept of a sprite. Each game using Box2D will have colliders and rigid bodies, and possibly joints. These fundamental pieces of functionality don’t change very often, so static compile-time relationships aren’t a bad thing since iteration time isn’t really all that relevant.

A 3D game might have a single Mesh component for example. A Mesh component can have renderable vertices, and possibly all the skeletal and animation information as well. There may be a single Rigid Body component, which encapsulates the idea of colliders or shapes, as well as the functionality of rigid body mechanics. The Rigid Body component might even contain all necessary code and data to hold multiple joints! Or joints may be a component themselves.

For high level and gameplay related features components can become much more granular (or not if you so choose). Gameplay should be iterated, tested and changed frequently, so having small and decomposed components will probably make a lot of sense in a lot of cases. Large components that encompass more broad ideas will be useful in many cases too. Even in the gameplay world judgement calls are essential.

Usually efficiency isn’t so important for much gameplay code, so any implementation that is decently performant will suffice. Scripting languages, dynamic memory allocation and virtual dispatch, or what have you can all work. The decisions of what requires flexibility, what requires performance and all between can be difficult to make. Please see the references section for some concrete examples.

Further Readings

We live in a world of opinions and it takes time to sift through them! If you have recommendations please comment below :)

Reference Source Code

The best reference I know of is an open source game engine in progress I myself am developing. Please do send me your recommendations!

Automated Lua Binding


Welcome to the fifth post in a series of blog posts about how to implement a custom game engine in C++. As reference I’ll be using my own open source game engine SEL. Please refer to its source code for implementation details not covered in this article. The folder of interest would be LuaInterface.


Introduction

Binding things to Lua is twofold: objects and functions must be able to be sent to and retrieved from Lua. Functions can be either static C or struct/class methods. Objects can be sent “by value” or “by reference”. As you can imagine it is important to be able to unify and simplify the binding process as much as possible to reduce all manual dev-work and upkeep.

Generic C++ Functor

As with many things in a modern C++ game engine it is critical to have a generic C++ functor. Ideally this functor can wrap around class/struct methods (not only static functions). It is also possible to have this functor refer to a function within Lua as well.

Please see my article and slides on C++ Function Binding for implementation details not covered here.

Prerequisites

This article is on the topic of automatic Lua binding; if you’re unfamiliar with how to bind simple C functions to Lua please do a little research and come back later. The deep end of the pool is actually pretty deep!

I also suggest a working knowledge of C++ templates before trying to implement these sort of features. A working knowledge of Lua is also essential.

Setting the Boundaries

With a scripting language it’s important to clearly define what you want to expose to script. Is the entire game in Lua? Are only specific parts accessible? What are the boundaries? It’s all too easy to get very caught up in what to send, what to implement, and what not to do. Having clear boundaries of exactly what you want to do is the best way to start coding.

Passing Objects to Lua

Objects can be passed to Lua by reference or by value. A reference consists of a pointer-sized piece of memory (4 bytes on a 32-bit build) containing the address of some C++ memory. This allows Lua to store a “reference” to an object in C++. Most of the work involved in this type of object binding is in allowing Lua to call C++ methods or functions on the pointer it’s storing.

The benefits of this approach are: calling class methods is pretty fast and shouldn’t be a worry; it is fairly simple to implement, as most of the work is finished by creating a generic functor in C++; and there is no hassle or upkeep when wanting to send new types of objects to Lua, since each object is just a pointer.

Passing by Reference with lightuserdata

There are two ways I’d recommend passing an object to Lua: userdata and lightuserdata. A lightuserdata represents a void * in Lua and can hold a reference to an object in C++.

Here’s how one might send and retrieve lightuserdata from Lua:
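
A minimal sketch using the standard Lua C API (SomeObject and the global name are stand-ins for whatever you want to expose):

#include <lua.hpp>

struct SomeObject { int value; };

// Send a reference to a C++ object into Lua as a global lightuserdata.
void PushObject( lua_State *L, SomeObject *object )
{
  lua_pushlightuserdata( L, object );
  lua_setglobal( L, "some_object" );
}

// Retrieve it from the stack inside a bound C function.
int GetValue( lua_State *L )
{
  SomeObject *object = (SomeObject *)lua_touserdata( L, 1 );
  lua_pushnumber( L, (lua_Number)object->value );
  return 1; // number of return values
}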

This method is very fast, simple to implement and has very minimal memory overhead. Additionally lightuserdata can be compared to one another, and are equal if the underlying address is equivalent. However, one cannot attach metatables to lightuserdata and there is no sense of type safety whatsoever. A lack of type safety means that if someone passes a lightuserdata into an incorrect C function the host program will likely crash.

With lightuserdata the following code is possible:

This solution will work for one, maybe two people working on a smaller project or minimal amount of code. I can imagine that the lack of type safety will be the biggest issue as time goes on.

Reflection for Type Safety

It is possible to implement type-safety in Lua. However this requires Lua code to be maintaining type information. Lua is a scripting language meaning it ought best be used to script things. Something so integral and common as type-safety might better be implemented in lower-level C++ code.

Implementing type safety on the C++ side has two benefits: efficiency of implementation; type-safety can optionally be compiled away in release mode.

I highly recommend building yourself a simple, custom introspection library in C++. All that is really needed to start is the ability to query a type’s size and name efficiently. Please see my older article on custom Introspection or the game engine SEL for examples on how to implement such a system.

With a simple macro-based registration system one can register and lookup type information via introspection like so:
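
The exact shape depends on your introspection library; here is a tiny self-contained sketch of the idea (TypeInfo, REGISTER_TYPE and META_TYPE are illustrative names, not the SEL interface):

#include <cstddef>

// The bare minimum of type information: a name and a size.
struct TypeInfo
{
  const char *name;
  size_t size;
};

// One TypeInfo instance exists per registered type.
template <typename T>
TypeInfo *GetTypeInfo( void )
{
  static TypeInfo info;
  return &info;
}

#define REGISTER_TYPE( T )                 \
  do {                                     \
    GetTypeInfo<T>( )->name = #T;          \
    GetTypeInfo<T>( )->size = sizeof( T ); \
  } while ( 0 )

#define META_TYPE( T ) (GetTypeInfo<T>( ))

// Usage:
//   REGISTER_TYPE( int );
//   TypeInfo *info = META_TYPE( int ); // info->name is "int"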

After this is complete and working (if you don’t have an implementation of introspection yet this is fine, just think of it as a black box) a small generic Variable object ought to be created. Sample code of a functional Variable object is in this post.

A Variable can be used like so:
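
A sketch of what usage might look like, assuming a minimal Variable built on the TypeInfo/META_TYPE sketch above (the real interface lives in the linked post and in SEL):

// A Variable is just a void pointer plus a TypeInfo describing it.
struct Variable
{
  void *data;
  TypeInfo *type;

  template <typename T>
  Variable( T &value ) : data( &value ), type( META_TYPE( T ) ) { }

  template <typename T>
  T &GetValue( void ) { return *(T *)data; }
};

// Usage -- note that only the constructor is templated, not the Variable:
//   int health = 100;
//   Variable var( health );
//   var.GetValue<int>( ) += 10;   // health is now 110
//   TypeInfo *type = var.type;    // describes "int"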

It is important to note that the Variable itself is not a templated type!

When passing an object to Lua we can send a pointer to a Variable. As long as the Variable exists in memory in C++ the lightuserdata within Lua will point to a valid Variable. Upon retrieval of the Lua object back to C++ a type assertion can be run:
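
Something along these lines, reusing the Variable and META_TYPE sketches from above (the assert is illustrative; a real engine would report a script error instead):

#include <cassert>

int SomeBoundFunction( lua_State *L )
{
  // The lightuserdata pushed from C++ comes back as a void * to a Variable.
  Variable *var = (Variable *)lua_touserdata( L, 1 );

  // Type assertion: the Variable must describe the type this function expects.
  assert( var->type == META_TYPE( int ) );

  lua_pushnumber( L, (lua_Number)var->GetValue<int>( ) );
  return 1;
}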

Generic Static Function Binding

Binding C-style static functions in a generic way makes heavy use of custom introspection. The way I was originally taught was to just throw the entire binding function (in C++) at you all at once and let you suffer. Prepare to suffer as I did!
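
Here is a condensed sketch of the kind of function that ends up bound. GenericFunctor, MAX_ARGS and the FromLua/ToLua helpers are hypothetical stand-ins for the functor machinery described in the C++ Function Binding article (the Variable here is assumed to have grown a default constructor plus those two helpers), while the lua_* calls are the standard Lua C API:

// Hypothetical functor interface wrapping any C function or C++ method.
struct GenericFunctor
{
  virtual ~GenericFunctor( ) { }
  virtual Variable Call( Variable *args, int argCount ) = 0;
};

static const int MAX_ARGS = 8;

// Every C++ function bound to Lua is routed through this single lua_CFunction.
int GenericCall( lua_State *L )
{
  // Retrieve the functor stored as an upvalue when the closure was bound.
  GenericFunctor *functor =
    (GenericFunctor *)lua_touserdata( L, lua_upvalueindex( 1 ) );

  // Pull each Lua argument into a Variable.
  int argCount = lua_gettop( L );
  Variable args[ MAX_ARGS ];
  for ( int i = 0; i < argCount && i < MAX_ARGS; ++i )
    args[ i ].FromLua( L, i + 1 );

  // Invoke the wrapped function and push its return value (if any) to Lua.
  Variable ret = functor->Call( args, argCount );
  return ret.ToLua( L ); // number of values returned to Lua
}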

This function isn’t doing the bind, it’s what is bound. Every time a function in C++ is called from Lua, this function is called first.

An upvalue in Lua is akin to static variables in C. Using this we can attach a pointer to a generic functor to a bound C function within Lua. As Lua calls a C function this upvalue is retrieved and eventually used to actually call the C function.

The rest is just a matter of handling variables to/from Lua. In the above example the Variable object contains some helper functions called ToLua and FromLua. The nice thing about my implementation of this within SEL is that no heap memory is used during this entire process! All this code boils down to a very efficient method of generically calling C functions.

I will leave binding C++ methods as an exercise for the reader. By now you ought to have an idea of where to look for example implementation! The idea is to handle type information for the “this pointer” of the method, and pass around an actual “this pointer” to call the method.

Calling Methods from Lua

Let’s say you have an implementation that allows Lua code like the following:

A few things need to happen here. The first is that the object in question should only call methods that are actually methods of that specific type of class; one cannot simply bind all C++ methods and place functions in Lua within the global scope. Any object type could call any method type making for a lack of type-safety and dangerous code.

At this point the lightuserdata will have to be upgraded to a full userdata. Full userdata in Lua enjoy benefits such as the ability to set and modify metatables. If you’re not familiar with Lua metatables please do a little research on the topic and come back later.

A full userdata allows us to place a copy of a Variable within Lua memory, instead of just a void *. This means a temporary Variable can be used to call ToLua, instead of requiring that the Variable sent stays valid in C++ for the duration of usage within Lua.

Currently a way to create metatables for all of our C++ types is required. Assuming a linked list of all TypeInfo objects from the introspection system is available:
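
A sketch of that loop, assuming each TypeInfo carries a name and a next pointer (luaL_newmetatable and the stack manipulation are the standard Lua auxiliary/C API):

void CreateMetatables( lua_State *L, TypeInfo *head )
{
  for ( TypeInfo *type = head; type; type = type->next )
  {
    // Create (or fetch) a metatable stored in the registry under the
    // type's string name; it is left on top of the stack.
    luaL_newmetatable( L, type->name );

    // Point the metatable's __index at itself so bound methods are found.
    lua_pushvalue( L, -1 );
    lua_setfield( L, -2, "__index" );

    lua_pop( L, 1 );
  }
}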

This loop is just creating metatables given the string names of what each metatable should be called.

After the tables are created the actual C++ methods and functions should be bound. This turns out to be really simple! It is assumed that each function and method registered within the introspection system can be passed to the function at some point (perhaps during registration of the type information):
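
For example, a registration hook might look roughly like this (BindMethodToLua is a made-up name; it relies on the GenericCall/GenericFunctor sketch from earlier):

// Bind one wrapped method into the metatable of the type that owns it.
void BindMethodToLua( lua_State *L, const char *typeName,
                      const char *methodName, GenericFunctor *functor )
{
  // Fetch the metatable created earlier for this type.
  luaL_getmetatable( L, typeName );

  // Store the functor as an upvalue of a closure around GenericCall.
  lua_pushlightuserdata( L, functor );
  lua_pushcclosure( L, GenericCall, 1 );

  // metatable[ methodName ] = closure
  lua_setfield( L, -2, methodName );

  lua_pop( L, 1 );
}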

And that’s really all there is to it! The idea here is to make sure that a type with methods sent to Lua has its userdata fixed with a metatable containing the available methods to call. When the __index metamethod is called it will search within the metatable itself for an appropriate member. Members of the metatable are the functions we bound to Lua. After they are fetched they can be called. This is what happens behind the scenes when we do:

Passing Object by Value

Passing objects by value is actually much more difficult. The idea is to utilize tables to store representations of the members associated with a class or struct. A table can be used to represent the state of an object.

The __index and __newindex metamethods of a userdata should be set to look into the state table first. This lets users assign new values, and lets your ToLua and FromLua functions copy members from C++ to/from this Lua state table.

If a member is not found in the state table the metatable can then be searched by setting the __index metamethod of the state table to refer to the proper metatable.

All of this table indirection does incur significant overhead, however it allows objects in Lua to be used like so:

I myself have not implemented this type of Lua binding, though it is entirely possible and can be quite nice to work with. I reiterate that adding this many tables incurs both memory and performance overhead not seen with the other styles. This seems to be the only drawback.

Conclusion

Well this post turned out longer than I expected -over 2k words! Hopefully the information was clear. It’s really nice being able to refer people to a complete and working example such as the SEL engine; it makes writing articles much easier and simpler.

Hopefully this can help someone out there! As always feel free to ask questions or provide comments right here on this page.

Please see Game Programming Gems 6 ch. 4.2 for more information about binding C++ objects to Lua.

C++ Enumeration Reflection


Welcome to the third post in a series of blog posts about how to implement a custom game engine in C++. As reference I’ll be using my own open source game engine SEL. Please refer to its source code for implementation details not covered in this article. Files of interest are EnumData.h, Enum.cpp and Enum.h.


Crazy Viking Studios

Let’s thank the Crazy Viking Studios guys for their generous contribution of knowledge on this topic! One day as a student I emailed them about the enumeration editing in their awesome editor for Volgarr the Viking. They responded with a bunch of source code in a demo! The techniques here were learned from Taron, their programmer.

Introduction

Enumerations in C++ are a pretty nice feature. They provide type safety and a very readable way to name a lot of various types of constants. However there could be so much more added on top of enumerations in C++ to make them extremely useful.

Let’s take a trip through our imagination and imagine a game editor. In this editor you can create arbitrary constants with a name and associated integral value. This would be great for some sort of scripting or game logic.

This here can be implemented in C++ (during coding time as a compile-time constant) through enumerations. However there are some features that can be added to this to allow an editor to manipulate things:

  • Add new entries
  • Modify existing entries
  • Delete entries

Basic Enumeration Editing

In order for an editor to manipulate this information within a C or C++ file some run-time memory is required to store a representation of the actual enumerations in code. Data tables (structs) will work well for this. Let’s imagine a structure to contain one of these enumerations; we’ll need string representations of all of the enumeration entries:
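
A sketch of such a struct (the container choices are just one reasonable option):

#include <string>
#include <unordered_map>
#include <vector>

// Run-time representation of one C++ enumeration.
struct Enum
{
  std::string name;                         // e.g. "Spells"
  std::vector<std::string> entries;         // index == enumeration value
  std::unordered_map<std::string, int> map; // string -> value lookups

  // enum -> string is a constant-time index into the vector.
  const std::string &ToString( int value ) const { return entries[ value ]; }

  // string -> enum goes through the small hash table.
  int FromString( const std::string &entry ) const
  {
    return map.find( entry )->second;
  }
};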

If an Enum instance were created to contain identical string representations of the entries with the Spells enum, then a constant-time conversion of enumeration to string could be achieved just by indexing the Enum vector with a value.

Converting a string back to an enumeration would best be done with a small hash table. This will keep string to enum conversions const-time.

Much to be Desired

This is all fine and good, however if a user creates a new entry as a string this won’t update the actual C++ enumeration entries -new entries only exist until the editor shuts off. Additionally there isn’t an easy way to lookup a particular Enum struct. It would be nice to be able to lookup an Enum struct in various ways, such as by string name or template type. It would also be cool to be able to serialize enumerations to/from file.

It might be fairly simple to actually modify the source code containing a particular enumeration in C++ whenever an entry is modified, deleted or added. This would let programmers actually use enumerations created in an editor within their code (after a recompile). It is also possible to hook up the new entries to be loaded in the Enum struct as string literals.

Automation

As you can imagine a lot of manual labor is going to be needed in order to upkeep all of this crazy editing and modifying of enumerations. Some generalization and automation is needed to keep dev-work at an absolute minimum.

This is the time when I reference an old project I created to demonstrate a simple idea for serialization in C. The trick is to use a source file and include it multiple times with various macro definitions. The source file to be included fills out the macros, but the macros are interpreted differently depending on when it was included. This allows you to write data files and interpreters using the preprocessor.

This is exactly what we need for building up some automated reflection and editing of enumerations.

Imagine a data file like so (a header without multiple inclusion guards and some macro invocations):
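
Something like the following, where the macro names and the Spells entries are made up for illustration:

// EnumData.h -- intentionally has no include guard; it is included many
// times, and ENUM_NAME/ENUM_ENTRY/ENUM_END mean something different each time.

ENUM_NAME( Spells )
  ENUM_ENTRY( Fireball )
  ENUM_ENTRY( Frostbolt )
  ENUM_ENTRY( Heal )
ENUM_END( Spells )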

Let’s take this data file and create a normal enumeration:
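
A sketch of one such interpretation, continuing with the hypothetical macro names above:

// Interpret the data file as a plain C++ enumeration.
#undef ENUM_NAME
#undef ENUM_ENTRY
#undef ENUM_END

#define ENUM_NAME( NAME ) enum NAME {
#define ENUM_ENTRY( ENTRY ) ENTRY,
#define ENUM_END( NAME ) NAME##Count };

#include "EnumData.h"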

As you can see the macros from the data file are going to interpret the data as an enumeration. It is important to just always #undef all the macros in case they were previously defined.

After the preprocessor runs and the macros expand we will end up with something like:
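
Roughly this (formatting added for readability):

enum Spells {
  Fireball,
  Frostbolt,
  Heal,
SpellsCount };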

Now the key part comes with defining the macros again to interpret the data in an all new way. Here’s an example to automate the creation of the Enum struct containing string literals:
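
A sketch of that second interpretation; it assumes the Enum struct has a constructor taking a name and a NULL-terminated array of string literals:

// Reinterpret the same data file as string literals fed to an Enum instance.
#undef ENUM_NAME
#undef ENUM_ENTRY
#undef ENUM_END

#define ENUM_NAME( NAME ) const char *NAME##Strings[ ] = {
#define ENUM_ENTRY( ENTRY ) #ENTRY,
#define ENUM_END( NAME ) NULL };                 \
  Enum NAME##Enum( #NAME, NAME##Strings );

#include "EnumData.h"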

The idea here is to construct an array of const char * literals and pass them to the Enum struct’s constructor. The struct can loop over them until the sentinel NULL value is found. When expanded by the preprocessor this file might look like:

While the Enum struct is looping over the literals passed to it in the constructor, it can also be adding the strings to a hash table to lookup appropriate indices.

Editor Support

Now that a great scheme for automating the generation of the actual enumeration data is set up, all that is required is to make sure that an editor can easily find an appropriate C++ file to modify when entries are modified. My solution was just to cram all enumerations into a single C++ file. This C++ file is marked with a nice comment saying something like: WARNING: This file is auto-generated by the Enum Editor.

This actually works pretty well but has a single drawback: editing an enumeration causes a global recompile of the project. There are no separate namespaces or naming schemes in my own implementation, meaning that each enumeration has to be unique to avoid compilation errors.

From here it’s just a matter of writing to your C++ data file.

Tree Hierarchy

Wouldn’t it be great to be able to say “This enum is a subset of this entry”? That might have sounded confusing; here’s an example:

The idea is to allow each enumeration entry to contain an enumeration by creating a tree hierarchy.

This would be great for all sorts of game logic or general organization! It’s also possible to implement a really fast IsA function, so you could go if(type->IsA( Dragon )). Implementing this would just be a matter of traversing the tree hierarchy.

Enumeration Features

I implemented a bunch of rag-tag features in my game engine SEL and would like to cover a couple of the more useful ones. Just take a quick look at an example declaration of the Enum struct:
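
A sketch of what such a declaration might look like, expanding on the earlier Enum sketch (method names are illustrative, not the exact SEL interface):

struct Enum
{
  Enum( const char *name, const char **entries ); // NULL-terminated array

  const char *GetName( void ) const;          // e.g. "Spells"
  int GetCount( void ) const;                 // number of entries
  const char *ToString( int value ) const;    // enum -> string
  int FromString( const char *entry ) const;  // string -> enum

  // The trickier lookups discussed below.
  static Enum *GetEnumByName( const char *name );

  template <typename T>
  static Enum *GetEnumByType( void );
};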

I’m sure most readers can imagine how these methods are useful and how to implement them.

However looking up a specific enumeration by name (useful for macros) or by template type is something that is a little harder to implement. Please see SEL for a working reference on how to accomplish these. The idea is to use the multiple-inclusion trick on the data file to define some template specializations.

Serialization and Introspection Registration

Serializing enumerations should be really straightforward for both binary and string formats. For binary the numerical representations can be utilized. And string format uses the string arrays constructed at compile-time.

The rest is just a matter of writing some string to/from file routines.

Some introspection techniques rely on the user to register various types within the reflection system. In this case it turns out this registration can also be automated with multiple-file inclusion on the data file! Just define a routine to register each enumeration type. There’s not much to it!

Conclusion

I certainly hope this helps someone out there! Please do comment or ask questions right here on the post, I always enjoy reading them.

C++ Function Binding


Welcome to the first post in a series of blog posts about how to implement a custom game engine in C++. As reference I’ll be using my own open source game engine SEL. Please refer to its source code for implementation details not covered in this article.

I would like to thank John Edwards for his contribution to my education in the areas of reflection and function binding. You can thank him too by checking out their games at thatgamecompany!


Function binding in C++ is the act of being able to trigger a function given some form of input. Usually this applies to C or C++ by means of calling any function in a generic way. This can be achieved easily during compile-time in C++ by using some templates along with decltype. This is useful for:

  • Script binding
  • Advanced messaging
  • Advanced editor support
  • Many others

The idea is to capture the pointer to a function (or method) and pass its type around in code as a template parameter. An instance of the template type is created as a template constant. This is possible with the usual compilers since function pointers are compile-time constants and can be used as non-type template parameters.

The rest of the work involves creating a nice wrapper to pack arguments together and get them to the template constant pointer in a generic fashion.
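
A minimal self-contained sketch of the idea for a single-argument free function (C++11 assumed; real implementations generalize the arity and also handle methods, and the names here are made up):

#include <iostream>

// The function pointer itself is a non-type template parameter, so the
// compiler knows exactly which function to call at compile time.
template <typename Signature, Signature FunctionPtr>
struct Binding;

// Specialization for functions taking one argument and returning a value.
template <typename R, typename A, R (*FunctionPtr)( A )>
struct Binding<R (*)( A ), FunctionPtr>
{
  // A uniform entry point: arguments could just as easily come from a
  // script, a message, or an editor instead of being typed here directly.
  static R Call( A arg )
  {
    return FunctionPtr( arg );
  }
};

// decltype captures the function pointer's type for us.
#define BIND_FUNCTION( FN ) Binding<decltype( &FN ), &FN>

int Square( int x ) { return x * x; }

int main( void )
{
  std::cout << BIND_FUNCTION( Square )::Call( 5 ) << std::endl; // prints 25
  return 0;
}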

I’ve created some nice slides on the topic and some demo source code. There is one slide with a video that will not play from the pdf, I attached the video to the bottom of this post. Hope this helps someone out there.

Download (PDF, Unknown)

Source code demo is here. If you wish to view a fully featured example, please see my project SEL.

Live Enumeration Editing in C++

Taron Millet, the programmer for Volgarr the Viking, created an interesting enumeration editor for their game editor used in the creation of Volgarr the Viking. This enumeration editor sparked my interest as somehow enumerations could contain within them an enumeration type. This forms a sort of tree hierarchy of enumerations! I actually emailed Taron about the editor, and he threw together a quick demo for me! If you’d like to see the demo just email me and I can send it to you.

Imagine you have an enumeration of types of items, things like breast plates, helmets, boots. Now imagine within each enumeration, lies another enumeration. You can enumerate types of helmets, types of boots and types of breast plates. Now imagine that this tree-like hierarchy is recursive with no depth boundary!

Not only was this enumeration tree really cool, but it also could be live-edited and committed back to C++ code. This is a very interesting idea and can be applied to custom editors for C++ game engines.

I’ve created my own terminal enumeration editor for a proof of concept. Here’s a video demo:

This sort of editor could be implemented in a fully featured editor, perhaps like the one Volgarr the Viking used! This is great for quick changes in gameplay and the like, and can greatly reduce the time required to setup type-safe enumerations. I myself use this editor to also reflect all constructed enumerations within a custom C++ introspection database. This allows all enumeration types to be passed to/from scripting languages, and serialized.

The implementation of such is actually super simple, and a proof of concept can be seen here: https://github.com/RandyGaul/Serialization_C. The idea is to use a single data file full of macro calls. This data file is then intentionally imported into multiple locations. Each time this import occurs different definitions of the macros are defined, thus interpreting the data in various ways upon each import. For more information about this see the link within this paragraph.

Powerful C++ Messaging

A prerequisite to this information is most of the previous C++ type introspection stuff I have been writing about for a while now. Assuming the previous information has been covered, let’s move on:

There exists a design of messaging, specifically for C++, which has minimal downsides and many positive advantages. Ideally messaging should not involve any polling or implicitly required searching (as in searching through game space to see who to message, which requires expensive collision queries). It should also have a very intuitive usage, and not be very complex to work with.

If such a messaging system can be achieved then inter-object communication can be setup, to create game logic, within a scripting language.

Here are some slides I wrote on this topic for my university, but are available for public viewing:

Download (PDF, Unknown)

C++ Reflection Part 6: Lua Binding

Binding C/C++ functions to Lua is a tedious, error prone and time consuming task when done by hand. A custom C++ introspection system can aid in automating the binding, making binding any callable C or C++ function or method a breeze. Once such a functor-like object exists the act of binding a function to Lua can look like this, as seen in a CPP file:

The advantage of such a scheme is that only a single CPP file would need to be modified in order to expose new functionality to Lua, allowing for efficient pipe-lining of development cycles.

Another advantage of this powerful functor is that communication and game logic can quickly be created in a script, loaded from a text file, or even set up through a visual editor. Here is a quick example of what might be possible with a good scripting language:

In the above example a simple enemy is supposed to follow some target object. If the target is close enough then the enemy damages it. If the target dies, the enemy flashes a bright color and then acquires a new target.

The key here is the message subscription within the initialization routine. During run-time objects can subscribe to know about messages emitted by any other object!

So by now hopefully one would have seen enough explanation of function binding to understand how powerful it is. I’ve written some slides on the topic available in PDF format here (do note that these slides were originally made for a lecture at my university):

Download (PDF, Unknown)

C++ Reflection: Type MetaData: Part 3 – Improvements

In our last article we learned how to store information about a class’s members, however there are a couple key improvements that need to be brought to the MetaData system before moving on.


The first issue is with our RemQual struct. In the previous article we had support for stripping off qualifiers such as *, const or &. We even had support for stripping off an R-value reference. However, the RemQual struct had no support for a pointer to a pointer. It is weird that RemQual of a pointer would behave differently than RemQual of a pointer to a pointer, and so on. To solve this issue we can cycle the type down, at compile time, through the RemQual struct recursively, until the type arrives at the base RemQual definition. Here’s an example:
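
A sketch of the recursive version (C++11 is assumed for the r-value reference overload; note there is deliberately no pointer specialization, as discussed below):

template <typename T>
struct RemQual
{
  typedef T type;
};

template <typename T>
struct RemQual<const T>
{
  typedef typename RemQual<T>::type type;
};

template <typename T>
struct RemQual<T &>
{
  typedef typename RemQual<T>::type type;
};

template <typename T>
struct RemQual<const T &>
{
  typedef typename RemQual<T>::type type;
};

template <typename T>
struct RemQual<T &&>
{
  typedef typename RemQual<T>::type type;
};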

As you can see, this differs a bit from our previous implementation. The way it works is by passing in a single type to the RemQual struct via typename T. Then, the templating matches the type provided with one of the overloads and feeds the type back into the RemQual struct with fewer qualifiers. This acts as some sort of compile-time “recursive” qualifier stripping mechanism; I’m afraid I don’t know what to properly call this technique. This is useful for finding out what the “base type” of any given type is.


It should be noted that the example code above does not strip pointer qualifiers off of a type. This is to allow the MetaData system to properly provide MetaData instances of pointer types; which is necessary to reflect pointer meta.

It should be noted that in order to support pointer meta, the RemQual struct will need to be modified so it does not strip off the * qualifier. This actually applies to any qualifier you do not wish to have stripped.

There’s one last “improvement” one could make to the RemQual struct that I’m aware of. I don’t actually consider this an improvement, but more of a feature or decision. There comes a time when the user of a MetaData system may want to write a tidbit of code like the following:
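
Something along these lines, where SendMessage and the payload names are purely hypothetical:

SendMessage( "Message ID", Param1, Param2 );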

Say the user wants to send a message object from one place to another. Imagine this message object can take three parameters of any type, and the reflection system can help the constructor of the message figure out the types of the data at run-time (how to actually implement features like this will be covered once Variants and RefVariants are introduced). This means that the message can take three parameters of any type and then take them as payload to deliver elsewhere.

However, there’s a subtle problem with the “Message ID” in particular. Param1 and Param2 are assumed to be POD types like float or int, however “Message ID” is a const char * string literal. My understanding of string literals in C++ is that they are of the type: const char[ x ], x being the number of characters in the literal. This poses a problem for our templated MetaCreator, in that every value of x will create a new MetaData instance, as the templating treats each individual value of x as an entire new type. Now how can RemQual handle this? It gets increasingly difficult to actually manage Variants and RefVariant constructors for string literals for reasons explained here, though this will be tackled in a later article.

There are two methods of handling string literals that I am aware of; the first is to make use of some form of a light-weight wrapper. A small wrapper object can contain a const char * data member, and perhaps an unsigned integer to represent the length, and any number of utility functions for common string operations (concat, copy, compare, etc). The use of such a wrapper would look like:

The S would be the class type of the wrapper itself, and the constructor would take a const char *. This would require every place in code that handles a string literal to make use of the S wrapper. This can be quite annoying, but has great performance benefits compared to std::string, especially when some reference counting is used to handle the heap allocated const char * data member holding the string data in order to avoid unnecessary copying. Here’s an example skeleton class for such an S wrapper:
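
A skeleton along these lines (reference counting of the heap-allocated buffer is omitted for brevity):

class S
{
public:
  S( const char *src );       // wraps or copies the string literal
  S( const S &rhs );
  ~S( );

  unsigned Length( void ) const;
  int Compare( const S &rhs ) const;
  S &Concat( const S &rhs );

private:
  const char *data;  // string contents
  unsigned length;   // cached length
};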

As I mentioned before, I found this to be rather annoying; I want my dev team and myself to be able to freely pass along a string literal anywhere and have MetaData handle the type properly. In order to do this, a very ugly and crazy solution was devised. There’s a need to create a RemQual struct for every const char[ x ] type, for all values of x. This isn’t possible. However, it is possible to overload RemQual for a few values of x, at least enough to cover any realistic use of a string literal within C++ code. Observe:
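
A sketch of the trick (the exact macro body may differ from the original; each expansion here reinterprets the array as a pointer and feeds it back through RemQual, and the + 1 avoids an invalid zero-sized array when the counter starts at 0):

#define ARRAY_OVERLOAD( )                      \
  template <typename T>                        \
  struct RemQual<T[ __COUNTER__ + 1 ]>         \
  {                                            \
    typedef typename RemQual<T *>::type type;  \
  };

ARRAY_OVERLOAD( )
ARRAY_OVERLOAD( )
ARRAY_OVERLOAD( )
// ...paste enough of these to cover any realistically sized string literal;
// each expansion covers the next value of x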

The macro ARRAY_OVERLOAD creates a RemQual overload with a value of x. The __COUNTER__ macro (though not standard) increments by one each time the macro is used. This allows for copy/pasting of the ARRAY_OVERLOAD macro, which will generate a lot of RemQual overloads. I created a file with enough overloads to cover any realistically sized string literal. As an alternative to the __COUNTER__ macro, __LINE__ can be used instead, however I imagine it might be difficult to ensure you have one definition per line without any gaps. As far as I know, __COUNTER__ is supported on both GNU and MSVC++.

Not only will the ARRAY_OVERLOAD cover types of string literals, but it will also cover types with array brackets [ ] of any type passed to RemQual.

The second issue is the ability to properly reflect private data members. There are three solutions to reflecting private data that I am aware of. The first is to try to grant access to the MetaData system by specifying that the MetaCreator of the type in question is a friend class. I never really liked the idea of this solution and haven’t actually tried it for myself, and so I can’t really comment on the idea any further than this.

The next possible solution is to make use of properties. A property is a set of three things: a gettor; a settor; a member. The gettor and settor provide access to the private member stored within the class. The user can then specify gettors and/or settors from the ADD_MEMBER macro. I haven’t implemented this method myself, but would definitely like to if I find the time to create such a system. This solution is by far the most elegant of the three choices that I’m presenting. Here’s a link to some information on creating some gettor and settor support for a MetaData system like the one in this article series. This can potentially allow a MetaData system to reflect class definitions that the user does not have source code access to, so long as the external class has gettor and settor definitions that are compatible with the property reflection.

The last solution is arguably more messy, but it’s easier to implement and works perfectly fine. I chose to implement this method in my own project because of how little time it took to set up a working system. Like I said earlier, if I have time I’d like to add property support, though right now I simply have more important things to finish.

The idea of the last solution is to paste a small macro inside of your class definitions. This small macro then pastes some code within the class itself, and this code grants access to any private data member by using the NullCast pointer trick. This means that in order to reflect private data, you must have source code access to the class in question in order to place your macro. Here’s what the new macros might look like, but be warned it gets pretty hectic:
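
A rough sketch of the pair of macros (heavily simplified; the MetaCreator/MetaData machinery from the earlier parts is assumed, the real SEL macros differ in detail, the declarations need to be reachable by the MetaCreator, and how RegisterMetaData actually gets called at startup is omitted):

// Placed inside a class definition; declares the hooks the reflection needs.
#define META_DATA( TYPE )                                    \
  static void AddMember( const char *name, unsigned offset,  \
                         MetaData *data );                   \
  static TYPE *NullCast( void );                             \
  static void RegisterMetaData( void )

// Placed in a cpp file; defines those hooks for the given TYPE. The macro
// ends with the RegisterMetaData signature so the user's ADD_MEMBER calls
// form its body.
#define DEFINE_META( TYPE )                                  \
  TYPE *TYPE::NullCast( void )                               \
  { return reinterpret_cast<TYPE *>( 0 ); }                  \
  void TYPE::AddMember( const char *name, unsigned offset,   \
                        MetaData *data )                     \
  { MetaCreator<TYPE>::AddMember( name, offset, data ); }    \
  void TYPE::RegisterMetaData( void )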

The META_DATA macro is to be placed within a class, it places a couple declarations for NullCast, AddMember and RegisterMetaData. The DEFINE_META macro is modified to provide definitions for the method declarations created by the META_DATA macro. This allows the NullCast to retrieve the type to cast to from the DEFINE_META’s TYPE parameter. Since AddMember method is within the class itself, it can now have proper access to private data within the class. The AddMember definition within the class then forwards the information it gathers to the AddMember function within the MetaCreator.

In order for the DEFINE_META api to remain the same as before, the META_DATA macro creates a RegisterMetaData declaration within the class itself. This allows the ADD_MEMBER macro to not need the user to supply the type of class to operate upon. This might be a little confusing, but imagine trying to refactor the macros above. Is the RegisterMetaData declaration even required to be placed into the class itself? Can’t the RegisterMetaData function within the MetaCreator call AddMember on the class type itself? The problem with this is that the ADD_MEMBER macro would require the user to supply the type to the macro like this:
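
That is, something like this hypothetical usage:

ADD_MEMBER( GameObject, ID ); // the containing type must be spelled out every time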

This would be yet another thing the user of the MetaData system would be required to perform, thus cluttering the API. I find that keeping the system as simple as possible is more beneficial than factoring the definition of RegisterMetaData out of the META_DATA macro.

Here’s an example usage of the new META_DATA and DEFINE_META macros:
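
Continuing the sketch above, usage might look like this (Dummy and its members are made up):

class Dummy
{
public:
  META_DATA( Dummy );

private:
  int x;
  float y;
};

// In a cpp file:
DEFINE_META( Dummy )
{
  ADD_MEMBER( x );
  ADD_MEMBER( y );
}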

The only additional step required here is for the user to remember to place the META_DATA macro within the class definition. The rest of the API remains as intuitive as before.


Here’s a link to a compilable (in VS2010) example showing everything I’ve talked about in the MetaData series thus far. The next article in this series will likely be about creating the Variant type for PODs.

C++ Reflection: Type MetaData: Part 2 – Type Reduction and Members

In the last post we learned the very basics of setting up a reflection system. The whole idea is that the user manually adds types into the system using a single simple macro placed within a cpp file, DEFINE_META.


In this article I’ll talk about type deduction and member reflection, both of which are critical building blocks for everything else.

First up is type deduction. When using the templated MetaCreator class:
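
For reference, here is a declaration-only sketch of that class (the details live in the first article of the series; the names here follow it loosely):

class MetaData;

// One MetaCreator, and thus one MetaData instance, exists per reflected type.
template <typename MetaType>
class MetaCreator
{
public:
  MetaCreator( const char *name, unsigned size ); // registers the MetaData

  static MetaData *Get( void );           // the MetaData describing MetaType
  static void RegisterMetaData( void );   // body supplied by DEFINE_META
  static void AddMember( const char *name, unsigned offset, MetaData *data );
  static MetaType *NullCast( void );
};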


Whenever you pass in a const, reference, or pointer qualifier an entire new templated MetaCreator will be constructed by the compiler. This just won’t do, as we don’t want the MetaData of a const int to be different at all from an int, or any other registered type. There’s a simple, yet very quirky, trick that can solve all of our problems. Take a look at this:
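
A sketch close to what the original code showed (pointer, const and reference qualifiers each get an overload):

template <typename T>
struct RemQual
{
  typedef T type;
};

template <typename T>
struct RemQual<const T>
{
  typedef T type;
};

template <typename T>
struct RemQual<T &>
{
  typedef T type;
};

template <typename T>
struct RemQual<const T &>
{
  typedef T type;
};

template <typename T>
struct RemQual<T *>
{
  typedef T type;
};

template <typename T>
struct RemQual<const T *>
{
  typedef T type;
};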

I’m actually not familiar with the exact terminology to describe what’s going on here, but I’ll try my best. There are many template overloads of the first RemQual struct, which acts as the “standard”. The standard is just a single plain type T, without any qualifiers and without pointer or reference type. The rest of the templated overloaded versions all contain a single typedef which lets the entire struct be used to reference a single un-qualified type by supplying any of the various overloaded types to the struct’s typename param.

Overloads for the R-value reference must be added as well in order to strip down to the bare type T.

Now that we have our RemQual (remove qualifiers) struct, we can use it within our META macros to refer to MetaData types. Take a look at some example re-writes of the three META macros:
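
A sketch of the three macros (MetaCreator is assumed from above; MetaManager here is a hypothetical registry mapping string names to MetaData):

#define META_TYPE( T ) (MetaCreator<RemQual<T>::type>::Get( ))
#define META( OBJECT ) (MetaCreator<RemQual<decltype( OBJECT )>::type>::Get( ))
#define META_STR( STRING ) (MetaManager::Get( STRING ))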

The idea is you feed the typedef’d type from RemQual into the MetaCreator typename param. This is an example of using macros well; there’s no way to screw up the usage of them, and they are still very clean and easy to debug as there isn’t really any abuse going on. Feel free to ignore specific META macros you wouldn’t actually use. I use all three of META_TYPE, META and META_STR. It’s a matter of personal preference what you actually implement in this respect. It will likely be pretty smart to place whatever API is created into a namespace of its own, however.

And that covers type deduction. There are some other ways of achieving the same effect, like partial template specialization as covered here, though I find this much simpler.

Next up is registering members of structures or classes with the MetaData system. Before anything continues, let’s take a look at an example Member struct. A Member struct is a container of the various bits of information we’d like to store about any member:
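
A sketch of such a struct:

// Describes one reflected member of a class or struct.
struct Member
{
  const char *name;     // identifier, captured with the stringize operator
  unsigned offset;      // byte offset of the member within its object
  const MetaData *meta; // MetaData describing the member's type
};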

This member above is actually almost exactly what implementation I have in my own reflection as it stands while I write this; there’s not a lot needed. You will want a MetaData instance to describe the type of data contained, a name identifier, and an unsigned offset representing the member’s location within the containing object. The offset is exceptionally important for automated serialization, which I’ll likely be covering after this article.

The idea is that a MetaData instance can contain various member objects. These member objects are contained within some sort of container (perhaps std::vector).

In order to add a member we’ll want another very simple macro. There are two big reasons a macro is efficient in this situation: we can use stringize, and there’s absolutely no way for the user to screw it up.

Before showing the macro I’d like to talk about how to retrieve the offset. It’s very simple. Take the number zero, and turn this into a pointer to a type of object (class or struct). After the typecasting, use the -> operator to access one of the members. Lastly, use the & operator to retrieve the address of the member’s location (which will be offset from zero by the -> operator) and typecast this to an unsigned integer. Here’s what this looks like:
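
A sketch of the resulting macro (NullCast and AddMember are explained just below; the argument order is arbitrary):

#define ADD_MEMBER( MEMBER )              \
  AddMember( #MEMBER,                     \
    (unsigned)(&(NullCast( )->MEMBER)),   \
    META( NullCast( )->MEMBER ) )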

This is quite the obtrusive line of code we have here! This is also a good example of a macro used well; it takes a single parameter and applies it to multiple locations. There’s hardly any way for the user of this macro to screw up.

NullCast is a function I’ll show just after this paragraph. All it does is return a pointer to NULL (memory address zero) of some type. Having this type pointer to address zero, we then use the ADD_MEMBER macro to provide the name of a member to access. The member is then accessed, and the & operator provides an address to this member with an offset from zero. This value is then typecast to an unsigned integer and passed along to the AddMember function within the macro. The stringize operator is also used to pass a string representation of the member to the AddMember function, as well as a MetaData instance of whatever the type of data the member is.

Now where does this AddMember function actually go? Where is it from? It’s actually placed into a function definition. The function AddMember itself resides within the MetaCreator. This allows the MetaCreator to call the AddMember function of the MetaData instance it holds, which then adds the Member object into the container of Members within the MetaData instance.

Now, the only place that this AddMember function can be called from, building from the previous article, is within the MetaCreator’s constructor. The idea is to use the DEFINE_META macro to also create a definition of either the MetaCreator’s constructor, or a MetaCreator method that is called from the MetaCreator’s constructor. Here’s an example:
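
Continuing the sketch macros from above:

DEFINE_META( GameObject )
{
  ADD_MEMBER( ID );
  ADD_MEMBER( active );
  ADD_MEMBER( components );
}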

As you can see this formation is actually very intuitive; it has C++-like syntax, and it’s very clear what is going on here. A GameObject is being registered in the Meta system, and it has members of ID, active, and components being added to the Meta system. For clarity,  here’s what the GameObject’s actual class definition might look like (assuming component based architecture):




// This boolean should always be true when the object is alive. If this is
// set to false, then the ObjectFactory will clean it up and delete this object
// during its inactive sweep when the ObjectFactory’s update is called.
bool active;
std::vector<Component *> components;

Now let’s check out what the new DEFINE_META macro could look like:
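
A sketch of the idea, assuming the MetaCreator scaffolding sketched earlier (the important part is that the macro ends with the RegisterMetaData signature):

// A real version also instantiates a MetaCreator<TYPE> here so that its
// constructor registers #TYPE and sizeof( TYPE ) and calls RegisterMetaData
// at startup; that wiring is omitted from this sketch.
#define DEFINE_META( TYPE )  \
  template <>                \
  void MetaCreator<RemQual<TYPE>::type>::RegisterMetaData( void )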

The RegisterMetaData declaration is quite peculiar, as the macro just ends there. What this is doing is setting up the definition of the RegisterMetaData function, so that the ADD_MEMBER macro calls are actually lines of code placed within the definition. The RegisterMetaData function should be called from the MetaCreator's constructor. This allows the user to specify what members to reflect within a MetaData instance of a particular type in a very simple and intuitive way.

Last but not least, let's talk about the NullCast function real quick. It resides within the MetaCreator, as NullCast requires the template's typename MetaType in order to return a pointer to a specific type of data.
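
A sketch of it:

template <typename MetaType>
MetaType *MetaCreator<MetaType>::NullCast( void )
{
  // A typed pointer to address zero; never dereferenced for real data, it
  // exists only so ADD_MEMBER can compute member offsets from zero.
  return reinterpret_cast<MetaType *>( 0 );
}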

And that's that! We can now store information about the members of a class and deduce types from objects in an easily customizable way.

Here's a link to a demonstration program, compilable in Visual Studio 2010. I'm sure this could compile in GCC with a little bit of house-keeping as well, but I don't really feel like doing this as I need to get to bed! Here's the output of the program: for the object, members and their offsets are printed:

Now you might notice at some point in time, you cannot reflect private data members! This detail will be covered in a later article. The idea behind it is that you require source code access to the type you want to reflect, and place a small tiny bit of code inside to gain private data access. Either that or make the MetaCreator a friend class (which sounds like a messy solution to me).


And here we have all the basics necessary for automated serialization! We can reflect the names of members, their types, and offsets within an object. This lets the reflection register any type of C++ data within itself.