
Cache Aware Components

Special thanks to Danny Frisbie for a nice discussion on the PODHandler implementation!

Let me start off by saying that optimizations really only need to be applied to bottlenecks. In order to know where a bottleneck might occur (especially cache related ones) you’ll probably need some experience. The experience need not be your own, but it will have to come from someone. In my (limited) experience the only bottlenecks I’ve ever seen in any piece of game related software (aside from N^2 loops with a high N) are always due to waiting on things to be placed into the cache. It’s really easy to write bad code, and bad code is usually cache oblivious. Even conceptually clear and understandable code can still be cache oblivious!

I’ve seen some very nice 2D and 3D games, made in C++, that used only rudimentary memory allocation schemes and naive component implementations. They ran at 60 fps just fine. If you’re a hobbyist or just trying to learn, then thinking about how to write the fastest component framework ever might be fun but don’t expect to do it correctly on your first try. Expect to fail, and then iterate.

So when the time comes to actually optimize something, having some sort of idea of where to look to learn how to solve cache related problems will be valuable.

Data Lookups and Cache Lines

In general the cache line size on current hardware is 64 bytes. A cache line is a 64 byte piece of memory aligned on a 64 byte boundary. Whenever data is transferred from one cache to another (or to/from main memory (RAM)) it moves a full cache line at a time, which keeps the memory bus well utilized. 16 32-bit integers fit in a single cache line, as do 16 32-bit floating point numbers, which is exactly the size of a 4×4 matrix of single-precision floats.

How fast a cache line is transferred depends on which level it is being fetched from. In general terms: when a piece of memory is fetched by the CPU from a lower level cache it is hoisted up into the L1 data cache (L1 D; the L1 I is the instruction cache for code). If this memory was not in the cache it will take forever (100 to 300 cycles, probably near the 300 range for PC). Here’s a nice diagram by Naughty Dog summarizing the common cache setup for PC CPUs:

[Image: diagram of the common PC cache hierarchy]

When something is loaded into the L1 cache whatever was there before has to be evicted, and will be pushed into the L2 cache (again using the same 64 byte cache line size). This will probably evict something from the L2 cache back down into the L3 cache, and so on and so forth.

The implication here is that whenever a cache line is read it is up to the programmer to try to use as much of that cache line as possible. Even though we might have 8 gigabytes of RAM, if we aren’t running in the cache the CPU will be sitting there waiting. Even if a single byte is read from main memory an entire cache line will be fetched. Reading a single byte from a random location in main memory is about the worst possible way to use memory.

This tends toward the idea of using very compact and concise data structures. If a data structure is packed together in memory it can be operated upon by the CPU very quickly once it arrives to the CPU’s cache.

The cache isn’t very big. Here’s a nice slide by Scott Meyers on the topic:

[Image: slide showing typical cache sizes]

32KB of L1 data cache is tiny. You don’t even get to use all of it as the operating system does need to do stuff too!

Prefetching

Prefetching exists to try to hide the latency of fetching memory. A prefetch is when a cache line is preemptively fetched and placed into the cache, such that when the memory is actually requested a cache hit occurs.

Hardware can detect patterns in memory accesses, but it can only detect pretty simple patterns like array traversals. Scott Meyers describes (see resources section) that the hardware is made in such a way that it can detect iterating over arrays forwards, backwards and with variable (but constant) element step size. It can also do this for all hardware threads simultaneously. However, if you’re not looping over an array you can’t count on any intelligent prefetching. It will take two or more cache misses in a recognizable pattern to start automatic prefetching.

Usually compilers provide an intrinsic or keyword to hint to the hardware to grab a specific cache line from somewhere in memory. Programmers can use this to eke out a final bit of performance, given an implementation worth prefetching for.
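
As a concrete illustration, GCC and Clang expose the __builtin_prefetch intrinsic (MSVC offers _mm_prefetch in xmmintrin.h). The snippet below is only a sketch; the array, the work done per element and the look-ahead distance are all made up for the example:

#include <cstddef>

// Walk a large array and prefetch a few elements ahead of the one currently
// being processed, hoping the line is resident in L1 by the time it is needed.
float SumWithPrefetch(const float* data, std::size_t count)
{
    const std::size_t lookAhead = 16; // tuning value, chosen arbitrarily here
    float sum = 0.0f;

    for (std::size_t i = 0; i < count; ++i)
    {
        if (i + lookAhead < count)
            __builtin_prefetch(&data[i + lookAhead]); // a hint only; never affects correctness

        sum += data[i];
    }

    return sum;
}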

Cache (Un)Aware Components

Hopefully by now readers are convinced that contiguous arrays of data are very friendly to the cache, and where performance matters this knowledge should be exploited.

In a component based game engine architecture looking up and operating on components is often the first bottleneck encountered. It might pay to learn a little about how this might be circumvented.

A memory naive implementation of components will look something like this:
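
The original snippet has not survived the archive; a minimal sketch of such a memory naive layout might look like this (all names are made up for the example):

struct TransformComponent { float x, y, rotation; };
struct SpriteComponent    { unsigned textureId; };
struct ColliderComponent  { float halfWidth, halfHeight; };

class GameObject
{
public:
    GameObject()
        : m_transform(new TransformComponent()) // three separate heap allocations,
        , m_sprite(new SpriteComponent())       // each living who knows where
        , m_collider(new ColliderComponent())
    {
    }

    ~GameObject()
    {
        delete m_transform;
        delete m_sprite;
        delete m_collider;
    }

    TransformComponent* m_transform;
    SpriteComponent* m_sprite;
    ColliderComponent* m_collider;
};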

Each component is allocated separately on the heap, which may eventually require a trip to the operating system for more memory and leaves the data very sensitive to memory fragmentation.

Ideally all data of a certain type will be packed together in a tight linear array. When this data needs to be operated upon, the fastest sort of transformation (without manual prefetching) will look something like this (for a generalized example):
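
A sketch of that kind of transformation (Data and the work done on it are placeholders):

struct Data { float x, y, z; };

void TransformAll(Data* data, int count)
{
    for (int i = 0; i < count; ++i)
    {
        Data d = data[i]; // explicit local copy (see the aliasing note below)
        d.x += 1.0f;
        d.y += 1.0f;
        d.z += 1.0f;
        data[i] = d;      // write the result back into the tightly packed array
    }
}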

Ideally the size of a given element will be below the size of a cache line, and often times this is possible if extraneous data can be removed from the Data type.

The explicit consumption of the data where a local copy is made is probably not necessary and will be compiled away. It is fairly easy to double check by having the compiler emit an assembly file and inspecting it. However this kind of practice can be very helpful to let the compiler know that multiple pointers cannot possibly alias the same type. For more information search for “C++ aliasing Ericson” to find Christer’s old slides.

Now that the ideal computational situation for transforming a large data set has been described, let’s look at a common (albeit contrived) data transformation that we’ve all been guilty of while first learning:
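
The original snippet is gone, but the shape of the code being criticized is roughly the following (names assumed):

#include <list>

class GameObject
{
public:
    virtual ~GameObject() { }
    virtual void Update(float dt) = 0;
    bool IsDead() const { return m_dead; }

protected:
    bool m_dead = false;
};

void UpdateGameObjects(std::list<GameObject*>& objects, float dt)
{
    for (auto it = objects.begin(); it != objects.end(); )
    {
        if ((*it)->IsDead())
        {
            delete *it;             // random call to delete
            it = objects.erase(it); // pointer chasing through list nodes
        }
        else
        {
            (*it)->Update(dt);      // virtual call to who knows where in memory
            ++it;
        }
    }
}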

Conceptually this code is very concise and easy to reason about, but the readability and dynamic niceties aren’t very efficient. Random calls to delete occur, the inner loop contains a branch, and calling Update on an object will jump to who knows where in memory. All of these things are basically punching the cache right in the gut. Even the branch can be annoying for the CPU pipeline, as it may have to eject code out of the L1 I cache if a branch is mispredicted!

How can this be solved? The first step is to make sure as much data is packed together in memory as possible. In the above code snippet the list can be changed to an array (perhaps std::vector). Okay, pretty trivial change, no big deal. Objects can perhaps just be placement new’d into the array and destructed in place when removed. This will act like a memory pool.

The next step is to identify that the UpdateGameObjects function is performing two types of tasks (assuming the Update function performs a single task): deletion and update calling. This is a result of the container of objects not being sorted. It is a non-homogeneous collection of objects that are both alive and dead. If objects can be separated into sections of dead and alive, only the alive objects need to be looped over.

Cache Aware Components

One way to implement this would be to have the beginning of the array contain a contiguous line of live elements. The rest of the array can contain “deleted” or “not yet allocated” elements. In order to uphold this invariant it might be best to design objects placed into these sorts of arrays not care if they are moved in memory. Usually this means making your data a plain old data type (POD).

Deleting things from the array is going to be a nice feature to support. A game wouldn’t be interactive for very long if it could only consume more and more memory. A simple and very effective scheme is to move the last element of your array into the location of a deleted element.

However in order to refer to a unique element within an array simple pointers are no longer going to cut it. When an element is moved from the last index into a deleted slot any pointers to the old spot (the last and now empty element) will be dangling. Some form of translation from one data type into a pointer must occur in order to ensure that the correct pointer is retrieved for a unique entry in the array.

Usually this translation comes in the form of a handle. A handle can be implemented as an integer divided into two different sections. The details of how to implement a handle should be known by the reader before continuing on, so please view Llopis as a resource.
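
For readers who want a quick refresher, a handle is commonly a single integer split into an index and a counter, something like the sketch below (the 20/12 bit split is an arbitrary choice):

#include <cstdint>

// Low 20 bits index into the internal array; high 12 bits are a counter used to
// detect stale handles after a slot has been reused.
struct Handle
{
    uint32_t value;

    uint32_t Index() const   { return value & 0xFFFFF; }
    uint32_t Counter() const { return value >> 20; }

    static Handle Make(uint32_t index, uint32_t counter)
    {
        Handle h;
        h.value = (counter << 20) | (index & 0xFFFFF);
        return h;
    }
};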

Let’s create a simple abstract data type that grants access to an array of POD elements and supports handle translation, allocation and deletion:
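
The original class listing is not reproduced here; a minimal interface in the same spirit, reusing the Handle sketched above, might be (method names are assumptions):

#include <vector>

template <typename T>
class PODHandler
{
public:
    Handle Allocate();      // grab a slot in the array and hand back a handle to it
    T* Get(Handle h);       // translate a handle into a pointer to the live element
    void Release(Handle h); // free a slot by moving the last live element into it

private:
    std::vector<T> m_data;         // tightly packed live elements at the front
    std::vector<Handle> m_entries; // handle entries used for translation
};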

In order to implement allocation and handle translation please do refer to the Llopis resource referenced in the last paragraph.

In order to implement Release things start to get tricky. How does the PODHandler update the handle of the element it moved? Somehow the location of the internal handle entry needs to be accessible just by knowing where the element was moved from. The easiest solution is to place a handle inside the type T within each element of the array. However, it would be great if types held inside of a PODHandler could fit within a single cache line. Adding a handle to every single element lowers the density of the data in the array. For certain situations this data bloat, though only 4 bytes per element, will reduce the effectiveness of every single cache line from 64 bytes to 60.

Clearly an alternative could be used! The solution is to yet again separate different types of data into different arrays. The internal array of the PODHandler should consist of homogeneous data! Rip out that intrusive handle and place them all in their own array. The PODHandler can now consist of 3 arrays in total: an array of type T, an array of Handles, and an array of integers.

The array of integers share their indices with the array of PODs. This means a POD in element 3 will correspond to the integer in element 3 of the integer array. The integer array contains indices that map to the handle associated with a given POD element in the handle array m_entries.

Readers may by now be wondering “wouldn’t three different arrays potentially have worse locality of reference than just two arrays?”, and this is a good thing to wonder. It is true that an intrusive handle would be preferable if handle translations are extremely frequent. If they are, the original intrusive handle implementation may be ideal.

If an engine is architected to focus on cache utilization for transforming large data sets with expensive operations, then a homogeneous array will be preferential. Or in other words if you want to loop over a lot of stuff and do expensive math on each element, that array better be dense. This means that handle translations are more infrequent since the code focuses on looping over the data array itself rather than picking out individual elements at random.

Open Source Implementation

The idea PODHandler represents is important. My implementation is just my own manifestation of the concepts described in this post. My implementation is not important! The concepts are important. Hopefully by allowing readers another piece of reference, in the form of some source code, the ideas presented here can be better realized.

PODHandler Source: Link (not up yet)

Additional References

Sane Usage of Components and Entity Systems

With some discussion going on in a previous article about how to actually implement some sort of component system for a game engine, without vague theory or dogma, a need for some higher level perspective was reached, and so this article arose.

In general an aggregation model is often useful when piecing together bits of functionality or data to create something new. The ability to do so is very useful for writing game-specific gameplay code due to the flexibility of code granted by aggregation. However as of late there’s been tremendous talk about OOP, Entity Systems, Inheritance, and blah blah blah within the online indie development community. More and more buzzwords get tossed around by big name writers and the audience really just looks for some guidelines to follow in hopes of writing good code.

Sadly there isn’t going to be a set of step by step rules for writing a game engine or coming up with a good architecture. Like many have said before me, writing a game is a specific task requiring specific solutions. Why do you think game engine developers such as Epic or the Unity guys have so many people working on the product? Because a generic game engine is a huge piece of software that requires a lot of features. Some features exist simply to let users add in custom features easily.

Components, aggregation, Entity Component Systems, Entity systems, these are just words and have various definitions (depending on who you ask).

Some Definitions

To hopefully avoid silly arguments and confusion let’s define some terms. If you don’t like the definitions here feel free to say so; I’m all up for criticism and debate.

    • Component Based Architecture
      • A preference for aggregation over inheritance. Is just a concept and does not lead to a single specific implementation. A game object is a collection of components. A component defines data and/or functionality for a concept.
    • Entity Component System (ECS)
      • A specific implementation of Component Based Architecture. A game object would be an ID (an integer). The ID is used to form an aggregate. Usually an ECS implies an implementation similar to a database, where components are entries into a database that are looked up through some identifier. The main goals of this implementation are efficiency and simplicity. Often times the term “ECS” is used just to describe a Component Based Architecture, often leading to confusion.
    • Aggregation
      • I like to think of this as a “has-a” relationship over an “is-a” relationship. Aggregation refers to one object “having” another object, which implies an aggregate is a collection (data structure) of other objects.

Some Truth and History

Aggregation is useful from a game design perspective. It frees functionality from arbitrary classification (classes and inheritance). Classes were originally created in C++ to let a programmer tie together a piece of data and some functionality to represent some sort of real-life concept. This is in simplest terms the essence of Object Oriented Programming (OOP). Over time more features were added to help engineer relationships between classes, one such feature came in the form of inheritance.

There’s nothing inherently wrong with OOP and it makes sense in a lot of code. Problems can arise when there’s a mis-application of OOP with implications that aren’t fully understood at the time of implementation, causing negative effects down the road. I’m sure we’ve all seen the code migration and mega-class example so commonly thrown around in articles arguing against OOP and inheritance abuse.

In response to such an abuse a new paradigm became popularized which focused on aggregation of functionality to form an object. This might be called a “component based architecture”. In general aggregation can be considered an appropriate alternative to inheritance.

OOP Diatribe

Usually when an article spews forth caustic attacks against OOP it’s directed at naive implementations that disregard implications of how memory is accessed. Perhaps in the past the bottleneck of most everything was processor speed, so a lot of literature focuses on this. Nowadays CPUs on the PC have an architecture with ridiculous computational power but extremely limited memory access speed. In general one might consider accessing memory from RAM 300 times slower than multiplying two floats together. Of course this last statement is extremely anecdotal and without any evidence, but exists just to give a rough perspective of reality in many current (2014) cases.

If objects with associated code (classes) are just allocated and deallocated on the heap at will then a performance bottleneck of memory access is going to rear its ugly face, likely long before other performance issues are even on the radar. This is where much of the diatribe comes from.

It should be noted that pretty much all code bases that make use of the C++ language use classes and structures in some form or another. As long as a programmer has an understanding of memory, how it’s accessed, and what implications arise from given implementations, nothing will go wrong. Alas, actually doing these things and writing good code is super hard. It doesn’t matter if a class has some implementation code within it, so long as that bit of code makes sense for the purposes it is serving.

Implementing Components, a First Draft

The most immediate implementation would be to make use of multiple inheritance. This has a clear definition of where the data goes, and it all goes in one class: the derived class. Multiple inheritance itself can get a bit tricky when dealing with pointer typecasting between derived and base types, though the C++ language itself handles the details much of the time.
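
A tiny sketch of the idea (the component types are made up):

struct Transform { float x, y; };
struct Sprite    { unsigned textureId; };
struct Collider  { float halfWidth, halfHeight; };

// the aggregate is formed at compile time by inheriting each component's data
class Player : public Transform, public Sprite, public Collider
{
};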

Inheritance alone doesn’t provide a good mechanism to query whether a base class is a part of a specific derived aggregation, and so the dynamic cast operator is born. Since the dynamic cast is a branching operation, usually implemented (afaik) by inspecting the vtable, it is avoided in general.

Multiple inheritance also does all sorts of work to member function pointers, and is just a sad part of C++. Additionally there isn’t any language feature that allows for dynamic dispatch for combinations of base classes, so if the need arises a custom solution will need to be implemented anyway.

Memory accessing, although defined, isn’t ideal. Multiple inheritance forms a blob of different data, and usually only a single piece of the blob is needed at any given time, meaning locality of reference will be poor in general. This leads to the idea of inheriting from multiple interfaces in order to decouple memory aggregation from functionality aggregation, which leads to the next draft.

Second Draft – Run-Time Aggregation

Instead of using multiple inheritance on interfaces, which is a compile-time feature, run-time support can be added. Object aggregates can be formed during run-time, and modified thereafter. This is appealing for data driven applications, and game-design friendly development iteration speed.

So let’s assume that some programmer wants to implement components, but doesn’t think much about memory access patterns or the implications therein. Using a vector of pointers, an implementation of components becomes super simple. Each pointer can point to an interface exposing a few functions like Update, Init and Shutdown, as sketched below.
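
A sketch of that kind of run-time aggregate, with made-up names:

#include <vector>

class Component
{
public:
    virtual ~Component() { }
    virtual void Init() { }
    virtual void Update(float dt) = 0;
    virtual void Shutdown() { }
};

class GameObject
{
public:
    void AddComponent(Component* component) { m_components.push_back(component); }

    void Update(float dt)
    {
        for (Component* component : m_components)
            component->Update(dt);
    }

private:
    std::vector<Component*> m_components; // each component typically new'd on the heap
};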

Searching for a particular component is as simple as linearly looping over each pointer until a matching type is found. If these pointers are ordered in some way a search can be performed, perhaps a binary search could suffice. If the identifier of a component is hashable a hash table lookup can be used.

The implementation so far is an excellent one except that there is no definition of how memory is allocated and accessed! In the most naive of implementations each game object and each component will be allocated on the heap with separate calls to malloc.

Despite having no clear memory definition there are some nice benefits that have arisen. Data driving the composition of an aggregate becomes quite trivial as each component of an aggregation can have an entirely isolated lifetime. Adding, removing, modifying, or even creating new components at run-time are all now possibilities. This dynamic aggregate architecture is great for improving game development and design iteration time!

Aggregation and Components and the Entity System Paradigm (ES/ECS)

As stated in the definitions section, an ECS is just a specific implementation of a component based architecture. A component based game engine architecture would be a custom implementation of multiple inheritance. A clearly defined ECS can impose restrictions on how a component architecture is implemented and used, in hopes of avoiding poor memory access patterns, or in hopes of keeping code simple and orderly.

If a component is designed as a piece of memory without any code, and a game object defined as an integer ID then performance specifications can be easily imposed. Rules about where in memory components lay, and how components are actually accessed can be clearly defined in simple terms. Code can be written that operates upon arrays of components, transforming arrays linearly. This idea is actually a type of Data Oriented Design (DOD), which makes sense as DOD is just an idea! ECS is an application of the idea of DOD.
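
A tiny sketch of the idea, with made-up component types:

#include <cstddef>
#include <cstdint>
#include <vector>

using EntityId = uint32_t; // a game object is just an ID

struct Position { float x, y; };
struct Velocity { float x, y; };

struct World
{
    // parallel arrays: element i of each array belongs to the same entity
    std::vector<EntityId> entities;
    std::vector<Position> positions;
    std::vector<Velocity> velocities;
};

// transform the arrays linearly; this is the access pattern the cache loves
void IntegratePositions(World& world, float dt)
{
    for (std::size_t i = 0; i < world.positions.size(); ++i)
    {
        world.positions[i].x += world.velocities[i].x * dt;
        world.positions[i].y += world.velocities[i].y * dt;
    }
}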

So with this type of implementation the benefits of dynamic composition can be paired with well-defined memory layout and access patterns. Suddenly prefetching and parallelism become much simpler to support.

Aggregatize all the Things!

There’s a problem. Blindly shoving the idea of an ECS implementation into every nook and cranny of an engine during development is just silly (or any complex system, not just game engines or libraries). Often times a particular system is not best implemented with a component or aggregate paradigm in mind.

An obvious case is that of a physics engine. Often times a physics engine developer is worried about collision detection, solving systems of linear equations, rigid body mechanics and allowing the engine to easily be integrated into existing code bases. These details involve a lot of math and good API design. A developer of a physics engine is going to have their focus employed in full force in solving problems specific to physics engines. This means that the engineer’s focus is finite, so the implementation that is best is one that the engineer can actually bring to completion. An implementation that can come to completion is one that makes sense for the specific details of whatever is going on inside the physics engine. The specific paradigms used are often not aggregation or component based!

In order to run fast a physics engine needs efficient memory access patterns and memory usage, which on modern PC hardware requires some form of DOD. Since this complex (often black boxed) physics engine will have its own specific implementation and optimizations, it doesn’t make sense to force a component based model into its very core with some sort of idealistic zeal. It gets really bad when strict rules are imposed (like banning all code from classes and structures that define components) on the component model (like with an ECS) and the rules start permeating the deep recesses of the entire code base.

The same thing goes for any sort of complex system. The core facilities of a game engine often times just don’t really care about components or aggregation. This means that an engine architecture that implements components will usually have to deal with middleware graphics/physics engines/libraries that don’t subscribe to a component based model (simply because it’s easier to use a library than to write your own custom things, especially if those custom things religiously follow some silly methodology like ECS or even OOP). In practice light wrapper components can be created to let the functionality of such systems be presented in a component format, ready to be used in an aggregate object.

What does this all mean? What should we all do?

Use components where it makes sense in code. Use inheritance where it makes sense in code. Use databases where they make sense. Use all the things where they should. This is a pretty sad answer but it’s the right one. There is no silver bullet paradigm that solves all the problems in the game engine architecture world, and there are no steps to follow to achieve a result that works in all cases. Specific problems require specific solutions. Good code is hard to write, and will require a lot of judgement calls. In order to make good judgement calls a lot of experience and perspective is required.

I recommend using aggregation where it really matters. Dynamic aggregation is important for gameplay specific code. Gameplay specific code, in this article, would refer to code that would not easily apply or work at all in a different game. It’s code that is your game and doesn’t define an isolated system or functionality.

Dynamic aggregation and the component based model are extremely important for game and object editors. Game design flourishes best when iteration times are driven to zero, and the ability to create new things from a composition of fundamentals is very valuable! Clearly composition is useful, but how it’s to be used is the hard part.

What Components to Make?

I recommend making components concerned with providing access to game-independent functionality to be quite large. Every 3D game engine has a concept of a mesh, and will usually have some sort of file format to associate with, like FBX. Every 2D game engine will have the concept of a sprite. Each game using Box2D will have colliders and rigid bodies, and possibly joints. These fundamental pieces of functionality don’t change very often, so static compile-time relationships aren’t a bad thing since iteration time isn’t really all that relevant.

A 3D game might have a single Mesh component for example. A Mesh component can have renderable vertices, and possibly all the skeletal and animation information as well. There may be a single Rigid Body component, which encapsulates the idea of colliders or shapes, as well as the functionality of rigid body mechanics. The Rigid Body component might even contain all necessary code and data to hold multiple joints! Or joints may be a component themselves.

For high level and gameplay related features components can become much more granular (or not if you so choose). Gameplay should be iterated, tested and changed frequently, so having small and decomposed components will probably make a lot of sense in a lot of cases. Large components that encompass more broad ideas will be useful in many cases too. Even in the gameplay world judgement calls are essential.

Usually efficiency isn’t so important for much gameplay code, so any implementation that is decently performant will suffice. Scripting languages, dynamic memory allocation and virtual dispatch, or what have you can all work. The decisions of what requires flexibility, what requires performance and all between can be difficult to make. Please see the references section for some concrete examples.

Further Readings

We live in a world of opinions and it takes time to sift through them! If you have recommendations please comment below :)

Reference Source Code

The best reference I know of is an open source game engine in progress I myself am developing. Please do send me your recommendations!

Deriving OBB to OBB Intersection and Manifold Generation


The Oriented Bounding Box (OBB) is commonly used all over the place in terms of simulation and game logic. Often times, though, OBBs aren’t actually bounding anything and really just represent an implicitly defined box.

OBBs commonly come in 2D and 3D form, though there aren’t all that many resources on the internet describing how to derive the 3D case. The ability to write robust collision detection, and more importantly manifold generation (contact data needed to resolve the collision), relies on a good understanding of linear algebra and geometry.

Readers will need to know about affine and linear transformations, and have intuition about the dot and cross products. To learn these preliminaries I highly recommend the book “Essential Mathematics” by Jim Van Verth. I will also be assuming readers are familiar with the Separating Axis Theorem (SAT). If you aren’t familiar with SAT please do a short preliminary research on this topic.

The fastest test I’m aware of for detecting collision between two OBBs, in either 2 or 3 dimensions, makes use of the Separating Axis Theorem. In both cases it is beneficial to transform one OBB into the space of the other in order to simplify particular calculations.

Usually an OBB can be defined something like this in C++:
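
A typical definition, close in spirit to what the original post likely showed (member names are assumptions):

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; }; // m[row][col]

struct OBB
{
    Vec3 center;   // C, the center of the box
    Mat3 rotation; // R, the box's local axes
    Vec3 extents;  // e, half widths along each local axis
};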

6 Face Axes

As with collision detection between two AABBs, OBBs have x, y and z axes that correspond to three different potential axes of separation. Since an OBB has local oriented axes they will most of the time not line up with the standard Euclidean basis. To simplify the problem of projecting OBB \(A\) onto the axes of OBB \(B\), all computations ought to be done within the space of \(A\). This is just a convention, and the entire derivation could have been done in the space of \(B\), or even in world space (world space would have more complex, and slower, computations).

The translation vector \(t\) is defined as \(t = R_{a}^{T} * (C_b - C_a)\), where \(C_i\) denotes the center of an OBB, and \(R\) denotes the rotation of an OBB. \(t\) points from \(A\)’s center to \(B\)’s center, within the space of \(A\). The relative rotation matrix to transform from B’s frame to A’s frame is defined as \(C = R_{a}^{T} * R_b\). \(C\) can be used to transform \(B\)’s local axes into the frame of \(A\). In \(A\)’s own frame its local axes line up with the standard Euclidean basis, and can be directly computed with the half width extents \(e_a\).

To calculate the separation \(s\) along the local axis \(l\) from an OBB examine the following diagram and matching equation:

[Diagram: projection of the two OBBs onto the axis l]

\begin{equation}
s = |t \cdot l| - (|a \cdot l| + |(C * b) \cdot l|)
\label{eq1}
\end{equation}

Due to symmetry the absolute value of a projection can be used in order to gather a value that can be used to represent a projection along a given direction. In our case we aren’t really interested in the sign of the projection and always want the projection towards the other OBB. I understand that newer programmers may have some trouble translating math into code, so here’s an example of eq.\eqref{eq1} as C++:
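
Reusing the Vec3 and Mat3 from the OBB sketch above, and following the article’s convention that row i of \(C\) is \(B\)’s i-th axis expressed in \(A\)’s frame, the face-axis case for \(l\) = \(A\)’s x axis might look like this (function and parameter names are made up):

#include <cmath>

// t    - vector from A's center to B's center, in A's frame
// eA   - A's half extents, eB - B's half extents
// absC - element-wise absolute value of C = Ra^T * Rb
// Returns a positive value when the boxes do not overlap on this axis.
float SeparationOnAxisAx(const Vec3& t, const Vec3& eA, const Vec3& eB, const Mat3& absC)
{
    float ra = eA.x; // A projected onto its own x axis
    float rb = eB.x * absC.m[0][0]  // B projected onto A's x axis: each of B's axes
             + eB.y * absC.m[1][0]  // contributes its half extent scaled by how much
             + eB.z * absC.m[2][0]; // that axis points along A's x axis
    return std::fabs(t.x) - (ra + rb); // eq. (1) specialized for l = A's x axis
}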

9 Edge Axes

In 2D eq.\eqref{eq1} is all that is needed to detect overlap upon any given possible axis of separation. In 3D a new topological feature arises on the surface of geometry: edges. Because edges form entirely separate Voronoi regions from face surfaces they can also provide possible axes of separation. With OBBs it is fast to test all possible unique edge axes of separation.

Given two edges upon two OBBs in 3D space the vector perpendicular to both would be the possible axis of separation, and can be computed with a cross product. A possible 9 axes of separation arise from crossing each local axis of \(A\) with each local axis of \(B\).

Since \(C\) is composed of a concatenation between \(R_{a}^{T}\) and \(R_b\), the rows of this matrix can be thought of as the local axes of \(B\) in the space of \(A\). When a cross product between two axes is required to test separation, the axis can be pulled directly out of \(C\). This is much faster than computing a cross product in world space between two arbitrary axis vectors.

Let’s derive a cross product between \(A\)’s x axis and \(B\)’s z axis, all within the space of \(A\). In \(A\)’s space \(A\)’s x axis is \(\begin{Bmatrix}1 & 0 & 0\end{Bmatrix}\). In \(A\)’s space \(B\)’s z axis is \(\begin{Bmatrix}C_{20} & C_{21} & C_{22}\end{Bmatrix}\). The axis \(l\) is just the cross product of these two axes. Since \(A\)’s x axis is composed largely of zeroes, elements of \(C\) can be used to directly create the result of the cross product, without actually running a cross product function. We arrive at the conclusion that for our current case \(l = \begin{Bmatrix}0 & -C_{22} & C_{21}\end{Bmatrix}\). This particular \(l\) axis can be used in eq.\eqref{eq1} to solve for overlap.

Since the matrix \(C\) is used to derive cross product results the sign of the elements will be dependent on the orientation of the two OBBs. Additionally, the cross product will negate one of the terms in the final axis. Since eq.\eqref{eq1} requires only absolute values it may be a good idea to precompute abs(\(C\)), which takes the absolute value element by element. This computation can be interleaved between the different axis checks for the first 6 checks, only computed as needed. This may let an early out prevent needless computation.

The same idea can be used to derive this axis in the space of \(B\) by crossing \(B\)’s local z axis with \(A\)’s local x axis transformed to the space of \(B\). To do this it may be easy to use \(C^T\), which represents \(R_{b}^{T} * R_a\). More information about this can be found in my previous article about the properties of transposing a matrix concatenation.

Eventually all axes can be derived on paper and translated into code.

Manifold Generation

A collision manifold would be a small bit of information about the nature of a collision. This information is usually used to resolve the collision during physical simulation. Once an axis of least penetration is found the manifold must be generated.

For SAT manifold generation is quite straight forward. The entirety of the details are expressed very well by Dirk Gregorius’ great online tutorial from GDC 2013. The slides are openly available for free, and currently hosted here generously by Erin Catto. Readers are encouraged to familiarize themselves with the idea of reference and incident faces, as well as the idea of reference face side planes. For a 2D analogue I have provided my own slides on this topic.

There are only two different cases of collision that should be handled: edge to edge and something to face. Vertex to edge, vertex to face, and vertex to vertex resolutions are ill-defined. On top of this numerical imprecision makes detecting such cases difficult. It makes more sense in every way to skip these three cases and stick to the original two.

Something to Face

Something has hit the face of one of the OBBs. It can be anywhere from 1 to 4 vertices from the face of the opposing OBB. The entire intersecting volume between two OBBs does not need to be considered due to the nature of SAT and the axis of minimum penetration. Only two faces from each OBB need be considered for this case, and are by convention named the reference and incident faces, one from each OBB.

Finding the incident face given an axis of separation can be done in constant time with implicit geometry like an OBB; the problem reduces to finding the face normal that most faces a given vector, or in other words the one with the most negative dot product.
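
A sketch of that search, reusing the Vec3 from earlier (the face encoding is an arbitrary choice for the example):

#include <cfloat>

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns which of the six candidate face normals (plus or minus each local axis)
// most opposes the reference normal. Face i encodes +axes[i / 2] when i is even
// and -axes[i / 2] when i is odd.
int FindIncidentFace(const Vec3 axes[3], const Vec3& referenceNormal)
{
    int best = 0;
    float bestDot = FLT_MAX; // searching for the most negative dot product

    for (int i = 0; i < 3; ++i)
    {
        float d = Dot(axes[i], referenceNormal);

        if (d < bestDot)  { bestDot = d;  best = 2 * i;     } // +axes[i]
        if (-d < bestDot) { bestDot = -d; best = 2 * i + 1; } // -axes[i]
    }

    return best;
}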

Once the incident face is identified it is clipped to the side planes of the reference face. There may be some tricks involving basis transformations here to simplify the clipping routine. In order to perform clipping the incident face vertices must be calculated. Given a face normal from an OBB all four face vertices can be computed with the normal and the half width extents of the OBB. In similar fashion the side planes of the reference face can also be directly computed.

Edge to Edge

The edge to edge case requires a clever constant-time computation of a supporting edge. A supporting edge is the edge most pointed to by a given axis of separation. The computation of this edge may be tricky to derive yourself, though there do exist online references. I suggest checking out Open Dynamics Engine (ODE) by Russell Smith, or the Cyclone Engine by Ian Millington for information on how to compute a supporting edge.

Once both supporting edges are calculated the closest two points between each edge can be calculated and used as a single contact point of which to resolve upon.

Numeric Stability

In all uses of OBB to OBB intersection cross products with nearly parallel vectors pose a problem. The best solution is to detect when any two axes from each OBB are parallel and skip all nine cross product axes. It can be visually shown that no false positives will come from skipping the cross product axes in this case.

A simpler solution, as proposed by Gottschalk ’00 in his paper “Collision Queries using Oriented Bounding Boxes”, which made its way into Ericson’s book “Real-Time Collision Detection”, is to bias all elements of \(C\) with a small epsilon to drive near-zero axes away from a degenerate case. For other axes the epsilon value should be tuned such that it doesn’t have much an impact upon results.

If using this collision routine to resolve collision during physical simulation certain tuning may be appropriate. If an iterative resolver is used it may be preferred to have slightly more consistent manifold generation, as opposed to exact manifold generation. For details I suggest researching works by Erin Catto and comments made by others around the Bullet Forums.

AABB to OBB

Hopefully by now readers realize that this whole time we’ve been simplifying the problem of OBB to OBB to the problem of AABB to OBB. When an actual AABB needs to be collided against an OBB simplifications can occur. Many simplifications revolve around the non-necessity to transform things into the basis of the AABB, since world coordinates should suffice.

One Axis Aligned (or Two?)

Some games have OBBs that only orient on some of their axes instead of all of them. One can immediately realize that no cross product checks need to be performed in any case, if two OBBs have orientation locked on the same axis. Other optimizations will occur as well, making some axis tests reduced to that of AABB overlap testing.

Conclusion

Collision detection is difficult. Writing robust collision detection code requires a good mathematical foundation as well as the ability to write efficient code. This slight mixture of fields of study may be difficult to learn all at once, but it all gets easier with time and practice and the rewards can be very fulfilling.

I’d like to thank my friend Nathan Carlson for teaching much of this information to me, and for his insightful discussions.

Simple and Efficient Singleton Pattern

For game engines the singleton pattern is pretty commonly used for various global accesses, like for the core engine or for specific systems. Often times things like texture managers, the graphics implementation or a physical simulator are represented as a singular entity that can be accessed globally.

A singleton pattern can be used to help facilitate and assert the existence of only one of these systems at any given time. This is important for a lot of code that is created under the assumption that only one given instance will be alive at once.

Advantages of Singletons

The biggest reason singletons are used is for code clarity. Any programmer that realizes something is a singleton is instantly informed of how it should be used. Beyond conceptual aids a singleton can also be an efficient means of allowing global access of an object. Often times in games everything is owned by something, otherwise referred to as the “Ownership Pattern”.

If a game consists purely of things owning other things (except for the core Game or Engine object), retrieving different systems from global access might be hard. This is because the Engine would be the only global thing. Code like this:
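
(The original snippet is not preserved; the following is a made-up approximation of the shape of it.)

Texture* texture =
    GetEngine()->GetGraphics()->GetTextureManager()->GetTexture("player.png");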

Such a chain is long and annoying and also inefficient; there are many unnecessary levels of indirection. Instead a singleton can be used to solve such a problem.

A Traditional Approach

The traditional approach to creating a singleton is to utilize some code like so:
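
The original listing is gone; one common form of the traditional approach (a function-local static, the so-called Meyers singleton) looks like this:

class Graphics
{
public:
    static Graphics& Get()
    {
        static Graphics instance; // constructed on first use, destructed during static teardown
        return instance;
    }

private:
    Graphics() { }  // private construction and destruction keep outsiders
    ~Graphics() { } // from creating or deleting additional instances
};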

This approach does in fact ensure that only a single instance of a given class is alive at any given time, and can be especially effective if the constructor and destructor are declared in the private section.

Drawbacks

There is a pretty big drawback to this traditional style: construction and destruction order. C++ makes no guarantee about the construction and destruction order of objects on global (file) scope. This means that the code run for each destructor of every singleton instance (despite construction time) can run in any order.

This poses a huge problem for systems that depend on one another. What if the TextureManager class contained a pointer to the Graphics class? What if the destructor of the TextureManager tries to access the Graphics pointer and the Graphics singleton has already destructed?

Such an issue does have workaround solutions, but it might be best to have some form of singleton that controls construction and destruction explicitly.

A Better Singleton

Here’s a simple way to implement a singleton and allow explicit construction and destruction:
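
A sketch of such a singleton (the class name is made up; the assert matches the one described below):

#include <cassert>

class Graphics
{
public:
    Graphics()
    {
        assert(s_instance == nullptr); // compiled away in release builds
        s_instance = this;
    }

    ~Graphics()
    {
        s_instance = nullptr;
    }

    static Graphics* Get() { return s_instance; }

private:
    static Graphics* s_instance;
};

Graphics* Graphics::s_instance = nullptr; // lives in the matching cpp file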

This singleton is actually quite safe due to the simple assertion in the constructor. The nice thing about this assertion is that it can be compiled away during release builds. This might be considered an advantage against the traditional approach, as the traditional approach will often times have a boolean flag to test for previous construction (in the assembly of the compiled C++).

Since it is a slight inconvenience to add the instance handling to every class you wish to be a singleton, the use of a template mixin can help. Making such a utility can be tricky due to multiple inheritance. I will leave solving the multiple inheritance issue as an exercise for the reader.

I myself use this sort of singleton (I don’t even have a utility) and enjoy it. I actually don’t even use a Get function but just have a global extern’d pointer in my header.

I’ve also seen this exact implementation in a few areas, one of which is in an article by Scott Bilas in the first Game Programming Gems book (he covers the multiple inheritance issue).

Automated Lua Binding


Welcome to the fifth post in a series of blog posts about how to implement a custom game engine in C++. As reference I’ll be using my own open source game engine SEL. Please refer to its source code for implementation details not covered in this article. The folder of interest would be LuaInterface.


Introduction

Binding things to Lua is twofold: objects and functions must be able to be sent to and retrieved from Lua. Functions can be either static C or struct/class methods. Objects can be sent “by value” or “by reference”. As you can imagine it is important to be able to unify and simplify the binding process as much as possible to reduce all manual dev-work and upkeep.

Generic C++ Functor

As with many things in a modern C++ game engine it is critical to have a generic C++ functor. Ideally this functor can wrap around class/struct methods (not only static functions). It is also possible to have this functor refer to a function within Lua as well.

Please see my article and slides on C++ Function Binding for implementation details not covered here.

Prerequisites

This article is on the topic of automatic Lua binding; if you’re unfamiliar with how to bind simple C functions to Lua please do a little research and come back later. The deep end of the pool is actually pretty deep!

I also suggest a working knowledge of C++ templates before trying to implement these sort of features. A working knowledge of Lua is also essential.

Setting the Boundaries

With a scripting language it’s important to clearly define what you want to expose to script. Is the entire game in Lua? Are only specific parts accessible? What are the boundaries? It’s all too easy to get very caught up in what to send, what to implement, and what not to do. Having clear boundaries of exactly what you want to do is the best way to start coding.

Passing Objects to Lua

Objects can be passed to Lua by reference and by value. A reference would consist of a pointer (4 bytes on a 32-bit build) to some C++ memory. This allows Lua to store a “reference” to an object in C++. Most of the work involved in this type of object binding is in allowing Lua to call C++ methods or functions on the pointer it’s storing.

The benefits of this approach are such that: calling class methods is pretty fast and shouldn’t be a worry; it is fairly simple to implement as most of the work is finished by creating a generic functor in C++; and there is no hassle or upkeep when wanting to send new types of objects to Lua, since each object is just a single pointer.

Passing by Reference with lightuserdata

There are two ways I’d recommend for passing an object to Lua: userdata and lightuserdata. A lightuserdata represents a void * in Lua and can hold a reference to an object in C++.

Here’s how one might send and retrieve lightuserdata from Lua:
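
A sketch of the push/retrieve pair (the surrounding function names are made up):

#include <lua.hpp>

void PushObject(lua_State* L, void* object)
{
    lua_pushlightuserdata(L, object); // stores just the raw pointer
}

void* GetObject(lua_State* L, int index)
{
    return lua_touserdata(L, index);  // hands the raw pointer back, with no type checking at all
}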

This method is very fast, simple to implement and has very minimal memory overhead. Additionally lightuserdata can be compared to one another, and are equal if the underlying address is equivalent. However, one cannot attach metatables to lightuserdata and there is no sense of type safety whatsoever. A lack of type safety means that if someone passes a lightuserdata into an incorrect C function the host program will likely crash.

With lightuserdata the following code is possible:

This solution will work for one, maybe two people working on a smaller project or minimal amount of code. I can imagine that the lack of type safety will be the biggest issue as time goes on.

Reflection for Type Safety

It is possible to implement type-safety in Lua. However this requires Lua code to be maintaining type information. Lua is a scripting language meaning it ought best be used to script things. Something so integral and common as type-safety might better be implemented in lower-level C++ code.

Implementing type safety on the C++ side has two benefits: efficiency of implementation, and the option to compile the type-safety away in release mode.

I highly recommend building yourself a simple, custom introspection library in C++. All that is really needed to start is the ability to query a type’s size and name efficiently. Please see my older article on custom Introspection or the game engine SEL for examples on how to implement such a system.

With a simple macro-based registration system one can register and lookup type information via introspection like so:
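
The original snippet is not preserved; a purely hypothetical sketch of a macro-based registration and lookup (the real SEL API differs) might be:

#include <cstddef>

struct TypeInfo
{
    const char* name;
    std::size_t size;
};

// one TypeInfo per registered type, reachable through a template function
template <typename T>
const TypeInfo* GetTypeInfo();

#define REGISTER_TYPE(T)                          \
    template <>                                   \
    const TypeInfo* GetTypeInfo<T>()              \
    {                                             \
        static TypeInfo info = { #T, sizeof(T) }; \
        return &info;                             \
    }

REGISTER_TYPE(int)
REGISTER_TYPE(float)

// lookup: GetTypeInfo<int>()->name is "int", GetTypeInfo<int>()->size is sizeof(int)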

After this is complete and working (if you don’t have an implementation of introspection yet this is fine, just think of it as a black box) a small generic Variable object ought to be created. Sample code of a functional Variable object is in this post.

A Variable can be used like so:
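
Building on the hypothetical TypeInfo sketch above, usage might look like this (the real Variable in SEL may differ):

struct Variable
{
    void* data;
    const TypeInfo* type;

    template <typename T>
    Variable(T* ptr) : data(ptr), type(GetTypeInfo<T>()) { } // templated constructor,
};                                                           // non-templated class

// usage:
//   int health = 100;
//   Variable var(&health);
//   var.type->name is "int"; var.data points at health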

It is important to note that the Variable itself is not a templated type!

When passing an object to Lua we can send a pointer to a Variable. As long as the Variable exists in memory in C++ the lightuserdata within Lua will point to a valid Variable. Upon retrieval of the Lua object back to C++ a type assertion can be run:
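
A fragment of what that assertion might look like, again building on the hypothetical Variable and TypeInfo from above (Player is a made-up type):

#include <cassert>
#include <lua.hpp>

struct Player { int health; };
REGISTER_TYPE(Player)

Player* GetPlayerArgument(lua_State* L, int index)
{
    Variable* var = (Variable*)lua_touserdata(L, index);
    assert(var != nullptr && var->type == GetTypeInfo<Player>()); // fail loudly in debug builds
    return (Player*)var->data;
}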

Generic Static Function Binding

Binding C-style static functions in a generic way makes heavy use of custom introspection. The way I was originally taught was to just throw the entire binding function (in C++) at you all at once and let you suffer. Prepare to suffer as I did!
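
The full function from the original post is long; a heavily simplified sketch of its core mechanism (GenericFunctor is an assumed interface) is shown here:

#include <lua.hpp>

// Assumed interface: knows how to pull its arguments off the Lua stack, call the
// wrapped C/C++ function, and push any return values back.
struct GenericFunctor
{
    virtual ~GenericFunctor() { }
    virtual int Call(lua_State* L) = 0;
};

// This is what actually gets bound into Lua. Every C++ function called from Lua
// passes through here first.
static int LuaTrampoline(lua_State* L)
{
    GenericFunctor* fn = (GenericFunctor*)lua_touserdata(L, lua_upvalueindex(1));
    return fn->Call(L); // number of return values pushed onto the Lua stack
}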

This function isn’t doing the bind, it’s what is bound. Every time a function in C++ is called from Lua, this function is called first.

An upvalue in Lua is akin to static variables in C. Using this we can attach a pointer to a generic functor to a bound C function within Lua. As Lua calls a C function this upvalue is retrieved and eventually used to actually call the C function.

The rest is just a matter of handling variables to/from Lua. In the above example the Variable object contains some helper functions called ToLua and FromLua. The nice thing about my implementation of this within SEL is that no heap memory is used during this entire process! All this code boils down to a very efficient method of generically calling C functions.

I will leave binding C++ methods as an exercise for the reader. By now you ought to have an idea of where to look for an example implementation! The idea is to handle type information for the “this pointer” of the method, and pass around an actual “this pointer” to call the method.

Calling Methods from Lua

Lets say you have an implementation that allows Lua code like the following:

A few things need to happen here. The first is that the object in question should only call methods that are actually methods of that specific type of class; one cannot simply bind all C++ methods and place functions in Lua within the global scope. Any object type could call any method type, making for a lack of type-safety and dangerous code.

At this point the lightuserdata will have to be upgraded to a full userdata. Full userdata in Lua enjoy benefits such as the ability to set and modify metatables. If you’re not familiar with Lua metatables please do a little research on the topic and come back later.

A full userdata allows us to place a copy of a Variable within Lua memory, instead of just a void *. This means a temporary Variable can be used to call ToLua, instead of requiring that the Variable sent stays valid in C++ for the duration of usage within Lua.

Currently a way to create metatables for all of our C++ types is required. Assuming a linked list of all TypeInfo objects from the introspection system is available:
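
A sketch of that loop, assuming the hypothetical TypeInfo from earlier grew a next pointer linking all registered types (an assumption for this example):

#include <lua.hpp>

void CreateMetatables(lua_State* L, const TypeInfo* head)
{
    for (const TypeInfo* type = head; type != nullptr; type = type->next)
    {
        luaL_newmetatable(L, type->name); // registry[type->name] = a fresh metatable
        lua_pop(L, 1);                    // luaL_newmetatable leaves the table on the stack
    }
}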

This loop is just creating metatables given the string names of what each metatable should be called.

After the tables are created the actual C++ methods and functions should be bound. This turns out to be really simple! It is assumed that each function and method registered within the introspection system can be passed to the function at some point (perhaps during registration of the type information):
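
A sketch of the binding step, reusing the LuaTrampoline and GenericFunctor sketched earlier (all names are assumptions):

#include <lua.hpp>

void BindMethod(lua_State* L, const char* typeName, const char* methodName, GenericFunctor* fn)
{
    luaL_getmetatable(L, typeName);    // push registry[typeName]

    lua_pushlightuserdata(L, fn);      // the functor rides along as an upvalue
    lua_pushcclosure(L, LuaTrampoline, 1);
    lua_setfield(L, -2, methodName);   // metatable[methodName] = closure

    lua_pushvalue(L, -1);              // metatable.__index = metatable, so userdata
    lua_setfield(L, -2, "__index");    // lookups fall through to the bound methods

    lua_pop(L, 1);                     // leave the stack balanced
}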

And that’s really all there is to it! The idea here is to make sure that a type with methods sent to Lua has its userdata fixed with a metatable containing the available methods to call. When the __index metamethod is called it will search within the metatable itself for an appropriate member. Members of the metatable are the functions we bound to Lua. After they are fetched they can be called. This is what happens behind the scenes when we do:

Passing Object by Value

Passing objects by value is actually much more difficult. The idea is to utilize tables to store representations of the members associated with a class or struct. A table can be used to represent the state of an object.

The __index and __newindex metamethods of a userdata should be set to look into the state table first. This lets users assign new values, and lets your ToLua and FromLua functions copy members from C++ to/from this Lua state table.

If a member is not found in the state table the metatable can then be searched by setting the __index metamethod of the state table to refer to the proper metatable.

All of this table indirection does incur significant overhead, however it allows objects in Lua to be used like so:

I myself have not implemented this type of Lua binding, though it is entirely possible and can be quite nice to work with. I reiterate that adding this many tables incurs both memory and performance overhead not seen with the other styles. This seems to be the only drawback.

Conclusion

Well this post turned out longer than I expected, at over 2k words! Hopefully the information was clear. It’s really nice being able to refer people to a complete and working example such as the SEL engine; it makes writing articles much easier and simpler.

Hopefully this can help someone out there! As always feel free to ask questions or provide comments right here on this page.

Please see Game Programming Gems 6 ch. 4.2 for more information about binding C++ objects to Lua.

Simple Sprite Batching


Welcome to the fourth post in a series of blog posts about how to implement a custom game engine in C++. As reference I’ll be using my own open source game engine SEL. Please refer to its source code for implementation details not covered in this article. Files of interest are Graphics.cpp and Graphics.h.


Batching and Batches

Did I say “simple” sprite batching? I meant dead simple!

Modern graphics card drivers (except Mantle???) do a lot of stuff, and it makes for passing information over to the GPU (or retrieving it) really slow on the PC platform. Apparently this is more of a non-issue on consoles, but meh we’re not all working with consoles now are we?

The solution: only send data one time. It’s latency that kills performance and not so much the amount of data. This means that we’ll do better at utilizing a GPU if we send a lot of data all at once as opposed to sending many smaller chunks.

This is where “batching” comes in. A batch can be thought of as a function call. In OpenGL you’ll see something like DrawArrays, and in DirectX something else. These types of functions send chunks of data to the GPU in a “batch”.

Sprites and 2D

Luckily it’s really easy to draw sprites in 2D: you can use a static quad buffer and instance by sending transforms to the GPU, or you can precompute transformed quads on the CPU and pass along a large vertex buffer, or anything in-between.

However computing the batches is slightly trickier. For now let’s assume we have sprites with different textures and different zOrders.

Computing Batches

In order to send a batch to the GPU we must only draw sprites with the same texture. This is because we can only render instances (or a large vertex array) with a given texture in order to lower draw calls. So we must gather up all sprites with the same texture and render in the correct order according to their zOrders.

If you are able to store your sprites in pod-structures then you’ll be in luck: you can in-place sort a large array of pods really easily using std::sort. If not, then you can at least make an array of pointers or handles and sort those. You’ll have extra indirection, but so be it.

Using std::sort requires STL compatible iterators, and you’ll want one with random access (array index-style access). Here’s an example with a std::vector:
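
A sketch of that sort (the Sprite fields are placeholders):

#include <algorithm>
#include <vector>

struct Sprite
{
    int zOrder;
    unsigned textureId;
    float x, y;
};

void SortSprites(std::vector<Sprite>& sprites)
{
    std::sort(sprites.begin(), sprites.end(),
        [](const Sprite& a, const Sprite& b)
        {
            if (a.zOrder != b.zOrder)
                return a.zOrder < b.zOrder;   // lower zOrder draws first
            return a.textureId < b.textureId; // then group matching textures together
        });
}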

The sort implementation within your package of the STL is most likely an introsort, which is a quicksort variant.

This sort will sort by zOrder first, and if zOrders are matching then sort by texture. This packs a lot of the sprites with similar textures next to each other in the draw list.

From here it’s just a matter of looping over the sprite array and finding the beginning/end of each segment with the same texture.

Optimizations

A few simple operations can be done here to ensure that computing batches goes as fast as possible. The first is to get all of your sprites together in a single array and sort the array in-place. This is easily done if your sprites are mere pods. This ensures very high locality of reference when transforming the sprite array.

The second optimization is to only sort the sprite array when it has been modified. If a new sprite is added to the list you can sort the whole thing. However there is no need to sort the draw list (sprite array) every single frame if no changes are detected.

Conclusion

Like I said, sprite batching is super simple in 2D. It can get much more complex if you add in texture atlasing into the mix. If you wish to see an OpenGL example please see the SEL source code.

I was able to render well over 8k dynamic sprites on-screen in my own preliminary tests. I believe I actually ended up being fill-rate bound instead of anything else. This is much more than necessary for any game I plan on creating.

C++ Reflection Part 6: Lua Binding

Binding C/C++ functions to Lua is a tedious, error prone and time consuming task when done by hand. A custom C++ introspection system can aid in automating the binding, making the exposure of any callable C or C++ function or method a breeze. Once such a functor-like object exists the act of binding a function to Lua can look like this, as seen in a CPP file:
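
The exact macro in SEL is not reproduced here; a purely hypothetical registration site could read:

// hypothetical binding macros and names; the real SEL API differs
LUA_BIND_FUNCTION(lua, "PlaySound", PlaySound);
LUA_BIND_FUNCTION(lua, "SpawnEnemy", SpawnEnemy);
LUA_BIND_METHOD(lua, Player, Damage);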

The advantage of such a scheme is that only a single CPP file would need to be modified in order to expose new functionality to Lua, allowing for efficient pipelining of development cycles.

Another advantage of this powerful functor is that communication and game logic can quickly be created in a script, loaded from a text file, or even set up through a visual editor. Here is a quick example of what might be possible with a good scripting language:

In the above example a simple enemy is supposed to follow some target object. If the target is close enough then the enemy damages it. If the target dies, the enemy flashes a bright color and then acquires a new target.

The key here is the message subscription within the initialization routine. During run-time objects can subscribe to know about messages emitted by any other object!

By now you have hopefully seen enough explanation of function binding to understand how powerful it is. I've written some slides on the topic, available in PDF format here (do note that these slides were originally made for a lecture at my university):

Download (PDF)

Templates and Metaprogramming

Recently I’ve begun to take over a club called Game Engine Architecture Club at DigiPen. The founder of the club graduated and asked me to try running it. I’ve found it be to a lot of fun! The first lecture I did was on template metaprogramming in C++ (TMP).

The lecture was recorded live and is currently up on DigiPen’s youtube channel for public viewing. Though the slides are visible during the video, I don’t think I can share the slides themselves publicly.

Hope you enjoy the video!

Component Based Engine Design

What is Component Based Design?

Component based engine design was originally pioneered in order to avoid the annoying class hierarchies that deep inheritance introduces. The idea is to package all functionality of game objects into separate components. A single game object is just a collection of components, and so the components of a game object define its behavior, appearance and functionality. This is all perfectly fine, and there are plenty of resources out there that talk about this topic. However I'd like to take a step back and start from the top.

It should be noted that the implementation presented here is just one way of going about things, and comes directly from my highly subjective opinion. Perhaps you as a reader can come up with or know of better solutions or designs.

Here is an example game object; note that the game object is generic and simply contains some components:
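A minimal sketch of such a generic object, assuming a Component base class (all names here are illustrative):

#include <vector>

class Component; // base class for every component type (assumed)

// A game object is nothing more than an id and a bag of components.
class GameObject
{
public:
  void AddComponent( Component* component ) { m_components.push_back( component ); }

private:
  unsigned m_id = 0;                    // unique identifier
  std::vector<Component*> m_components; // whatever this object is made of
};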


The Actual Engine

The engine of a game can be thought of as a manager of systems. As to what a system is, we'll get to that later; for now think of a system as Physics, Graphics or AI. The engine ought to have a main loop function, as well as an update function. The update function calls update on all contained systems in a specific order. The main loop function is just a small infinite loop that calls update.

Oftentimes the main loop will deal with timestepping itself. Have a look at the linked article to learn more about proper timestepping.

It is important to have your engine expose the update function, as sometimes your engine will need to be compiled as a static library and linked to externally. In this case the main loop of your simulation may reside outside of the engine library altogether. A common usage I've seen for this sort of design choice is when creating an editor of some sort, perhaps a level or content editor. Oftentimes these editors will have a viewport to preview the game, and in order to do so access to some sort of engine update function is necessary.

Beyond containing and calling update, the Engine also forwards global messages to all the systems. More on messaging later.
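A bare-bones sketch, assuming a System base class that exposes Update and SendMessage (names are illustrative, not an existing API):

#include <vector>

class Message; // generic message type; see the messaging section below

class System
{
public:
  virtual ~System( ) { }
  virtual void Update( float dt ) = 0;
  virtual void SendMessage( Message* msg ) { (void)msg; } // ignore by default
};

class Engine
{
public:
  void AddSystem( System* system ) { m_systems.push_back( system ); }

  // Update every system in the order it was added.
  void Update( float dt )
  {
    for ( System* system : m_systems )
      system->Update( dt );
  }

  // Forward a global message to every system.
  void SendMessage( Message* msg )
  {
    for ( System* system : m_systems )
      system->SendMessage( msg );
  }

  // Tiny main loop; a real one would do proper fixed timestepping.
  void MainLoop( )
  {
    const float dt = 1.0f / 60.0f;
    while ( m_running )
      Update( dt );
  }

private:
  std::vector<System*> m_systems;
  bool m_running = true;
};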


Singletons?

Creating more than one instance of an engine should never happen. The question of “should I make this a singleton” will sometimes arise. In my experience the answer is often no. Unless you're working on a large team of programmers where the chances of some moron instantiating an extra Engine or System class are real, making things a singleton is just a waste of time, especially if retrieving data from that singleton incurs a little bit of overhead.


Systems

Each system in the engine corresponds to one type of functionality. This idea is best shown by example, so here are the various systems I usually have in engines I work on:

  • Graphics
  • Physics
  • GameLogic
  • Windowing/Input
  • UI
  • Audio
  • ObjectFactory – Creates objects and components from string or integral ID

Each system’s primary functionality is to operate upon game objects. You can think of a system as a transform function: data is input, modified somehow, and then data is output. The data passed to each system should be a list of game objects (or of components). However a system only updates components on game objects, and the components to be updated are the ones related to that system. For example the graphics system would only update sprite or graphics related components.

Here’s an example header file for a system:

The naive approach to a system update would be to pass a list of game objects like so:
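Whether the object list is passed in as a parameter or fetched through a pointer to the factory, the naive update boils down to something like this (m_factory and GetObjects are assumed names):

void GraphicsSystem::Update( float dt )
{
  // m_factory is an assumed ObjectFactory* member that owns every game object.
  std::vector<GameObject*>& objects = m_factory->GetObjects( );

  // Touch every single game object in existence, every frame.
  for ( GameObject* object : objects )
  {
    // ... look up and update this object's graphics-related components,
    // shown in more detail in the next section.
  }
}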

The above code is assuming the ObjectFactory contains all game objects, and can be accessed somehow (perhaps by pointer). This code will work, and it’s exactly what I started with when I wrote my first engine. However you’ll soon realize the folly involved here.


Cache is King

That’s right, those who have the gold make the rules; the golden rule. In a more serious sense, cache is king because of how fast processors are relative to memory access. The bottleneck in every engine I have seen or touched in the past couple of years has been due to poor memory access patterns; not a single serious bottleneck was due to computation.

Modern hardware computes very, very fast; reaching for data in memory (RAM, not just the hard disk) is orders of magnitude slower. So caches come to the rescue. A cache can be thought of, in a simplified sense, as a small chunk of memory right next to the CPU (or GPU). Accessing cache memory is far faster than going all the way out to main RAM. Whenever memory is fetched from RAM, the memory around that location is also placed into the CPU cache. The idea is that when you retrieve something from RAM, the likelihood of needing something nearby soon after is high, so all the data in that area is grabbed at once.

Long story short, if you place things that need to be accessed at around the same time next to each other in memory, huge performance benefits will be reaped. The best performance comes from traversing memory linearly, as if iterating over an array. This means that if we can stick things into arrays and traverse these arrays linearly, there will be no faster form of memory access.

Fetching memory that does not exist in the cache is called a cache miss.


Cache and Components

Let's revisit the naive approach to updating systems. Assuming a system is handed a generic game object, that system must then retrieve its corresponding component(s) to update, like so:
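Spelled out, the same naive update looks roughly like this, with GetComponent standing in for whatever lookup (probably a map) the game object provides; TransformComponent, SpriteComponent and DrawSprite are assumed names:

void GraphicsSystem::Update( float dt )
{
  for ( GameObject* object : m_factory->GetObjects( ) )
  {
    // Pull out only the components this system cares about; each lookup
    // likely chases a pointer to a heap allocation made with new.
    TransformComponent* transform = object->GetComponent<TransformComponent>( );
    SpriteComponent* sprite = object->GetComponent<SpriteComponent>( );

    // Objects without graphics components are simply skipped.
    if ( transform && sprite )
      DrawSprite( *transform, *sprite, dt ); // assumed draw routine
  }
}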

As this loop is run the memory of every game object and every component type that corresponds to the system is touched. A cache miss will likely be incurred over and over as the loop runs bouncing around in memory. This is even worse if the ObjectFactory is just creating random objects with new, as every memory access to every object and every component will likely incur cache misses.

What do? The solution to all these memory access problems is to simplify the data into arrays.


Arrays of GameObjects + Components

I suggest having every game object exist within a single giant array. This array should probably be contained within the ObjectFactory. Usage of std::vector for such a task is recommended. This keeps game objects together in memory, and even though deletion of a game object is an O(n) operation, that O(n) operation traverses an array, and usually turns out to be unnoticeable. A custom vector or array class can be created that avoids the O(n) operation entirely by taking the element at the end of the array and placing it into the deleted slot. This can only be done if references into the array are translated through handles (more on handles momentarily).
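The swap trick itself is tiny; a sketch, only safe when outside references go through handles that can be patched up afterwards:

#include <cstddef>
#include <vector>

// O(1) removal: move the last element into the hole, then shrink the array.
// Note that the moved element's index changes, so its handle must be updated.
template <typename T>
void SwapAndPop( std::vector<T>& array, size_t index )
{
  array[ index ] = array.back( );
  array.pop_back( );
}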

Every component type should be in a giant array too. Each component array should be stored within its respective system (but can be “created” from the Factory). Again, an array-like data structure would be ideal.

This simple setup allows for linear traversal of most memory in the entire engine, so long as the update function of each system is redesigned slightly. Instead of handing a list of game objects to each system, the system can just iterate over its related components directly, since the components are stored within the systems already.
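After this reshuffle a system update can collapse into a tight loop over its own array; a sketch, assuming the graphics system owns a std::vector of sprite components and an assumed DrawSprite routine:

void GraphicsSystem::Update( float dt )
{
  // m_sprites is a std::vector<SpriteComponent> owned by this system, so this
  // loop walks one contiguous block of memory from start to finish.
  for ( SpriteComponent& sprite : m_sprites )
    DrawSprite( sprite, dt );
}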


So, how are Game Objects “handled” now?

Since components have been moved into large arrays, and the game objects themselves are in a big array, what exactly should the relationship between a game object and a component be? In the naive implementation some sort of map would have worked perfectly fine, as the memory location of each component could be anywhere due to the use of new calls. Now, however, the relationship isn't so carefree.

Since things are stored in arrays it's time to switch from pointer-centric relationships to handle-based relationships. A handle can be thought of, in its simplest form, as an index into an array. Since game objects and components are stored in arrays, it is only natural to access a game object by index. This allows these giant arrays to grow and shrink as necessary without leaving dangling pointers throughout the rest of the program.

Here’s a code example:
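A sketch of what the game object might look like now (the component type enum and member names are purely illustrative):

#include <cstdint>

typedef int32_t Handle;       // in its simplest form, an index into an array
const Handle NullHandle = -1; // sentinel meaning "no component"

// One slot per component type; the enum doubles as the slot index.
enum ComponentType
{
  CT_Transform,
  CT_Sprite,
  CT_RigidBody,
  CT_Count
};

class GameObject
{
public:
  GameObject( )
  {
    for ( int i = 0; i < CT_Count; ++i )
      m_components[ i ] = NullHandle;
  }

  Handle GetComponent( ComponentType type ) const { return m_components[ type ]; }
  void SetComponent( ComponentType type, Handle handle ) { m_components[ type ] = handle; }

private:
  Handle m_components[ CT_Count ]; // each handle indexes into a system's component array
};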

As you can see, an array of handles is stored to represent the containment of components. There is one slot in the array for each type of component. By design this limits each component to be of unique type within a game object. Each handle is an index into a large array of components. This index is used to retrieve components that correspond to a game object. A special value (perhaps -1, or by some other mechanism) can be used to denote “no component”.

Handles can get quite a bit more versatile than just a plain ol’ integer. I myself created a HandleManager for translating an integer into a pointer. Here’s a great resource for creating your own handle manager.

The idea behind translating a handle into a pointer is that the pointer is never kept around: use it, then let it be reclaimed off the stack. Every time a pointer is required, the handle is translated again into a single pointer stored somewhere in memory. This constant re-translation allows the pointer being translated to be swapped for another pointer at any time, without fear of leaving dangling pointers elsewhere in the program.
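A very small sketch of the translation idea (the linked resource adds a generation counter to catch stale handles; this version is deliberately stripped down, and the names are illustrative):

#include <cstdint>
#include <vector>

class HandleManager
{
public:
  // Register a pointer and hand back a handle (here just an index into the table).
  int32_t Add( void* pointer )
  {
    m_pointers.push_back( pointer );
    return static_cast<int32_t>( m_pointers.size( ) ) - 1;
  }

  // Call whenever the underlying object moves (e.g. after a swap-and-pop).
  void Update( int32_t handle, void* newPointer ) { m_pointers[ handle ] = newPointer; }

  // Translate a handle into a pointer; do this every time, never cache the result.
  void* Get( int32_t handle ) const { return m_pointers[ handle ]; }

private:
  std::vector<void*> m_pointers;
};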


Where does the Code go?

Code for update routines can be put into either systems or components. The choice is yours entirely. A more data oriented approach would put as much code into systems as possible, and just use components as buckets of data. I myself prefer this approach. However once you hit game logic components, virtual functionality is likely to be desired, and so code will likely be attached directly to such components.

The last engine I built used the naive approach to component based design, and it worked wonderfully. I used a block allocator so cache misses weren’t as high as with raw new calls.

The point is, do what makes most sense and keep things simple. If you want to store routines within your components and use virtual function calls, then you’ll probably have trouble storing things in an array unless you place all memory in the base class. If you can externalize as much code from your components as possible, it may be simpler to keep all your components in an array.

There is a tradeoff between flexibility and efficiency. My personal preference is to store performance sensitive components in huge arrays, and keep AI and game logic related things together in memory as much as possible, but not really stress too much about it. Game logic and AI should probably just be rather flexible, and so details about memory locations aren’t so important. One might just allocate these types of components with a block allocator and call it good.


Messaging

The last major devil in an engine is messaging. Messaging is transferring data from one location to another. In this sense the most basic form of messaging is a simple function call.

Taking this a step further, we’d like to be able to send messages over a connection in a generic manner. It should not matter what type of message we send; all messages should be sent the same way to reduce code duplication and complexity. The most basic form of this is dynamic dispatch, or virtual function calls. An ID is introduced to the message so the internal data can be typecasted to the correct type.

Still, we can do better. Let's imagine we have a sort of GameLogic component or system. We need a way to send a message to this object that contains some data. Let's not focus much on memory access patterns, as simplicity and flexibility are key here. Take a look at this code:
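A sketch of what such a SendMessage might look like on a treasure chest's logic component; the message ids, the MSG layout and the helper calls are illustrative assumptions, not an existing API:

enum MessageType
{
  ePlayerNearby, // carries the player's coordinates
  eActivate,     // the player pressed the "e" button
  // ... many more message types
};

struct MSG
{
  MessageType type;
};

struct PlayerNearbyMSG : public MSG
{
  float x, y; // the player's position
};

class TreasureChest
{
public:
  void SendMessage( MSG* msg );

private:
  void Glimmer( float playerX, float playerY ); // assumed helpers
  void Open( );
};

void TreasureChest::SendMessage( MSG* msg )
{
  switch ( msg->type )
  {
    case ePlayerNearby:
    {
      // Glimmer brighter the closer the player is.
      PlayerNearbyMSG* nearby = static_cast<PlayerNearbyMSG*>( msg );
      Glimmer( nearby->x, nearby->y );
    } break;

    case eActivate:
      Open( ); // the player is interacting with the chest
      break;

    default:
      break; // every other message type is safely ignored
  }
}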

This code highlights the usage of messaging quite well. Say the player emits some messages as it walks around, perhaps something like “I’m here” to all nearby things in a level. The player can blindly send these messages through the SendMessage function without caring about whether or not the receiver will respond or do anything. As you can see, the implementation of the SendMessage function ignores most message types and responds to a few.

In this example when the player nears the treasure chest it will glimmer a bit. Perhaps when the player gets closer it glimmers brighter and brighter. In order to do so, the MSG object sent to the treasure chest ought to contain the player coordinates, and so it can be typecasted to the appropriate message type.

The eActivate message may be emitted by the player whenever they hit the “e” button. Anything that could possibly respond to an eActivate message will do so, and the rest of the objects receiving the message will safely ignore it.

This type of messaging is simple and easy to implement, quite efficient (if a block allocator or stack memory is used for the messages), and rather powerful.

A more advanced version of messaging makes heavy use of C++ introspection. A future article will likely be devoted to this topic, as it’s a hefty topic altogether. Edit: Here’s a link to a slideshow I presented at my university.


Conclusion and Resources

This was a rather whirlwind tour through a lot of different information; I hope it all came out coherent. Please don’t hesitate to ask any questions or add comments; I may even just update this post with more information or clarifications!

Resources: