Cache Aware Components

Special thanks to Danny Frisbie for a nice discussion on the PODHandler implementation!

Let me start off by saying that optimizations really only need to be applied to bottlenecks. In order to know where a bottleneck might occur (especially a cache related one) you’ll probably need some experience. The experience need not be your own, but it has to come from someone. In my (limited) experience the only bottlenecks I’ve ever seen in any piece of game related software (aside from N^2 loops with a high N) are due to waiting on things to be placed into the cache. It’s really easy to write bad code, and bad code is usually cache oblivious. Even conceptually clear and understandable code can still be cache oblivious!

I’ve seen some very nice 2D and 3D games, made in C++, that used only rudimentary memory allocation schemes and naive component implementations. They ran at 60 fps just fine. If you’re a hobbyist or just trying to learn, then thinking about how to write the fastest component framework ever might be fun, but don’t expect to do it correctly on your first try. Expect to fail, and then iterate.

So when the time comes to actually optimize something, having some idea of where to look to learn how to solve cache related problems will be valuable.

Data Lookups and Cache Lines

In general the cache line size for hardware nowadays is 64 bytes. A cache line is a 64 byte piece of memory aligned on a 64 byte boundary. Whenever data is transferred from one cache level to another (or to/from main memory (RAM)) it is transferred a whole cache line at a time, which keeps the memory bus well utilized. Sixteen 32-bit integers, or sixteen 32-bit floating point numbers, fit in a single cache line; that is exactly the size of a 4×4 matrix of 32-bit floats.
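
(As a quick sanity check of that arithmetic, assuming the usual 4 byte float:)

    // 16 floats, i.e. a 4x4 float matrix, occupy exactly one 64 byte cache line.
    static_assert( sizeof( float ) == 4, "assumes 32-bit floats" );
    static_assert( sizeof( float[ 4 ][ 4 ] ) == 64, "one 4x4 float matrix per cache line" );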

How fast a cache line is transferred depends on which level it is being fetched from. In general terms: when a piece of memory is fetched by the CPU from a lower level cache it is hoisted up into the L1 data cache (L1 D; the L1 I cache holds instructions, i.e. code). If the memory was not in any cache at all it will take forever to arrive (100 to 300 cycles, probably near the 300 range for PC). Here’s a nice diagram by Naughty Dog summarizing the common cache setup for PC CPUs:

[Image: Naughty Dog diagram of a typical PC cache hierarchy]

When something is loaded into the L1 cache whatever was there before has to be evicted, and will be pushed into the L2 cache (again using the same 64 byte cache line size). This will probably evict something from the L2 cache back down into the L3 cache, and so on and so forth.

The implication here is that whenever a cache line is read it is up to the programmer to try to use as much of that cache line as possible. Even though we might have 8 gigabytes of RAM, if we aren’t working out of the cache the CPU will be sitting there waiting. Even if a single byte is read from main memory, an entire cache line will be fetched. Reading a single byte from a random location in main memory is about the worst possible way to use memory.

This points toward the idea of using very compact and concise data structures. If a data structure is packed together in memory it can be operated upon by the CPU very quickly once it arrives in the CPU’s cache.

The cache isn’t very big. Here’s a nice slide by Scott Meyers on the topic:

[Image: slide by Scott Meyers showing typical CPU cache sizes]

32KB of L1 data cache is tiny. You don’t even get to use all of it as the operating system does need to do stuff too!

Prefetching

Prefetching exists to try to hide the latency of fetching memory. A prefetch is when a cache line is preemptively fetched and placed into the cache, such that when the memory is actually requested a cache hit occurs.

Hardware can detect patterns in memory accesses, but it can only detect fairly simple patterns like array traversals. Scott Meyers describes (see the resources section) that the hardware is made in such a way that it can detect iterating over arrays forwards, backwards, and with a constant (but not necessarily unit) stride. It can also do this for all hardware threads simultaneously. However, if you’re not looping over an array you can’t count on any intelligent prefetching; it takes two or more cache misses in a recognizable pattern to start automatic prefetching.

Compilers usually provide a specific intrinsic (or keyword) to hint that a given cache line should be fetched from memory ahead of time. This can be used by programmers to eke out a final bit of performance, given an access pattern worth prefetching for.
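
For example, GCC and Clang expose this hint as the __builtin_prefetch intrinsic (MSVC has _mm_prefetch in <xmmintrin.h>). The sketch below assumes a made-up Data type and DoExpensiveMath function, and the prefetch distance of 8 elements is an arbitrary tuning choice:

    #include <cstddef>

    struct Data { float x, y, z, w; };   // hypothetical element type

    void DoExpensiveMath( Data& d )      // hypothetical per-element work
    {
        d.x = d.x * d.y + d.z * d.w;
    }

    void TransformAll( Data* data, size_t count )
    {
        for ( size_t i = 0; i < count; ++i )
        {
            // Hint the hardware to start pulling a future element's cache line
            // now, so it has (hopefully) arrived by the time the loop gets there.
            __builtin_prefetch( &data[ i + 8 ] );
            DoExpensiveMath( data[ i ] );
        }
    }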

Cache (Un)Aware Components

Hopefully by now readers are convinced that contiguous arrays of data are very friendly to the cache, and where performance matters this knowledge should be exploited.

In a component based game engine architecture looking up and operating on components is often the first bottleneck encountered. It might pay to learn a little about how this might be circumvented.

A memory naive implementation of components will look something like this:
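
(The sketch below is only illustrative; GameObject, Transform and Sprite are hypothetical stand-ins for whatever types an engine might use.)

    struct Transform { float x, y, z; };
    struct Sprite    { int textureId; };

    // Every component is a separate heap allocation, scattered across memory.
    struct GameObject
    {
        Transform* transform;
        Sprite* sprite;
    };

    GameObject* CreateGameObject( )
    {
        GameObject* object = new GameObject( );
        object->transform = new Transform( );
        object->sprite = new Sprite( );
        return object;
    }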

Each component is explicitly allocated on the heap by a general purpose allocator (potentially calling into the operating system), which is expensive and very sensitive to memory fragmentation.

Ideally all data of a certain type will be packed together in a tight linear array. When this data needs to be operated upon, the fastest sort of transformation (without manual prefetching) will look something like this (for a generalized example):
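
(A minimal sketch, assuming a hypothetical POD type called Data packed into one contiguous array.)

    struct Data { float position, velocity; };

    void UpdateAll( Data* data, int count, float dt )
    {
        for ( int i = 0; i < count; ++i )
        {
            Data d = data[ i ];             // explicit local copy of the element
            d.position += d.velocity * dt;  // transform it
            data[ i ] = d;                  // write the result back out
        }
    }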

Ideally the size of a given element will be below the size of a cache line, and often times this is possible if extraneous data can be removed from the Data type.

The explicit consumption of the data, where a local copy is made, is probably not necessary and will be compiled away. It is fairly easy to double check by having the compiler emit an assembly file and inspecting it. However, this kind of practice can be very helpful to let the compiler know that multiple pointers cannot possibly alias the same memory. For more information search for “C++ aliasing Ericson” to find Christer’s old slides.

Now that the ideal computational situation for transforming a large data set has been described, let’s look at a common (albeit contrived) data transformation that we’ve all been guilty of while first learning:
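
(A reconstruction in spirit rather than the original snippet; GameObject, IsDead and Update are hypothetical names.)

    #include <list>

    struct GameObject
    {
        virtual ~GameObject( ) { }
        virtual bool IsDead( ) const = 0;
        virtual void Update( float dt ) = 0;
    };

    std::list<GameObject*> g_objects;

    void UpdateGameObjects( float dt )
    {
        for ( auto it = g_objects.begin( ); it != g_objects.end( ); )
        {
            GameObject* object = *it;
            if ( object->IsDead( ) )
            {
                delete object;              // random heap free
                it = g_objects.erase( it );
            }
            else
            {
                object->Update( dt );       // virtual call to who knows where
                ++it;
            }
        }
    }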

Conceptually this code is very concise and easy to reason about, but the readability and dynamic niceties aren’t very efficient. Random calls to delete occur, the inner loop contains a branch, and calling Update on an object will jump to who knows where in memory. All of these things are basically punching the cache right in the gut. Even the branch can be annoying for the CPU pipeline, since a misprediction means speculatively executed work has to be thrown away!

How can this be solved? The first step is to make sure as much data as possible is packed together in memory. In the above code snippet the list can be changed to an array (perhaps std::vector). Okay, pretty trivial change, no big deal. Objects can perhaps just be placement new’d into the array and destroyed by calling the destructor explicitly. This will act like a memory pool.
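
A minimal sketch of that idea, using a hypothetical Particle type and an arbitrary fixed capacity (a real pool would also recycle freed slots):

    #include <cstddef>
    #include <new>

    struct Particle { float x, y, vx, vy; };   // hypothetical POD element

    alignas( Particle ) static char s_pool[ sizeof( Particle ) * 1024 ];
    static size_t s_count = 0;

    Particle* AllocateParticle( )
    {
        void* slot = s_pool + sizeof( Particle ) * s_count++;
        return new ( slot ) Particle( );       // construct in place, no heap call
    }

    void FreeParticle( Particle* p )
    {
        p->~Particle( );                       // explicit destructor call
    }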

The next step is to identify that the UpdateGameObjects function is performing two types of tasks (assuming the Update function performs a single task): deletion and update calling. This is a result of the container of objects not being sorted; it is a non-homogeneous collection of objects that are both alive and dead. If objects can be separated into sections of dead and alive, only the alive objects need to be looped over.

Cache Aware Components

One way to implement this would be to have the beginning of the array contain a contiguous run of live elements. The rest of the array can contain “deleted” or “not yet allocated” elements. In order to uphold this invariant it might be best to design objects placed into these sorts of arrays so that they do not care if they are moved in memory. Usually this means making your data a plain old data type (POD).

Deleting things from the array is going to be a nice feature to support. A game wouldn’t be interactive for very long if it could only consume more and more memory. A simple and very effective scheme is to move the last element of your array into the location of a deleted element.
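
A tiny sketch of the scheme, ignoring handles for the moment:

    // Remove element `index` from a dense array of POD elements by moving the
    // last live element into its slot; order is not preserved, density is.
    template <typename T>
    void RemoveSwapWithLast( T* data, int& count, int index )
    {
        data[ index ] = data[ count - 1 ];
        --count;
    }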

However in order to refer to a unique element within an array simple pointers are no longer going to cut it. When an element is moved from the last index into a deleted slot any pointers to the old spot (the last and now empty element) will be dangling. Some form of translation from one data type into a pointer must occur in order to ensure that the correct pointer is retrieved for a unique entry in the array.

Usually this translation comes in the form of a handle. A handle can be implemented as an integer divided into two different sections. The details of how to implement a handle should be known by the reader before continuing on, so please refer to Llopis as a resource.
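
The exact split is up to the implementation; the sketch below assumes a 32 bit handle with arbitrary field widths:

    #include <cstdint>

    struct Handle
    {
        uint32_t index   : 20;  // which entry in the handle manager's internal table
        uint32_t counter : 12;  // generation count, used to detect stale handles
    };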

Let’s create a simple abstract data type that grants access to an array of POD elements, and which supports handle translation, allocation and deletion:
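
(A sketch of what such an interface might look like; the member names below are assumptions rather than the actual declaration, and Handle is the struct sketched above.)

    template <typename T>
    class PODHandler
    {
    public:
        Handle Allocate( );             // reserve a slot, return a handle to it
        T* Get( Handle handle );        // translate a handle into a pointer
        void Release( Handle handle );  // free a slot, keeping live elements dense

        // Internal storage is discussed below.
    };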

In order to implement allocation and handle translation, please refer to the Llopis resource referenced above.

In order to implement Release things start to get tricky. How does the PODHandler update the handle of the element it moved? Somehow the location of the internal handle entry needs to be accessible just by knowing where the element was moved from. The easiest solution is to place a handle inside the type T within each element of the array. However, it would be great if types held inside a PODHandler could fit within a single cache line. Adding a handle to every single element lowers the density of the data in the array. For certain situations this data bloat, though only 4 bytes per element, will reduce the usable portion of every cache line from 64 bytes to 60.

Clearly an alternative could be used! The solution is to yet again separate different types of data into different arrays. The internal array of the PODHandler should consist of homogeneous data! Rip out that intrusive handle and place the handles in their own array. The PODHandler can now consist of 3 arrays in total: an array of type T, an array of Handles, and an array of integers.

The array of integers share their indices with the array of PODs. This means a POD in element 3 will correspond to the integer in element 3 of the integer array. The integer array contains indices that map to the handle associated with a given POD element in the handle array m_entries.
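
One way the three arrays might cooperate during Release is sketched below; apart from m_entries the member names are made up, and the free list of entries needed by Allocate is omitted for brevity (Handle is the index/counter struct from earlier):

    #include <cstdint>
    #include <vector>

    template <typename T>
    class PODHandler
    {
    public:
        void Release( Handle handle )
        {
            int index = (int)m_entries[ handle.index ].index; // where the element lives
            int last = (int)m_data.size( ) - 1;

            // Move the last live element into the freed slot to keep the array dense.
            m_data[ index ] = m_data[ last ];
            m_dataToEntry[ index ] = m_dataToEntry[ last ];

            // Patch the moved element's entry so its existing handles still resolve.
            m_entries[ m_dataToEntry[ index ] ].index = (uint32_t)index;

            // Bump the generation counter so outstanding handles to the freed
            // element can be detected as stale.
            m_entries[ handle.index ].counter++;

            m_data.pop_back( );
            m_dataToEntry.pop_back( );
        }

    private:
        std::vector<T> m_data;           // dense array of POD elements
        std::vector<Handle> m_entries;   // handle entries (data index + generation)
        std::vector<int> m_dataToEntry;  // maps a data index back to its entry
    };

With this layout a Get call is just a generation check on the entry followed by an index into m_data.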

Readers may by now be wondering “wouldn’t three different arrays potentially have worse locality of reference than just two arrays?”, and this is a good thing to wonder. It is true that an intrusive handle would be preferable if handle translations are extremely frequent; if they are, the original intrusive handle implementation may be ideal.

If an engine is architected to focus on cache utilization for transforming large data sets with expensive operations, then a homogeneous array will be preferable. In other words, if you want to loop over a lot of stuff and do expensive math on each element, that array had better be dense. This also means that handle translations are infrequent, since the code focuses on looping over the data array itself rather than picking out individual elements at random.

Open Source Implementation

The idea PODHandler represents is important. My implementation is just my own manifestation of the concepts described in this post; my implementation is not important! The concepts are important. Hopefully, by giving readers another point of reference in the form of some source code, the ideas presented here can be better realized.

PODHandler Source: Link (not up yet)

Additional References


4 thoughts on “Cache Aware Components”

  1. Pingback: Component Based Engine Design | Randy Gaul's Game Programming Blog

  2. realazthat

    Just to note,  the term cache oblivious is used to mean something that is actually efficient wrt cache, as opposed to cache conscious which is an efficient algorithm that knows the sizes of the caches.

  3. Jacmoe

    Perhaps SelHandleManager could be seen as a PodHandler implementation?

    Great article! Mike Acton’s talk at CppCon about data oriented design and C++ has inspired me to look into this. It takes a while to get away from traditional OO thinking, but it’s worth the effort. Thanks. :)

    1. Randy Gaul (post author)

      Yeah, it was one type of implementation! Exactly. Really what happens is some kind of handle indirection is used and this slows down retrieving specific components. The benefit is that we can pack components into very dense arrays.
