Category Archives: C++

A Star

Recently I’ve found a nice piece of pseudo code for implementing A Star after searching through a few lesser or incorrect pseudo code passages:
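The gist of it, reconstructed here as a rough sketch rather than the original snippet (helper names like Pop, Cost and Heuristic are just placeholders):

```
push the start node onto the open list

while the open list is not empty
    current = Pop( open list )              // node with the lowest f = g + h
    if current is the goal
        walk parent pointers backwards to build the path and return it

    mark current as closed                  // the closed "list" is just a flag

    for each neighbor of current
        if neighbor is closed
            continue

        g = current.g + Cost( current, neighbor )
        if neighbor is not on the open list
            neighbor.g = g
            neighbor.h = Heuristic( neighbor, goal )
            neighbor.parent = current
            push neighbor onto the open list
        else if g < neighbor.g              // found a cheaper path to neighbor
            neighbor.g = g
            neighbor.parent = current

return no path found
```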

And here is the Pop function pseudo code (written by me, so it probably has a small error here or there):
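With an unsorted open list Pop is just a linear scan for the node with the lowest f cost (again just a sketch):

```
Pop( open list )
    best = first node in the open list
    for each node in the open list
        if node.f < best.f
            best = node
    remove best from the open list
    return best
```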

My favorite thing about the pseudo code is that the closed list can be implemented with just a flag. The open list becomes the most complicated part of the algorithm. Should the open list be a sorted array, an unsorted array, a binary heap? The answer largely depends on how much memory you need to traverse.

If only a small portion of memory needs to be searched, all dynamic memory can be allocated up-front in one shot. Otherwise a chunk of memory should probably be allocated up-front, with more allocated as necessary during the algorithm's run.

Just yesterday I implemented AStar in C where my full header file looked like:
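Something like the following captures the spirit of it (a hypothetical sketch, not the original file; names and sizes here are illustrative only):

```cpp
// astar.h -- hypothetical sketch of a tiny A* interface over a fixed grid
#define ASTAR_GRID_WIDTH  20
#define ASTAR_GRID_HEIGHT 15

// 1 = walkable, 0 = blocked; row-major, ASTAR_GRID_WIDTH * ASTAR_GRID_HEIGHT cells
void AStarSetMap( const int* cells );

// Writes up to maxPath (x, y) pairs into outPath and returns the path length,
// or 0 if no path exists.
int AStarFindPath( int startX, int startY, int goalX, int goalY, int* outPath, int maxPath );
```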

In my internal C file I only have < 150 lines of code, including some file-scope variables and math functions. The implementation was nearly a one-to-one transcription of the above pseudo code (so those who are recursion impaired like myself shouldn't have any problems). This style may not be thread-safe, but hey, most AI related code can only be done in serial anyway. My grid size was maxed out at 20×15, so pre-allocating memory for your entire search area may not be as practical for you as it was for me.

Still, I hope this post can provide some bits of intuition that are useful to someone.


Freelist Concept

A freelist is a way of retrieving some kind of resource in an efficient manner. Usually a freelist is used when a memory allocation is needed, but searching for a free block should be fast. Freelists can be used inside of general purpose allocators, or embedded directly into an optimized algorithm.

Let's say we have an array of elements, where each element is 16 bytes of memory. Our array has 32 elements. The program that the array resides in needs to request 16 byte elements, use them, and later give them back; we have allocation of elements and deallocation of elements.

The order the elements are allocated in is not related in any way to the order of deallocation. In order for deallocation to be a fast operation the 16 byte element needs to be left in a state such that a future allocation can be handed this element for reuse.

A singly linked list can be used to hold onto all unused and free elements. Since the elements are 16 bytes each this is more than enough memory to store a pointer, or integer index, which points to the next block in the free list. We can use the null pointer, or a -1 index to signify the end of the freelist.

Allocating and deallocating can now look like:
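Roughly like this, using integer indices and reusing the first few bytes of each free element to store the next free index (a sketch, not production code):

```cpp
// Each element is 16 bytes; a free element's first 4 bytes store the index of
// the next free element. An index of -1 marks the end of the freelist.
struct Element { char bytes[ 16 ]; };

Element elements[ 32 ];
int freeList; // index of the first free element, or -1

Element* Allocate( )
{
    if ( freeList == -1 )
        return 0; // out of elements -- could allocate another page of 32 here

    Element* element = elements + freeList;
    freeList = *(int*)element; // the next free index lives inside the element
    return element;
}

void Deallocate( Element* element )
{
    *(int*)element = freeList;              // link the element back in
    freeList = (int)( element - elements ); // it becomes the new head
}
```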

Setting up the memory will take some work. Each element needs to be linked together somehow, like through a pointer or integer index. If no more elements are available then more arrays of size 32 can be allocated — this means our memory is being managed in the style of a “paged allocator”, where each array can be thought of as a page.

The freelist is an important concept that can be embedded ad-hoc into more complex algorithms. Often times it is important for little pieces of software to expose a very tiny C-like interface, usually just a function or two. Having these pieces of software self-contain their own internal freelists is one way to achieve a simple interface.

Example of Hiding the Freelist

For example say we are computing the convex hull of a point-set through the Quick Hull algorithm. The hypothetical algorithm exposes an interface like this:
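Something like this, where the exact types and signatures are guesses on my part (Hull and Vec3 are hypothetical):

```cpp
// Worst-case number of bytes QHull could need for pointCount input points.
int ComputeMemoryBound( int pointCount );

// memory must point to at least ComputeMemoryBound( pointCount ) bytes.
// QHull performs no memory allocation of its own.
void QHull( Hull* out, const Vec3* points, int pointCount, void* memory );
```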

This QHull function does no explicit memory allocation and forces the user to allocate an appropriate amount of memory to work with. The bound on this memory (how big it needs to be for the algorithm's worst case scenario) is calculated by the ComputeMemoryBound function.

Inside of QHull often times the hull is expanded and many new faces are allocated. These faces are held on a free list. Once new faces are made, old ones are deleted. These deleted faces are pushed onto the free list. This continues until the algorithm concludes, and the user does not need to know about the details of the embedded memory management of the freelist.

Convex hull about to expand to point P. The white faces will be deleted. The see-through faces will be allocated.

A convex hull fully expanded to point P. All old faces were deleted.

The above images were found at this address:

Parsing C Style Expressions


Turns out that constructing a hand-written C-style parser has a few parts that were very difficult for me.

The first thing was realizing that Backus Naur Form (BNF) largely sucks if you want to hand-write your own parser. BNF is really verbose and expressing simple things like optional terminals or lists is difficult. BNF is also poor for expressing operator precedence, as many intermediate and redundant non-terminals are required to be evaluated during parse-tree derivation. As an alternative, Extended Backus Naur Form is perfect for languages that plan to use hand-written parsers instead of parsers created by parser generators. Left-factoring a BNF for LL parsing is also not very useful, since avoiding infinite recursion in hand-written code is trivial.

The second thing is that parsing expressions with various types of operators can be really difficult, especially if there’s a lack of confidence in recursion (like myself). Creating a parse tree given a string representing an expression is a very recursive problem.

In C, expressions consist of atoms and operators. An atom can be a literal, constant value, identifier, or an expression wrapped in parentheses. Operators are the usual + or - kind of tokens.

If each operator has an associated precedence value there are a few different algorithms out there with references for learning. I ended up going face first into the dirt and in the end derived what is known as “precedence climbing”. According to Eli Bendersky precedence climbing is what is currently used by Clang to parse C++ expressions! That ought to instill some perceived merit. From what I can tell Lua 5.3 uses (well, very close to) the same method.

The idea of precedence climbing is to think of two major recursive operations:

  • Compute righthand-side node
  • Make a binary operator node and connect lefthand-side and righthand-side children nodes

The first point is the complex one (that is, conceptually complex). The algorithm starts given a lefthand-side node, however, righthand-side nodes do not come in through the input stream in tree format; the next token can represent a node that should be much deeper in the tree — this means that computing the righthand-side node ought to be the main recursive path.

Realizing that the righthand-side node computation is the recursive path led me to notice a key observation that tipped me off to a working algorithm.

Say we have the following input string as an expression: A 2 B 1 C 4 D 3 E 7 F

Numbers are operators, and the number itself is precedence (higher number is higher precedence), letters are atoms (like a const int variable). Here’s the valid parse tree:
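Sketched here in ASCII rather than the original image, the tree works out to:

```
1
+-- 2
|   +-- A
|   `-- B
`-- 3
    +-- 4
    |   +-- C
    |   `-- D
    `-- 7
        +-- E
        `-- F
```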


The lowest leaves are evaluated first. It’s easy to see that the tree itself encodes the operator precedence.

If we begin parsing our input string the first atom is A, which will be a lefthand-side node in most any parsing algorithm, and will likely be the left-most node in the tree. The next token is the 2 operator followed by B. It's easy enough to construct the subtree of node 2 pointing to A and B.

The next input tokens are the operator 1 and the atom C. C is bound by operator precedence to the operator 4, though the current state of the algorithm has yet to even read in the token 4. Studying this scenario is what tipped me off to a working solution; C must be treated as a lefthand-side node, even though at the current state it is considered a potential righthand-side node.

Wikipedia, and this link from earlier, both show great pseudo code for the precedence climbing algorithm. The main difference between the two links is that Wikipedia includes a nested for-loop in favor of fewer overall recursive calls. My own code ended up looking something like this after I cleaned it up from the influences of the previous links:
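In spirit it was close to the following simplified sketch; Peek, Next, Precedence, IsBinaryOp, IsRightAssociative and MakeBinary are hypothetical token and AST helpers, not the actual code:

```cpp
Expr* ParseExpression( int minPrecedence )
{
    Expr* lhs = ParseAtom( );

    while ( IsBinaryOp( Peek( ) ) && Precedence( Peek( ) ) >= minPrecedence )
    {
        Token op = Next( );

        // Left associative operators bind their righthand-side one level tighter;
        // right associative operators recurse at the same precedence level.
        int nextMin = Precedence( op ) + ( IsRightAssociative( op ) ? 0 : 1 );
        Expr* rhs = ParseExpression( nextMin );

        lhs = MakeBinary( op, lhs, rhs );
    }

    return lhs;
}
```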

In the end I'm quite happy with the result, and even hooked up a nice ascii-tree printer courtesy of a random stack-overflow user. Here are dot product and initialization trees in ascii:

My favorite part about the operator precedence climbing algorithm is how it handles parentheses and prefix unary operators: parentheses can be considered an atom, and when the atom function finds a parenthesis it just calls the expression parsing function directly and returns the result! The same can be done for prefix unary operators (if they have really high precedence). The algorithm also trivially handles right-associativity. I haven't yet thought about unary postfix operators, so if any reader has thoughts on this topic please do comment!

Here's pseudo-y atom code:
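(Again a sketch with hypothetical helpers, under the assumption that prefix unary operators have the highest precedence:)

```cpp
Expr* ParseAtom( )
{
    Token token = Next( );

    // Parentheses are treated as an atom: recurse into the expression parser.
    if ( token.type == TOKEN_OPEN_PAREN )
    {
        Expr* e = ParseExpression( 0 );
        Expect( TOKEN_CLOSE_PAREN );
        return e;
    }

    // Prefix unary operators can be handled the same way.
    if ( IsPrefixUnaryOp( token ) )
        return MakeUnary( token, ParseAtom( ) );

    // Otherwise: literal, constant value or identifier.
    return MakeLeaf( token );
}
```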


Capsule to Convex – SAT Edge Prune via Gauss Map

In 2013 Dirk Gregorius of Valve presented at GDC on the topic of the Separating Axis Theorem. In his talk he studied a useful property of the Minkowski Difference between two 3D convexes: edge pairs from each shape may or may not contribute to the convex hull of the Minkowski Difference.

It is possible to derive simple predicate functions to reduce the number of edge pair queries, and to simplify implementation, during collision detection via the Separating Axis Theorem. This article shows a derivation of this predicate for the Capsule to Convex case.

Here is the PDF containing the derivation:


SIMD – Matrix3x3 Transpose

Just recently I finished implementing my own personal SIMD math library using SSE intrinsics. There are two major resources for learning how to write effective SIMD code:

While inspecting the DirectXMath source I came across the implementation of transposing a 4×4 matrix:
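It boils down to the classic 8 shuffle transpose, sketched here rather than quoting the exact DirectXMath source (rows are __m128 values holding x, y, z, w):

```cpp
#include <xmmintrin.h>

void Transpose4x4( __m128& r0, __m128& r1, __m128& r2, __m128& r3 )
{
    __m128 t0 = _mm_shuffle_ps( r0, r1, _MM_SHUFFLE( 1, 0, 1, 0 ) ); // x0 y0 x1 y1
    __m128 t1 = _mm_shuffle_ps( r2, r3, _MM_SHUFFLE( 1, 0, 1, 0 ) ); // x2 y2 x3 y3
    __m128 t2 = _mm_shuffle_ps( r0, r1, _MM_SHUFFLE( 3, 2, 3, 2 ) ); // z0 w0 z1 w1
    __m128 t3 = _mm_shuffle_ps( r2, r3, _MM_SHUFFLE( 3, 2, 3, 2 ) ); // z2 w2 z3 w3

    r0 = _mm_shuffle_ps( t0, t1, _MM_SHUFFLE( 2, 0, 2, 0 ) ); // x0 x1 x2 x3
    r1 = _mm_shuffle_ps( t0, t1, _MM_SHUFFLE( 3, 1, 3, 1 ) ); // y0 y1 y2 y3
    r2 = _mm_shuffle_ps( t2, t3, _MM_SHUFFLE( 2, 0, 2, 0 ) ); // z0 z1 z2 z3
    r3 = _mm_shuffle_ps( t2, t3, _MM_SHUFFLE( 3, 1, 3, 1 ) ); // w0 w1 w2 w3
}
```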

Lately I have been working only with 3×3 matrices and vectors. This is nice since often times 4×4 matrices store mostly useless data in the bottom row. In effect some kind of 3×4 matrix can be stored in memory to represent an affine transformation:
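For instance, something along these lines (Mat3 and Vec3 stand for whatever 3x3 matrix and 3 component vector types the math library provides):

```cpp
// A 3x4 affine transform: a 3x3 rotation (possibly with scale baked in)
// plus a translation, instead of a full 4x4 matrix.
struct Transform
{
    Mat3 rotation;
    Vec3 position;
};

// Transforming a point: rotate (and scale), then translate.
Vec3 Mul( const Transform& tx, const Vec3& v )
{
    return Mul( tx.rotation, v ) + tx.position;
}
```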

Depending on what the code is used for, the rotation matrix can have scaling built in, or not. Often times only uniform scaling is desired so that chains of transformations can easily be reversed and decomposed freely.

Since I’m only dealing with 3×3 matrices I decided to cut down on the number of shuffles as best I could, and ended up with this implementation:
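It went along these lines (a reconstruction of the idea rather than the exact listing):

```cpp
#include <xmmintrin.h>

// 3x3 transpose in 5 shuffles. The w lanes are not handled; the shuffle index
// for w in the output rows is simply left as 0.
void Transpose3x3( __m128& r0, __m128& r1, __m128& r2 )
{
    __m128 t0 = _mm_shuffle_ps( r0, r1, _MM_SHUFFLE( 1, 0, 1, 0 ) ); // x0 y0 x1 y1
    __m128 t1 = _mm_shuffle_ps( r0, r1, _MM_SHUFFLE( 2, 2, 2, 2 ) ); // z0 z0 z1 z1

    r0 = _mm_shuffle_ps( t0, r2, _MM_SHUFFLE( 0, 0, 2, 0 ) ); // x0 x1 x2 *
    r1 = _mm_shuffle_ps( t0, r2, _MM_SHUFFLE( 0, 1, 3, 1 ) ); // y0 y1 y2 *
    r2 = _mm_shuffle_ps( t1, r2, _MM_SHUFFLE( 0, 2, 2, 0 ) ); // z0 z1 z2 *
}
```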

Only 5 shuffles are used here instead of the 8 from DirectXMath for the 4×4 transpose. I did not really take care of handling the w component of any of the __m128’s during the whole process. In general I just left the shuffles for w as 0.

I really don’t think another shuffle can be removed in the 3×3 case, so any further optimizations would probably be outside my realm of knowledge. If anyone knows of anything else interesting as far as transposition goes feel free to comment below.

Note: On Windows if anyone is wondering why my function does not incur a compiler error complaining about parameter alignment, be sure to lookup __vectorcall for Visual Studio 2013.

Circular Linked Lists and Branching

Since linked lists are such an essential topic I’ve taken some extra care to learn efficient ways of using them. The simplest kind of linked list to conceptualize is the singly linked list. There are tons of online resources for learning the basics about linked lists, so I’ll assume readers are familiar with the concept.

Here’s a quick mock header of some linked list nodes for reference:
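(Data here stands in for whatever payload the list stores.)

```cpp
struct SinglyLinkedNode
{
    SinglyLinkedNode* next;
    Data data;
};

struct DoublyLinkedNode
{
    DoublyLinkedNode* next;
    DoublyLinkedNode* prev;
    Data data;
};
```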

In general singly linked lists are more complicated to manage once removal of nodes is required. Since no explicit prev pointer is stored in memory a temporary variable is often kept on the stack while traversing a singly linked list. This means more complicated code that clogs the user’s focus.

Even though a doubly linked list requires twice the memory they are usually still preferred over singly linked lists, even when a singly linked list could get the job done without any additional time complexity. Often times linked lists are useful in complex algorithms, and if there’s a chance to simplify the implementation of a complex algorithm by using a doubly linked list, then that chance is probably worth the taking.

When I first implemented a doubly linked list and tested its performance out against std::list I couldn’t quite get it to perform well.

Naive insertion and removal of list nodes often have to check for NULL pointers, which represent the front and back of the linked list. Here's an example of what removal might look like to give you an idea of how many if-statements could be necessary (code not tested, I just typed it out here on the spot):
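(A sketch along those lines, with hypothetical List and Node types:)

```cpp
void Remove( List* list, Node* node )
{
    if ( node->prev )
        node->prev->next = node->next;
    else
        list->head = node->next; // node was the front of the list

    if ( node->next )
        node->next->prev = node->prev;
    else
        list->tail = node->prev; // node was the back of the list
}
```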

There are two if statements hit every single time this function is called. When the CPU comes across a branch it loads instructions based on which path of execution it deems most likely. This is called branch prediction. If this prediction is incorrect the loaded code must be unloaded, and then the appropriate code must be re-loaded.

A branch miss is probably going to be a very fast CPU operation since the executing code is almost definitely in the L1 code cache. Despite being fast, modern CPUs still operate through a pipeline, and branch misses can still garble up whatever pipelining is happening. In the end a branch miss is a performance hit, and should be avoided when appropriate.

A common linked list optimization is to use a dummy head and tail node. These nodes sit in memory along with the list data structure. Upon list initialization they point their next and previous pointers to one another, and their outward facing pointers, which represent the front and back of the list, are set to NULL.

With this optimization the only case that user nodes will ever encounter is the case in the first two if statements (assuming both were true). The removal code can now look something like (again, not tested):
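(Again just a sketch; note there is nothing left to branch on:)

```cpp
void Remove( Node* node )
{
    node->prev->next = node->next;
    node->next->prev = node->prev;
}
```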

This is one kind of optimization that std::list implementations use. After doing this myself my list performed evenly with the std's implementation.

Intrusive Lists

Intrusively linked lists invert the definition of what a node is. Traditionally a linked list node contains some data. An intrusive list has the data contain the node:
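(A sketch of the inversion; the payload members are whatever the algorithm needs:)

```cpp
// The data itself carries the links.
struct Data
{
    // ... payload members ...
    Data* next;
    Data* prev;
};
```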

This scheme is nice since now nodes do not need to be allocated separately from the data. If the number of data elements is known, then the exact number of nodes needed can also be known.

C++ templates can be used to create a generic intrusively linked list implementation, able to define nodes inside of any data type. C macros can also be used to the same effect. In this way an intrusively linked list can be used in pretty much the same way a normal linked list is.

One major downside to intrusively linked lists is that they add extra memory to your data. This can be a big deal if some code is very performance sensitive. If cache line utilization is important, then the percentage of data actually used in each line becomes important. Sometimes these pointers get in the way and clutter the lines. This cluttering is something to be aware of.

On the flip side many algorithms can run on arrays of data. Instead of storing explicit pointers to represent prev and next connections, indices into an array can be used. This can make entire data structures memcpy-able, or serializable just by dumping bits to a stream. Additionally, the pointers stored directly within data will often be accessed at the exact same time (depending on the algorithm), which results in very high cache line utilization.

It all depends on the scenario.

Circular Lists (Sentinel)

When dealing with intrusive linked lists it can often be really weird to define where in memory dummy nodes would reside. Are we to create dummy pieces of Data? What if the algorithm needs lists to be constantly created and destroyed? What if the algorithm can have as many lists as there are nodes? Suddenly the algorithm might need twice as many dummy nodes as actual nodes!

For example imagine a hash table implemented with collision chaining. If we wish to use doubly linked lists, dummy nodes are probably out of the question if we care about all the wasted memory. However, it does suck to take a performance hit constantly testing for NULL node connections.

It is possible to remove the dummy nodes in many cases. Data elements can be initialized to point to themselves. In this way each element is itself a doubly linked list with one node. To insert a second node is a matter of making both nodes point to each other. Inserting a third node should use the exact same code as inserting the second node (and not require any branching since NULL indices/pointers do not exist), and so on.
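A minimal sketch of that idea (just the initialization and insertion; no NULL checks anywhere):

```cpp
// A lone node is a valid circular doubly linked list of one element.
void InitNode( Node* node )
{
    node->next = node;
    node->prev = node;
}

// Inserting works the same whether the list has one node or many.
void InsertAfter( Node* listNode, Node* newNode )
{
    newNode->next = listNode->next;
    newNode->prev = listNode;
    listNode->next->prev = newNode;
    listNode->next = newNode;
}
```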

In many cases an intrusive circular doubly linked list (boy, isn’t that a mouthful) can be the perfect solution to a hard problem! I will leave it as an exercise to research or implement this circular style of linked list.

Another name for this type of list would be a “sentinel intrusive list”, where a sentinel node can be used to bound a list traversal. Since our linked lists are circular we can start at any node, traverse the list, and once we reach the node we started upon our traversal is complete.

C++ Keyword inline and .inl Files

While at the bar a group of friends jokingly mocked some of the more silly features of C++. The initial banter consisted of how the STL implemented everything including the kitchen sink, though forgot to implement std::girlfriend.

Wouldn't std::girlfriend be great? We can plug in any type of girlfriend we want into the template parameters and the compiler will just generate one for us! Why in the world would std::girlfriend be omitted from the STL?

Oh of course, std::girlfriend was never implemented because everyone is just going to put in way too many specific template types (super hot, not crazy) and it'll just end in a bunch of “failed to specialize template” error messages. And then the moment too many of the template parameters are removed we'll just get a bunch of “multiple symbols defined” linker errors! Maybe it was a good idea to never implement std::girlfriend in the first place. After all, a girlfriend prefixed with std might make one think of something other than C++…

Jokes aside I brought up the fact that inline is totally useless for inlining. The only real reason to use the inline keyword (in my opinion) is to be able to define functions within a header. Well, I brought it up as a joke, but not really a joke, and that's the joke.

The inline keyword and .inl files can actually be a pretty nice organizational tool for code, and I’ve found it helps users that didn’t write the implementation understand the code.

Say we are implementing some kind of algorithm that stores elements in an array. Elements need to refer to one another (perhaps to build intrusive linked lists), and these arrays ought to be relocatable in memory without requiring any complex copy routines; a single memcpy should yield a new and valid copy.

One way to do so is to make use of array indices instead of pointers. Usually a myriad of small helper functions will arise to clean up all of the array indexing that ensues shortly after this kind of code crops up. It's a huge pain to look into a .cpp and have to continually navigate past a lot of tiny and trivial helper functions just to understand the algorithm.

These small helpers can be swept to the side into a .inl file. The .inl file signature immediately tells the user what kind of code resides within (either templates or inlined functions), and usually this kind of code isn’t very necessary to understand the more heavy duty code within the .cpp file.

Here’s a mock example:
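Something like this, where Grid and the helper names are made up for illustration:

```cpp
// grid.h -- the interesting interface, plus the .inl pulled in at the bottom
struct Grid
{
    int width;
    int height;
    int* cells;
};

int FloodFill( Grid* grid, int x, int y, int value ); // lives in grid.cpp

#include "grid.inl"

// grid.inl -- tiny index helpers, easy to scan once and then ignore
inline int CellIndex( const Grid* grid, int x, int y )
{
    return y * grid->width + x;
}

inline int& CellAt( Grid* grid, int x, int y )
{
    return grid->cells[ CellIndex( grid, x, y ) ];
}
```

The heavy duty FloodFill implementation stays in grid.cpp, free of index-juggling clutter.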

Aren't these example files pretty easy to read? I'm sure you at least scanned the .inl file briefly, and will probably never really need to look at it again. Time will be well spent in the .cpp file with less code to clog your brain. And who knows, maybe the compiler (or perhaps the linker) actually cares a little bit when we type the inline keyword.

Computing AABB Trick (Loop Trick)

Lately I noticed a small trick that applies to loops when trying to find a minimum or maximum of some values. Usually I just apply the trick to loops where I need to compute an AABB over some geometry. I think I noticed this trick when reading a for loop Erin Catto wrote in some of Box2D’s internal code.

The trick is super simple: just process the first element outside of the loop to set up your initial conditions, then form your loop to skip the first element. An assumption would be made that there’s at least one element in the array to process. Here’s an example for computing an AABB:
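Something along these lines, assuming a small AABB struct and component-wise Min/Max helpers over a v2 vector type:

```cpp
AABB ComputeAABB( const v2* verts, int count )
{
    // The first element primes the AABB...
    AABB aabb;
    aabb.min = verts[ 0 ];
    aabb.max = verts[ 0 ];

    // ...and the loop starts at index 1.
    for ( int i = 1; i < count; ++i )
    {
        aabb.min = Min( aabb.min, verts[ i ] );
        aabb.max = Max( aabb.max, verts[ i ] );
    }

    return aabb;
}
```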

Usually I myself would have written this kind of code like so and not given any more thought to it:
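(Same assumed helpers as above, plus a V2 constructor; FLT_MAX comes from <cfloat>.)

```cpp
AABB ComputeAABB( const v2* verts, int count )
{
    // Start from +/- FLT_MAX and visit every element, including the first.
    AABB aabb;
    aabb.min = V2(  FLT_MAX,  FLT_MAX );
    aabb.max = V2( -FLT_MAX, -FLT_MAX );

    for ( int i = 0; i < count; ++i )
    {
        aabb.min = Min( aabb.min, verts[ i ] );
        aabb.max = Max( aabb.max, verts[ i ] );
    }

    return aabb;
}
```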

This second code chunk is arguably just slightly more esoteric and is definitely a little less efficient for no good reason.

One could also skip the first element when finding the min/max of any sort of array, like for example: dot product results. Though simple it’s pretty nice to find small ways to write slightly better code.

Small C++ Reflection Demo

I created a small demonstration program that explains the core ideas behind implementing a custom reflection system for C++. More might be written in this post in the future — for now I’m just storing the demo right here on this webpage:


Memory Management

Any competent software engineer will have spent significant time working with low level memory management. Even though the operating system the code is written for will often provide some kind of allocation and deallocation mechanism, application specific assumptions can be made to increase memory related performance.

For example certain hardware doesn’t have virtual memory support, or the virtual memory support can be quite lacking. A lack of virtual memory means raw allocations from the OS return real addresses to the hardware RAM. Usually virtual memory can alleviate some effects of memory fragmentation through a level of indirection, though when dealing with physical memory yourself no such alleviation exists.

This is just one example of how a software memory manager can be written and used to control memory fragmentation in a way that makes sense for the application.

Types of Allocators

There are a few main types of allocators that I myself have found pretty useful: paged, stack and heap based allocation. Each one makes specific assumptions about the types of allocations and how the memory ought to be used. Due to these assumptions significant performance boosts can be reaped in ways that may not have been realistic with raw operating system allocations.

Stack Based Allocation

My favorite type of allocation involves the use of a simple stack. The idea is to make one large call to malloc or new and hold onto this piece of memory. The Stack itself just holds a pointer to this large chunk of memory, and an integer index (in bytes) into that chunk.

Here is what a Stack implementation might look like (in pseudo code):
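(A sketch along the lines described; error handling and alignment are glossed over.)

```cpp
class Stack
{
public:
    void Init( void* memory, int bytes )
    {
        m_memory = (char*)memory; // the one large chunk from malloc/new
        m_size = bytes;
        m_index = 0;
    }

    void* Allocate( int bytes )
    {
        if ( m_index + bytes > m_size )
            return 0;

        void* data = m_memory + m_index;
        m_index += bytes;
        return data;
    }

    // Must be called in reverse order of allocation (LIFO).
    void Free( void* data, int bytes )
    {
        m_index -= bytes;
    }

private:
    char* m_memory;
    int m_size;  // total size in bytes
    int m_index; // current top of the stack, in bytes
};
```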

Allocation can work by moving the index forward in the stack. Deallocation can work by moving the index backwards in the stack. Notice that the Free function requires the user to pass back in the size of the allocation! This can be avoided by storing this size parameter from Allocate inside of the m_memory array itself, just before the location of the returned address. Upon deallocation this value can be retrieved by moving the data parameter of Free back in memory by 4 bytes.

The advantage of the stack allocator is that it's extremely fast and almost trivially simple to implement. The limitation is that deallocations must be performed in the reverse order of allocations, since the stack itself is in LIFO order. This makes the use cases for the stack allocator pretty limited. Usually resources, like images, level files, sounds, models, etc. can be loaded into memory with a stack based allocator. Anything that has a very clear and non-variable lifespan should be able to be allocated on a stack.

One last trick is that the last allocation can be trivially resized! Often times an algorithm will require a lot of temporary scratch memory to perform some calculations, or store some state. An initial guess as to how much memory is needed can often be calculated as the worst-case scenario. Once an algorithm finishes this scratch memory can be reduced to the size actually used, if it is the last allocation on the stack. Resizing the last stack allocation involves moving the index backwards in memory.

Heap Allocation

Implementing your own heaps is pretty similar to the stack based allocator. A heap allocator will use the operating system to allocate a large chunk of memory. Subsequent calls to the heap’s Allocate and Free methods will just dip into this chunk and fetch a piece.

The heap is more versatile and general purpose than a stack allocator. The heap can be implemented with a linked list of nodes. Each node represents a piece of memory. A node can either be allocated or free. To keep track of these linked list pointers, allocation state, and size of the memory block some memory itself is required! This stuff can be stored in a separate array, or right inside the large raw chunk of memory (just like with the stack allocator).

Usually it is preferential to add a small header to each allocation to store this information. A heap node might look something like this:
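(A sketch; the exact layout and packing are up to the implementation.)

```cpp
struct HeapHeader
{
    HeapHeader* next;
    HeapHeader* prev;
    int size;       // size in bytes of the block this header owns
    bool allocated; // free or in use
};
```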

When the heap is first constructed it will contain a linked list of HeapHeader structs, but only a single header will be present, and it holds the entire piece of raw memory originally allocated by the OS upon the Heap allocator’s construction.

Allocating from the heap involves splitting a free HeapHeader into an allocated piece, and a new HeapHeader for the leftover space. The details of this lie mostly in the linked list implementation, and are not the focus of this article.

In order to reduce memory fragmentation it is a good idea to merge adjacent free HeapHeader links into a single link. This ought to be handled in the Heap::Free function. The details of merging free links lie mostly in the linked list implementation, and are not the focus of this article.

Here’s an example of what the Heap may look like in implementation:
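(Interface sketch only; the interesting parts live in the link splitting and merging.)

```cpp
class Heap
{
public:
    void Init( void* memory, int bytes );

    void* Allocate( int bytes ); // find a free link, split off the leftover space
    void Free( void* data );     // mark the link free, merge with free neighbors

private:
    HeapHeader* m_first;  // header at the start of the raw chunk
    HeapHeader* m_cached; // optional: last known free link, checked first
};
```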

When Heap::Allocate is called a free link of appropriate size must be searched for. This has a time complexity of O( N ), and a lot of memory must be fetched into the cache upon allocation as the list itself is traversed. There are tricks to improve the allocation performance of heaps, and a simple one would be to cache a single pointer to a free block in the heap itself. This pointer can be cached in Heap::Free, in Heap::Allocate, or in both. Once a new call to Heap::Allocate is made this cached pointer can be tested first to see if it is an appropriate size.

There are two common ways to search through the links for an allocation: first fit and best fit. First fit will return the user with the first piece of memory large enough to hold the allocation. Best fit will return a chunk of memory that came from a HeapHeader with the smallest size that is still large enough to hold the requested allocation size.

First fit can be preferential for cache coherency, as it may prefer to allocate from the beginning of the heap and try to keep things closer together in memory. Best fit may be preferential for keeping the heap as un-fragmented as possible.

Paged Allocation

The heap based allocator intends to fight memory fragmentation through fitting links to allocation sizes, and by merging adjacent free memory blocks. This type of fragmentation is called external fragmentation. Another type of memory fragmentation is called internal fragmentation.

Internal memory fragmentation is when an allocated piece of memory handed to the user actually holds more memory than the user requested. The user is assumed to not know about this extra piece of memory. This can provide an advantage to the allocator: all allocations can be of a fixed size, and any allocation larger than this fixed size is denied.

This lets the allocator act like an array. When an allocation is requested an empty element can be returned to the user. Upon freeing a piece of memory, the element is simply marked as free and placed into a free list.

The free list is a linked list of array elements. The memory in the free elements themselves should be used to store the pointer to each subsequent free element.

Allocation and deallocation become constant in time complexity and there is zero external memory fragmentation. In this way external memory fragmentation is traded for internal memory fragmentation.


The term “pages” comes into play when the array is filled up. Once an array is full of allocated elements another array can be allocated. Once this array is filled up, another one is allocated. Each array (aka page) can be stored in a singly linked list of pages.

The free list itself can point across multiple pages without any problems.

A page containing only free elements can be deleted entirely, though this feature might not need to be supported.

A paged allocator can also hold an array of singly linked lists of pages. Each element of this array can hold a list of pages that corresponds to a different element size. This can allow the paged allocator to fit different allocation requests into the most appropriate page list. A common tactic is to have pages that represent arrays with an element size of 2^N bytes, where N is usually at least 2, and smaller than some value K.

The biggest advantage of a paged allocator is zero external fragmentation. The internal fragmentation, however, does make the memory usage less homogeneous. Since internal fragmentation is a feature of a paged allocator, cache line utilization (how much of the memory in each cache line fetched from main memory to the CPU cache is actually used) will probably suffer.

The unused memory in the pages can be reduced drastically on a per-application basis; if the users of the allocator are able to specify the element sizes of different page lists, then zero internal fragmentation can be achieved.

Handle-Based Array

Instead of thinking of a paged allocator in terms of separate arrays, one might think of a simpler allocator that holds just a single array. If the elements within this array are of POD nature the array elements can be referenced by index. This lets the array grow or shrink in size as necessary, as a newly sized array can still be accessed by an old index.

Whenever the user wants a pointer to an element they first give the array an index, and a pointer is returned. This pointer is never stored anywhere! Continuous translation from index to pointer occurs; this allows the internal array itself to be moved around in memory as necessary.

Users might need a little more power to refer to elements than a simple integer. Some type of handle might be needed to translate from index to pointer. Read more about handles here.


Given these three types of allocators an application should have all the variety of memory allocation necessary to run with pretty good performance. More advanced allocation techniques definitely exist, and some are just combinations of the three basic allocators presented in this article.

Each allocator can be quite simple in isolation! I myself implemented a stack in about 100 lines, a paged allocator in 150, and a heap in about 250 lines of C++ code.

Further reading might include topics such as: cache coherency, memory alignment, garbage collection, virtual memory, page files (operating system pages).