Parsing C Style Expressions

It turns out that constructing a hand-written C-style parser had a few parts that were very difficult for me.

The first thing was realizing that Backus-Naur Form (BNF) largely sucks if you want to hand-write your own parser. BNF is really verbose, and expressing simple things like optional terminals or lists is difficult. BNF is also poor for expressing operator precedence, as many intermediate and redundant non-terminals must be evaluated during parse-tree derivation. As an alternative, Extended Backus-Naur Form (EBNF) is perfect for languages that plan to use hand-written parsers instead of parsers created by parser generators; for example, a comma-separated list that needs a recursive helper non-terminal in BNF can be written directly in EBNF as arg-list = expr { "," expr }. Left-factoring a BNF for LL parsing is also not very useful, since avoiding the infinite recursion caused by left-recursive rules is trivial in hand-written code.

The second thing is that parsing expressions with various types of operators can be really difficult, especially given a lack of confidence in recursion (like mine). Creating a parse tree from a string representing an expression is a very recursive problem.

In C, expressions consist of atoms and operators. An atom is a literal, constant value, identifier, or an expression wrapped in parentheses. Operators are the usual + or - kind of tokens.

If each operator has an associated precedence value there are a few different algorithms out there, with references for learning. I ended up falling face-first in the dirt, and in the end derived what is known as “precedence climbing”. According to Eli Bendersky, precedence climbing is what is currently used by Clang to parse C++ expressions! That ought to instill some perceived merit. From what I can tell, Lua 5.3 uses (well, very nearly) the same method.

The idea of precedence climbing is to think of two major recursive operations:

  • Compute righthand-side node
  • Make a binary operator node and connect lefthand-side and righthand-side children nodes

The first point is the complex one (that is, conceptually complex). The algorithm starts off given a lefthand-side node; however, righthand-side nodes do not come in through the input stream in tree format. The next token can represent a node that should be much deeper in the tree, which means that computing the righthand-side node ought to be the main recursive path.

Realizing that the righthand-side node computation is the recursive path led me to notice a key observation that tipped me off to a working algorithm.

Say we have the following input string as an expression: A 2 B 1 C 4 D 3 E 7 F

Numbers are operators, and the number itself is the precedence (a higher number means higher precedence); letters are atoms (like a const int variable). Here's the valid parse tree:
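    1
    +-- 2
    |   +-- A
    |   +-- B
    +-- 3
        +-- 4
        |   +-- C
        |   +-- D
        +-- 7
            +-- E
            +-- F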


The lowest leaves are evaluated first. It’s easy to see that the tree itself encodes the operator precedence.

If we begin parsing our input string the first atom is A, which will always be a lefthand-side node for most any parsing algorithm used, and will likely be the left-most node in the tree. The next token is the 2 operator followed by B. It’s easy enough to construct the subtree of node 2 pointing to A and B.

The next input tokens are the operator 1 and the atom C. C is bound by operator precedence to the operator 4, though at this point the algorithm has yet to even read in the token 4. Studying this scenario is what tipped me off to a working solution; C must be treated as a lefthand-side node, though at the current state it is considered a potential righthand-side node.

Wikipedia, and this link from earlier, both show great pseudo code for the precedence climbing algorithm. The main difference between the two is that Wikipedia uses a nested loop in favor of fewer overall recursive calls. After cleaning up the influences of the previous links, my own code ended up looking something like this:
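    // Precedence climbing sketch. Token, next(), peek(), precedence(), and
    // the node constructors are assumed helpers, not real library calls.
    Node* parse_expression(int min_precedence)
    {
        Node* left = parse_atom();

        while (is_binary_operator(peek()) && precedence(peek()) >= min_precedence)
        {
            Token op = next();

            // Left-associative operators recurse one level higher, so equal
            // precedence breaks the loop; right-associative operators recurse
            // at the same level and bind to themselves.
            int next_min = precedence(op) + (is_left_associative(op) ? 1 : 0);

            // Compute the righthand-side node (the main recursive path).
            Node* right = parse_expression(next_min);

            // Make a binary operator node connecting the two children.
            left = make_binary_node(op, left, right);
        }

        return left;
    }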

In the end I'm quite happy with the result, and even hooked up a nice ASCII-tree printer courtesy of a random Stack Overflow user. Here are dot product and initialization trees in ASCII:

My favorite part about the operator precedence climbing algorithm is how it handles parentheses and prefix unary operators: parentheses can be considered an atom, and when the atom function finds a parenthesis it just calls the expression parsing function directly and returns the result! The same can be done for prefix unary operators (if they have really high precedence). The algorithm also trivially handles right-associativity. I haven't yet thought about unary postfix operators, so if any reader has thoughts on this topic please do comment!

Here's pseudo-y atom code:
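    // Pseudo-y atom code. An atom is a literal, an identifier, or a
    // parenthesized expression; next(), expect(), and the node constructors
    // are assumed helpers.
    Node* parse_atom()
    {
        Token t = next();

        switch (t.kind)
        {
        case TOKEN_OPEN_PAREN:
        {
            // Parentheses: just call the expression parser directly
            // and return the result.
            Node* node = parse_expression(0);
            expect(TOKEN_CLOSE_PAREN);
            return node;
        }

        case TOKEN_MINUS:
            // Prefix unary operator: parse the operand at high precedence.
            return make_unary_node(t, parse_expression(UNARY_PRECEDENCE));

        case TOKEN_LITERAL:
        case TOKEN_IDENTIFIER:
            return make_leaf_node(t);

        default:
            return error_node("expected an atom");
        }
    }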



Interactive Cubic Bezier Splines

Cubic Bezier splines look rather beautiful. Splines in general look great and are super useful for all sorts of cool stuff.

Creating splines usually involves defining some control points and moving those around until the desired effect is achieved. However, clicking on or interacting with the spline itself turns out to be a lot harder than just messing with control points.

In particular I want to talk about cubic Bezier splines. This kind of spline is defined by 3 * n + 1 control points, where n is the number of Bezier curves in the spline. The first four points represent the first curve, then the last point of that curve and the next three points represent the next curve, and so on.
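In code, grabbing the four control points of the i-th curve might look like this (a sketch, assuming some Vec2 type):

    // The i-th curve of the spline uses control points 3*i through 3*i + 3.
    // Adjacent curves share their boundary point.
    void get_curve(const Vec2* spline_points, int i, Vec2 out[4])
    {
        out[0] = spline_points[3 * i + 0];
        out[1] = spline_points[3 * i + 1];
        out[2] = spline_points[3 * i + 2];
        out[3] = spline_points[3 * i + 3];
    }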

The resulting spline can look like the curved arrows in this graph:

Actually interacting with these splines would be pretty difficult, since Cartesian coordinates are not in the same basis as the spline itself. What is needed is the ability to express a relation between these two spaces.

Since I haven't studied curves in great detail, much of the math involved in solving this problem goes beyond me (for the time being). However, in Graphics Gems I there's a section called “Solving the Nearest-Point-On-Curve Problem” where some public domain C code is given that computes the closest point on a cubic Bezier curve to a given input point. The C code is really old and actually calls malloc in a recursive function, presumably to avoid stack overflows on old machines with minimal memory. This can be converted to use a stack data structure in the process's stack space, all without much work.
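As a sketch of the general idea (with hypothetical helpers standing in for the actual Graphics Gems routines), the recursion becomes a loop over an explicit stack:

    // Replace recursive subdivision with an explicit stack kept in local
    // memory. MAX_DEPTH bounds the subdivision depth, just like the recursion
    // depth limit in the original code.
    struct Curve { Vec2 control_points[4]; int depth; };

    Curve stack[64];
    int top = 0;
    stack[top++] = initial_curve;

    while (top > 0)
    {
        Curve c = stack[--top];

        if (c.depth == MAX_DEPTH || flat_enough(c))
            consider_candidate(c); // track the closest point found so far
        else
        {
            // Push both halves instead of recursing into them.
            stack[top++] = split_left(c);
            stack[top++] = split_right(c);
        }
    }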

After a few minutes I was able to set up a cool demo:


This tool would be great for some kind of interactive spline editor. It would also be perfect for colliding circles (or spheres in 3D), making the curve itself a potential candidate for a physics engine collider type. Since the curve is defined by control points, the curve can actually deform at run-time without any big complications.

Approximating the Curve

In reality a spline like this would probably just be “rasterized” to some kind of approximate shape that is easier for a physics engine to collide, rather than dealing with the exact spline itself. This is likely due to there not being a straightforward way to compute the closest point on the spline to a plane (at least, I sure don’t know how to write that code).

One tactic would be to interpolate a few points out of the curve (and by few I mean as many as needed), and then use these points as an approximate representation of the curve. This could be like converting the curve to a line strip, which might make interactivity much simpler in many cases.

Generating these points can be a piece of cake. Personally I’d just write out some lerps and call it good. If we look at the simplest Bezier curve with 2 control points it’s easy to see it’s just a single lerp (these animations were taken from the Wikipedia page for Bezier curves):



If we add a third point we can perform two lerps, and then lerp the results together. Intuitively this gives us a quadratic Bezier curve, and looks like:



Following suit we can use 4 control points and create a cubic Bezier curve:



An easy way to convert a cubic Bezier curve to a line strip involves sampling points on the curve at time t, where t is on the interval 0 to 1. Here's some example code:
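    // A sketch with a hypothetical Vec2 type. A lerp is the single operation
    // underlying every Bezier curve:
    Vec2 lerp(Vec2 a, Vec2 b, float t)
    {
        return a + (b - a) * t;
    }

    // Sample the cubic Bezier defined by control points a, b, c, d at count
    // evenly spaced t values, writing the results out as a line strip.
    // Assumes count >= 2.
    void bezier_to_line_strip(Vec2 a, Vec2 b, Vec2 c, Vec2 d, Vec2* out, int count)
    {
        for (int i = 0; i < count; ++i)
        {
            float t = (float)i / (float)(count - 1);

            // Three lerps between the control points...
            Vec2 ab = lerp(a, b, t);
            Vec2 bc = lerp(b, c, t);
            Vec2 cd = lerp(c, d, t);

            // ...two lerps between those results...
            Vec2 abc = lerp(ab, bc, t);
            Vec2 bcd = lerp(bc, cd, t);

            // ...and one final lerp lands on the curve.
            out[i] = lerp(abc, bcd, t);
        }
    }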

For example, Box2D defines a “chain shape” as a line strip of segments. Generally this strip is used to approximate curved, non-moving surfaces. Splines can very easily be converted to strips and sent to Box2D to define a chain shape.
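For instance, something like this sketch (the exact CreateChain signature varies between Box2D versions):

    // Sample the spline into a strip and hand it to Box2D as a chain shape.
    b2Vec2 vertices[32];
    // ... fill vertices[] by sampling each curve of the spline, as above ...
    b2ChainShape chain;
    chain.CreateChain(vertices, 32);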

Some Spliney Links


Capsule to Convex – SAT Edge Prune via Gauss Map

In 2013 Dirk Gregorius of Valve presented at GDC on the topic of the Separating Axis Theorem. In his talk he studied a useful property of the Minkowski Difference between two 3D convex shapes: edge pairs from each shape may or may not contribute to the convex hull of the Minkowski Difference.

It is possible to derive simple predicate functions that reduce the number of edge pair queries, and simplify implementation, during collision detection via the Separating Axis Theorem. This article shows a derivation of this predicate for the Capsule to Convex case.

Here is the PDF containing the derivation:

Download (PDF)

SIMD – Matrix3x3 Transpose

Just recently I finished implementing my own personal SIMD math library using SSE intrinsics. There are two major resources for learning how to write effective SIMD code:

While inspecting the DirectXMath source I came across the implementation of transposing a 4×4 matrix:
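It is essentially the classic 8-shuffle SSE transpose, along the lines of the _MM_TRANSPOSE4_PS macro from xmmintrin.h (a sketch from memory, not the exact DirectXMath source):

    #include <xmmintrin.h>

    // 4x4 transpose in 8 shuffles. Each __m128 holds one row of the matrix.
    void transpose4x4(__m128& r0, __m128& r1, __m128& r2, __m128& r3)
    {
        __m128 t0 = _mm_shuffle_ps(r0, r1, _MM_SHUFFLE(1, 0, 1, 0)); // a00 a01 a10 a11
        __m128 t1 = _mm_shuffle_ps(r2, r3, _MM_SHUFFLE(1, 0, 1, 0)); // a20 a21 a30 a31
        __m128 t2 = _mm_shuffle_ps(r0, r1, _MM_SHUFFLE(3, 2, 3, 2)); // a02 a03 a12 a13
        __m128 t3 = _mm_shuffle_ps(r2, r3, _MM_SHUFFLE(3, 2, 3, 2)); // a22 a23 a32 a33

        r0 = _mm_shuffle_ps(t0, t1, _MM_SHUFFLE(2, 0, 2, 0)); // a00 a10 a20 a30
        r1 = _mm_shuffle_ps(t0, t1, _MM_SHUFFLE(3, 1, 3, 1)); // a01 a11 a21 a31
        r2 = _mm_shuffle_ps(t2, t3, _MM_SHUFFLE(2, 0, 2, 0)); // a02 a12 a22 a32
        r3 = _mm_shuffle_ps(t2, t3, _MM_SHUFFLE(3, 1, 3, 1)); // a03 a13 a23 a33
    }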

Lately I have been working only with 3×3 matrices and vectors. This is nice since often times 4×4 matrices store mostly useless data in the bottom row. In effect some kind of 3×4 matrix can be stored in memory to represent an affine transformation:
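    // One possible layout (a sketch): three rows of a 3x4 matrix, where the
    // upper 3x3 holds rotation (with or without scale baked in) and the
    // fourth column holds translation.
    struct Transform
    {
        __m128 x, y, z; // each row: (r0, r1, r2, translation component)
    };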

Depending on what the code is used for, the rotation matrix can have scaling built in, or not. Often times only uniform scaling is desired so that chains of transformations can easily be reversed and decomposed freely.

Since I’m only dealing with 3×3 matrices I decided to cut down on the number of shuffles as best I could, and ended up with this implementation:
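    // A sketch of the idea: 3x3 transpose in 5 shuffles. The w lanes are not
    // handled specially; their shuffle indices are simply left as 0.
    void transpose3x3(__m128& x, __m128& y, __m128& z)
    {
        __m128 t0 = _mm_shuffle_ps(x, y, _MM_SHUFFLE(1, 0, 1, 0)); // x0 x1 y0 y1
        __m128 t1 = _mm_shuffle_ps(x, y, _MM_SHUFFLE(3, 2, 3, 2)); // x2 xw y2 yw

        x = _mm_shuffle_ps(t0, z, _MM_SHUFFLE(0, 0, 2, 0)); // x0 y0 z0 z0
        y = _mm_shuffle_ps(t0, z, _MM_SHUFFLE(0, 1, 3, 1)); // x1 y1 z1 z0
        z = _mm_shuffle_ps(t1, z, _MM_SHUFFLE(0, 2, 2, 0)); // x2 y2 z2 z0
    }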

Only 5 shuffles are used here instead of the 8 from DirectXMath for the 4×4 transpose. I did not really take care of handling the w component of any of the __m128’s during the whole process. In general I just left the shuffles for w as 0.

I really don’t think another shuffle can be removed in the 3×3 case, so any further optimizations would probably be outside my realm of knowledge. If anyone knows of anything else interesting as far as transposition goes feel free to comment below.

Note: On Windows, if anyone is wondering why my function does not incur a compiler error complaining about parameter alignment, be sure to look up __vectorcall for Visual Studio 2013.

Minimum Bounding Sphere

Today I was reading this article by Ericson. The article is on the derivation involved in writing a function to calculate the minimum bounding sphere of a set of three or four points. The topic itself isn't very interesting (to me) compared to the algebra involved. Ericson skips the algebra I had trouble with, so my article here will serve to record the algebra he skipped (in case other readers had trouble, or I ever want to reference the math again). This article directly follows Ericson's, so I will assume readers have read and understood it.

Say we have a set of three points S = {A, B, C}, which can be thought of as a triangle. The problem of calculating the minimum bounding sphere of S involves checking whether the circumcenter of S lies within or outside the triangle S. Let's name the circumcenter P. One immediate implementation involves computing P, followed by the computation of the barycentric coordinates of P with respect to S. These coordinates can be used to check whether P lies within S.

However, computing P followed by the barycentric coordinates of P with respect to S involves a lot of redundant computation. Furthermore, if P does not lie within S then P itself does not need to be computed at all, since only the barycentric coordinates were needed.

It would be nice to compute barycentric coordinates of P with respect to S directly, and only if necessary construct P.

P in relation to S can be defined as:

\[ P = A + s(B - A) + t(C - A) \tag{1} \]

Where \(s\) and \(t\) are barycentric coordinates of P with respect to S (the third coordinate being \(1 - s - t\)); P lies within S when all three are non-negative. Since P is equidistant from A, B and C, we can express P in relation to S with the following:

\[ \mathrm{dot}(P - B, P - B) = \mathrm{dot}(P - A, P - A) \tag{2} \]

\[ \mathrm{dot}(P - C, P - C) = \mathrm{dot}(P - A, P - A) \tag{3} \]

The interesting part involves plugging (1) into (2) and (3), followed by a collection of terms. Since I myself am a noob when it comes to algebra I had to go look up algebraic properties of the dot product to make sure I did not screw anything up. In particular these few rules are useful:

\[
\begin{aligned}
\mathrm{dot}(u, v + w) &= \mathrm{dot}(u, v) + \mathrm{dot}(u, w) \\
\mathrm{dot}(s u, v) &= s\,\mathrm{dot}(u, v) \\
-\mathrm{dot}(u - v, u - w) &= \mathrm{dot}(v - u, u - w)
\end{aligned}
\]

Let's go over substituting (1) into (2) directly; however, let's only consider the left-hand side of (2), \( \mathrm{dot}(P - B, P - B) \), to keep it simple:

\[ \mathrm{dot}(A + s(B - A) + t(C - A) - B,\; A + s(B - A) + t(C - A) - B) \]

Since things immediately get really long some substitutions help to prevent errors:

\[
\begin{aligned}
u &= A - B \\
v &= s(B - A) + t(C - A) \\
w &= u + v
\end{aligned}
\]

\[
\begin{aligned}
\mathrm{dot}(w, u + v) &= \mathrm{dot}(w, u) + \mathrm{dot}(w, v) \\
&= \mathrm{dot}(u + v, u) + \mathrm{dot}(u + v, v) \\
&= \mathrm{dot}(u, u) + \mathrm{dot}(v, u) + \mathrm{dot}(u, v) + \mathrm{dot}(v, v)
\end{aligned}
\]

If I were to expand the last line on this webpage it would not fit. Instead we will make use of a few more substitutions and then arrive at the final form:

\[
\begin{aligned}
x &= s(B - A) \\
y &= t(C - A) \\
x + y &= v
\end{aligned}
\]

\[ \mathrm{dot}(A - B, A - B) + 2\,\mathrm{dot}(A - B, x + y) + \mathrm{dot}(x + y, x + y) \]

By following the same process we can finish the substitution and notice that:

\[ \mathrm{dot}(P - A, P - A) = \mathrm{dot}(x + y, x + y) \]

The final form of the substitution would be:

\[ \mathrm{dot}(A - B, A - B) + 2\,\mathrm{dot}(A - B, x + y) + \mathrm{dot}(x + y, x + y) = \mathrm{dot}(x + y, x + y) \]

\[ \mathrm{dot}(A - B, A - B) + 2\,\mathrm{dot}(A - B, x + y) = 0 \]

\[ \mathrm{dot}(A - B, A - B) + 2\,\mathrm{dot}(A - B, s(B - A)) + 2\,\mathrm{dot}(A - B, t(C - A)) = 0 \]

\[ s\,\mathrm{dot}(B - A, B - A) + t\,\mathrm{dot}(B - A, C - A) = \tfrac{1}{2}\,\mathrm{dot}(B - A, B - A) \tag{4} \]

This final equation (4) matches exactly what Ericson came up with on his own blog. Through a similar process (1) can be substituted into (3), which results in:

\[ s\,\mathrm{dot}(C - A, B - A) + t\,\mathrm{dot}(C - A, C - A) = \tfrac{1}{2}\,\mathrm{dot}(C - A, C - A) \tag{5} \]
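Equations (4) and (5) form a 2x2 linear system in \(s\) and \(t\), which can be solved directly with Cramer's rule. As a sketch (Vec3 and Dot are assumed helpers):

    // Solve for the barycentric coordinates s and t of the circumcenter.
    Vec3 ab = B - A;
    Vec3 ac = C - A;
    float d11 = Dot(ab, ab);
    float d12 = Dot(ab, ac);
    float d22 = Dot(ac, ac);

    // The system:
    //   s*d11 + t*d12 = d11 / 2
    //   s*d12 + t*d22 = d22 / 2
    float det = d11 * d22 - d12 * d12;
    float s = 0.5f * d22 * (d11 - d12) / det;
    float t = 0.5f * d11 * (d22 - d12) / det;

    // Only if s >= 0, t >= 0, and 1 - s - t >= 0 does the circumcenter lie
    // within the triangle, and only then does P need to be constructed:
    // P = A + s*ab + t*ac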

Circular Linked Lists and Branching

Since linked lists are such an essential topic I’ve taken some extra care to learn efficient ways of using them. The simplest kind of linked list to conceptualize is the singly linked list. There are tons of online resources for learning the basics about linked lists, so I’ll assume readers are familiar with the concept.

Here’s a quick mock header of some linked list nodes for reference:
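    // Mock nodes. Data is a stand-in for whatever payload the list stores.
    struct SinglyNode
    {
        SinglyNode* next;
        Data data;
    };

    struct DoublyNode
    {
        DoublyNode* prev;
        DoublyNode* next;
        Data data;
    };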

In general singly linked lists are more complicated to manage once removal of nodes is required. Since no explicit prev pointer is stored in memory a temporary variable is often kept on the stack while traversing a singly linked list. This means more complicated code that clogs the user’s focus.

Even though a doubly linked list requires twice the memory they are usually still preferred over singly linked lists, even when a singly linked list could get the job done without any additional time complexity. Often times linked lists are useful in complex algorithms, and if there’s a chance to simplify the implementation of a complex algorithm by using a doubly linked list, then that chance is probably worth the taking.

When I first implemented a doubly linked list and tested its performance out against std::list I couldn’t quite get it to perform well.

Naive insertion and removal of list nodes often has to check for NULL pointers, which represent the front and back of the linked list. Here’s an example of what removal might look like to give you the idea of how many if-statements could be necessary (code not tested, I just typed it out here on the spot):
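    // Untested sketch: naive removal from a doubly linked list, where NULL
    // prev/next pointers represent the front and back of the list.
    void Remove(List* list, DoublyNode* node)
    {
        if (node->prev)
            node->prev->next = node->next;
        else
            list->head = node->next; // node was the front

        if (node->next)
            node->next->prev = node->prev;
        else
            list->tail = node->prev; // node was the back
    }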

There are two if statements hit every single time this function is called. When the CPU comes across a branch it loads instructions based on which path of execution it deems most likely. This is called branch prediction. If this prediction is incorrect the loaded instructions must be unloaded, and then the instructions for the correct path must be loaded.

Recovering from a branch miss here is probably going to be very fast, since the executing code is almost definitely in the L1 instruction cache. But despite being fast, modern CPUs still operate through a pipeline, and branch misses can still garble up whatever pipelining is happening. In the end a branch miss is a performance hit, and should be avoided when appropriate.

A common linked list optimization is to use a dummy head and tail node. These nodes sit in memory along with the list data structure. Upon list initialization they point their next and previous pointers to one another, and NULL out the pointers to represent the front and back of the list.

With this optimization the only case that user nodes will ever encounter is the case in the first two if statements (assuming both were true). The removal code can now look something like (again, not tested):
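    // With dummy head and tail nodes every user node always has valid prev
    // and next pointers, so removal needs no branches at all.
    void Remove(DoublyNode* node)
    {
        node->prev->next = node->next;
        node->next->prev = node->prev;
    }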

This is one kind of optimization that std implementations use. After doing this myself my list performed evenly with the std's implementation.

Intrusive Lists

Intrusively linked lists invert the definition of what a node is. Traditionally a linked list node contains some data. An intrusive list has the data contain the node:
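    // Intrusive style: the data contains the node.
    struct Node
    {
        Node* prev;
        Node* next;
    };

    struct Data
    {
        // ... the actual payload ...
        Node node; // the list node lives inside the data itself
    };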

This scheme is nice since now nodes do not need to be allocated separately from the data. If the number of data elements is known, then the exact number of nodes needed can also be known.

C++ templates can be used to create a generic intrusively linked list implementation, able to define nodes inside of any data type. C macros can also be used to the same effect. In this way an intrusively linked list can be used in pretty much the same way a normal linked list is.
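For example, the classic offsetof trick can recover the containing data from a pointer to an embedded node:

    #include <cstddef> // offsetof

    // Given a pointer to a member node, step back to the containing object.
    #define CONTAINER_OF(ptr, Type, member) \
        ((Type*)((char*)(ptr) - offsetof(Type, member)))

    // Usage: Data* d = CONTAINER_OF(some_node_ptr, Data, node);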

One major downside to intrusively linked lists is that they add in extra memory to your data. This can be a big deal if some code is very performance sensitive. If cache line utilization is important, then the percentage of data actually used in each line becomes important. Sometimes these pointers get in the way and clutter the lines. This cluttering is something to be aware of.

On the flip side many algorithms can run on arrays of data. Instead of storing explicit pointers to represent prev and next connections, indices into an array can be used. This can make entire data structures memcpy-able, or serializable just by dumping bits to a stream. Additionally, the pointers stored directly within data will often be accessed at the exact same time as the data (depending on the algorithm), which results in very high cache line utilization.

It all depends on the scenario.

Circular Lists (Sentinel)

When dealing with intrusive linked lists it can often be really weird to define where in memory dummy nodes would reside. Are we to create dummy pieces of Data? What if the algorithm needs lists to be constantly created and destroyed? What if the algorithm can have as many lists as there are nodes? Suddenly the algorithm might need twice as many dummy nodes as actual nodes!

For example, imagine a hash table implemented with collision chaining. If we wish to use doubly linked lists, dummy nodes are probably out of the question if you care about all the wasted memory. However, it does suck to take a performance hit constantly testing for NULL node connections.

It is possible to remove the dummy nodes in many cases. Data elements can be initialized to point to themselves. In this way each element is itself a doubly linked list with one node. To insert a second node is a matter of making both nodes point to each other. Inserting a third node should use the exact same code as inserting the second node (and not require any branching since NULL indices/pointers do not exist), and so on.
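A sketch of the idea:

    // A node pointing at itself is a circular doubly linked list of one.
    void Init(Node* node)
    {
        node->prev = node;
        node->next = node;
    }

    // Insert 'added' after 'anchor'. The exact same branch-free code handles
    // the second, third, and Nth insertions.
    void Insert(Node* anchor, Node* added)
    {
        added->prev = anchor;
        added->next = anchor->next;
        anchor->next->prev = added;
        anchor->next = added;
    }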

In many cases an intrusive circular doubly linked list (boy, isn't that a mouthful) can be the perfect solution to a hard problem! I will leave a full implementation of this circular style of linked list as an exercise.

Another name for this type of list would be a “sentinel intrusive list”, where a sentinel node can be used to bound a list traversal. Since our linked lists are circular we can start at any node, traverse the list, and once we reach the node we started upon our traversal is complete.

C++ Keyword inline and .inl Files

While at the bar a group of friends jokingly mocked some of the more silly features of C++. The initial banter consisted of how the STL implemented everything including the kitchen sink, though forgot to implement std::girlfriend.

Wouldn't std::girlfriend be great? We can plug in any type of girlfriend we want into the template parameters and the compiler will just generate one for us! Why in the world would std::girlfriend be omitted from the STL?

Oh of course, std::girlfriend was never implemented because everyone is just going to put in way too many specific template types (super hot, not crazy) and it'll just end in a bunch of “failed to specialize template” error messages. And then the moment too many of the template parameters are removed we'll just get a bunch of “multiple symbols defined” linker errors! Maybe it was a good idea to never implement std::girlfriend in the first place. After all, a girlfriend prefixed with std might make one think of something other than C++…

Jokes aside, I brought up the fact that inline is totally useless for inlining. The only real reason to use the inline keyword (in my opinion) is to be able to define functions within a header. Well, I brought it up as a joke, but not really a joke, and that's the joke.

The inline keyword and .inl files can actually be a pretty nice organizational tool for code, and I’ve found it helps users that didn’t write the implementation understand the code.

Say we are implementing some kind of algorithm that stores elements in an array. Elements need to refer to one another (perhaps to build intrusive linked lists), although these arrays ought to be relocatable in memory without requiring any complex copy routines; a single memcpy should yield a new and valid copy.

One way to do so is to make use of array indices instead of pointers. Usually a myriad of small helper functions will arise to clean up all of the array indexing that ensues shortly after this kind of code crops up. It's a huge pain to look into a .cpp file and have to continually navigate past a lot of tiny and trivial helper functions just to understand the algorithm.

These small helpers can be swept to the side into a .inl file. The .inl file signature immediately tells the user what kind of code resides within (either templates or inlined functions), and usually this kind of code isn’t very necessary to understand the more heavy duty code within the .cpp file.

Here’s a mock example:
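    // example.h -- the interface, kept small
    struct IndexList
    {
        int* next_indices;
        int count;
    };

    void RunInterestingAlgorithm(IndexList* list);

    #include "example.inl"

    // example.inl -- tiny, trivial helpers swept off to the side
    inline int Next(const IndexList* list, int i)
    {
        return list->next_indices[i];
    }

    // example.cpp -- the heavy duty code, free of clutter
    void RunInterestingAlgorithm(IndexList* list)
    {
        // ... the actual algorithm, written in terms of the small helpers ...
    }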

Aren't these example files pretty easy to read? I'm sure you at least scanned the .inl file briefly, and will probably never really need to look at it again. Time will be well spent in the .cpp file with less code to clog your brain. And who knows, maybe the compiler (or perhaps the linker) actually cares a little bit when we type the inline keyword.

Computing AABB Trick (Loop Trick)

Lately I noticed a small trick that applies to loops when trying to find a minimum or maximum of some values. Usually I just apply the trick to loops where I need to compute an AABB over some geometry. I think I noticed this trick when reading a for loop Erin Catto wrote in some of Box2D’s internal code.

The trick is super simple: just process the first element outside of the loop to set up your initial conditions, then form your loop to skip the first element. An assumption would be made that there’s at least one element in the array to process. Here’s an example for computing an AABB:
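    // A sketch with assumed Vec2/AABB types and component-wise Min/Max
    // helpers. The first element sets up the initial conditions, and the
    // loop skips it (assumes count >= 1).
    AABB ComputeAABB(const Vec2* verts, int count)
    {
        AABB box;
        box.min = verts[0];
        box.max = verts[0];

        for (int i = 1; i < count; ++i)
        {
            box.min = Min(box.min, verts[i]);
            box.max = Max(box.max, verts[i]);
        }

        return box;
    }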

Usually I myself would have written this kind of code like so and not given any more thought to it:
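    #include <cfloat> // FLT_MAX

    // Seed with +/- FLT_MAX sentinels and process every element, including
    // the first one.
    AABB ComputeAABB(const Vec2* verts, int count)
    {
        AABB box;
        box.min = Vec2(FLT_MAX, FLT_MAX);
        box.max = Vec2(-FLT_MAX, -FLT_MAX);

        for (int i = 0; i < count; ++i)
        {
            box.min = Min(box.min, verts[i]);
            box.max = Max(box.max, verts[i]);
        }

        return box;
    }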

This second code chunk is arguably just slightly more esoteric and is definitely a little less efficient for no good reason.

One could also skip the first element when finding the min/max of any sort of array (for example, an array of dot product results). Though simple, it's pretty nice to find small ways to write slightly better code.

Small C++ Reflection Demo

I created a small demonstration program that explains the core ideas behind implementing a custom reflection system for C++. More might be written in this post in the future — for now I’m just storing the demo right here on this webpage:


Robust Parallel Vector Test

In an effort to try to find some robust tests for parallel vectors I’ve decided to create some slides talking about the solutions I’ve come across. Hopefully one day some reader can comment or email about some alternatives or better solutions! I decided to write this blog post in slide format since Microsoft’s Powerpoint is pretty snazzy for making quick diagrams and whatnot :)

Here’s the link to Ericson’s blog post on tolerances: Tolerances Revisited. Please note there’s a little bit of ambiguity about whether the test should care if the vectors point in the same direction or not. In general it doesn’t really matter since the point of the slides is numeric robustness.

The gist of the slides is that the scale of the vectors in question matters for certain applications. Computing a good relative epsilon seems difficult, and maybe different epsilon calculations would be good for different applications. I’m not sure!

Here’s a demo you can try out yourself to test two of the solutions from the slides (tests returning true should be considered false positives):
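    // A sketch of two parallel vector tests: an absolute epsilon on the
    // length of the cross product, and a relative epsilon scaled by the
    // magnitudes of the inputs.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3 Cross(Vec3 a, Vec3 b)
    {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    float Length(Vec3 v)
    {
        return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    }

    bool ParallelAbsolute(Vec3 a, Vec3 b, float epsilon)
    {
        return Length(Cross(a, b)) < epsilon;
    }

    bool ParallelRelative(Vec3 a, Vec3 b, float epsilon)
    {
        // Scale the tolerance by the magnitudes of the inputs.
        return Length(Cross(a, b)) < epsilon * Length(a) * Length(b);
    }

    int main()
    {
        // Perpendicular, but tiny: an absolute epsilon reports a false
        // positive here, while the relative test does not.
        Vec3 a = { 1.0e-4f, 0.0f, 0.0f };
        Vec3 b = { 0.0f, 1.0e-4f, 0.0f };

        std::printf("absolute test: %d\n", ParallelAbsolute(a, b, 1.0e-3f));
        std::printf("relative test: %d\n", ParallelRelative(a, b, 1.0e-3f));
        return 0;
    }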

Output of the program is:
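    absolute test: 1
    relative test: 0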

Download (PDF)