
SIMD – Matrix3x3 Transpose

Just recently I finished implementing my own personal SIMD math library using SSE intrinsics. There are two major resources for learning how to write effective SIMD code:

While inspecting the DirectXMath source I came across the implementation of transposing a 4×4 matrix:
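The original listing isn't reproduced here, but the 4×4 transpose boils down to 8 shuffles. Here's a sketch of the same idea in plain SSE intrinsics (my own paraphrase of the DirectXMath approach, not a verbatim copy; rows are __m128 values in row-major order):

#include <xmmintrin.h>

// Transpose a 4x4 matrix stored as four row vectors
void Transpose4x4( __m128* r0, __m128* r1, __m128* r2, __m128* r3 )
{
    // Gather xy and zw pairs from adjacent rows
    __m128 t0 = _mm_shuffle_ps( *r0, *r1, _MM_SHUFFLE( 1, 0, 1, 0 ) ); // x0 y0 x1 y1
    __m128 t1 = _mm_shuffle_ps( *r0, *r1, _MM_SHUFFLE( 3, 2, 3, 2 ) ); // z0 w0 z1 w1
    __m128 t2 = _mm_shuffle_ps( *r2, *r3, _MM_SHUFFLE( 1, 0, 1, 0 ) ); // x2 y2 x3 y3
    __m128 t3 = _mm_shuffle_ps( *r2, *r3, _MM_SHUFFLE( 3, 2, 3, 2 ) ); // z2 w2 z3 w3

    // Interleave the pairs into the transposed rows
    *r0 = _mm_shuffle_ps( t0, t2, _MM_SHUFFLE( 2, 0, 2, 0 ) ); // x0 x1 x2 x3
    *r1 = _mm_shuffle_ps( t0, t2, _MM_SHUFFLE( 3, 1, 3, 1 ) ); // y0 y1 y2 y3
    *r2 = _mm_shuffle_ps( t1, t3, _MM_SHUFFLE( 2, 0, 2, 0 ) ); // z0 z1 z2 z3
    *r3 = _mm_shuffle_ps( t1, t3, _MM_SHUFFLE( 3, 1, 3, 1 ) ); // w0 w1 w2 w3
}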

Lately I have been working only with 3×3 matrices and vectors. This is nice since 4×4 matrices often store mostly useless data in the bottom row. In effect, a kind of 3×4 matrix can be stored in memory to represent an affine transformation:
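Something along these lines, where the bottom row of a full 4×4 is implied as (0, 0, 0, 1) (the names here are my own, not necessarily the original post's):

struct Transform
{
    __m128 rows[ 3 ];  // 3x3 rotation (possibly with scale); w lanes unused
    __m128 position;   // translation; together these form a 3x4 affine transform
};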

Depending on what the code is used for, the rotation matrix can have scaling built in or not. Oftentimes only uniform scaling is desired so that chains of transformations can easily be reversed and decomposed freely.

Since I’m only dealing with 3×3 matrices I decided to cut down on the number of shuffles as best I could, and ended up with this implementation:
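The original listing isn't shown here, but the following sketch matches the description (5 shuffles, with the w lanes treated as don't-care); treat it as illustrative rather than the exact original:

#include <xmmintrin.h>

// Transpose the 3x3 part of three row vectors; the w lanes end up with garbage
void Transpose3x3( __m128* a, __m128* b, __m128* c )
{
    __m128 t0 = _mm_shuffle_ps( *a, *b, _MM_SHUFFLE( 1, 0, 1, 0 ) ); // a0 a1 b0 b1
    __m128 t1 = _mm_shuffle_ps( *a, *b, _MM_SHUFFLE( 2, 2, 2, 2 ) ); // a2 a2 b2 b2

    *a = _mm_shuffle_ps( t0, *c, _MM_SHUFFLE( 0, 0, 2, 0 ) ); // a0 b0 c0 --
    *b = _mm_shuffle_ps( t0, *c, _MM_SHUFFLE( 0, 1, 3, 1 ) ); // a1 b1 c1 --
    *c = _mm_shuffle_ps( t1, *c, _MM_SHUFFLE( 0, 2, 2, 0 ) ); // a2 b2 c2 --
}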

Only 5 shuffles are used here instead of the 8 from the DirectXMath 4×4 transpose. I did not really take care of handling the w component of any of the __m128s during the whole process. In general I just left the shuffle index for w as 0.

I really don’t think another shuffle can be removed in the 3×3 case, so any further optimizations would probably be outside my realm of knowledge. If anyone knows of anything else interesting as far as transposition goes feel free to comment below.

Note: On Windows, if anyone is wondering why my function does not incur a compiler error complaining about parameter alignment, be sure to look up __vectorcall for Visual Studio 2013.


Minimum Bounding Sphere

Today I was reading this article by Ericson. The article covers the derivation involved in writing a function to calculate the minimum bounding sphere of a set of three or four points. The topic itself isn't very interesting (to me) when compared to the algebra behind it. Ericson skips the algebra I had trouble with, so my article here will serve to record the algebra he skipped (in case other readers had trouble, or I ever want to reference the math again). This article directly follows Ericson's, so I will assume readers have read and understood his piece.

Say we have a set of three points S = {A, B, C}, which can be thought of as a triangle. Calculating the minimum bounding sphere of S involves checking whether the circumcenter of S lies within or outside the triangle. Let's name the circumcenter P. One immediate implementation computes P, followed by the barycentric coordinates of P with respect to S. These coordinates can be used to check if P lies within S.

However, computing P followed by the barycentric coordinates of P with respect to S involves a lot of redundant computation. Furthermore, if P does not lie within S then P itself never needed to be computed at all, since only the barycentric coordinates were needed.

It would be nice to compute the barycentric coordinates of P with respect to S directly, and only construct P if necessary.

P in relation to S can be defined as:

\begin{equation}
\label{eq1}
P = A + s*(B - A) + t*(C - A)
\end{equation}

Where \(s\) and \(t\) are barycentric coordinates of P with respect to S (the third coordinate being \(1 - s - t\)); P lies within S when all three coordinates are non-negative. Since P is equidistant from A, B and C, we can express P in relation to S with the following:

\begin{equation}
\label{eq2}
Dot(P - B, P - B) = Dot(P - A, P - A)
\end{equation}

\begin{equation}
\label{eq3}
Dot(P - C, P - C) = Dot(P - A, P - A)
\end{equation}

The interesting part involves plugging \eqref{eq1} into \eqref{eq2} and \eqref{eq3}, followed by collecting terms. Since I myself am a noob when it comes to algebra, I had to go look up algebraic properties of the dot product to make sure I did not screw anything up. In particular these few rules are useful:

\begin{equation}
\label{eq4}
Dot(u, v + w) = Dot(u, v) + Dot(u, w) \\
Dot(s * u, v) = s*Dot(u, v) \\
-Dot(u - v, u - w) = Dot(v - u, u - w)
\end{equation}

Let's go over substituting \eqref{eq1} into \eqref{eq2} directly; to keep it simple, let's only consider the left side of \eqref{eq2}, \(Dot(P - B, P - B)\):

\begin{equation}
\label{eq5}
Dot(A + s*(B - A) + t*(C - A) - B, A + s*(B - A) + t*(C - A) - B)
\end{equation}

Since things immediately get really long, some substitutions help to prevent errors:

\begin{equation}
\label{eq6}
u = A - B \\
v = s*(B - A) + t*(C - A) \\
w = u + v
\end{equation}

\begin{equation}
\label{eq7}
Dot(w, u + v)
\end{equation}

\begin{equation}
\label{eq8}
Dot(w, u) + Dot(w, v)
\end{equation}

\begin{equation}
\label{eq9}
Dot(u + v, u) + Dot(u + v, v)
\end{equation}

\begin{equation}
\label{eq10}
Dot(u, u) + Dot(v, u) + Dot(u, v) + Dot(v, v)
\end{equation}

If I were to expand \eqref{eq10} on this webpage it would not fit. Instead we will make use of a few more substitutions and then arrive at the final form:

\begin{equation}
\label{eq11}
x = s*(B - A) \\
y = t*(C - A) \\
x + y = v
\end{equation}

\begin{equation}
\label{eq12}
Dot(A - B, A - B) + 2*Dot(A - B, x + y) + Dot(x + y, x + y)
\end{equation}

By following the same process we can finish the substitution and notice that:

\begin{equation}
\label{eq13}
Dot(P - A, P - A) = Dot(x + y, x + y)
\end{equation}

The final form of the substitution would be:

\begin{equation}
\label{eq14}
Dot(A - B, A - B) + \\ 2*Dot(A - B, x + y) + \\ Dot(x + y, x + y) = Dot(x + y, x + y)
\end{equation}

\begin{equation}
\label{eq15}
Dot(A - B, A - B) + 2*Dot(A - B, x + y) = 0
\end{equation}

\begin{equation}
\label{eq16}
Dot(A - B, A - B) + \\ 2*Dot(A - B, s*(B - A)) + \\ 2*Dot(A - B, t*(C - A)) = 0
\end{equation}

\begin{equation}
\label{eq17}
s*Dot(B - A, B - A) + t*Dot(B - A, C - A) = \\ (1/2)*Dot(B - A, B - A)
\end{equation}

This final equation \eqref{eq17} matches exactly what Ericson came up with in his own article. Through a similar process \eqref{eq1} can be substituted into \eqref{eq3}, which results in:

\begin{equation}
\label{eq18}
s*Dot(C - A, B - A) + t*Dot(C - A, C - A) = \\ (1/2)*Dot(C - A, C - A)
\end{equation}
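Equations \eqref{eq17} and \eqref{eq18} form a 2×2 linear system in \(s\) and \(t\). As a sanity check, here's a small sketch (my own, not Ericson's code) that solves the system with Cramer's rule; the circumcenter only needs to be constructed afterwards, when \(s \ge 0\), \(t \ge 0\) and \(s + t \le 1\):

struct v3 { float x, y, z; };

static float Dot( v3 a, v3 b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static v3 Sub( v3 a, v3 b ) { return v3{ a.x - b.x, a.y - b.y, a.z - b.z }; }

// Barycentric coordinates s and t of the circumcenter of triangle ABC,
// where the circumcenter itself is A + s * (B - A) + t * (C - A)
void CircumcenterBarycentric( v3 A, v3 B, v3 C, float* s, float* t )
{
    v3 ab = Sub( B, A );
    v3 ac = Sub( C, A );

    // The system from eq. 17 and eq. 18:
    //   s * Dot( ab, ab ) + t * Dot( ab, ac ) = (1/2) * Dot( ab, ab )
    //   s * Dot( ac, ab ) + t * Dot( ac, ac ) = (1/2) * Dot( ac, ac )
    float a = Dot( ab, ab );
    float b = Dot( ab, ac );
    float c = Dot( ac, ac );
    float det = a * c - b * b; // zero for a degenerate (collinear) triangle

    *s = 0.5f * c * (a - b) / det;
    *t = 0.5f * a * (c - b) / det;
}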

Circular Linked Lists and Branching

Since linked lists are such an essential topic I’ve taken some extra care to learn efficient ways of using them. The simplest kind of linked list to conceptualize is the singly linked list. There are tons of online resources for learning the basics about linked lists, so I’ll assume readers are familiar with the concept.

Here’s a quick mock header of some linked list nodes for reference:
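Something like the following (my own mock, since the original header isn't shown here):

template <typename T>
struct SinglyNode
{
    T data;
    SinglyNode* next;
};

template <typename T>
struct DoublyNode
{
    T data;
    DoublyNode* next;
    DoublyNode* prev;
};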

In general singly linked lists are more complicated to manage once removal of nodes is required. Since no explicit prev pointer is stored in memory, a temporary variable is often kept on the stack while traversing a singly linked list. This means more complicated code that clogs the user's focus.

Even though a doubly linked list requires twice the pointer memory, doubly linked lists are usually still preferred over singly linked lists, even when a singly linked list could get the job done without any additional time complexity. Oftentimes linked lists are useful in complex algorithms, and if there's a chance to simplify the implementation of a complex algorithm by using a doubly linked list, then that chance is probably worth taking.

When I first implemented a doubly linked list and tested its performance out against std::list I couldn’t quite get it to perform well.

Naive insertion and removal of list nodes often has to check for NULL pointers, which represent the front and back of the linked list. Here’s an example of what removal might look like to give you the idea of how many if-statements could be necessary (code not tested, I just typed it out here on the spot):
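The original snippet isn't reproduced here, so below is a reconstruction in the same spirit (Node and List are my own minimal definitions, with NULL pointers marking the front and back):

struct Node
{
    Node* next;
    Node* prev;
};

struct List
{
    Node* head;
    Node* tail;
};

void Remove( List* list, Node* node )
{
    if ( node->prev )
        node->prev->next = node->next;
    else
        list->head = node->next; // node was the front

    if ( node->next )
        node->next->prev = node->prev;
    else
        list->tail = node->prev; // node was the back
}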

There are two if statements hit every single time this function is called. When the CPU comes across a branch it loads instructions based on which path of execution it deems most likely. This is called branch prediction. If this prediction is incorrect the loaded code must be unloaded, and then the appropriate code must be re-loaded.

This branch miss is probably going to be a very fast CPU operation since the executing code is almost definitely in the L1 instruction cache. Despite being fast, modern CPUs still operate through a pipeline, and branch misses can still garble up whatever pipelining is happening. In the end a branch miss is a performance hit, and should be avoided when appropriate.

A common linked list optimization is to use a dummy head and tail node. These nodes sit in memory along with the list data structure. Upon list initialization they point their next and previous pointers at one another, and their remaining outward pointers are set to NULL to represent the front and back of the list.

With this optimization the only case user nodes will ever encounter is the one where both of the previous if statements are true, since prev and next are now never NULL. The removal code can now look something like (again, not tested):
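Again a reconstruction of the idea, reusing the Node struct from above:

void Remove( Node* node )
{
    // Dummy head and tail nodes guarantee prev and next are never NULL
    node->prev->next = node->next;
    node->next->prev = node->prev;
}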

This is one kind of optimization that std::list implementations use. After doing this myself my list performed evenly with the std's implementation.

Intrusive Lists

Intrusively linked lists invert the definition of what a node is. Traditionally a linked list node contains some data. An intrusive list has the data contain the node:
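A sketch of the inversion (the payload members here are hypothetical):

struct Node
{
    Node* next;
    Node* prev;
};

struct Data
{
    int health;   // hypothetical payload members
    float speed;

    Node node;    // intrusive: the data contains the node
};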

This scheme is nice since now nodes do not need to be allocated separately from the data. If the number of data elements is known, then the exact number of nodes needed can also be known.

C++ templates can be used to create a generic intrusively linked list implementation, able to define nodes inside of any data type. C macros can also be used to the same effect. In this way an intrusively linked list can be used in pretty much the same way a normal linked list is.

One major downside to intrusive linked lists is that they add extra memory to your data. This can be a big deal if some code is very performance sensitive. If cache line utilization is important, then the percentage of each line actually used becomes important. Sometimes these pointers get in the way and clutter the lines, and this cluttering is something to be aware of.

On the flip side many algorithms can run on arrays of data. Instead of storing explicit pointers to represent prev and next connections, indices into an array can be used. This can make entire data structures memcpy-able, or serializable just by dumping bits to a stream. Additionally, the pointers stored directly within data will often be accessed at the exact same time (depending on the algorithm), which results in very high cache line utilization.

It all depends on the scenario.

Circular Lists (Sentinel)

When dealing with intrusive linked lists it can often be really weird to define where in memory dummy nodes would reside. Are we to create dummy pieces of Data? What if the algorithm needs lists to be constantly created and destroyed? What if the algorithm can have as many lists as there are nodes? Suddenly the algorithm might need twice as many dummy nodes as actual nodes!

For example, imagine a hash table implemented with collision chaining. If we wish to use doubly linked lists, dummy nodes are probably out of the question if you care about all the wasted memory. However, it does suck to take a performance hit constantly testing for NULL node connections.

It is possible to remove the dummy nodes in many cases. Data elements can be initialized to point to themselves. In this way each element is itself a doubly linked list with one node. To insert a second node is a matter of making both nodes point to each other. Inserting a third node should use the exact same code as inserting the second node (and not require any branching since NULL indices/pointers do not exist), and so on.
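A tiny sketch of that idea (illustrative only; the full data structure is still left as the exercise mentioned below):

struct Node
{
    Node* next;
    Node* prev;
};

void InitNode( Node* n )
{
    // A lone node is itself a circular doubly linked list
    n->next = n;
    n->prev = n;
}

void InsertAfter( Node* anchor, Node* n )
{
    // No NULL checks required: every pointer is always valid
    n->next = anchor->next;
    n->prev = anchor;
    anchor->next->prev = n;
    anchor->next = n;
}

void Remove( Node* n )
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    InitNode( n ); // leave n as a valid one-node list
}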

In many cases an intrusive circular doubly linked list (boy, isn’t that a mouthful) can be the perfect solution to a hard problem! I will leave it as an exercise to research or implement this circular style of linked list.

Another name for this type of list is a “sentinel intrusive list”, where a sentinel node can be used to bound a list traversal. Since our linked lists are circular we can start at any node, traverse the list, and once we reach the node we started at, the traversal is complete.

C++ Keyword inline and .inl Files

While at the bar a group of friends jokingly mocked some of the more silly features of C++. The initial banter consisted of how the STL implemented everything including the kitchen sink, though forgot to implement std::girlfriend.

Wouldn't std::girlfriend be great? We can plug in any type of girlfriend we want into the template parameters and the compiler will just generate one for us! Why in the world would std::girlfriend be omitted from the STL?

Oh of course, std::girlfriend was never implemented because everyone is just going to put in way too many specific template types (super hot, not crazy) and it'll just end in a bunch of "failed to specialize template" error messages. And then the moment too many of the template parameters are removed we'll just get a bunch of "multiple symbols defined" linker errors! Maybe it was a good idea to never implement std::girlfriend in the first place. After all, a girlfriend prefixed with std might make one think of something other than C++…

Jokes aside, I brought up the fact that inline is totally useless for inlining. The only real reason to use the inline keyword (in my opinion) is to be able to define functions within a header. Well, I brought it up as a joke, but not really a joke, and that's the joke.

The inline keyword and .inl files can actually be a pretty nice organizational tool for code, and I’ve found it helps users that didn’t write the implementation understand the code.

Say we are implementing some kind of algorithm that stores elements in an array. Elements need to refer to one another (perhaps to build intrusive linked lists), although these arrays ought to be relocatable in memory without requiring any complex copy routines; a single memcpy should yield a new and valid copy.

One way to do so is to make use of array indices instead of pointers. Usually a myriad of small helper functions will arise to clean up all of the array indexing that ensues shortly after this kind of code crops up. It's a huge pain to look into a .cpp file and have to continually navigate past a lot of tiny and trivial helper functions just to understand the algorithm.

These small helpers can be swept to the side into a .inl file. The .inl file signature immediately tells the user what kind of code resides within (either templates or inlined functions), and usually this kind of code isn’t very necessary to understand the more heavy duty code within the .cpp file.

Here’s a mock example:
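The original example files aren't reproduced here, so here's my own small mock of the layout; the trivial index helpers live in the .inl, while the .cpp keeps only the interesting algorithm:

// elements.h
struct Elements
{
    int* nextIndex; // intrusive links stored as indices, not pointers
    float* values;
    int count;
};

int Next( const Elements* e, int i );
float Value( const Elements* e, int i );

#include "elements.inl"

// elements.inl
inline int Next( const Elements* e, int i )
{
    return e->nextIndex[ i ];
}

inline float Value( const Elements* e, int i )
{
    return e->values[ i ];
}

// elements.cpp
float SumChain( const Elements* e, int start )
{
    // The heavy duty logic stays here, uncluttered by trivial helpers
    float sum = 0;
    for ( int i = start; i != -1; i = Next( e, i ) )
        sum += Value( e, i );
    return sum;
}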

Aren't these example files pretty easy to read? I'm sure you at least scanned the .inl file briefly, and will probably never really need to look at it again. Time will be well spent in the .cpp file with less code to clog your brain. And who knows, maybe the compiler (or perhaps the linker) actually cares a little bit when we type the inline keyword.

Computing AABB Trick (Loop Trick)

Lately I noticed a small trick that applies to loops when trying to find a minimum or maximum of some values. Usually I just apply the trick to loops where I need to compute an AABB over some geometry. I think I noticed this trick when reading a for loop Erin Catto wrote in some of Box2D’s internal code.

The trick is super simple: just process the first element outside of the loop to set up your initial conditions, then form your loop to skip the first element. This assumes there's at least one element in the array to process. Here's an example for computing an AABB:
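A sketch of the trick (v3, AABB, Min and Max are assumed helpers of my own, not code from Box2D):

struct v3 { float x, y, z; };

struct AABB
{
    v3 min;
    v3 max;
};

static v3 Min( v3 a, v3 b ) { return v3{ a.x < b.x ? a.x : b.x, a.y < b.y ? a.y : b.y, a.z < b.z ? a.z : b.z }; }
static v3 Max( v3 a, v3 b ) { return v3{ a.x > b.x ? a.x : b.x, a.y > b.y ? a.y : b.y, a.z > b.z ? a.z : b.z }; }

// Assumes count >= 1
AABB ComputeAABB( const v3* verts, int count )
{
    AABB box;
    box.min = verts[ 0 ]; // initial conditions from the first element
    box.max = verts[ 0 ];

    for ( int i = 1; i < count; ++i ) // the loop skips the first element
    {
        box.min = Min( box.min, verts[ i ] );
        box.max = Max( box.max, verts[ i ] );
    }

    return box;
}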

Usually I myself would have written this kind of code like so and not given any more thought to it:
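Something like this, using sentinel extremes instead (again a sketch, reusing the helpers above):

#include <float.h>

AABB ComputeAABB_Old( const v3* verts, int count )
{
    AABB box;
    box.min = v3{ FLT_MAX, FLT_MAX, FLT_MAX };
    box.max = v3{ -FLT_MAX, -FLT_MAX, -FLT_MAX };

    for ( int i = 0; i < count; ++i ) // the first iteration only compares against sentinels
    {
        box.min = Min( box.min, verts[ i ] );
        box.max = Max( box.max, verts[ i ] );
    }

    return box;
}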

This second code chunk is arguably just slightly more esoteric and is definitely a little less efficient for no good reason.

One could also skip the first element when finding the min/max of any sort of array, for example an array of dot product results. Though simple, it's pretty nice to find small ways to write slightly better code.

Small C++ Reflection Demo

I created a small demonstration program that explains the core ideas behind implementing a custom reflection system for C++. More might be written in this post in the future — for now I’m just storing the demo right here on this webpage:
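Since the embedded demo may not render here, below is a compressed sketch of the core idea (my own illustration, not the demo itself): map each registered type to a static runtime descriptor through a specialized function template.

#include <cstdio>

struct Type
{
    const char* name;
    int size;
};

template <typename T>
const Type* GetType(); // specialized for each registered type

#define REFLECT( T )                                  \
    template <>                                       \
    const Type* GetType<T>()                          \
    {                                                 \
        static const Type type = { #T, (int)sizeof( T ) }; \
        return &type;                                 \
    }

REFLECT( int )
REFLECT( float )

int main()
{
    std::printf( "%s : %d bytes\n", GetType<int>()->name, GetType<int>()->size );
    std::printf( "%s : %d bytes\n", GetType<float>()->name, GetType<float>()->size );
    return 0;
}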


Robust Parallel Vector Test

In an effort to find some robust tests for parallel vectors I've decided to create some slides talking about the solutions I've come across. Hopefully one day some reader can comment or email about some alternatives or better solutions! I decided to write this blog post in slide format since Microsoft's Powerpoint is pretty snazzy for making quick diagrams and whatnot :)

Here’s the link to Ericson’s blog post on tolerances: Tolerances Revisited. Please note there’s a little bit of ambiguity about whether the test should care if the vectors point in the same direction or not. In general it doesn’t really matter since the point of the slides is numeric robustness.

The gist of the slides is that the scale of the vectors in question matters for certain applications. Computing a good relative epsilon seems difficult, and maybe different epsilon calculations would be good for different applications. I’m not sure!
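For flavor, here's one plausible relative formulation (a sketch of my own, not necessarily either of the solutions in the slides): compare the squared cross product against a tolerance scaled by the squared magnitudes of both inputs.

struct v3 { float x, y, z; };

static float Dot( v3 a, v3 b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static v3 Cross( v3 a, v3 b )
{
    return v3{ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// |Cross( a, b )|^2 == |a|^2 * |b|^2 * sin^2( theta ), so this tests
// sin( theta ) <= tol regardless of the scale of a and b
bool Parallel( v3 a, v3 b, float tol )
{
    v3 c = Cross( a, b );
    return Dot( c, c ) <= tol * tol * Dot( a, a ) * Dot( b, b );
}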

Here’s a demo you can try out yourself to test two of the solutions from the slides (tests returning true should be considered false positives):

Output of the program is:


Nearly Geodesic Sphere – Mesh Creation


This post will let me store some useful code and possibly share it with others. I had originally found this wonderful explanation on Stack Overflow on how to generate triangles in a super simple manner for a nice sphere mesh.

The idea is to take an octahedron, or icosahedron, and subdivide the triangles on the surface of the mesh such that each triangle creates 4 new triangles (each triangle becomes a Triforce symbol).

One thing the Stack Overflow page didn't describe is intermittent normalization between each subdivision. If you imagine subdividing an octahedron over and over without any normalization, the final resulting sphere will have triangles that vary in size quite a bit. However, if after each subdivision every single vertex is normalized, then the vertices snap onto the unit sphere more often. This results in a final mesh with triangles of closer to uniform geodesic area.

The final mesh isn't purely geodesic and there will be variation in the size of the triangles, but it will be hardly noticeable. Sphere meshes will look super nice and also behave well when simulating soft bodies with Matyka's pressure volume model.

Here's an example program you can use to perform some subdivisions upon an octahedron (click here to view the program's output):
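Since the embedded source may not render here, below is a sketch of one subdivision pass (my own reconstruction, not the original program): each triangle is split into four, and every new vertex is normalized back onto the unit sphere as it is created.

#include <cmath>
#include <vector>

struct v3 { float x, y, z; };

static v3 Normalize( v3 a )
{
    float len = std::sqrt( a.x * a.x + a.y * a.y + a.z * a.z );
    return v3{ a.x / len, a.y / len, a.z / len };
}

static v3 Midpoint( v3 a, v3 b )
{
    // Snap the edge midpoint onto the unit sphere
    return Normalize( v3{ ( a.x + b.x ) * 0.5f, ( a.y + b.y ) * 0.5f, ( a.z + b.z ) * 0.5f } );
}

struct Tri { v3 a, b, c; };

std::vector<Tri> Subdivide( const std::vector<Tri>& in )
{
    std::vector<Tri> out;
    out.reserve( in.size() * 4 );

    for ( const Tri& t : in )
    {
        v3 ab = Midpoint( t.a, t.b );
        v3 bc = Midpoint( t.b, t.c );
        v3 ca = Midpoint( t.c, t.a );

        // One triangle becomes four (the Triforce pattern)
        out.push_back( { t.a, ab, ca } );
        out.push_back( { ab, t.b, bc } );
        out.push_back( { ca, bc, t.c } );
        out.push_back( { ab, bc, ca } );
    }

    return out;
}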