I recently gave a talk at DigiPen about some core mathematics I’ve found useful for game development and am happy to share the slides here:

# Category Archives: 2D

# Distance Point to Line Segment

Understanding the dot product will enable you to come up with an algorithm to compute the distance between a point and a line in two or three dimensions without much fuss. This is a math problem, but the discussion and focus here is on writing a function to compute that distance.

## Dot Product Intuition

The dot product comes from the law of cosines. Here’s the formula:

\begin{equation}
c^2 = a^2 + b^2 - 2ab\,\cos\gamma
\label{eq1}
\end{equation}

This is just an equation that relates the cosine of an angle within a triangle to its various side lengths *a*, *b* and *c*. The Wikipedia page (link above) does a nice job of explaining this. Equation \eqref{eq1} can be rewritten as:

\begin{equation}
c^2 - a^2 - b^2 = -2ab\,\cos\gamma
\label{eq2}
\end{equation}

The right hand side of equation \eqref{eq2} is interesting! Let's say that instead of writing the equation with side lengths *a*, *b* and *c*, it is written with two vectors: *u* and *v*. The third side can be represented as *u - v*. Rewriting equation \eqref{eq2} in vector notation yields:

\begin{equation}
|u - v|^2 - |u|^2 - |v|^2 = -2|u||v|\cos\gamma
\label{eq3}
\end{equation}

Which can be expressed in scalar form as:

\begin{equation}
(u_x - v_x)^2 + (u_y - v_y)^2 + (u_z - v_z)^2 \; - \\
(u_x^2 + u_y^2 + u_z^2) - (v_x^2 + v_y^2 + v_z^2) = \\
-2|u||v|\cos\gamma
\label{eq4}
\end{equation}

By crossing out the redundant terms and dividing both sides by -2, this ugly equation can be turned into a much more approachable version:

\begin{equation}
u_x v_x + u_y v_y + u_z v_z = |u||v|\cos\gamma
\label{eq5}
\end{equation}

Equation \eqref{eq5} is the equation for the dot product. If both *u* and *v* are unit vectors then the equation will simplify to:

\begin{equation}
dot(\,\hat{u},\,\hat{v}\,) = \cos\gamma
\label{eq6}
\end{equation}

If *u* and *v* are **not** unit vectors, equation \eqref{eq5} says that the dot product between the two vectors is equal to *cos( γ )* scaled by the lengths of *u* and *v*. This is a nice thing to know! For example: the squared length of a vector is just the vector dotted with itself.
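To make that concrete, here's a tiny sketch of the squared-length identity (the `Vec2` and `Dot` helpers here are stand-ins for whatever math types you already use):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

float Dot( const Vec2& a, const Vec2& b )
{
    return a.x * b.x + a.y * b.y;
}

// The squared length of a vector is just the vector dotted with itself
float LengthSq( const Vec2& v )
{
    return Dot( v, v );
}
```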

If *u* is a unit vector and *v* is not, then *dot( u, v )* will return the distance in which *v* travels in the *u* direction. This is useful for understanding the plane equation in three dimensions (or any other dimension):

\begin{equation}
ax + by + cz - d = 0
\end{equation}

The normal of a plane would be the vector: { *a, b, c* }. If this normal is a unit vector, then *d* represents the distance to the plane from the origin. If the normal is **not** a unit vector then *d* is scaled by the length of the normal.

To compute the distance of a point to this plane, the point can be substituted into the plane equation, assuming the normal of the plane equation is of unit length. This operation computes the distance a given point travels along the normal. The subtraction by *d* can be viewed as “translating the plane to the origin” in order to convert the distance along the normal into a distance to the plane.
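A minimal sketch of this idea (the function name and `Vec3` type are my own, not from any particular library); it assumes the plane normal { *a*, *b*, *c* } is unit length:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Plane in the form ax + by + cz - d = 0 with unit-length normal { a, b, c }.
// Substituting a point into the left hand side yields the signed distance
// from the point to the plane.
float DistancePtPlane( const Vec3& n, float d, const Vec3& p )
{
    return n.x * p.x + n.y * p.y + n.z * p.z - d;
}
```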

## Writing the Function: Distance Point to Line

The simplest function for computing the distance to a plane (or a line in 2D) is to place a point into the plane equation. This means that in 2D we'll have to compute the line's plane equation if all we have are two points to represent the line, and in 3D find a different tactic altogether, since a line in 3D is not a plane.

In my own experience I’ve found it most common to have a line in the form of two points in order to represent the parametric equation of a line. Two points can come from a triangle, a mesh edge, or two pieces of world geometry.

To set up the problem, let's outline the function to be created like so:

```cpp
float DistancePtLine( Vec2 a, Vec2 b, Vec2 p )
{
}
```

The two parameters *a* and *b* are used to define the line segment itself. The direction of the line would be the vector *b – a*.

After a brief visit to the Wikipedia page for this exact problem I quickly wrote down my own derivation of the formula they have on their page. Take a look at this image I drew:

The problem of finding the distance of a point to a line makes use of finding the vector *d* that points from *p* to the closest point on the line *ab*. From the above picture: a simple way to calculate this vector would be to subtract away the portion of *a – p* that travels along the vector *ab*.

The part of *a – p* that travels along *ab* can be calculated by projecting *a – p* onto *ab*. This projection is described in the previous section about the dot product intuition.

Given the vector *d*, the distance from *p* to *ab* is just *sqrt( dot( d, d ) )*. The *sqrt* operation can be omitted entirely to compute a squared distance. Our function may now look like:

```cpp
float DistancePtLine( Vec2 a, Vec2 b, Vec2 p )
{
    Vec2 n = b - a;
    Vec2 pa = a - p;
    Vec2 c = n * (Dot( pa, n ) / Dot( n, n ));
    Vec2 d = pa - c;
    return sqrt( Dot( d, d ) );
}
```

This function is quite nice because it will never return a negative number. There is a popular version of this function that performs a division operation, and given a very small line segment as input for *ab*, it is entirely possible for the following function to return a negative number:

```cpp
float SqDistancePtLine( Vec2 a, Vec2 b, Vec2 p )
{
    Vec2 ab = b - a;
    Vec2 ap = p - a;
    float e = Dot( ap, ab );
    // Floating point error can push this below zero
    return Dot( ap, ap ) - e * e / Dot( ab, ab );
}
```

It’s very misleading to have a function called “square distance” or “distance” to return a negative number. Passing in the result of this function to a *sqrt* function call can result in *NaN*s and be really nasty to deal with.

## Barycentric Coordinates – Segments

A full discussion of barycentric coordinates is way out of scope here. However, they can be used to compute the distance from a point to a line *segment*. The segment portion of the code just clamps a point projected onto the line within the bounds of *a* and *b*.

Assuming readers are a little more comfortable with the dot product than I was when I first started programming, the following function should make sense:

```cpp
float SqDistancePtSegment( Vec2 a, Vec2 b, Vec2 p )
{
    Vec2 n = b - a;
    Vec2 pa = a - p;

    float c = Dot( n, pa );

    // Closest point is a
    if ( c > 0.0f )
        return Dot( pa, pa );

    Vec2 bp = p - b;

    // Closest point is b
    if ( Dot( n, bp ) > 0.0f )
        return Dot( bp, bp );

    // Closest point is between a and b
    Vec2 e = pa - n * (c / Dot( n, n ));
    return Dot( e, e );
}
```

This function can be adapted pretty easily to compute the closest point on the line segment to *p* instead of returning a scalar. The idea is to use the vector from *p* to the closest position on *ab* to project *p* onto the segment *ab*.

The above function works by computing barycentric coordinates of *p* relative to *ab*. The coordinates are scaled by the length of *ab* so the second if statement must be adapted slightly. If the direction *ab* were normalized then the second if statement would be a comparison with the value 1, which should make sense for barycentric coordinates.
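For reference, here's one possible way to adapt the function so it returns the closest point itself instead of a scalar. This is just a sketch with hypothetical operator overloads; it clamps the projection parameter *t* to [0, 1], which is the barycentric clamping described above:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

Vec2 operator-( const Vec2& a, const Vec2& b ) { return { a.x - b.x, a.y - b.y }; }
Vec2 operator+( const Vec2& a, const Vec2& b ) { return { a.x + b.x, a.y + b.y }; }
Vec2 operator*( const Vec2& a, float s ) { return { a.x * s, a.y * s }; }
float Dot( const Vec2& a, const Vec2& b ) { return a.x * b.x + a.y * b.y; }

// Closest point on segment ab to p: project p onto the line through ab,
// then clamp the normalized projection t into [0, 1]
Vec2 ClosestPtSegment( Vec2 a, Vec2 b, Vec2 p )
{
    Vec2 n = b - a;
    float t = Dot( p - a, n ) / Dot( n, n );
    if ( t < 0.0f ) t = 0.0f;
    if ( t > 1.0f ) t = 1.0f;
    return a + n * t;
}
```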

## Sample Program

Here’s a sample program you can try out yourself:

```cpp
#include <cstdio>

struct Vec2
{
    float x;
    float y;

    Vec2 operator-( const Vec2& a ) const
    {
        Vec2 v;
        v.x = x - a.x;
        v.y = y - a.y;
        return v;
    }

    Vec2 operator*( float a ) const
    {
        Vec2 v;
        v.x = x * a;
        v.y = y * a;
        return v;
    }
};

float Dot( const Vec2& a, const Vec2& b )
{
    return a.x * b.x + a.y * b.y;
}

int main( )
{
    Vec2 a, b, p;
    a.x = 1.0f; a.y = 1.0f;
    b.x = 5.0f; b.y = 2.0f;
    p.x = 3.0f; p.y = 3.0f;

    Vec2 n = b - a;
    Vec2 pa = a - p;
    Vec2 c = n * (Dot( n, pa ) / Dot( n, n ));
    Vec2 d = pa - c;
    float d2 = Dot( d, d );

    printf( "Distance squared: %f\n", d2 );
}
```

The output is: “Distance squared: 2.117647”.

# Deriving OBB to OBB Intersection and Manifold Generation

The Oriented Bounding Box (OBB) is commonly used all over the place in terms of simulation and game logic, although oftentimes OBBs aren't actually bounding anything and really just represent an implicitly defined box.

OBBs commonly come in 2D and 3D form, though there aren’t all that many resources on the internet describing how to derive the 3D case. The ability to write robust collision detection, and more importantly manifold generation (contact data needed to resolve the collision), relies on a good understanding of linear algebra and geometry.

Readers will need to know about affine and linear transformations, and have intuition about the dot and cross products. To learn these preliminaries I highly recommend the book “Essential Mathematics” by Jim Van Verth. I will also be assuming readers are familiar with the Separating Axis Theorem (SAT). If you aren’t familiar with SAT please do a short preliminary research on this topic.

The fastest test I'm aware of for detecting collision between two OBBs, in either 2 or 3 dimensions, makes use of the Separating Axis Theorem. In both cases it is beneficial to transform one OBB into the space of the other in order to simplify particular calculations.

Usually an OBB can be defined something like this in C++:

```cpp
// Example of a 3D OBB
struct OBB
{
    Mat3 r; // world space x, y and z axes (local axes of this OBB)
    Vec3 c; // center of the OBB
    Vec3 e; // half widths along x, y and z axes
};
```

## 6 Face Axes

As with collision detection between two AABBs, OBBs have x, y and z axes that correspond to three different potential axes of separation. Since an OBB has local oriented axes, they will most of the time not line up with the standard Euclidean basis. To simplify the problem of projecting OBB \(A\) onto the axes of OBB \(B\), all computations ought to be done within the space of \(A\). This is just a convention, and the entire derivation could have been done in the space of \(B\), or even in world space (world space would involve more complex, and slower, computations).

The translation vector \(t\) is defined as \(t = R_{a}^{T} * (C_b - C_a)\), where \(C_i\) denotes the center of an OBB, and \(R\) denotes the rotation of an OBB. \(t\) points from \(A\)’s center to \(B\)’s center, within the space of \(A\). The relative rotation matrix to transform from \(B\)’s frame to \(A\)’s frame is defined as \(C = R_{a}^{T} * R_b\). \(C\) can be used to transform \(B\)’s local axes into the frame of \(A\). In \(A\)’s own frame its local axes line up with the standard Euclidean basis, and can be directly computed from the half width extents \(e_a\).
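Here's a rough sketch of computing \(t\) and \(C\). It assumes a hypothetical `Mat3` whose members `c0`, `c1`, `c2` are the OBB's local axes stored as world space columns, so multiplying by \(R^T\) is just dotting a vector against each axis:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Minimal 3x3 matrix; columns are the OBB's local axes in world space
struct Mat3 { Vec3 c0, c1, c2; };

float Dot( Vec3 a, Vec3 b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Sub( Vec3 a, Vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// R^T * v: expresses a world space vector v in the OBB's local frame.
// Used for t = Ra^T * (Cb - Ca)
Vec3 MulT( const Mat3& r, Vec3 v )
{
    return { Dot( r.c0, v ), Dot( r.c1, v ), Dot( r.c2, v ) };
}

// C = Ra^T * Rb: B's axes expressed in A's frame
Mat3 MulTMat( const Mat3& ra, const Mat3& rb )
{
    return { MulT( ra, rb.c0 ), MulT( ra, rb.c1 ), MulT( ra, rb.c2 ) };
}
```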

To calculate the separation \(s\) along the local axis \(l\) from an OBB examine the following diagram and matching equation:

\begin{equation}
s = |t \cdot l| - (|a \cdot l| + |(C * b) \cdot l|)
\label{eq1}
\end{equation}

Due to symmetry the absolute value of a projection can be used in order to gather a value that can be used to represent a projection along a given direction. In our case we aren’t really interested in the sign of the projection and always want the projection towards the other OBB. I understand that newer programmers may have some trouble translating math into code, so here’s an example of eq.\eqref{eq1} as C++:

```cpp
// Using the OBB definition from earlier in this post; the absolute
// values mirror the symmetry argument behind eq.(1)
float s = fabs( Dot( t, l ) ) - (e.x + fabs( Dot( C.Column0( ), l ) ));
```

## 9 Edge Axes

In 2D eq.\eqref{eq1} is all that is needed to detect overlap upon any given possible axis of separation. In 3D a new topological feature arises on the surface of geometry: edges. Because edges form entirely separate Voronoi regions from face surfaces they can also provide possible axes of separation. With OBBs it is fast to test all possible unique edge axes of separation.

Given two edges upon two OBBs in 3D space, the vector perpendicular to both is a possible axis of separation, and can be computed with a cross product. A possible 9 axes of separation arise from crossing each local axis of \(A\) with each local axis of \(B\).

Since \(C\) is composed of a concatenation between \(R_{a}^{T}\) and \(R_b\), the rows of this matrix can be thought of as the local axes of \(B\) in the space of \(A\). When a cross product between two axes is required to test separation, the axis can be pulled directly out of \(C\). This is much faster than computing a cross product in world space between two arbitrary axis vectors.

Let's derive a cross product between \(A\)’s x axis and \(B\)’s z axis, all within the space of \(A\). In \(A\)’s space, \(A\)’s x axis is \(\begin{Bmatrix}1 & 0 & 0\end{Bmatrix}\). In \(A\)’s space, \(B\)’s z axis is \(\begin{Bmatrix}C_{20} & C_{21} & C_{22}\end{Bmatrix}\). The axis \(l\) is just the cross product of these two axes. Since \(A\)’s x axis is composed largely of zeroes, elements of \(C\) can be used to directly create the result of the cross product, without actually running a cross product function. We arrive at the conclusion that for our current case \(l = \begin{Bmatrix}0 & -C_{22} & C_{21}\end{Bmatrix}\). This particular \(l\) axis can be used in eq.\eqref{eq1} to solve for overlap.

Since the matrix \(C\) is used to derive cross product results, the sign of its elements will be dependent on the orientation of the two OBBs. Additionally, the cross product will negate one of the terms in the final axis. Since eq.\eqref{eq1} requires only absolute values it may be a good idea to precompute *abs*(\(C\)), which takes the absolute value element by element. This computation can be interleaved between the first 6 axis checks, computed only as needed. This may let an early out prevent needless computation.
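One possible way to precompute *abs*(\(C\)) looks like the following sketch (the names are mine; the small epsilon bias is the one discussed later in the Numeric Stability section, so you may prefer zero here):

```cpp
#include <cassert>
#include <cmath>

// Small bias used to drive near-zero cross product axes away from a
// degenerate case (see the Numeric Stability discussion)
const float kEpsilon = 1.0e-4f;

// Element-wise absolute value of the relative rotation matrix C
void AbsC( const float C[ 3 ][ 3 ], float absC[ 3 ][ 3 ] )
{
    for ( int i = 0; i < 3; ++i )
        for ( int j = 0; j < 3; ++j )
            absC[ i ][ j ] = std::fabs( C[ i ][ j ] ) + kEpsilon;
}
```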

The same idea can be used to derive this axis in the space of \(B\) by crossing \(B\)’s local z axis with \(A\)’s local x axis transformed to the space of \(B\). To do this it may be easy to use \(C^T\), which represents \(R_{b}^{T} * R_a\). More information about this can be found in my previous article about the properties of transposing a matrix concatenation.

Eventually all axes can be derived on paper and translated into code.

## Manifold Generation

A collision manifold would be a small bit of information about the nature of a collision. This information is usually used to resolve the collision during physical simulation. Once an axis of least penetration is found the manifold must be generated.

For SAT manifold generation is quite straight forward. The entirety of the details are expressed very well by Dirk Gregorius’ great online tutorial from GDC 2013. The slides are openly available for free, and currently hosted here generously by Erin Catto. Readers are encouraged to familiarize themselves with the idea of reference and incident faces, as well as the idea of reference face side planes. For a 2D analogue I have provided my own slides on this topic.

There are only two different cases of collision that should be handled: edge to edge and something to face. Vertex to edge, vertex to face, and vertex to vertex resolutions are ill-defined. On top of this numerical imprecision makes detecting such cases difficult. It makes more sense in every way to skip these three cases and stick to the original two.

## Something to Face

Something has hit the face of one of the OBBs. It can be anywhere from 1 to 4 vertices from the face of the opposing OBB. The entire intersecting volume between two OBBs does not need to be considered due to the nature of SAT and the axis of minimum penetration. Only two faces from each OBB need be considered for this case, and are by convention named the reference and incident faces, one from each OBB.

The incident face, given an axis of separation, can be found in constant time with implicit geometry like an OBB; the problem reduces to finding the vector that most faces a given vector, or in other words the most negative dot product.
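As a sketch, that search is a simple minimum over dot products (the array of face normals here is a hypothetical input; for an OBB these would be its six face normals in a common frame):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
float Dot( Vec3 a, Vec3 b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Pick the incident face as the one whose normal is most anti-parallel
// to the reference face normal (most negative dot product)
int IncidentFace( const Vec3* faceNormals, int faceCount, Vec3 referenceNormal )
{
    int best = 0;
    float minDot = Dot( faceNormals[ 0 ], referenceNormal );
    for ( int i = 1; i < faceCount; ++i )
    {
        float d = Dot( faceNormals[ i ], referenceNormal );
        if ( d < minDot )
        {
            minDot = d;
            best = i;
        }
    }
    return best;
}
```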

Once the incident face is identified it is clipped to the side planes of the reference face. There may be some tricks involving basis transformations here to simplify the clipping routine. In order to perform clipping the incident face vertices must be calculated. Given a face normal from an OBB all four face vertices can be computed with the normal and the half width extents of the OBB. In similar fashion the side planes of the reference face can also be directly computed.

## Edge to Edge

The edge to edge case requires a clever constant-time computation of a *supporting edge.* A supporting edge is the edge most pointed to by a given axis of separation. The computation of this edge may be tricky to derive yourself, though there do exist online references. I suggest checking out Open Dynamics Engine (ODE) by Russell Smith, or the Cyclone Engine by Ian Millington for information on how to compute a supporting edge.

Once both supporting edges are calculated the closest two points between each edge can be calculated and used as a single contact point of which to resolve upon.
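One common routine for the closest points between two segments is the one described in Ericson's Real-Time Collision Detection; here's a sketch of its non-degenerate core (names are mine). It assumes neither edge has zero length, which is safe for OBB supporting edges:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator-( Vec3 a, Vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator+( Vec3 a, Vec3 b ) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*( Vec3 a, float s ) { return { a.x * s, a.y * s, a.z * s }; }
float Dot( Vec3 a, Vec3 b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }

float Clamp01( float t ) { return t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t); }

// Closest points c1 and c2 between segments p1q1 and p2q2. Solves for
// the unclamped parameters, then clamps each into [0, 1] and fixes up
// the other parameter accordingly
void ClosestPtsSegmentSegment( Vec3 p1, Vec3 q1, Vec3 p2, Vec3 q2,
                               Vec3& c1, Vec3& c2 )
{
    Vec3 d1 = q1 - p1;
    Vec3 d2 = q2 - p2;
    Vec3 r = p1 - p2;
    float a = Dot( d1, d1 ), e = Dot( d2, d2 ), f = Dot( d2, r );
    float c = Dot( d1, r ), b = Dot( d1, d2 );
    float denom = a * e - b * b;
    float s = denom != 0.0f ? Clamp01( (b * f - c * e) / denom ) : 0.0f;
    float t = (b * s + f) / e;
    if ( t < 0.0f ) { t = 0.0f; s = Clamp01( -c / a ); }
    else if ( t > 1.0f ) { t = 1.0f; s = Clamp01( (b - c) / a ); }
    c1 = p1 + d1 * s;
    c2 = p2 + d2 * t;
}
```

The single contact point mentioned above can then be taken as the midpoint of `c1` and `c2`.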

## Numeric Stability

In all uses of OBB to OBB intersection cross products with nearly parallel vectors pose a problem. The best solution is to detect when *any* two axes from each OBB are parallel and skip all nine cross product axes. It can be visually shown that no false positives will come from skipping the cross product axes in this case.

A simpler solution, as proposed by Gottschalk ’00 in his paper “Collision Queries using Oriented Bounding Boxes”, which made its way into Ericson’s book “Real-Time Collision Detection”, is to bias all elements of \(C\) with a small epsilon to drive near-zero axes away from a degenerate case. For other axes the epsilon value should be tuned such that it doesn’t have much of an impact on results.

If using this collision routine to resolve collision during physical simulation, certain tuning may be appropriate. If an iterative resolver is used it may be preferred to have slightly more consistent manifold generation, as opposed to exact manifold generation. For details I suggest researching works by Erin Catto and comments made by others around the Bullet Forums.

## AABB to OBB

Hopefully by now readers realize that this whole time we’ve been simplifying the problem of OBB to OBB to the problem of AABB to OBB. When an actual AABB needs to be collided against an OBB simplifications can occur. Many simplifications revolve around the non-necessity to transform things into the basis of the AABB, since world coordinates should suffice.

## One Axis Aligned (or Two?)

Some games have OBBs that only orient on some of their axes instead of all of them. One can immediately realize that no cross product checks need to be performed, in any case, if two OBBs have orientation locked on the same axis. Other optimizations will occur as well, reducing some axis tests to plain AABB overlap testing.

## Conclusion

Collision detection is difficult. Writing robust collision detection code requires a good mathematical foundation as well as the ability to write efficient code. This slight mixture of fields of study may be difficult to learn all at once, but it all gets easier with time and practice and the rewards can be very fulfilling.

I’d like to thank my friend Nathan Carlson for teaching much of this information to me, and for his insightful discussions.

# Buoyancy, Rigid Bodies and Water Surface

I’ve spent the last couple weeks researching eigenvalue decomposition and solving cubic polynomials in order to simulate a liquid surface and polyhedral buoyancy! I gave a lecture about this topic at my university and hope the slides can be of interest or help to someone. I’ve attached a couple demo GIFs to the bottom of this post.

# Dynamically Slicing Shapes

Just this morning I finished up a small open source project, along with a huge article at gamedev.tutsplus. The open source project is an implementation of Sutherland-Hodgman clipping which can be used in many advanced shape and mesh manipulation techniques.

*Gif of the open source project in action. Slicing a quad interactively.*

# Sprite Batching and Texture Atlases

Recently I gave a small lecture on sprite batching and texture atlases. Hopefully these resources here can help somebody in the future!

# Simple Sprite Batching

Welcome to the fourth post in a series of blog posts about how to implement a custom game engine in C++. As reference I’ll be using my own open source game engine SEL. Please refer to its source code for implementation details not covered in this article. Files of interest are `Graphics.cpp` and `Graphics.h`.

## Batching and Batches

Did I say “simple” sprite batching? I meant dead simple!

Modern graphics card drivers (except Mantle???) do a lot of stuff, and it makes for passing information over to the GPU (or retrieving it) really slow on the PC platform. Apparently this is more of a non-issue on consoles, but meh we’re not all working with consoles now are we?

The solution: only send data one time. It’s latency that kills performance and not so much the amount of data. This means that we’ll do better at utilizing a GPU if we send a lot of data all at once as opposed to sending many smaller chunks.

This is where “batching” comes in. A batch can be thought of as a function call. In OpenGL you’ll see something like `DrawArrays`, and in DirectX something else. These types of functions send chunks of data to the GPU in a “batch”.

## Sprites and 2D

Luckily it’s really easy to draw sprites in 2D: you can use a static quad buffer and instance by sending transforms to the GPU, or you can precompute transformed quads on the CPU and pass along a large vertex buffer, or anything in-between.

However computing the batches is slightly trickier. For now lets assume we have sprites with different textures and different zOrders.

## Computing Batches

In order to send a batch to the GPU we must only draw sprites with the same texture. This is because a single draw call (whether instanced or fed a large vertex array) can only reference one bound texture, and lowering draw calls is the whole point. So we must gather up all sprites with the same texture and render in the correct order according to their zOrders.

If you are able to store your sprites in pod-structures then you’ll be in luck: you can in-place sort a large array of pods really easily using `std::sort`. If not, then you can at least make an array of pointers or handles and sort those. You’ll have extra indirection, but so be it.

Using `std::sort` requires STL compatible iterators, and you’ll want one with random access (array index-style access). Here’s an example with a `std::vector`:

```cpp
bool SpriteSort( const Sprite& A, const Sprite& B )
{
    if ( A.zOrder == B.zOrder )
        return A.Texture < B.Texture;
    else
        return A.zOrder < B.zOrder;
}

std::vector<Sprite> sprites;
std::sort( sprites.begin( ), sprites.end( ), SpriteSort );
```

The sort implementation within your vendor's STL is likely introsort: a quicksort variant that falls back to heapsort on pathological inputs.

This sort will sort by zOrder first, and if zOrders are matching then sort by texture. This packs a lot of the sprites with similar textures next to each other in the draw list.

From here it’s just a matter of looping over the sprite array and finding the beginning/end of each segment with the same texture.
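Here's a sketch of that loop (the `Sprite` pod and `Batch` struct here are hypothetical stand-ins for your own types), emitting one batch per contiguous run of equal textures in the sorted array:

```cpp
#include <cassert>
#include <vector>

// Hypothetical pod sprite for illustration
struct Sprite
{
    int zOrder;
    int Texture; // texture id/handle
};

struct Batch
{
    int texture;
    int start; // first sprite index in the sorted array
    int count; // number of sprites sharing this texture
};

// Walk the sorted sprite array and emit one batch per contiguous run
// of sprites with the same texture
std::vector<Batch> ComputeBatches( const std::vector<Sprite>& sprites )
{
    std::vector<Batch> batches;
    for ( int i = 0; i < (int)sprites.size( ); )
    {
        int start = i;
        int texture = sprites[ i ].Texture;
        while ( i < (int)sprites.size( ) && sprites[ i ].Texture == texture )
            ++i;
        batches.push_back( { texture, start, i - start } );
    }
    return batches;
}
```

Each resulting `Batch` maps directly onto one draw call.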

## Optimizations

A few simple operations can be done here to ensure that computing batches goes as fast as possible. The first is to get all of your sprites together in a single array and sort the array in-place. This is easily done if your sprites are mere pods. This ensures very high locality of reference when transforming the sprite array.

The second optimization is to only sort the sprite array when it has been modified. If a new sprite is added to the list you can sort the whole thing. However there is no need to sort the draw list (sprite array) every single frame if no changes are detected.

## Conclusion

Like I said, sprite batching is super simple in 2D. It can get much more complex if you add in texture atlasing into the mix. If you wish to see an OpenGL example please see the SEL source code.

I was able to render well over 8k dynamic sprites on-screen in my own preliminary tests. I believe I actually ended up being fill-rate bound instead of anything else. This is much more than necessary for any game I plan on creating.

# Convex Hull Generation

I created a presentation to give at my university and luckily can post it up here. The slides are mostly on Quick Hull, but also cover the half edge mesh format. There is a demo video in the pdf slides that will not play, but is at the bottom of this post. Hope this helps someone!

# Separating Axis Test and Support Points in 2D

I’ve created some slides for Physics Club at DigiPen. Currently the slides were made at my own house with my own resources, so DigiPen doesn’t own them. Thus, I can share this version for public viewing!

The Separating Axis Test (SAT) is a highly versatile and robust collision detection algorithm, and can be implemented in an extremely efficient manner in 2D without too much trouble. I hope these slides can help others out with their collision detection and manifold generation problems.

# Custom Physics Engine – Part 2: Manifold Generation

# Introduction

During the previous article in this Custom Physics Engine series we talked about impulse resolution. Though understanding the mathematics and physics presented there is important, not much can be put into practice without both collision detection and manifold generation.

Collision detection is a pretty widely documented area, and so I won’t go into too much detail on how to achieve collision detection in this article series. Instead I’ll focus on manifold generation, which is in my opinion much more difficult and less-documented compared to collision detection.

In general collision detection is useful for retrieving a boolean result of “are these two things colliding”. The usefulness of such a result ends when this collision needs to be resolved. This is where manifold generation comes in.

# Manifold Generation – Summary

A manifold, in context of physics engines, is a small structure that contains data about the details of a collision between two objects. The two bodies are commonly referred to as A and B. Whenever referring to a “collision” as a system, A is usually the reference object, as in the problem is viewed from A’s orthonormal basis.

I am not actually sure why this structure is called “the manifold”, and I do not know anyone that actually knows. So don’t ask! Either way this structure should be passed around by reference or pointer to avoid unnecessary copies. I also pool all my manifolds, and intrusively link them in order to keep a list of active manifolds during a scene’s step (the term scene is defined in the previous article).

Manifold generation involves gathering three pieces of information:

- Points of contact
- Penetration depth
- Vector of resolution, or collision normal

The points of contact are 2D points (or 3D for a 3D engine) that mark where one shape overlaps another. Usually these contact points are placed onto the vertex of one shape that resides within another.

The penetration depth is the depth of which the two shapes are intersecting. This is found using the Separating Axis Test (SAT). There are lots of resources around that talk about SAT, so I suggest googling for them. The penetration depth is defined as the axis of least penetration. In this case (assuming blue’s frame of reference and vertical as y axis) the y axis is the axis of least penetration.

The collision normal is used to describe in which direction to press both objects away from one another. The collision normal will be a face normal, and in this case it would be a normal pointing towards the brown box, where the normal corresponds to the blue box’s top face.

Generating these three pieces of information can be quite a challenge. Now lets view what a possible setup of the manifold structure might look like:

```cpp
// Represents a single point of contact during a collision
struct Contact
{
    Vec position;
    Vec normal;
    float penetration;
};

struct Manifold
{
    Contact contacts[ 2 ];
    unsigned contactCount;

    Body *A;
    Body *B;
};
```

Note that there can be a variable number of contact points. I suggest having a contactCount of zero signify “no collision”. I also suggest allowing only two possible points of contact for a 2D simulation, and to start, just use a single possible point of contact. More than one point of contact isn’t necessary until advanced constraint resolution is used (a future article).

It is important to just keep an array of contacts within this data structure as to keep strong cache coherency. There’s no reason to dynamically allocate the array of contacts.

# Circle to Circle

I’ll be covering how to gather manifold information for specialized cases of a couple different types of shapes. Let’s go over circle to circle first. Here is what the definition of a circle would look like:

```cpp
struct Circle
{
    float r;
    Vec position;
};
```

The first thing to do is to see if they are colliding or not. Again, throughout this article I’m going to mostly blaze through the collision detection and focus just on gathering important manifold information. Feel free to google or ask specific questions in the comments about collision detection.

```
Manifold m

// Translation vector between the two shapes
Vec t = B->pos - A->pos

// Cumulative radius
float radius = A->r + B->r

// Early out condition
if(t.LengthSquared( ) > radius * radius)
  return non-intersecting

// Perform sqrt with pythagorean theorem
float d = t.Length( )

// First (and only) contact point
Contact *c = &m.contacts[ 0 ]

// Right on top of each other
if(d == 0.0f)
{
  // Choose random (but consistent) values
  c->penetration = A->r
  c->normal = Vec( 1, 0 )
  c->position = A->pos
  m.contactCount = 1
}
else
{
  c->penetration = radius - d

  // Utilize our d since we performed sqrt on it already
  c->normal = t / d

  // Take A's position and move it along the contact normal
  // the distance of A's radius
  c->position = c->normal * A->r + A->pos
  m.contactCount = 1
}

return m
```

The above code is really quite simple. The important thing to note is that our contact normal will always be a vector from A to B. In order to create a vector from one point to another you take your endpoint minus your starting point. In this case B’s position subtracted by A’s position. This results in a vector from A to B. This vector normalized will be the collision normal, or in other words the direction in which to resolve the collision.

It is important to note that no square root functions are called before the early out condition is checked. Most of the time your shapes are probably not colliding, and so there’s no reason to use a square rooted value.

The last tricky thing is to check if the two shapes are right on top of each other. Though this is unlikely in a dynamic environment, sometimes shapes can be placed directly upon one another through an editor. It is important, in all collision detection functions, to handle all special cases, even if the handling is bad. Whatever you do just make sure you are consistent. I just chose an arbitrary vector to resolve in the direction of.

The nice thing about circle to circle collision is that only one collision point is really needed. Just be sure to be consistent in how you choose your collision point in order to reduce simulation jitter.

# AABB to AABB

Collision detection between two AABBs is a bit more complicated than two circles but still quite simple. The idea is to make use of min and max. Lets assume we’re storing our AABBs in a structure like so:

```cpp
struct AABB
{
  Vec min; // lower x and y coordinate position
  Vec max; // higher x and y coordinate position
};
```

This allows a very simple algorithm to find points of contact. For convex polygons that are not axis aligned, Sutherland-Hodgman clipping will often need to be performed. In our case we can implicitly deduce our contact area due to the nature of AABBs.

First determine if the two AABBs are overlapping at all. When an axis of least penetration is found the collision area can then be deduced.

The idea is to perform the SAT while storing each overlap value. The axis of least overlap is your axis of least penetration. To get the contact area and two points of intersection you can min your maxes and max your mins (I’m talking about the extents of each AABB).

```cpp
float a_extent = (abox.max.x - abox.min.x) / 2;
float b_extent = (bbox.max.x - bbox.min.x) / 2;

// Calculate overlap on x axis (t is translation vector from A to B)
float x_overlap = a_extent + b_extent - abs( t.x );
```

This sounds silly, but that’s how you do it. I suggest drawing it out. Here’s how to find your collision area given by two points (intersection points of the AABBs):

```cpp
c1.position.x = max( abox.min.x, bbox.min.x );
c1.position.y = max( abox.min.y, bbox.min.y );
c2.position.x = min( abox.max.x, bbox.max.x );
c2.position.y = min( abox.max.y, bbox.max.y );
```

The last bit of info required would be to record the penetration and contact normal. Penetration is the amount of overlap along your axis of least penetration, so after you’ve found that axis you can just assign a vector value as your contact normal. If you have found the axis of least penetration to be the x axis, you want to point towards object B along the x axis. If the y axis is the axis of least penetration, you want to point towards object B along the y axis.

```cpp
// Find the axis of least penetration
if(x_overlap < y_overlap)
{
  // Point towards B knowing that t points from A to B
  c->normal = t.x < 0 ? Vec( -1, 0 ) : Vec( 1, 0 );
  c->penetration = x_overlap;
}
else
{
  // Point toward B knowing that t points from A to B
  c->normal = t.y < 0 ? Vec( 0, -1 ) : Vec( 0, 1 );
  c->penetration = y_overlap;
}
```

That’s all there is to the AABB to AABB intersection. Be sure to properly record the number of contacts found (if any), and if neither axis x or y are actually overlapping, then that means there is no intersection.
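To tie the snippets above together, here is a minimal self-contained sketch of the whole routine. The `Vec`, `AABB`, and `Manifold` types here are simplified stand-ins for whatever your own math library provides, and `AABBtoAABB` is a name I made up for illustration:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y; };

struct AABB
{
  Vec min; // lower x and y coordinate position
  Vec max; // higher x and y coordinate position
};

struct Manifold
{
  bool colliding = false;
  Vec normal{ };         // points from A to B
  float penetration = 0; // depth along the axis of least penetration
};

// Hypothetical assembled routine following the snippets from the text
Manifold AABBtoAABB( const AABB& a, const AABB& b )
{
  Manifold m;

  // Translation vector from A's center to B's center
  Vec t{ (b.min.x + b.max.x) * 0.5f - (a.min.x + a.max.x) * 0.5f,
         (b.min.y + b.max.y) * 0.5f - (a.min.y + a.max.y) * 0.5f };

  // Half extents along each axis
  float a_ex = (a.max.x - a.min.x) * 0.5f;
  float b_ex = (b.max.x - b.min.x) * 0.5f;
  float a_ey = (a.max.y - a.min.y) * 0.5f;
  float b_ey = (b.max.y - b.min.y) * 0.5f;

  // SAT: if either axis shows no overlap, there is no intersection
  float x_overlap = a_ex + b_ex - std::fabs( t.x );
  if(x_overlap <= 0) return m;

  float y_overlap = a_ey + b_ey - std::fabs( t.y );
  if(y_overlap <= 0) return m;

  m.colliding = true;

  // Axis of least overlap gives the normal, pointing toward B
  if(x_overlap < y_overlap)
  {
    m.normal = t.x < 0 ? Vec{ -1, 0 } : Vec{ 1, 0 };
    m.penetration = x_overlap;
  }
  else
  {
    m.normal = t.y < 0 ? Vec{ 0, -1 } : Vec{ 0, 1 };
    m.penetration = y_overlap;
  }

  return m;
}
```

Note that the early outs happen before any normal or penetration values are computed, in the same spirit as the circle test.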

# AABB to Circle

I will leave AABB to Circle collision as an exercise for the reader, though I will quickly provide an explanatory paragraph behind the idea. What needs to be done is to first determine if the shapes are overlapping at all. I have a previous post on my blog that explains the Circle to AABB intersection, and more information about such an intersection check can be found around the internet.

Let’s assume A is the AABB, B is the circle, and that we have a collision. The collision normal will again be the vector from A to B, except slightly modified. The early out condition involves finding the closest point on the AABB to the circle. The collision normal is the translation vector from A to B minus the vector to the closest point on the AABB. This represents a vector from the closest point to the circle’s center.

The contact point will be residing on the circle’s radius in the direction of the contact normal. This should be easy to perform if you understood the Circle vs Circle collision detailed above. The penetration depth will be the length of the collision normal before it is normalized.

There is one special case that must be properly handled: if the center of the circle is within the AABB. This is quite simple to handle; clamp the center of the circle to the edge of the AABB along the edge closest to the circle’s center. Then flip the collision normal (so it points away from the AABB instead of to the center) and normalize it.
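Since the full test is left as an exercise, here is only one possible sketch of the idea described above, under my own (hypothetical) names and types. The handling of the interior special case, in particular the choice of penetration depth, is one reasonable choice rather than the canonical one:

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

struct Vec { float x, y; };

struct AABB { Vec min, max; };
struct Circle { Vec pos; float r; };

struct Manifold
{
  bool colliding = false;
  Vec normal{ };        // from A (the AABB) to B (the circle)
  float penetration = 0;
  Vec position{ };      // contact point
};

// Hypothetical sketch; A is the AABB, B is the circle
Manifold AABBtoCircle( const AABB& a, const Circle& b )
{
  Manifold m;

  // Closest point on the AABB to the circle's center
  Vec closest{ std::max( a.min.x, std::min( b.pos.x, a.max.x ) ),
               std::max( a.min.y, std::min( b.pos.y, a.max.y ) ) };

  bool inside = closest.x == b.pos.x && closest.y == b.pos.y;

  if(inside)
  {
    // Special case: the center is inside the AABB. Clamp the center to
    // the nearest edge so the math below still produces a usable normal
    float left   = b.pos.x - a.min.x;
    float right  = a.max.x - b.pos.x;
    float bottom = b.pos.y - a.min.y;
    float top    = a.max.y - b.pos.y;
    float best = std::min( std::min( left, right ), std::min( bottom, top ) );

    if(best == left)        closest.x = a.min.x;
    else if(best == right)  closest.x = a.max.x;
    else if(best == bottom) closest.y = a.min.y;
    else                    closest.y = a.max.y;
  }

  // Vector from the closest point to the circle's center
  Vec n{ b.pos.x - closest.x, b.pos.y - closest.y };
  float d2 = n.x * n.x + n.y * n.y;

  // Early out: no sqrt until we know the shapes overlap
  if(!inside && d2 > b.r * b.r)
    return m;

  // (a d of zero, the center exactly on an edge, is left unhandled here)
  float d = std::sqrt( d2 );
  m.colliding = true;

  if(inside)
  {
    // Flip so the normal points out of the AABB; r + d is one
    // reasonable choice of penetration depth for this case
    m.normal = Vec{ -n.x / d, -n.y / d };
    m.penetration = b.r + d;
  }
  else
  {
    m.normal = Vec{ n.x / d, n.y / d };
    m.penetration = b.r - d;
  }

  // Contact point sits on the circle's surface along the normal
  m.position = Vec{ b.pos.x - m.normal.x * b.r, b.pos.y - m.normal.y * b.r };
  return m;
}
```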

# OBBs and Orientation

Now let’s start talking about adding in some manifold generation for some more complex oriented shapes! The first thing that must be learned is how to properly change from one orthonormal basis to another (that is, shift from one frame of reference to another). This will vastly simplify collision detection involving OBB shapes.

Changing a basis involves taking the orientation and translation of an OBB and applying the inverse of these two to another shape. In this way you can then treat the OBB as an AABB, as long as you are still referring to your transformed object. Let’s go over this in some more detail with some pictures.

Here is what an OBB is like in the OBB’s frame of reference (left), and the OBB in model space (right).

The important thing to realize is that in order to place an object into an OBB’s frame of reference it must have inverse translation and rotation of the OBB’s translation and rotation applied to it. This takes the OBB’s position to the origin, and the OBB can then be treated as an AABB.

If the inverse rotation of the OBB, in this case -45 degrees, is applied to both the OBB and an object near it, this is what happens:

As you can see, once the circle has been inversely transformed into the OBB’s frame of reference the OBB can be viewed as a simple AABB centered at the origin. The extents of the OBB can be used to mimic an AABB, and the OBB to Circle intersection and manifold generation can be treated identically to the AABB to Circle intersection, if a proper inverse transformation is performed. Again, this inverse transformation is called a “change of basis”. It means you transform the Circle into the OBB’s frame of reference.
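As a concrete sketch of this change of basis, here is how a world-space point might be moved into an OBB’s frame of reference. All names are my own, and I inline the trig directly rather than using the Mat2 class covered in the next section:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y; };

// Hypothetical OBB: just a center position and an orientation in radians
struct OBB
{
  Vec pos;
  float radians;
};

// Change of basis: move a world-space point into the OBB's frame of
// reference, where the OBB can be treated as an AABB at the origin
Vec WorldToOBB( const OBB& obb, Vec p )
{
  // Inverse translation: bring the OBB's center to the origin
  p.x -= obb.pos.x;
  p.y -= obb.pos.y;

  // Inverse rotation: rotate by -radians about the origin
  float c = std::cos( -obb.radians );
  float s = std::sin( -obb.radians );
  return Vec{ c * p.x - s * p.y, s * p.x + c * p.y };
}
```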

# Mat2 Rotation Matrices

Let’s go over rotations in 2D extremely quickly. I won’t go over the derivation here for brevity’s sake (as you will see, brevity is a close friend of mine in these Physics Engine articles haha). Instead I will just show you how to create your own 2 by 2 matrix and use it as a part of whatever math library you currently have (which you should hand-code yourself!). Really the only useful thing about having a 2 by 2 matrix is to do rotation operations.

For those using C++ you’re in luck for I know how to use unions.

```cpp
class Mat2
{
public:
  // Contiguous memory in various access formats
  union
  {
    struct
    {
      float m00, m01;
      float m10, m11;
    };

    float m[2][2];
    float v[4];
  };

  // ...
};
```

The above is a proper usage of the unnamed union trick. The elements of the 2 by 2 array can be accessed as if they are a two dimensional array, single dimensional array, or separate floating point values. Additionally you can stick two vectors into your union for column or row access, if you so wish.

I want to briefly hit all the important methods without writing an entire book, so get ready for code snippets to be thrown at you.

```cpp
Mat2::Mat2( )
{
}

Mat2::Mat2( real radians )
{
  real c = std::cos( radians );
  real s = std::sin( radians );

  m00 = c; m01 = -s;
  m10 = s; m11 =  c;
}

Mat2::Mat2( real a, real b, real c, real d )
  : m00( a ), m01( b )
  , m10( c ), m11( d )
{
}

void Mat2::Set( real a, real b, real c, real d )
{
  m00 = a;
  m01 = b;
  m10 = c;
  m11 = d;
}

// Imagine there's assignment operators, SetIdentity, Zero, Transpose,
// Inversion (which I won't cover here), Abs (absolute values all elements)
// Scalar multiplication/division/addition/subtraction, determinant
// negative operator, etc. Don't forget vector/matrix multiplication

void Mat2::SetRotation( real radians )
{
  real c( std::cos( radians ) );
  real s( std::sin( radians ) );

  m00 = c; m01 = -s;
  m10 = s; m11 =  c;
}
```

The first thing you should realize is that the default constructor does nothing. This is important. Often times you will create a matrix only to briefly thereafter assign some value to it. Do not default construct your matrix to zero values as an optimization. Force the user to use the Set function like so: mat.Set( 0, 0, 0, 0 ).

The interesting functions here are the rotation constructor and SetRotation functions. Each one computes cosine and sine from a given radian value and caches the result. Caching the results prevents unneeded additional calls to cosine and sine. Note the format in which sine and cosine are stored. It is also important to realize that m00 and m10 represent a transformation of the x axis, and m01 and m11 represent a transformation of the y axis. Each of these pairs is a column, and both columns can be viewed as unit vectors.

Multiplying a Mat2 with a vector will rotate the vector’s x and y components around the origin. It is important to realize where your origin is before you apply a rotation. If you want to jump into an OBB’s frame of reference you must do an inverse translation to set the OBB as the origin. This allows you to then apply the OBB’s inverse rotation (perhaps with the inverse operator of your Mat2, see Box2D if you don’t know how to inverse a Mat2) and rotate about the origin (which is about the OBB).
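Here is a hedged sketch of those two pieces: rotating a vector with a Mat2, and using the transpose as the inverse (valid because a pure rotation matrix is orthogonal). I’m assuming the column-vector convention shown in the constructor above; `Multiply` and `Transpose` are my own names:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y; };

// Minimal stand-in for the Mat2 described above
struct Mat2
{
  float m00, m01;
  float m10, m11;

  Mat2( ) { } // deliberately does nothing, per the advice above

  Mat2( float radians )
  {
    float c = std::cos( radians );
    float s = std::sin( radians );
    m00 = c; m01 = -s;
    m10 = s; m11 =  c;
  }

  // Rotate a vector about the origin (column-vector convention)
  Vec Multiply( const Vec& v ) const
  {
    return Vec{ m00 * v.x + m01 * v.y, m10 * v.x + m11 * v.y };
  }

  // For a pure rotation matrix the inverse is simply the transpose,
  // since the columns are orthogonal unit vectors
  Mat2 Transpose( ) const
  {
    Mat2 t;
    t.m00 = m00; t.m01 = m10;
    t.m10 = m01; t.m11 = m11;
    return t;
  }
};
```

Rotating a vector and then multiplying by the transpose recovers the original vector, which is exactly the inverse rotation needed to enter an OBB’s frame of reference.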

# OBB Representation

Every oriented shape will need to store its orientation somehow. I suggest the following:

```cpp
struct OBB
{
  float radians;
  Mat2 u;

  // anything else needed here
};
```

The OBB should store its current orientation in both a single floating point value along with a matrix to represent that radian value as a rotation matrix. When you need to rotate the OBB during integration, you can just add or subtract a little bit to the radians value, and then call u.SetRotation( radians ) to update the rotation matrix. This makes use of a simple and organized way to cache results from sine and cosine calls, and minimizes the number of calls to these functions that you require.

# OBB to OBB

Now let’s talk about the *big one*. How in the world can you see if two OBBs intersect? Both boxes are oriented, so the problem would seem to involve a lot of complex trigonometric computations.

Let’s make things easier: transform one OBB into the other OBB’s frame of reference, and treat the transformed object as an AABB. Now the problem becomes much simpler.

First perform a separating axis check and find the axis of least penetration. In order to perform the SAT you must find a projected radius onto the axis you are currently testing.

If the sums of the projected radii from both OBBs are larger than the distance between the center of each OBB (along your respective axis), then they are intersecting on that axis. This method works for all convex polygons in 2D.

The way I perform this check is by taking the translation vector from A to B, let’s call it T. I rotate T into A’s frame of reference, subtract A’s extent vector, and then subtract B’s extent vector rotated into A’s frame of reference from that result. This produces a vector holding the overlap along the x and y axes for object A. Due to symmetry only two axes need to be tested per box. The same operation can be done for object B to find B’s separating axes. If no separating axis is found the shapes are intersecting.
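The description above can be sketched in code. This version uses the standard trick of multiplying B’s extents by the absolute value of the rotation matrix that takes B’s frame into A’s frame, which is equivalent to finding B’s projected radius on A’s axes. All names are mine, and this is only the early-out half of the test:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y; };

struct Mat2
{
  float m00, m01;
  float m10, m11;

  Mat2( ) { }
  Mat2( float radians )
  {
    float c = std::cos( radians );
    float s = std::sin( radians );
    m00 = c; m01 = -s;
    m10 = s; m11 =  c;
  }

  Mat2 Transpose( ) const
  {
    Mat2 t;
    t.m00 = m00; t.m01 = m10;
    t.m10 = m01; t.m11 = m11;
    return t;
  }

  Mat2 Abs( ) const
  {
    Mat2 t;
    t.m00 = std::fabs( m00 ); t.m01 = std::fabs( m01 );
    t.m10 = std::fabs( m10 ); t.m11 = std::fabs( m11 );
    return t;
  }

  Vec Mul( const Vec& v ) const
  {
    return Vec{ m00 * v.x + m01 * v.y, m10 * v.x + m11 * v.y };
  }

  Mat2 Mul( const Mat2& o ) const
  {
    Mat2 t;
    t.m00 = m00 * o.m00 + m01 * o.m10; t.m01 = m00 * o.m01 + m01 * o.m11;
    t.m10 = m10 * o.m00 + m11 * o.m10; t.m11 = m10 * o.m01 + m11 * o.m11;
    return t;
  }
};

struct OBB
{
  Vec pos;  // center
  Vec ext;  // half extents
  Mat2 u;   // rotation matrix
};

// True if a separating axis exists on A's x or y axis; running the same
// test with A and B swapped covers B's two axes
bool SeparatedOnAxesOfA( const OBB& a, const OBB& b )
{
  // Translation from A to B, rotated into A's frame of reference
  Vec t = a.u.Transpose( ).Mul( Vec{ b.pos.x - a.pos.x, b.pos.y - a.pos.y } );

  // Rotation taking B's frame into A's frame; Abs lets extents project
  Mat2 absC = a.u.Transpose( ).Mul( b.u ).Abs( );

  // B's projected radius onto each of A's axes
  float rbx = absC.m00 * b.ext.x + absC.m01 * b.ext.y;
  float rby = absC.m10 * b.ext.x + absC.m11 * b.ext.y;

  // Separated if the center distance exceeds the summed projected radii
  if(std::fabs( t.x ) > a.ext.x + rbx) return true;
  if(std::fabs( t.y ) > a.ext.y + rby) return true;
  return false;
}
```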

This is where things start to get difficult. Since we just performed an early out test, now we need to find the axis of least separation. However you cannot just blindly perform floating point comparisons due to floating point error during rotation of the translation vector from A to B. You must bias your comparisons to favor one axis over another in a consistent manner. This is important for stability!

In the above picture, which set of contact points/normal is the “correct” one? Each axis of separation is very close to the same distance, so floating point error could account for which axis is chosen. If your simulation flops between the two you’ll end up with strange jitter and your simulation will be less believable. The solution is to just favor one axis over another using an error EPSILON value.

```cpp
// Compares two floats to see if a is greater than b by a slight margin of error. Small
// samples from both a and b are used in the comparison to bias the result in one direction
// in order to stray away from mixed results due to floating point error.
inline bool BiasGreaterThan( real a, real b )
{
  const real k_biasRelative = 0.95f;
  const real k_biasAbsolute = 0.01f;

  // >= instead of > for NaN comparison safety
  return a >= b * k_biasRelative + a * k_biasAbsolute;
}
```

Here’s a function (you’re lucky I just gave it to you!) that will check which value is greater than the other. Each value is modified slightly, and a small bias is fed into the comparison based off of how large each value passed in is. This can be used to favor one axis of separation over another, until a threshold larger than floating point error is breached.

Carefully record which direction your normal goes (from A to B) depending on what axis is separating. This is a similar operation to the one found in AABB to AABB as seen above.

Once an axis is found, two line segments must be identified: the reference face and the incident face. The reference face is the face that corresponds to your axis of least penetration and the normal you recorded. If your axis of least penetration is on A, then your reference face is on A. The incident face is the one with the information we need to generate our manifold.

The incident face is the face on the other object that the reference face has hit. We must compute it. All that needs to be done is to find out which face is most facing the normal (has the most negative dot product). Looping through all faces performing dot products is the simplest way to achieve this. A more optimized algorithm is to follow the sign of the flipped normal. Your normal will have an x and y component (and z in 3D), and each component will be positive or negative. This gives you a total of four combinations of positive or negative.

First check to see if the normal is pointing more towards the x axis or the y axis (after you transform it into the incident OBB’s frame of reference). Then check the sign of that dominant component. You now know to which face your normal is most pointing.

Think of it this way: if the normal is pointing more towards the x axis (absolute value of n.x is greater than absolute value of n.y), then you are going to be pointing more towards either the left or right face on your OBB (in the OBB’s frame of reference). All that you need to know from there, is if you’re pointing left or right on the x axis, which is denoted by the sign of the normal.

Your incident face segment endpoints are the extents of the x axis of the OBB, which are aligned with the x axis in the OBB’s frame of reference. You can then take the OBB x half-extent and use the positive and negative version of it to form two points: (-x, 0) and (x, 0) where x is the half-extent of the OBB on its x axis. Rotate these points with the OBB’s rotation matrix, and then translate them into world space with the OBB’s position vector, and you now have your endpoints for your incident face in world space.
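Here is a sketch of the incident face computation described in the last few paragraphs. `ComputeIncidentFace` and the supporting types are my own names; the logic follows the sign-of-the-flipped-normal idea above:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y; };

struct Mat2
{
  float m00, m01;
  float m10, m11;

  Mat2( ) { }
  Mat2( float radians )
  {
    float c = std::cos( radians );
    float s = std::sin( radians );
    m00 = c; m01 = -s;
    m10 = s; m11 =  c;
  }

  Mat2 Transpose( ) const
  {
    Mat2 t;
    t.m00 = m00; t.m01 = m10;
    t.m10 = m01; t.m11 = m11;
    return t;
  }

  Vec Mul( const Vec& v ) const
  {
    return Vec{ m00 * v.x + m01 * v.y, m10 * v.x + m11 * v.y };
  }
};

struct OBB
{
  Vec pos;  // center
  Vec ext;  // half extents
  Mat2 u;   // rotation matrix
};

// Fill out[0] and out[1] with the world-space endpoints of the incident
// face on 'inc', given the reference face normal n in world space
void ComputeIncidentFace( const OBB& inc, Vec n, Vec* out )
{
  // Move the normal into the incident OBB's frame, then flip it so it
  // points toward the face we want (most negative dot with n)
  Vec ln = inc.u.Transpose( ).Mul( n );
  ln.x = -ln.x;
  ln.y = -ln.y;

  Vec v0, v1;
  if(std::fabs( ln.x ) > std::fabs( ln.y ))
  {
    // Pointing more along x: the left or right face is incident
    float x = ln.x > 0 ? inc.ext.x : -inc.ext.x;
    v0 = Vec{ x, -inc.ext.y };
    v1 = Vec{ x,  inc.ext.y };
  }
  else
  {
    // Pointing more along y: the bottom or top face is incident
    float y = ln.y > 0 ? inc.ext.y : -inc.ext.y;
    v0 = Vec{ -inc.ext.x, y };
    v1 = Vec{  inc.ext.x, y };
  }

  // Rotate back into world space, then translate by the OBB's position
  v0 = inc.u.Mul( v0 );
  v1 = inc.u.Mul( v1 );
  out[0] = Vec{ v0.x + inc.pos.x, v0.y + inc.pos.y };
  out[1] = Vec{ v1.x + inc.pos.x, v1.y + inc.pos.y };
}
```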

All that is left is to clip the incident face to the reference face side planes using Sutherland-Hodgman clipping. Here’s a diagram showing this:

This is a fairly difficult thing to do unless you know your math fairly well. Each side plane can be simply computed once you know your reference normal. Here’s the process for getting two side planes:

```cpp
// Line representation in 2D is ax + by = c
// a and b form the normal of the line, and c is the distance to the origin
struct Line
{
  // Must be normalized
  Vec n; // Vec( a, b )

  real c; // Distance from origin to line
};

// Create our reference face line
Line referenceFace;
Line posPlane, negPlane; // two side planes for reference face

// Assuming our axis of least penetration was on B's x axis
// normal is the collision normal
// dot product is to compute c (ax + by)
referenceFace.Set( -normal, Vec2::DotProduct( b_pos, -normal ) + b_ext.x );

Vec planeNormal = bu.AxisY( ); // grab second column from Mat2
real c = Vec2::DotProduct( b_pos, planeNormal ); // ax + by
posPlane.Set( planeNormal, c + b_ext.y );
negPlane.Set( -planeNormal, -c + b_ext.y );
```

The above code was hand-derived by myself, but you’ll find something very similar within Box2D Lite (where I originally learned the math from). If this is confusing to you I suggest reading up on the various types of representations of lines in 2D.

Here’s another diagram I just found in my own source files:

```cpp
//        y
//        ^  ->n        ^
//      +---+    ------posPlane--
// x <  | i |\
//      +---+ c-----negPlane--
//             \  v
//              r
//
// r : reference face
// i : incident box
// c : clipped point
// n : incident normal
```

You might have noticed I’m storing the c value. This is important as the c value stored within the Line structure can be used to find the distance of a point to the line like so:

```cpp
// Assuming n is normalized simplifies the equation that is
// found here: http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html
real Line::DistanceToLine( const Vec2& point ) const
{
  return Vec2::DotProduct( point, n ) - c;
}
```

I’ll be nice and provide you my clipping notes I created for Sutherland-Hodgman clipping :)

```cpp
// Sutherland-Hodgman clipping algorithm
//
//  out  in       out  in       out  in       out  in
//   s   |         |   s         s   |         |   s
//    \  |         |  /           \  |         |  /
//     \ |         | /             \ |         | /
//      \ |        |/               \|         | /
//       \ |       i                 i         | /
//        \ |     /|                 |\        | /
//         \ |   / |                 | \       | /
//          e |  e |                 |  e      | e
//
//   none      push i      push i push e    push e
```

However, since we are clipping a line segment to a single plane the algorithm will need to be slightly modified. In some cases you need to push extra points, since Sutherland-Hodgman assumes you are clipping two polygons against each other in a loop. See Box2D Lite for a good implementation of the incident face to line clipping. I however use my own hand-derived algorithm that works in a very similar way. I’ll share some pseudo code for clipping a segment to a Line, assuming the Line is in the format of offset c and a normal n:

```cpp
Segment::Clip( Line line )
{
  // Retrieve distances from each endpoint to the line
  d1 = line.DistanceToLine( v1 )
  d2 = line.DistanceToLine( v2 )

  // Keep points behind the plane (negative distance)
  // by adding them to the output array
  if(d1 <= 0) outVecArray += v1
  if(d2 <= 0) outVecArray += v2

  // If the points are on different sides of the plane
  if(d1 * d2 < 0) // less than to ignore -0.0f
  {
    // Push intersection point with simple interpolation
    alpha = d1 / (d1 - d2)
    outVecArray += v1 + alpha * (v2 - v1)
  }
}
```

After clipping to the side planes floating point error must be accounted for. If our clipping process went as expected we must have two resulting points. If we end up with less than two output points that means floating point error has screwed us over, and we must treat the entire process as if the two OBBs are non-intersecting.

Assuming we have two points output from our clipping we then need to only consider points that are behind the reference face. Use the DistanceToLine function I provided above to check this. Record each point behind the reference face as a contact point!
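As a small sketch of this final step (names are my own), each clipped point is tested against the reference face with DistanceToLine, and only points behind the face are recorded as contacts:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y; };

struct Line
{
  Vec n;   // must be normalized
  float c; // distance from origin to line

  float DistanceToLine( const Vec& p ) const
  {
    return p.x * n.x + p.y * n.y - c;
  }
};

// Keep only clipped points behind the reference face; each one becomes
// a contact point, and its distance behind the face is its penetration
int RecordContacts( const Line& refFace, const Vec* clipped, int count,
                    Vec* outPoints, float* outPenetrations )
{
  int out = 0;
  for(int i = 0; i < count; ++i)
  {
    float separation = refFace.DistanceToLine( clipped[i] );
    if(separation <= 0) // behind the reference face: a real contact
    {
      outPoints[out] = clipped[i];
      outPenetrations[out] = -separation;
      ++out;
    }
  }
  return out;
}
```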

# OBB to AABB

If you’ve been reading along diligently and deriving your own understanding, you should be able to figure out how to perform such a check. This test is the exact same as OBB to OBB, except with less rotating from one basis to another. You can recall that the OBB to OBB test rotated one OBB into the frame of the other completely, turning the test into an AABB to OBB test. The same thing can be done here without the preliminary change of basis, and perhaps some other small optimizations. I will leave the section out as a challenge for the reader.

# Conclusion

I hope you have a solid understanding of various types of hand-crafted intersection tests! Feel free to email me or comment here with questions or comments. I refer you to Box2D Lite’s source code as a reference to the material in this article, along with all of Erin Catto’s GDC slides, especially the one from 2007. The next article in this series is likely to talk about creating a simple O(N^2) broadphase, and how to cull out duplicate contact pairs. Until then this article along with the previous one is more than enough to create a simple physics engine. Be aware that with impulse resolution only one point of contact should be necessary. This article goes over grabbing multiple contact points, and this is because more advanced collision resolution techniques can make use of more contact points.