I’ve spent the last couple of weeks researching eigenvalue decomposition and solving cubic polynomials in order to simulate a liquid surface and polyhedral buoyancy! I gave a lecture on this topic at my university and hope the slides can be of interest or help to someone. I’ve attached a couple of demo GIFs to the bottom of this post.

# Category Archives: Physics

# Dynamically Slicing Shapes

Just this morning I finished up a small open source project, along with a huge article at gamedev.tutsplus. The open source project is an implementation of Sutherland-Hodgman clipping which can be used in many advanced shape and mesh manipulation techniques.

*Gif of the open source project in action. Slicing a quad interactively.*

# Convex Hull Generation

I created a presentation to give at my university and luckily can post it here. The slides are mostly on Quick Hull, but also cover the half-edge mesh format. There is a demo video in the PDF slides that will not play, but it can be found at the bottom of this post. Hope this helps someone!

# Orientation Constraint Derivation

I decided today that I wanted to create an orientation constraint. This will let me build more complex joints that make use of constraining relative angles between rigid bodies. I learned the details of implementing such a constraint from Bart Barenbrug’s thesis.

The position constraint is quite simple. The idea is to take a basis of vectors (u, v, w) into local space of two rigid bodies A and B. This yields xa, xb, ya, yb, za and zb. Each frame the local basis vectors are transformed into world space. These two world space sets should be aligned at all times in order to satisfy the constraint.

The dot product of xa and yb provides information about the error along u. The dot product of xa and zb provides information about the error along v. The dot product of ya and zb provides information about the error along w. Knowing this, four vectors can be used instead of six; the only ones needed are: xa, ya, yb and zb.
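To make that bookkeeping concrete, the three error terms can be sketched as plain dot products. This is an illustration only; the `Vec3` and `ConstraintError` names are assumptions, not part of any particular engine:

```cpp
#include <cassert>

// Minimal illustrative Vec3; names here are assumptions
struct Vec3
{
  float x, y, z;
};

float Dot( const Vec3& a, const Vec3& b )
{
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The error terms described above. When the two world space bases
// are aligned, each term is zero, since each vector is orthogonal
// to its two partners.
void ConstraintError( const Vec3& xa, const Vec3& ya,
                      const Vec3& yb, const Vec3& zb,
                      float *errU, float *errV, float *errW )
{
  *errU = Dot( xa, yb );
  *errV = Dot( xa, zb );
  *errW = Dot( ya, zb );
}
```

With identity bases on both bodies all three errors vanish; rotating body B's basis makes one or more of them nonzero, which is exactly what the constraint drives back to zero.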

\begin{equation}

C = \theta_a - \theta_b = 0

\label{eq1}

\end{equation}

Differentiating with respect to time yields the velocity constraint:

\begin{equation}

\dot{C} = \omega_a - \omega_b = 0

\label{eq2}

\end{equation}

Resulting in the following Jacobian:

\begin{equation}

J = \begin{bmatrix} 0 & -t & 0 & t \end{bmatrix}

\label{eq3}

\end{equation}

Where *t* is an axis of rotation within a basis matrix. Three of these constraints can be used to restrict each axis of rotation.

The interesting part of this constraint, for myself at least, was writing down on paper the mass matrix used to solve all three constraints simultaneously. By all three constraints I mean restricting the relative rotation between two rigid bodies about each axis, aka the orientation joint (or angle joint). As a quick refresher, the mass matrix appears in the following equation:

\begin{equation}

\lambda = \frac{-(JV + b)}{J * M^{-1} * J^T}

\label{eq4}

\end{equation}

The bottom portion, *J M^{-1} J^T*, is the constraint mass. This is actually a tensor and must be transformed from constraint space by using the tensor transformation formula. This is the same sort of transformation used on inertia tensors to transform them from model to world space.

Once this mass matrix is calculated it can be inverted and used to solve for lambda, where lambda will satisfy all constraint equations used in the mass matrix.

\begin{equation}

M^{-1} = \begin{bmatrix}

M^{-1}_a & 0 & 0 & 0 \\

0 & I^{-1}_a & 0 & 0 \\

0 & 0 & M^{-1}_b & 0 \\

0 & 0 & 0 & I^{-1}_b

\end{bmatrix}

\label{eq5}

\end{equation}

Knowing this we can set up our constraint mass like so:

\begin{equation}

\begin{bmatrix}

0 & -t_1 & 0 & t_1 \\

0 & -t_2 & 0 & t_2 \\

0 & -t_3 & 0 & t_3 \\

\end{bmatrix}

\begin{bmatrix}

M^{-1}_a & 0 & 0 & 0 \\

0 & I^{-1}_a & 0 & 0 \\

0 & 0 & M^{-1}_b & 0 \\

0 & 0 & 0 & I^{-1}_b

\end{bmatrix}

\begin{bmatrix}

0 & 0 & 0 \\

-t_1 & -t_2 & -t_3 \\

0 & 0 & 0 \\

t_1 & t_2 & t_3 \\

\end{bmatrix}

\label{eq6}

\end{equation}

Multiplying the two matrices on the left gives:

\begin{equation}

\begin{bmatrix}

0 & I^{-1}_a (-t_1) & 0 & I^{-1}_b \, t_1 \\

0 & I^{-1}_a (-t_2) & 0 & I^{-1}_b \, t_2 \\

0 & I^{-1}_a (-t_3) & 0 & I^{-1}_b \, t_3 \\

\end{bmatrix}

\begin{bmatrix}

0 & 0 & 0 \\

-t_1 & -t_2 & -t_3 \\

0 & 0 & 0 \\

t_1 & t_2 & t_3 \\

\end{bmatrix}

\label{eq7}

\end{equation}

Multiplying the last two matrices leaves the final constraint mass matrix:

\begin{equation}

\begin{bmatrix}

\scriptsize (-t_1) \cdot I^{-1}_a (-t_1) + t_1 \cdot I^{-1}_b t_1 &

\scriptsize (-t_2) \cdot I^{-1}_a (-t_1) + t_2 \cdot I^{-1}_b t_1 &

\scriptsize (-t_3) \cdot I^{-1}_a (-t_1) + t_3 \cdot I^{-1}_b t_1 \\

\scriptsize (-t_1) \cdot I^{-1}_a (-t_2) + t_1 \cdot I^{-1}_b t_2 &

\scriptsize (-t_2) \cdot I^{-1}_a (-t_2) + t_2 \cdot I^{-1}_b t_2 &

\scriptsize (-t_3) \cdot I^{-1}_a (-t_2) + t_3 \cdot I^{-1}_b t_2 \\

\scriptsize (-t_1) \cdot I^{-1}_a (-t_3) + t_1 \cdot I^{-1}_b t_3 &

\scriptsize (-t_2) \cdot I^{-1}_a (-t_3) + t_2 \cdot I^{-1}_b t_3 &

\scriptsize (-t_3) \cdot I^{-1}_a (-t_3) + t_3 \cdot I^{-1}_b t_3 \\

\end{bmatrix}

\label{eq8}

\end{equation}

I apologize for the tiny text! This mass matrix can then be used like the following to implement an orientation constraint that restricts the relative rotation of two rigid bodies:

```cpp
// M is the constraint mass
M = M.Inverse( );
Vec3 lambda = M * -(JV + b);

// Implement lambda * JV
// Apply impulse on body A and B
// t1, t2 and t3 are basis to align to (the Jacobian elements)
Vec3 P = t1 * lambda.x + t2 * lambda.y + t3 * lambda.z;
A->angularVel -= iA * P;
B->angularVel += iB * P;
```

# Constraints Introduction

This post has been moved here.

# Protip: How to be a Physics Dev – Debug Rendering

Today I figured out the single most important tip I could ever give anyone in regards to being a competent physics engine developer. I will share this tip right here on my humble blog:

Your skills as a physics developer will grow at a rate proportional to how good your debug draw information is. I’d actually argue that your skills as a physics developer *are* proportional to how good your debug draw is.

I feel I’m at a decent place in my skills, as my debug draw has become decent. Let’s get some screenshots going!

I digress. I didn’t figure out this simple fact on my own; I gleaned this wisdom from my friend Nathan Carlson. He pointed out that the reason a lot of students at DigiPen have trouble with computational geometry and general physics engine development is a lack of debug rendering.

Just take a look at Dirk Gregorius’s sample demo from his GDC 2013 lecture. He fully renders Gauss maps, a Minkowski difference, and has glorious camera controls. I actually copied his camera design and implemented it myself. You can see that he has AntTweakBar integrated as well, for various tweaking tasks. Erin Catto has lots and lots of different tests, asserts, and most important of all tons of debug drawing for his Box2D library. The Zero team on the third floor of DigiPen definitely has the craziest debug draw I’ve ever heard of!

Hopefully I can achieve something at least in the likeness of all these other developers, and I recommend readers do the same; intuition is often best built visually when it comes to geometry.

# Dynamic AABB Tree

After studying Box2D, Bullet and some slides a DigiPen alumnus created (thanks to Nathan Carlson!), I put together a dynamic AABB (axis aligned bounding box) tree broadphase. A dynamic AABB tree is a binary search tree for spatial partitioning. Special thanks to Nathanael Presson for the original authoring of the tree within the Bullet engine source code (see comments of this post).

A broadphase in a physics simulation has the job of reporting when bodies are potentially colliding. Usually this is done through cheap intersection tests between simple bounding volumes (AABBs in this case).

There are some interesting properties of a dynamic AABB tree that make the data structure rather simple in terms of implementation.

There are a couple main functions to implement outlined here:

- Insert
- Remove
- Update

The underlying data structure ought to be one large array of nodes. This is much better for cache performance than many loose heap-allocated nodes, which matters because the entire array of nodes is going to be fetched from memory every single time the broadphase is updated.

```cpp
struct Node
{
  bool IsLeaf( void ) const
  {
    // The right leaf does not use the same memory as the userdata,
    // and will always be Null (no children)
    return right == Null;
  }

  // Fat AABB for leafs, bounding AABB for branches
  AABB aabb;

  union
  {
    int32 parent;
    int32 next; // free list
  };

  union
  {
    // Child indices
    struct
    {
      int32 left;
      int32 right;
    };

    // Since only leaf nodes hold userdata, we can use the
    // same memory used for left/right indices to store
    // the userdata void pointer
    void *userData;
  };

  // leaf = 0, free nodes = -1
  int32 height;

  static const int32 Null = -1;
};
```

The node of the AABB tree can be carefully constructed to take up minimal space, as nodes are always in one of two states: branch or leaf. Since nodes are stored in an array, they can be referenced by integral index instead of pointer. This allows the internal array to grow or shrink as necessary without fear of leaving dangling pointers anywhere.

The idea of the tree is to allow user data to be stored only within leaf nodes. All branch nodes contain just a single AABB enveloping both of its children. This leads to a short description of invariants for the data structure:

- All branches must have two valid children
- Only leaves contain user data

The first rule allows for some simplification of operations like insert/remove. No additional code branching is required to check for NULL children, which increases code performance.

## Insert

Insertion involves creating a fat AABB to bound the userdata associated with the new leaf node. A fat AABB is just an AABB that is slightly larger than a tight fitting AABB. Once a new leaf node is created with its fat AABB a tree traversal is required in order to find a sibling to pair up with, and a new parent branch is constructed.

Traversing the tree should be done by following some sort of cost heuristic. The best seems to be a cost involving the surface area of involved nodes. See Box2D for a resource in cost heuristics.
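As a rough sketch of what such a heuristic might look like (the `AABB` layout and function names here are assumptions; Box2D's dynamic tree is the authoritative reference), the cost of descending into a child can be estimated by how much the child's surface area would have to grow to also enclose the new leaf:

```cpp
#include <cassert>

// Minimal 3D AABB for illustration; field names are assumptions
struct AABB
{
  float minX, minY, minZ;
  float maxX, maxY, maxZ;
};

// Smallest AABB enclosing both inputs
AABB Combine( const AABB& a, const AABB& b )
{
  AABB c;
  c.minX = a.minX < b.minX ? a.minX : b.minX;
  c.minY = a.minY < b.minY ? a.minY : b.minY;
  c.minZ = a.minZ < b.minZ ? a.minZ : b.minZ;
  c.maxX = a.maxX > b.maxX ? a.maxX : b.maxX;
  c.maxY = a.maxY > b.maxY ? a.maxY : b.maxY;
  c.maxZ = a.maxZ > b.maxZ ? a.maxZ : b.maxZ;
  return c;
}

float SurfaceArea( const AABB& a )
{
  float dx = a.maxX - a.minX;
  float dy = a.maxY - a.minY;
  float dz = a.maxZ - a.minZ;
  return 2.0f * (dx * dy + dy * dz + dz * dx);
}

// Cost of descending into a child: how much the child's AABB would
// grow (in surface area) to also enclose the leaf being inserted.
// During traversal, descend into whichever child is cheaper.
float DescendCost( const AABB& child, const AABB& leaf )
{
  return SurfaceArea( Combine( child, leaf ) ) - SurfaceArea( child );
}
```

A leaf already inside a child costs nothing to descend into, while a distant leaf inflates the combined box and costs a lot, which is exactly the behavior you want from the heuristic.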

After a new parent is created and a sibling found, the bounding volumes of all ancestor nodes are invalid. A traversal up the tree, correcting all the bounding AABBs and node heights, is required. This hierarchy correction can be abstracted into a simple function:

```cpp
void DynamicAABBTree::SyncHeirarchy( int32 index )
{
  while(index != Null)
  {
    int32 left = m_nodes[index].left;
    int32 right = m_nodes[index].right;

    m_nodes[index].height = 1 + Max( m_nodes[left].height, m_nodes[right].height );
    m_nodes[index].aabb = Combine( m_nodes[left].aabb, m_nodes[right].aabb );

    index = m_nodes[index].parent;
  }
}
```

## Remove

Removing nodes from binary search trees can involve a stack to trace back a traversal path; however, due to the few invariants of the dynamic AABB tree, removal is quite simple. Since all branches must contain two valid children, and only leaves contain userdata, the only nodes that require deletion are leaves.

Remove the leaf’s parent and replace it with the leaf’s sibling. After this operation the parent hierarchy needs to be recalculated (just as in insert).
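A minimal sketch of that sibling-promotion step, over a bare array of nodes (field names are assumptions, and root handling plus node recycling are trimmed for brevity):

```cpp
#include <cassert>

// Stripped-down node for illustrating removal only; real nodes also
// carry AABBs, heights and userdata as shown earlier
struct Node
{
  int parent;
  int left;
  int right;

  static const int Null = -1;
};

// Remove a leaf by replacing its parent with its sibling. Returns the
// index where hierarchy correction should begin, or Null if the tree
// is now trivial. Freeing the removed nodes is omitted in this sketch.
int RemoveLeaf( Node *nodes, int *root, int leaf )
{
  if(leaf == *root)
  {
    *root = Node::Null;
    return Node::Null;
  }

  int parent = nodes[leaf].parent;
  int grandparent = nodes[parent].parent;
  int sibling = nodes[parent].left == leaf ? nodes[parent].right : nodes[parent].left;

  if(grandparent == Node::Null)
  {
    // The parent was the root; promote the sibling to root
    *root = sibling;
    nodes[sibling].parent = Node::Null;
    return Node::Null;
  }

  // Hook the sibling up to the grandparent in the parent's place
  if(nodes[grandparent].left == parent)
    nodes[grandparent].left = sibling;
  else
    nodes[grandparent].right = sibling;
  nodes[sibling].parent = grandparent;

  return grandparent;
}
```

The returned index is exactly where the hierarchy correction described in the Insert section would start walking upward.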

## Update

Since this tree is dynamic it needs a way to handle moving objects, and not just static level geometry. Since fat AABBs are created upon insertion, objects can move around a little bit before the AABB within the tree is invalid. The fatness factor, or how much to fatten each AABB, can be fine-tuned for performance. I myself use an arbitrary distance (about 5% of the average scale of objects). It is also possible to fatten the AABB based upon the object’s previous frame’s displacement (see Box2D for details on displacement predictions).

The update function of the tree checks to make sure that the current tight-fitting AABB of a shape is still contained by the AABB within the tree. In order for this operation to be constant time the node index in the tree needs to be intrusively stored within the shape itself. This will allow a shape to provide its node index along with an up to date tight fitting AABB to the tree, in order to see if its AABB is still within the fat AABB’s bounds.

If the tight AABB is not contained by the fat AABB the shape needs to be removed and reinserted into the tree.
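A sketch of that containment test and the fattening step (2D here for brevity; the field names and margin value are assumptions, not the tree's actual API):

```cpp
#include <cassert>

// 2D AABB for illustration; field names are assumptions
struct AABB
{
  float minX, minY;
  float maxX, maxY;
};

// Does the fat AABB still fully contain the tight AABB?
bool Contains( const AABB& fat, const AABB& tight )
{
  return fat.minX <= tight.minX && fat.minY <= tight.minY
      && fat.maxX >= tight.maxX && fat.maxY >= tight.maxY;
}

// Fatten a tight AABB by a fixed margin before (re)insertion
AABB Fatten( const AABB& tight, float margin )
{
  AABB fat;
  fat.minX = tight.minX - margin;
  fat.minY = tight.minY - margin;
  fat.maxX = tight.maxX + margin;
  fat.maxY = tight.maxY + margin;
  return fat;
}
```

Update then reduces to: if `Contains` fails, remove the leaf and reinsert it with a freshly fattened AABB; otherwise do nothing.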

## Balancing the Tree

Since this sort of spatial partitioning involves a binary search tree, the tree will perform optimally (in terms of collision queries) so long as the tree is balanced.

However, how you balance the tree matters. I’ve heard (through rumors) that dynamic AABB trees ought to be balanced in terms of surface area and not tree height. Though this makes sense conceptually, I was not sure how to balance the tree based on surface area, so I went with something I’m more comfortable with, which is balancing based upon tree height.

Balancing a single node involves taking a look at its two children. The height of each child should be compared, and if one is higher than the other by two or more, a rotation should be performed. In other words, the child with the larger height ought to be promoted, wherein promotion is a way of rotating a node up the tree hierarchy.

This sort of balancing (AVL balancing) can be performed at the beginning of the hierarchy synchronization loop. This allows the tree to balance itself upon insertion and removal of leaves.

## Free List

The tree needs to maintain an internal free list of unused nodes. Constructing a free list involves looping over all the nodes of the tree, upon initial tree construction, linking each node to the next with indices. This process is really simple, though readers may be unfamiliar with lists of indices as opposed to lists of pointers.

Here is a helper function I’ve constructed for myself to setup free lists (useful upon construction of the tree, and growing of the internal array):

```cpp
inline void DynamicAABBTree::AddToFreeList( int32 index )
{
  for(int32 i = index; i < m_capacity - 1; ++i)
  {
    m_nodes[i].next = i + 1;
    m_nodes[i].height = Node::Null;
  }

  m_nodes[m_capacity - 1].next = Node::Null;
  m_nodes[m_capacity - 1].height = Node::Null;

  m_freeList = index;
}
```

Taking nodes from the free list involves setting the current m_freeList index to the next node in the list. Inserting nodes is just as trivial. You can think of this sort of list as a singly linked list, except the links are through indices rather than pointers.
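A sketch of taking and returning nodes through such an index-based free list might look like this (names are assumptions; real code would grow the array when the list runs dry):

```cpp
#include <cassert>

const int Null = -1;

// Only the free-list link matters for this sketch
struct Node
{
  int next; // index of the next free node
};

// Pop a node index off the front of the free list
int AllocateNode( Node *nodes, int *freeList )
{
  if(*freeList == Null)
    return Null; // real code would grow the array here

  int index = *freeList;
  *freeList = nodes[index].next;
  return index;
}

// Push a node index back onto the front of the free list
void FreeNode( Node *nodes, int *freeList, int index )
{
  nodes[index].next = *freeList;
  *freeList = index;
}
```

This behaves exactly like a singly linked stack of free slots, just with indices in place of pointers.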

## Queries

Querying volumes against the tree is highly efficient, so long as intersection checks against an AABB are fast. Sphere tests, AABB tests and raycasts are all very fast.

Querying the tree involves detecting collisions with each node in the tree, starting with the root, traversing to all children until the tree is exhausted. Traversal to a child should be performed if there is a collision with the parent.

This sort of query can easily be done with recursion (note: the callback returns bool, signifying continue or termination of the query):

```cpp
template <typename T>
inline void DynamicAABBTree::Query( T *cb, const AABB& aabb, int32 id ) const
{
  const Node *n = m_nodes + id;

  if(Prim::AABBtoAABB( aabb, n->aabb ))
  {
    if(n->IsLeaf( ))
    {
      // Report, via callback, collision with a leaf
      if(!cb->TreeCallBack( id ))
        return;
    }
    else
    {
      Query( cb, aabb, n->left );
      Query( cb, aabb, n->right );
    }
  }
}
```

However it may be more favorable to do so with an iterative algorithm. Iterative traversals are generally a little harder to implement, but they replace the program’s call stack with a small explicit stack, which uses less memory and avoids function call overhead:

```cpp
template <typename T>
inline void DynamicAABBTree::Query( T *cb, const AABB& aabb ) const
{
  const int32 k_stackCapacity = 256;
  int32 stack[k_stackCapacity];
  uint32 sp = 1;

  *stack = m_root;

  while(sp)
  {
    int32 id = stack[--sp];
    const Node *n = m_nodes + id;

    if(TestAABBOverlap( aabb, n->aabb ))
    {
      if(n->IsLeaf( ))
      {
        // Report, via callback, a collision with leaf
        if(!cb->TreeCallBack( id ))
          return;
      }
      else
      {
        // Check before pushing so the pushes cannot write out of bounds
        assert( sp + 2 <= k_stackCapacity ); // k_stackCapacity too small!

        stack[sp++] = n->left;
        stack[sp++] = n->right;
      }
    }
  }
}
```

## Culling Queries Further

It might be a good idea to only query rigid bodies that moved last frame. This is much better than querying all dynamic bodies unconditionally. If you have a contact graph or constraint graph, or any sort of islanding implementation then you can easily cull objects from queries. The idea is that if a rigid body is static or not placed within an island, then it didn’t move at all the previous frame. This is very simple and elegant.

Usually only active (awake) dynamic bodies are placed into an island (and any objects they have contact constraints with). This is because objects that are sleeping don’t need to be considered at all when forming new islands, as they are asleep. Knowing this a small flag can be set within a shape when it is placed into an island. This flag can later be checked to know whether or not it needs to be queried at all within the broadphase.

Though this step is not specific to the dynamic AABB tree, it is a nice trick to know. One caveat is that when contact constraints are updated, they must first have each involved shape’s bounding volume checked against each other. This will allow old contact constraints to be removed once there is no potential for the narrow phase to return true.

## Conclusion

The dynamic AABB tree is an extremely fast spatial partitioning data structure, and is ideal for both large unbounded worlds and lots of dynamic rigid bodies. Querying a balanced tree is logarithmic in the number of leaves, much like a binary search. I can’t imagine a much better broadphase for rigid body simulation. There’s an old saying when speaking of broadphases: “no broadphase is perfect, and neither is this one”. However the dynamic AABB tree has completely impressed me.

In terms of performance, my simulation would grind to a halt and bottleneck with an N^2 broadphase at just over 100 dynamic bodies. Now I can easily simulate around 5000 rigid bodies with the broadphase taking about 5% of the simulation time. My current bottleneck is cache misses during constraint solving.


# Separating Axis Test and Support Points in 2D

I’ve created some slides for Physics Club at DigiPen. The slides were made at my own house with my own resources, so DigiPen doesn’t own them. Thus, I can share this version for public viewing!

The Separating Axis Test (SAT) is a highly versatile and robust collision detection algorithm, and can be implemented in an extremely efficient manner in 2D without too much trouble. I hope these slides can help others out with their collision detection and manifold generation problems.

# Custom Physics Engine – Part 2: Manifold Generation

# Introduction

During the previous article in this Custom Physics Engine series we talked about impulse resolution. Though understanding the mathematics and physics presented there is important, not much can be put into practice without both collision detection and manifold generation.

Collision detection is a pretty widely documented area, and so I won’t go into too much detail on how to achieve collision detection in this article series. Instead I’ll focus on manifold generation, which is in my opinion much more difficult and less-documented compared to collision detection.

In general collision detection is useful for retrieving a boolean result of “are these two things colliding”. The usefulness of such a result ends when this collision needs to be resolved. This is where manifold generation comes in.

# Manifold Generation – Summary

A manifold, in the context of physics engines, is a small structure that contains data about the details of a collision between two objects. The two bodies are commonly referred to as A and B. Whenever referring to a “collision” as a system, A is usually the reference object, meaning the problem is viewed from A’s orthonormal basis.

I am not actually sure why this structure is called “the manifold”, and I do not know anyone that actually knows. So don’t ask! Either way this structure should be passed around by reference or pointer to avoid unnecessary copies. I also pool all my manifolds, and intrusively link them in order to keep a list of active manifolds during a scene’s step (the term scene is defined in the previous article).

Manifold generation involves gathering three pieces of information:

- Points of contact
- Penetration depth
- Vector of resolution, or collision normal

The points of contact are 2D points (or 3D for a 3D engine) that mark where one shape overlaps another. Usually these contact points are placed onto the vertex of one shape that resides within another.

The penetration depth is the depth to which the two shapes are intersecting. This is found using the Separating Axis Test (SAT). There are lots of resources around that talk about SAT, so I suggest googling for them. The penetration depth is the overlap along the axis of least penetration. In this case (assuming blue’s frame of reference and vertical as y axis) the y axis is the axis of least penetration.

The collision normal is used to describe in which direction to press both objects away from one another. The collision normal will be a face normal, and in this case it would be a normal pointing towards the brown box, where the normal corresponds to the blue box’s top face.

Generating these three pieces of information can be quite a challenge. Now let’s view what a possible setup of the manifold structure might look like:

```cpp
// Represents a single point of contact during a collision
struct Contact
{
  Vec position;
  Vec normal;
  float penetration;
};

struct Manifold
{
  Contact contacts[2];
  unsigned contactCount;

  Body *A;
  Body *B;
};
```

Note that there can be a variable number of contact points. I suggest having a contactCount of zero signify “no collision”. I also suggest having only two possible points of contact for a 2D simulation, and to start just use a single possible point of contact. More than one point of contact isn’t necessary until advanced constraint resolution is used (a future article).

It is important to just keep an array of contacts within this data structure as to keep strong cache coherency. There’s no reason to dynamically allocate the array of contacts.

# Circle to Circle

I’ll be covering how to gather manifold information for specialized cases of a couple of different shape types. Let’s go over circle to circle first. Here is what the definition of a circle would look like:

```cpp
struct Circle
{
  float r;
  Vec position;
};
```

The first thing to do is to see if they are colliding or not. Again, throughout this article I’m going to mostly blaze through the collision detection and focus just on gathering important manifold information. Feel free to google or ask specific questions in the comments about collision detection.

```cpp
Manifold m;

// Translation vector between two shapes
Vec t = B->pos - A->pos

// Cumulative radius
float radius = A->r + B->r

// Early out condition
if(t.LengthSquared( ) > radius * radius)
  return non-intersecting

// Perform sqrt with pythagorean theorem
float d = t.Length( )

Contact *c = m.contact1

// Right on top of each other
if(d == 0.0f)
{
  // Choose random (but consistent) values
  c->penetration = A->r
  c->normal = Vec( 1, 0 )
  c->position = A->pos
  m.contactCount = 1
}
else
{
  c->penetration = radius - d

  // Utilize our d since we performed sqrt on it already
  c->normal = t / d

  // Take A's position and move it along the contact normal
  // the distance of A's radius
  c->position = c->normal * A->r + A->pos
  m.contactCount = 1
}

return m
```

The above code is really quite simple. The important thing to note is that our contact normal will always be a vector from A to B. In order to create a vector from one point to another you take your endpoint minus your starting point. In this case, B’s position minus A’s position. This results in a vector from A to B. This vector, normalized, will be the collision normal, or in other words the direction in which to resolve the collision.

It is important to note that no square root functions are called before the early out condition is checked. Most of the time your shapes are probably not colliding, and so there’s no reason to use a square rooted value.

The last tricky thing is to check if the two shapes are right on top of each other. Though this is unlikely in a dynamic environment, sometimes shapes can be placed directly upon one another through an editor. It is important, in all collision detection functions, to handle all special cases, even if the handling is bad. Whatever you do, just make sure you are consistent. I just chose a random vector to resolve in the direction of.

The nice thing about circle to circle collision is that only one collision point is really needed. Just be sure to be consistent in how you choose your collision point in order to reduce simulation jitter.

# AABB to AABB

Collision detection between two AABBs is a bit more complicated than two circles, but still quite simple. The idea is to make use of min and max. Let’s assume we’re storing our AABBs in a structure like so:

```cpp
struct AABB
{
  Vec min; // lower x and y coordinate position
  Vec max; // higher x and y coordinate position
};
```

This allows a very simple algorithm to find points of contact. For convex polygons that are not axis aligned, oftentimes Sutherland-Hodgman clipping will need to be performed. In our case we can implicitly deduce our contact area due to the nature of AABBs.

First determine if the two AABBs are overlapping at all. Once the axis of least penetration is found, the collision area can then be deduced.

The idea is to perform the SAT while storing each overlap value. The least overlap is your axis of separation. To get the contact area and two points of intersection you can min your maxes and max your mins (I’m talking about the extents of each AABB).

```cpp
// Half extents along the x axis for each object
float a_extent = (abox.max.x - abox.min.x) / 2
float b_extent = (bbox.max.x - bbox.min.x) / 2

// Calculate overlap on x axis (t is translation vector from A to B)
float x_overlap = a_extent + b_extent - abs( t.x )
```

This sounds silly, but that’s how you do it. I suggest drawing it out. Here’s how to find your collision area given by two points (intersection points of the AABBs):

```cpp
c1.position.x = max( abox.min.x, bbox.min.x )
c1.position.y = max( abox.min.y, bbox.min.y )
c2.position.x = min( abox.max.x, bbox.max.x )
c2.position.y = min( abox.max.y, bbox.max.y )
```

The last bit of info required would be to record the penetration and contact normal. Penetration is your axis of least overlap, so after you’ve found your axis of least overlap you can just assign a vector value as your contact normal. If you have found the axis of least penetration to be on the x axis, you want to point towards object B along the x axis. If the y axis is the axis of least penetration, you want to point towards object B along the y axis.

```cpp
// Resolve along the axis of least overlap
if(x_overlap < y_overlap)
{
  // Point towards B knowing that t points from A to B
  c->normal = t.x < 0 ? Vec( -1, 0 ) : Vec( 1, 0 );
  c->penetration = x_overlap;
}
else
{
  // Point towards B knowing that t points from A to B
  c->normal = t.y < 0 ? Vec( 0, -1 ) : Vec( 0, 1 );
  c->penetration = y_overlap;
}
```

That’s all there is to the AABB to AABB intersection. Be sure to properly record the number of contacts found (if any), and if neither the x nor the y axis is actually overlapping, then that means there is no intersection.

# AABB to Circle

I will leave AABB to Circle collision an exercise for the reader, though I will quickly provide an explanation paragraph behind the idea. What needs to be done is to first determine if the shapes are overlapping at all. I have a previous post on my blog that explains the Circle to AABB intersection, and more information about such an intersection check can be found around the internet.

Let’s assume A is the AABB, B is the circle, and we have a collision. The early out condition involves finding the closest point on the AABB to the circle’s center. The collision normal will again be a vector from A to B, except slightly modified: it is the translation vector from A to B minus the vector from A to the closest point on the AABB. This represents a vector from the closest point to the circle’s center.

The contact point will reside on the circle’s radius in the direction of the contact normal. This should be easy to perform if you understood the Circle vs Circle collision detailed above. The penetration depth will be the circle’s radius minus the length of the collision normal before it is normalized.

There is one special case that must be properly handled: if the center of the circle is within the AABB. This is quite simple to handle; clamp the center of the circle to the edge of the AABB along the edge closest to the circle’s center. Then flip the collision normal (so it points away from the AABB instead of to the center) and normalize it.
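Since the full solution is left as an exercise, here is only a sketch of the steps just described, for the case where the circle's center lies outside the AABB (types and function names are assumptions; the center-inside special case is omitted, as discussed above):

```cpp
#include <cassert>
#include <cmath>

// Illustration-only types; names are assumptions
struct Vec { float x, y; };
struct AABB { Vec min, max; };
struct Circle { float r; Vec pos; };

float Clamp( float v, float lo, float hi )
{
  return v < lo ? lo : (v > hi ? hi : v);
}

// Returns true on collision, writing the normal (from A towards B)
// and penetration depth
bool AABBtoCircle( const AABB& a, const Circle& b,
                   Vec *normal, float *penetration )
{
  // Closest point on the AABB to the circle's center
  Vec closest;
  closest.x = Clamp( b.pos.x, a.min.x, a.max.x );
  closest.y = Clamp( b.pos.y, a.min.y, a.max.y );

  // Vector from the closest point to the circle's center
  Vec n = { b.pos.x - closest.x, b.pos.y - closest.y };
  float d2 = n.x * n.x + n.y * n.y;

  // Early out: center farther from the box than the radius
  if(d2 > b.r * b.r)
    return false;

  if(d2 == 0.0f)
    return false; // center inside the AABB: the clamp-and-flip special case goes here

  // Penetration is the radius minus the un-normalized normal's length
  float d = std::sqrt( d2 );
  *penetration = b.r - d;
  normal->x = n.x / d;
  normal->y = n.y / d;
  return true;
}
```

The contact point itself would then sit at the circle's center minus the normal scaled by the radius, per the description above.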

# OBBs and Orientation

Now let’s start talking about adding in some manifold generation for some more complex oriented shapes! The first thing that must be learned is how to properly change from one orthonormal basis to another (that is, shift from one frame of reference to another). This will vastly simplify collision detection involving OBB shapes.

Changing a basis involves taking the orientation and translation of an OBB and applying the inverse of these two transformations to another shape. In this way you can then treat the OBB as an AABB, as long as you are still referring to your transformed object. Let’s go over this in some more detail with some pictures.

Here is what an OBB is like in the OBB’s frame of reference (left), and the OBB in model space (right).

The important thing to realize is that in order to place an object into an OBB’s frame of reference it must have inverse translation and rotation of the OBB’s translation and rotation applied to it. This takes the OBB’s position to the origin, and the OBB can then be treated as an AABB.

If the inverse rotation of the OBB, in this case -45 degrees, is applied to both the OBB and an object near it, this is what happens:

As you can visually see, once the circle has been inversely transformed into the OBB’s frame of reference the OBB can be viewed as a simple AABB centered at the origin. The extents of the OBB can be used to mimic an AABB, and the OBB to Circle intersection and manifold generation can be treated identically to the AABB to Circle intersection, if a proper inverse transformation is performed. Again, this inverse transformation is called a “change of basis”. It means you transform the Circle into the OBB’s frame of reference.
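The change of basis itself is tiny for a point such as the circle's center. A sketch, where the function name and parameters are assumptions:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y; };

// Transform a world space point into an OBB's frame of reference by
// applying the inverse translation, then the inverse rotation.
// 'radians' is the OBB's world orientation.
Vec WorldToOBB( Vec point, Vec obbPos, float radians )
{
  // Inverse translation: bring the OBB's center to the origin
  float px = point.x - obbPos.x;
  float py = point.y - obbPos.y;

  // Inverse rotation: a rotation matrix's inverse is its transpose,
  // i.e. a rotation by -radians
  float c = std::cos( radians );
  float s = std::sin( radians );

  Vec local;
  local.x =  c * px + s * py;
  local.y = -s * px + c * py;
  return local;
}
```

After this transformation the circle's center can be tested against an AABB centered at the origin with the OBB's extents, exactly as described above.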

# Mat2 Rotation Matrices

Let’s go over rotations in 2D extremely quickly. I won’t go over the derivation here for brevity’s sake (as you will see, brevity is a close friend of mine in these Physics Engine articles haha). Instead I will just show you how to create your own 2 by 2 matrix and use it as part of whatever math library you currently have (which you should hand-code yourself!). Really the only useful thing about having a 2 by 2 matrix is to do rotation operations.

For those using C++ you’re in luck for I know how to use unions.

```cpp
class Mat2
{
public:
  // Contiguous memory in various access formats
  union
  {
    struct
    {
      float m00, m01;
      float m10, m11;
    };

    float m[2][2];
    float v[4];
  };

  // ...
};
```

The above is a proper usage of the unnamed union trick. The elements of the 2 by 2 array can be accessed as if they are a two dimensional array, single dimensional array, or separate floating point values. Additionally you can stick two vectors into your union for column or row access, if you so wish.

I want to briefly hit all the important methods without writing an entire book, so get ready for code snippets to be thrown at you.

```cpp
Mat2::Mat2( )
{
}

Mat2::Mat2( real radians )
{
  real c = std::cos( radians );
  real s = std::sin( radians );

  m00 = c; m01 = -s;
  m10 = s; m11 =  c;
}

Mat2::Mat2( real a, real b, real c, real d )
  : m00( a ), m01( b )
  , m10( c ), m11( d )
{
}

void Mat2::Set( real a, real b, real c, real d )
{
  m00 = a; m01 = b;
  m10 = c; m11 = d;
}

// Imagine there's assignment operators, SetIdentity, Zero, Transpose,
// Inversion (which I won't cover here), Abs (absolute values all elements),
// scalar multiplication/division/addition/subtraction, determinant,
// negation operator, etc. Don't forget vector/matrix multiplication!

void Mat2::SetRotation( real radians )
{
  real c( std::cos( radians ) );
  real s( std::sin( radians ) );

  m00 = c; m01 = -s;
  m10 = s; m11 =  c;
}
```

The first thing you should realize is that the default constructor does nothing. This is important. Often times you will create a matrix only to assign some value to it shortly thereafter. As an optimization, do not default-construct your matrix to zero values; force the user to use the Set function like so: mat.Set( 0, 0, 0, 0 ).

The interesting functions here are the rotation constructor and SetRotation. Each computes cosine and sine from a given radian value and caches the results, preventing unneeded additional calls to cosine and sine. Note the format in which sine and cosine are stored: m00 and m10 represent the transformed x axis, and m01 and m11 represent the transformed y axis. Each pair forms a column, and both columns can be viewed as unit vectors.

Multiplying a Mat2 with a vector will rotate the vector's x and y components around the origin. It is important to realize where your origin is before you apply a rotation. If you want to jump into an OBB's frame of reference you must first apply an inverse translation to place the OBB at the origin. This allows you to then apply the OBB's inverse rotation (perhaps with the inverse operator of your Mat2; see Box2D if you don't know how to invert a Mat2) and rotate about the origin (which is now the OBB's center).
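As a self-contained sketch of the column layout and matrix-vector product described above (a cut-down stand-in, not the article's full Mat2): m00/m10 form the first column (the rotated x axis) and m01/m11 the second (the rotated y axis), and the inverse of a pure rotation is simply its transpose.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Minimal illustration of the 2-by-2 rotation matrix described above
struct Mat2
{
  float m00, m01, m10, m11;

  explicit Mat2( float radians )
  {
    float c = std::cos( radians );
    float s = std::sin( radians );
    m00 = c; m01 = -s;
    m10 = s; m11 =  c;
  }

  // Standard matrix-vector product: rotates v about the origin
  Vec2 Multiply( const Vec2& v ) const
  {
    return { m00 * v.x + m01 * v.y, m10 * v.x + m11 * v.y };
  }

  // The inverse of a pure rotation matrix is its transpose
  Mat2 Transposed( void ) const
  {
    Mat2 t( 0.0f );
    t.m00 = m00; t.m01 = m10;
    t.m10 = m01; t.m11 = m11;
    return t;
  }
};
```

Rotating (1, 0) by 90 degrees yields (0, 1), and multiplying by the transpose rotates it back.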

# OBB Representation

Every oriented shape will need to store its orientation somehow. I suggest the following:

```cpp
struct OBB
{
  float radians;
  Mat2 u;

  // anything else needed here
};
```

The OBB should store its current orientation as both a single floating point value and a matrix representing that radian value as a rotation matrix. When you need to rotate the OBB during integration, you can just add or subtract a little bit to the radians value, and then call u.SetRotation( radians ) to update the rotation matrix. This is a simple and organized way to cache results from sine and cosine calls, and it minimizes the number of calls to these functions that you require.

# OBB to OBB

Now let's talk about the *big one*. How in the world can you see if two OBBs intersect? Both boxes are oriented, so the naive approach would involve a lot of complex trigonometric computations.

Let's make things easier: transform one OBB into the other OBB's frame of reference, and treat the transformed object as an AABB. Now the problem becomes much simpler.

First perform a separating axis check and find the axis of least penetration. In order to perform the SAT you must find a projected radius onto the axis you are currently testing.

If the sum of the projected radii from both OBBs is larger than the distance between the centers of the OBBs (along your respective axis), then they are intersecting on that axis. This method works for all convex polygons in 2D.

The way I perform this check is by taking the translation vector from A to B, let's call it T. I rotate T into A's frame of reference, subtract A's extent vector, and then subtract B's extent vector rotated into A's frame of reference. This results in a vector holding the overlap along the x and y axes for object A. Due to symmetry only two axes need to be tested per box. The same operation can be done for object B to find B's separating axis. If no separating axis is found the shapes are intersecting.
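Here's a standalone sketch of this two-axis check for A's axes (hypothetical names; a real implementation would reuse the Mat2 machinery above rather than raw sin/cos, and would keep the per-axis overlap values for the least-penetration query rather than just a yes/no):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Returns true if a separating axis exists on A's two axes.
// posA/posB are centers, rotA/rotB orientations in radians,
// extA/extB half-extent vectors.
bool SeparatedOnAxesOfA( Vec2 posA, float rotA, Vec2 extA,
                         Vec2 posB, float rotB, Vec2 extB )
{
  // Translation from A to B, rotated into A's frame (A's inverse rotation)
  float c = std::cos( -rotA ), s = std::sin( -rotA );
  Vec2 t = { posB.x - posA.x, posB.y - posA.y };
  Vec2 tInA = { c * t.x - s * t.y, s * t.x + c * t.y };

  // B's half-extents projected onto A's axes: absolute value of B's
  // rotation relative to A, applied to B's extent vector
  float rel = rotB - rotA;
  float rc = std::fabs( std::cos( rel ) ), rs = std::fabs( std::sin( rel ) );
  Vec2 bProj = { rc * extB.x + rs * extB.y, rs * extB.x + rc * extB.y };

  // A positive value means separation along that axis
  float sepX = std::fabs( tInA.x ) - extA.x - bProj.x;
  float sepY = std::fabs( tInA.y ) - extA.y - bProj.y;
  return sepX > 0.0f || sepY > 0.0f;
}
```

Running the same function with the arguments swapped covers B's axes; if neither call finds separation, the boxes intersect.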

This is where things start to get difficult. Since we just performed an early-out test, now we need to find the axis of least penetration. However you cannot just blindly perform floating point comparisons, due to floating point error accumulated while rotating the translation vector from A to B. You must bias your comparisons to favor one axis over another in a consistent manner. This is important for stability!

In the above picture, which set of contact points/normal is the “correct” one? Each axis of separation is very close to the same distance, so floating point error could account for which axis is chosen. If your simulation flops between the two you’ll end up with strange jitter and your simulation will be less believable. The solution is to just favor one axis over another using an error EPSILON value.

```cpp
// Compares two floats to see if a is greater than b by a slight margin of
// error. Small samples from both a and b are used in the comparison to bias
// the result in one direction, in order to stray away from mixed results due
// to floating point error.
inline bool BiasGreaterThan( real a, real b )
{
  const real k_biasRelative = 0.95f;
  const real k_biasAbsolute = 0.01f;

  // >= instead of > for NaN comparison safety
  return a >= b * k_biasRelative + a * k_biasAbsolute;
}
```

Here’s a function (you’re lucky I just gave it to you!) that will check which value is greater than the other. Each value is modified slightly, and a small bias is fed into the comparison based off of how large each value passed in is. This can be used to favor one axis of separation over another, until a threshold larger than floating point error is breached.

Carefully record which direction your normal goes (from A to B) depending on what axis is separating. This is a similar operation to the one found in AABB to AABB as seen above.

Once an axis is found, two line segments must be identified: the reference face and the incident face. The reference face is the face that corresponds to the axis of least penetration; if your axis of least penetration is on A, then your reference face is on A. The incident face is the one with the information we need to generate our manifold.

The incident face is the face on the other object that the reference face has hit, and we must compute it. All that needs to be done is find which face is most facing against the normal (has the most negative dot product). Looping through all faces performing dot products is the simplest way to achieve this. A more optimized algorithm is to follow the sign of the flipped normal. Your normal will have an x and y component (and z in 3D), and each component will be positive or negative. This gives you a total of four combinations of positive or negative.

First check to see if the normal is pointing more towards the x axis or the y axis (after you transform it into the incident face's frame of reference). Then check the sign of that component. You now know to which face your normal is most pointing.

Think of it this way: if the normal is pointing more towards the x axis (the absolute value of n.x is greater than the absolute value of n.y), then you are pointing more towards either the left or right face of your OBB (in the OBB's frame of reference). All you need to know from there is whether you're pointing left or right on the x axis, which is denoted by the sign of the normal.

Your incident face segment endpoints are the extents of the x axis of the OBB, which are aligned with the x axis in the OBB’s frame of reference. You can then take the OBB x half-extent and use the positive and negative version of it to form two points: (-x, 0) and (x, 0) where x is the half-extent of the OBB on its x axis. Rotate these points with the OBB’s rotation matrix, and then translate them into world space with the OBB’s position vector, and you now have your endpoints for your incident face in world space.
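The sign-following face pick described above can be sketched like this (a hypothetical standalone helper, not an engine routine; it assumes the collision normal has already been rotated into the incident OBB's local frame, and returns local-space endpoints to be rotated and translated into world space afterwards):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Picks the incident face endpoints in the incident OBB's local frame.
// The incident face is the one whose outward normal most opposes
// localNormal, hence the flipped signs below.
void IncidentFace( Vec2 localNormal, Vec2 halfExtents, Vec2* out0, Vec2* out1 )
{
  if( std::fabs( localNormal.x ) > std::fabs( localNormal.y ) )
  {
    // Pointing mostly along x: left or right face, chosen by sign
    float x = localNormal.x > 0.0f ? -halfExtents.x : halfExtents.x;
    *out0 = { x, -halfExtents.y };
    *out1 = { x,  halfExtents.y };
  }
  else
  {
    // Pointing mostly along y: bottom or top face
    float y = localNormal.y > 0.0f ? -halfExtents.y : halfExtents.y;
    *out0 = { -halfExtents.x, y };
    *out1 = {  halfExtents.x, y };
  }
}
```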

All that is left is to clip the incident face to the reference face side planes using Sutherland-Hodgman clipping. Here's a diagram showing this:

This is a fairly difficult thing to do unless you know your math fairly well. Each side plane can be simply computed once you know your reference normal. Here’s the process for getting two side planes:

```cpp
// Line representation in 2D is ax + by + c = 0;
// a and b form the normal of the line, and c is the distance to the origin
struct Line
{
  // Must be normalized
  Vec2 n; // Vec2( a, b )
  real c; // Distance from origin to line
};

// Create our reference face line
Line referenceFace;
Line posPlane, negPlane; // two side planes for the reference face

// Assuming our axis of least penetration was on B's x axis:
// normal is the collision normal, and the dot product computes c (ax + by)
referenceFace.Set( -normal, Vec2::DotProduct( b_pos, -normal ) + b_ext.x );

Vec2 planeNormal = bu.AxisY( ); // grab second column from Mat2
real c = Vec2::DotProduct( b_pos, planeNormal ); // ax + by
posPlane.Set( planeNormal, c + b_ext.y );
negPlane.Set( -planeNormal, -c + b_ext.y );
```

The above code was hand-derived by myself, but you’ll find something very similar within Box2D Lite (where I originally learned the math from). If this is confusing to you I suggest reading up on the various types of representations of lines in 2D.

Here’s another diagram I just found in my own source files:

```
//       y
//       ^  ->n           ^
//     +---+ ------posPlane--
// x < | i |\
//     +---+ c-----negPlane--
//            \ v
//             r
//
// r : reference face
// i : incident box
// c : clipped point
// n : incident normal
```

You might have noticed I’m storing the c value. This is important as the c value stored within the Line structure can be used to find the distance of a point to the line like so:

```cpp
// Assuming n is normalized simplifies the equation that is
// found here: http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html
real Line::DistanceToLine( const Vec2& point ) const
{
  return Vec2::DotProduct( point, n ) - c;
}
```

I’ll be nice and provide you my clipping notes I created for Sutherland-Hodgman clipping :)

```
// Sutherland-Hodgman clipping algorithm
//
//   out in      out in      out in      out in
//   s |          | s        s |          | s
//   \ |          | /        \ |          | /
//   \ |          |/          \|          | /
//   \ |          i            i          | /
//   \ |         /|            |\         | /
//   \ |        / |            | \        | /
//   e |       e  |            | e        | e
//
//   none      push i    push i, push e   push e
```

However, since we are clipping a line segment to a single plane the algorithm needs to be slightly modified. In some cases you need to push extra points, since Sutherland-Hodgman assumes you are clipping two polygons whose vertices form a loop. See Box2D Lite for a good implementation of the incident-to-line clipping. I however use my own hand-derived algorithm that works in a very similar way. Here is code for clipping a segment to a Line, assuming the Line is in the format of offset c and normal n:

```cpp
void Segment::Clip( const Line& line )
{
  // Retrieve signed distances from each endpoint to the line
  real d1 = line.DistanceToLine( v1 );
  real d2 = line.DistanceToLine( v2 );

  // Negative means behind the plane: keep those points
  // by adding them to the output array
  if(d1 <= 0) outVecArray.push_back( v1 );
  if(d2 <= 0) outVecArray.push_back( v2 );

  // If the points are on different sides of the plane
  if(d1 * d2 < 0) // less than to ignore -0.0f
  {
    // Push the intersection point with simple interpolation
    real alpha = d1 / (d1 - d2);
    outVecArray.push_back( v1 + alpha * (v2 - v1) );
  }
}
```

After clipping to the side planes floating point error must be accounted for. If our clipping process went as expected we must have two resulting points. If we end up with less than two output points that means floating point error has screwed us over, and we must treat the entire process as if the two OBBs are non-intersecting.

Assuming we have two points output from our clipping we then need to only consider points that are behind the reference face. Use the DistanceToLine function I provided above to check this. Record each point behind the reference face as a contact point!

# OBB to AABB

If you’ve been reading along diligently and deriving your own understanding, you should be able to figure out how to perform such a check. This test is the exact same as OBB to OBB, except with less rotating from one basis to another. You can recall that the OBB to OBB test rotated one OBB into the frame of the other completely, turning the test into an AABB to OBB test. The same thing can be done here without the preliminary change of basis, and perhaps some other small optimizations. I will leave the section out as a challenge for the reader.

# Conclusion

I hope you have a solid understanding of various types of hand-crafted intersection tests! Feel free to email me or comment here with questions or comments. I refer you to Box2D Lite’s source code as a reference to the material in this article, along with all of Erin Catto’s GDC slides, especially the one from 2007. The next article in this series is likely to talk about creating a simple O(N^2) broadphase, and how to cull out duplicate contact pairs. Until then this article along with the previous one is more than enough to create a simple physics engine. Be aware that with impulse resolution only one point of contact should be necessary. This article goes over grabbing multiple contact points, and this is because more advanced collision resolution techniques can make use of more contact points.

# Custom Physics Engine – Part 1: Impulse Resolution

# EDIT:

This series ended up on Tuts+, and these are sort of deprecated. Please visit Tuts+ to see the finalized articles.

# Personal Update

Beyond C++ reflection I’ve taken a huge liking to physics programming for games. Reflection in C++ is a largely “solved” area of study. Though resources on the internet may be few and far between for creating custom introspection, people that know how to create such systems have explored much of what is to be accomplished.

However physics is another story entirely. Up until just a few years ago physics for games was in a highly primitive state; techniques just hadn’t been created due to the inability of hardware to perform necessary computation. There are a lot of new and exciting problems to solve.

So this next year while attending DigiPen I’ll be focusing my studies around: engine architecture, data oriented design and optimization, C++ introspection and game physics. This last topic being the topic I’m going to start an article series on.

# Game Physics High Level Detail

A game physics engine performs some functionality for ensuring that shapes behave in a particular way. To state this one could say: a physics engine removes degrees of freedom from moving bodies, and enforces these imposed rules.

From here on out, until specified otherwise I’ll be talking specifically about convex rigid bodies within a physics engine. A rigid body is just a shape that is implicitly rigid; the engine does not naturally support deformation of a set of points that make up a shape. Using this implicit definition the mathematics behind manipulating such shapes can be known intuitively and implemented efficiently.

Here’s a short feature list of some features a custom physics system may provide, assuming the engine uses rigid bodies:

- Collision detection
- Collision resolution
- Linear movement, angular rotation and integration
- Raycasting
- Islanding/sleeping/portals
- Friction simulation
- Spatial partitioning/bounding volumes
- Fracturing/splitting
- Multi-part bodies
- Various types of shapes
- Advanced mathematical constraints (joints, motors, contact constraints, etc.)

In this article series I will attempt to talk about all of the above topics in my own time. I’ve learned a lot about physics and physics engines in a short amount of time and by documenting what I have learned I hope to solidify my understanding, as well as help out others to achieve what I have. I would not have learned nearly as much as I currently know without help from others so it is only natural to want to do the same.

The best example of an open source physics engine that employs much of the feature list shown above would be Box2D by Erin Catto. The rest of this article series will be detailing the physics engine that I myself have written. There are of course other methods I choose not to talk about, or just don’t know about.

# Architecture

There are two main objects that make up a physics engine: shapes and bodies. A body is a small package of data that defines intrinsic properties of a physics object. Properties such as density, restitution (elasticity), friction coefficients and inertia tensors, along with any other flags or settings, should be stored within the body. These bits of data are properties that can be isolated away from the shape definition. The shape itself is contained within the body through a pointer.

The shape definition defines what sort of shape a physics object represents. Some physics engines can attach multiple shapes to a body, which are referred to as fixtures. A shape stores any data required to define the shape itself, and provides various functions to operate on the shape, such as testing for line or point intersection, or generating a bounding volume.

Together the body and shape represent a physics object, which by the end of this article series will be able to perform a lot of interesting interactions with other physics objects.

All bodies should be contained within what is known as a scene or world. I refer to this object as a scene. The scene should contain a list of all live objects, as well as functionality for inserting or removing bodies from the scene. The scene should also have callbacks for processing shape or ray queries. A query just checks to see if any bodies intersect with something like a point or ray.

The scene has one particular function called step, which steps the scene forward in time by a single delta time (dt). This step function steps all objects forward in time by integration. The integration step just moves the objects forward by using their velocity, position and acceleration to determine their next position.
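As an illustrative sketch of the integration step (names are mine, not from any particular engine), a minimal semi-implicit Euler update computes acceleration from accumulated force, then velocity, then position:

```cpp
struct Vec2 { float x, y; };

struct Body
{
  Vec2 position;
  Vec2 velocity;
  Vec2 force;    // accumulated over the frame, cleared after the step
  float invMass; // 0 for static (infinite mass) bodies
};

// Semi-implicit Euler: velocity first, then position from the new velocity
void Integrate( Body& b, float dt )
{
  // a = F * (1 / m)
  b.velocity.x += b.force.x * b.invMass * dt;
  b.velocity.y += b.force.y * b.invMass * dt;
  b.position.x += b.velocity.x * dt;
  b.position.y += b.velocity.y * dt;
}
```

Updating velocity before position tends to be more stable than plain explicit Euler, and storing the inverse mass lets static bodies use invMass of zero so they never move.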

During the step collisions are detected and then resolved. Often times a broadphase of some sort is used to detect possible collisions, so that expensive collision operations run only when really needed.

All of this organization allows the user of your physics engine to worry about three main operations: creating bodies, removing bodies, and stepping the scene. The rest is automated and taken care of within the physics system's internals.

The last major isolated system would be the broadphase. There are two major phases in collision detection: the broad and narrow phases. The narrow phase is the one which determines if two shapes intersect or not. The broad phase determines if two shapes can possibly be intersecting or not. The broadphase should be constructed such that intersection queries are very very fast and simple. An axis-aligned bounding box (AABB) will suffice perfectly.

Once the broadphase chooses which objects could perhaps be colliding, it sends them off to the narrow phase. The narrow phase performs much more intensive operations to determine if two objects are colliding or not. The whole point of this is to reduce the amount of times the narrow phase has to be used, as it is expensive to operate with.
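The broadphase AABB query mentioned above can be sketched as follows (a minimal illustration, assuming a min/max corner representation): two AABBs overlap exactly when their intervals overlap on both axes.

```cpp
struct Vec2 { float x, y; };

struct AABB
{
  Vec2 min; // bottom-left corner
  Vec2 max; // top-right corner
};

// Cheap broadphase test: fail fast on the first separated axis
bool AABBOverlap( const AABB& a, const AABB& b )
{
  if( a.max.x < b.min.x || b.max.x < a.min.x ) return false;
  if( a.max.y < b.min.y || b.max.y < a.min.y ) return false;
  return true;
}
```

Only pairs that pass this test are handed to the narrow phase.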

Once the narrow phase finds a pair of bodies to be colliding information about the collision is gathered and placed into a small container called the manifold. Do not ask why it is called a manifold, for I have no idea and neither does anyone else seem to! The manifold contains three important pieces of information:

- Collision penetration depth
- Direction to resolve in
- Points of contact

These pieces of information are used to fill in formulas that are used to resolve the penetration and solve for a single unknown: the magnitude of the resolution vector. Here’s a small diagram:

It is also useful to store pointers or handles to the two objects that formed this collision info. This allows some useful functions to be placed into the manifold object directly: solve and resolve. Solve collects the three pieces of collision information. If no contact points are found, then there is no collision. If contact points are found, then resolve performs a resolution operation to force both objects to no longer be intersecting after integration.
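A hypothetical layout for such a manifold container might look like the following (field and method names are illustrative, not a prescribed interface):

```cpp
struct Vec2 { float x, y; };
struct Body; // body definition lives elsewhere

// Container for the collision information listed above,
// plus handles to the two colliding objects
struct Manifold
{
  Body* A;
  Body* B;
  Vec2 normal;       // direction to resolve in (from A to B)
  float penetration; // collision penetration depth
  Vec2 contacts[2];  // points of contact
  int contactCount;  // zero contacts means no collision

  void Solve( void );   // gather the collision information
  void Resolve( void ); // apply impulses to separate A and B
};
```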

# Velocity

Complex physics manipulation is performed on the velocities of objects. Trying to manipulate the positions of objects directly is very difficult, as it poses a problem that isn't linear. By working with velocity, the derivative of position, the problem is simplified. Most of the time we will only be dealing with velocity manipulation.

# Impulse Resolution in 2D (No Rotation or Friction)

The act of resolving collisions is something that isn’t covered very well on the internet. There are some scattered documents and details, though the information isn’t always easy to find. Most of what I know I learned by talking with other people, but I know most people will not have such an opportunity. Hopefully this article series can provide a strong resource for understanding and constructing a simple physics engine.

The toughest thing to code is, in my opinion, the resolution of collisions. There exists tons of information on collision detection and broadphases, and thus creating these portions of a physics engine is not too difficult. Some resources for collision detection are: Real-Time Collision Detection by Christer Ericson, and Game Physics Engine Development by Ian Millington. I have both of these books right next to me as I write this :)

Generating contact manifolds and resolving such manifolds are what most programmers get caught up on. So let's hit the ground running and tackle the portion of code that will bring your entire physics system to life: contact resolution.

The best type of contact resolution to start with is impulse resolution. The idea behind impulse resolution is to take your contact manifold and solve for a single velocity vector that can be added into an object's current velocity so that it does not penetrate the other object during the next frame. The reason for starting with impulse resolution is that it's quite simple and easy to get up and running, and more complicated and advanced techniques require you to understand impulse resolution anyway.

Now, the contact manifold is assumed to contain the direction of the velocity vector we are solving for. I will cover how to generate the contact manifold in another article in this series. The only unknown left to solve for is the magnitude of this vector. It so happens that it's possible to solve for this magnitude in relation to both objects. If this magnitude is known, you add velocity along the normal scaled by the magnitude for one object, and you do the same operation to the other object in the direction opposite to the manifold normal. Let's start from the top.

We have two objects moving along in our scene, and there exists a relative velocity between the two, with object A as the reference object at the origin:

```
Eq 1: VelocityRelative = VelocityA - VelocityB
```

The relative velocity can be expressed in terms of the collision normal (from the collision manifold) with a dot product:

```
Eq 2: VelocityRelative dot ManifoldNormal = (VelocityA - VelocityB) dot ManifoldNormal
```

This can be thought of as the relative normal velocity between the two objects. The next thing to include in this derivation is the coefficient of restitution (elasticity factor). Newton’s Law of Restitution states that:

```
Eq 3: VelocityBefore * Restitution = VelocityAfter
```

Often times the restitution identifier is specified by an e, or epsilon symbol. Knowing this it’s fairly simple to include it within our current equation:

```
Eq 4: VelocityRelativeAfter dot ManifoldNormal = -Restitution * ((VelocityA - VelocityB) dot ManifoldNormal)
```

Now we need to go to another topic and model an impulse. I have said the term “impulse” quite a few times without defining it, so here is the definition I use: an impulse is an instantaneous change in velocity. We will use an impulse to change the velocity of an object. Here’s how you could use an impulse to modify the velocity of a single object:

```
Eq 5: VelocityNew = VelocityOld * Impulse
```

Here Impulse would be a scalar floating point value. This isn't too useful however, as it only scales an object's current velocity, making the object move slower or faster along the positive or negative direction of the vector.

What is needed is a way to do component-wise modification of the vector, so we can make it point slightly in one direction or another, allowing objects to make turns and change directions.

```
Eq 6: VelocityNew = VelocityOld + Impulse * Direction
```

In the above equation we can take a direction vector with a magnitude of 1, and scale it by our impulse. By adding this new vector to our velocity we can then modify the velocity in any way we wish. Our end goal is to solve for our Impulse scalar that will separate two objects from collision, so in the end we’ll need to distribute this scalar across two equations in terms of velocity.

Let's start with a simple momentum equation:

```
Eq 7: Momentum = Mass * Velocity
```

An impulse is defined to be a change in momentum. Thus we get:

```
Eq 8: Impulse = MomentumAfter - MomentumBefore
      Impulse = Mass * VelocityAfter - Mass * VelocityBefore
```

To isolate the velocity after the impulse, we can rearrange into:

```
Eq 9: VelocityNew = VelocityOld + (Impulse * Direction) / Mass
```

Now let's change equation 4 into one that contains velocities under the influence of impulses. We want to express each new velocity as one acted upon by an impulse, so we substitute in equation 9:

```
Eq 10:
(VelocityA + Impulse * Direction / MassA - (VelocityB - Impulse * Direction / MassB)) dot Direction
  = -Restitution * (VelocityRelativeAtoB dot Direction)

Simplification (using Direction dot Direction = 1):
(VelocityRelativeAtoB dot Direction) + Impulse * (1 / MassA + 1 / MassB)
  + Restitution * (VelocityRelativeAtoB dot Direction) = 0

Isolate Impulse:
          -(1 + Restitution) * (VelocityRelativeAtoB dot Direction)
Impulse = ---------------------------------------------------------
                              1       1
                            ----- + -----
                            MassA   MassB
```

Remember that the impulse is a scalar. Also note that all values on the right hand side of the equation are all known, including the Direction which was solved for by the collision detection.

All that is left here is to distribute this scalar impulse over each object’s velocity vector proportional to the total mass of the system (system being collision between both objects).

The total mass is MassA + MassB, so to get an even distribution you do: impulse * Mass / TotalMass. To simplify this one could use the following:

```
Eq 11 (applying an impulse proportional to the mass of an object):
VelocityNew = VelocityOld + (1 / Mass) * Impulse * Direction
```

This can be done twice, once per object. The total impulse will be applied, except only a portion of the impulse will be applied to each object. This ensures smaller objects move more than larger ones during impact.

One thing you must ensure is that if the velocities are separating (objects moving away from one another) that you do nothing. Here’s a sample version of finalized code for impulse resolution in 2D without rotation or friction:

```cpp
//     -(1 + epsilon) * N dot Vrel
// j = ---------------------------
//         invM_1 + invM_2
void Manifold::ApplyImpulse( void )
{
  Vec2 rv = B->m_velocity - A->m_velocity;
  real contactVel = Vec2::DotProduct( rv, normal );

  // Do not resolve if velocities are separating
  if(contactVel > 0)
    return;

  // Calculate restitution
  real e = std::min( A->m_material.restitution, B->m_material.restitution );

  // Calculate impulse scalar
  real j = -(1.0f + e) * contactVel;
  j /= A->m_massdata.inv_mass + B->m_massdata.inv_mass;

  // Apply impulse
  Vec2 impulse = j * normal;
  A->m_velocity -= A->m_massdata.inv_mass * impulse;
  B->m_velocity += B->m_massdata.inv_mass * impulse;
}
```

# Rotation

Now that we have covered resolution without rotation or friction, adding them into our final equation (equation 10) will be quite a bit simpler. We need to understand the concept of "mass" in terms of rotation: the inertia tensor of an object describes how "hard it is to turn" about some axis. In 2D there's only a single axis of rotation (along the z axis), so a single scalar is all that's needed. Here's a new version of equation 11.

```
Eq 12:
AngularVelocityNew = AngularVelocityOld +
  (1 / InertiaTensor) * (PointRelativeToCenterOfMass cross Direction) * Impulse

Shortened: w' += I^-1 * (r x (n * j))
```

For the sake of brevity here's the final equation, where r is the vector from the center of mass to a point on the body (the contact point). The velocity of a point on the body relative to its center is given by:

```
Eq 13: VelocityPoint = VelocityCenterMass + AngularVelocity cross r
```

Final equation (Direction substituted for n):

```
Eq 14:
          -(1 + Restitution) * (VelocityRelativeAtoB dot n)
Impulse = -------------------------------------------------
            1       1     (rA cross n)^2   (rB cross n)^2
          ----- + ----- + -------------- + --------------
          MassA   MassB   InertiaTensorA   InertiaTensorB
```

And there we have it! This will solve for a separating impulse given a specific contact point. Now you might have noticed there are a couple cross products that are kinda strange in 2D. I won’t cover what they are here, but they still hold true. I’ll just give them to you:

```
Cross product of a vector V and a scalar s in 2D:
  V' = ( s * V.y, -s * V.x )

Cross product of a scalar s and a vector V in 2D:
  V' = ( -s * V.y, s * V.x )
```

```
Cross product of two vectors in 2D (yields a scalar):
  A cross B = A.x * B.y - A.y * B.x
```

# Friction

Friction is the simplest thing to do in this entire resolution article, assuming we’re dealing with 2D! I actually haven’t derived this portion myself, so again I’m going to just throw the equations at you. Assuming we have our resolution direction in the manifold (the collision normal), we can calculate a tangent vector, which is a vector perpendicular to the collision normal. Just replace all instances of n in equation 14 with this tangent vector.

To solve for the tangent vector you do:

```
TangentVelocity = VRelAtoB - (VRelAtoB dot n) * n
TangentVector   = Normalize( TangentVelocity )
```

Again, just take the above TangentVector and replace it in equation 14 for n.

There is something I have missed. When solving for the force of friction there's a maximum: the magnitude of the normal impulse times the coefficient of friction (from the body definition). This is due to Coulomb's friction law. Treat the coefficient of friction times the normal impulse as your "friction cap" when solving for your tangential impulse. This is just a simple clamping of the final impulse vector.
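Here's a standalone sketch of the tangent impulse with the Coulomb cap (all names are illustrative; it assumes the normal impulse magnitude j has already been computed, and folds the mass terms into a single invMassSum for brevity):

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

static float Dot( Vec2 a, Vec2 b ) { return a.x * b.x + a.y * b.y; }

// relVel : relative velocity of B with respect to A
// n      : collision normal (normalized)
// invMassSum : 1/MassA + 1/MassB
// j      : normal impulse magnitude already solved for
// mu     : coefficient of friction
float FrictionImpulse( Vec2 relVel, Vec2 n, float invMassSum, float j, float mu )
{
  // Tangent direction: relative velocity with the normal component removed
  float vn = Dot( relVel, n );
  Vec2 t = { relVel.x - vn * n.x, relVel.y - vn * n.y };
  float len = std::sqrt( Dot( t, t ) );
  if( len < 1e-6f )
    return 0.0f; // no tangential motion, no friction impulse
  t.x /= len; t.y /= len;

  // Magnitude along the tangent, same form as the normal impulse
  float jt = -Dot( relVel, t ) / invMassSum;

  // Coulomb's law: |jt| <= mu * j
  float cap = mu * j;
  return std::max( -cap, std::min( jt, cap ) );
}
```

The returned scalar is then applied along the tangent exactly like the normal impulse is applied along n.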

# Penetration Resolution

Due to floating point error energy will always be lost from the system over time. As such, when objects start coming to a rest they will sink into each other due to gravity. How to solve? Well the solution is to actually just handle the leftover penetration manually by just moving the objects apart directly. This will prevent objects from sinking into one another, though doesn’t add any energy into the system.

To resolve penetration, simply move each object along the manifold normal in opposite directions. However there’s an art to doing so; you need to be gentle. If you just resolve 100% of the penetration each frame objects underneath other objects will jitter violently due to the naive penetration resolution scheme I am presenting. To avoid this, try only resolving a percentage of the leftover penetration, perhaps 20 to 80%.

Additionally you only want to resolve penetration if the penetration depth after the impulse is applied is above some constant arbitrary (but small) threshold, (try 0.01 to 0.1). If the penetration is below this, then don’t move either object.

This method of penetration resolution is called linear projection. Here’s a snippet of C++ code demonstrating this:

```cpp
void Manifold::PositionalCorrection( void )
{
  const real k_slop = 0.01f; // Penetration allowance
  const real percent = 0.2f; // Penetration percentage to correct

  Vec2 correction = (std::max( penetration - k_slop, 0.0f ) /
    (A->m_massdata.inv_mass + B->m_massdata.inv_mass)) * percent * normal;

  A->m_tx.position -= A->m_massdata.inv_mass * correction;
  B->m_tx.position += B->m_massdata.inv_mass * correction;
}
```

# Iterative Solving

There is one additional tweak you can make to increase the believability of your physics simulation: iterate over all contacts and solve and resolve the impulses many times (perhaps 5 to 20 iterations). Since the large equation 14 contains relative velocity, each iteration feeds in the previous result to come up with a new one.

This will allow the step to propagate energy throughout multiple objects contacting one another within a single timestep. This is essential for allowing the “billiards balls” effect to ensue.

This simple iteration is a very easy way to vastly improve the results of resolution.

# Resources/Links

- http://chrishecker.com/images/e/e7/Gdmphys3.pdf
- http://www.cs.cmu.edu/~baraff/sigcourse/notesd2.pdf