Coming to an established workflow for the first time is always an interesting experience because I don’t have any “it’s been like this since forever” insights forming my opinion yet. This time it’s about quaternion interpolation and a sudden optimization opportunity.

Traditionally, quaternion slerp() in Magnum was done like this:

\begin{array}{rcl} \theta & = & \arccos \left( \frac{q_A \cdot q_B}{|q_A| |q_B|} \right) = \arccos(q_A \cdot q_B) \\[5pt] q_{SLERP} & = & \cfrac{\sin((1 - t) \theta) q_A + \sin(t \theta) q_B}{\sin(\theta)} \end{array}
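As a sketch of the formula above, here's a naive implementation using a hypothetical plain `Quat` struct instead of Magnum's Math::Quaternion. It assumes unit quaternions and, like the bare formula, has no guard against θ approaching zero:

```cpp
#include <cassert>
#include <cmath>

/* Hypothetical stand-in for Math::Quaternion, stored as (w, x, y, z)
   and assumed to be unit-length */
struct Quat { float w, x, y, z; };

float dot(const Quat& a, const Quat& b) {
    return a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
}

/* slerp(qA, qB, t) = (sin((1 - t)θ) qA + sin(tθ) qB) / sin(θ),
   θ = arccos(qA·qB); no shortest-path handling, no guard for θ ≈ 0 */
Quat slerp(const Quat& a, const Quat& b, float t) {
    const float d = dot(a, b);
    const float theta = std::acos(d);
    const float s = std::sin(theta);
    const float fa = std::sin((1.0f - t)*theta)/s;
    const float fb = std::sin(t*theta)/s;
    return {fa*a.w + fb*b.w, fa*a.x + fb*b.x,
            fa*a.y + fb*b.y, fa*a.z + fb*b.z};
}
```

For example, interpolating halfway between the identity and a 90° rotation around Z gives the expected 45° rotation.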

Nothing obviously wrong there, right? Or at least I thought so. While implementing all sorts of interpolation methods for the new Animation framework that’s part of the upcoming Magnum release, I discovered that the Animated Triangle glTF sample is rotating in a funny way: first it goes three quarters of the way and then instead of finishing the circle it goes backwards. Apparently I was not the only one with this problem.

In a flashback to four years ago, when I was implementing a simple animation system for 2D touchscreen UIs, my first thought was that the animation data is wrong: instead of going from 270° to 360°, the animation asks to go from 270° to 0°, somehow the glTF players and importers interpret that as going forward instead of backward, and nobody questioned the behavior until now.
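The difference between the two requests shows up directly in the sign of the quaternion dot product. A sketch, with a hypothetical helper that builds a rotation around the Z axis (only the w and z components are nonzero, so the rest is omitted):

```cpp
#include <cassert>
#include <cmath>

/* Rotation by `deg` degrees around the Z axis, as (w, z) only;
   hypothetical helper for illustration */
struct QuatZ { float w, z; };

QuatZ rotationZ(float deg) {
    const float half = deg*(3.14159265f/180.0f)/2.0f;
    return {std::cos(half), std::sin(half)};
}

float dotZ(QuatZ a, QuatZ b) { return a.w*b.w + a.z*b.z; }
```

Here dotZ(rotationZ(270), rotationZ(0)) comes out as roughly -0.707, so a naive slerp takes the long way around, while dotZ(rotationZ(270), rotationZ(360)) is roughly +0.707 and interpolates forward as intended.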

Seeing the comment about which viewer was used for verification of the sample, I first thought these implementations were a bit “special” and it’s not usual to do it like this. Turns out I was wrong (of course) — the Shortest Path quaternion slerp is the common way:

\begin{array}{rcl} d & = & q_A \cdot q_B \\ q'_A & = & \begin{cases} \phantom{-}q_A, & d \ge 0 \\ -q_A, & d < 0 \end{cases} \\[15pt] \theta & = & \arccos(|d|) \\ q_{SLERP} & = & \cfrac{\sin((1 - t) \theta) q'_A + \sin(t \theta) q_B}{\sin(\theta)} \end{array}
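The flip on top of the plain formula can be sketched like this, again with a hypothetical `Quat` struct rather than Magnum's actual Math::Quaternion, assuming unit quaternions:

```cpp
#include <cassert>
#include <cmath>

/* Hypothetical stand-in for Math::Quaternion, assumed unit-length */
struct Quat { float w, x, y, z; };

float dot(const Quat& a, const Quat& b) {
    return a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
}

Quat negated(const Quat& q) { return {-q.w, -q.x, -q.y, -q.z}; }

/* Shortest-path variant: if qA·qB < 0, interpolate from -qA instead,
   and take arccos of |d| so θ stays in [0, π/2] */
Quat slerpShortestPath(Quat a, const Quat& b, float t) {
    const float d = dot(a, b);
    if(d < 0.0f) a = negated(a);
    const float theta = std::acos(std::abs(d));
    const float s = std::sin(theta);
    const float fa = std::sin((1.0f - t)*theta)/s;
    const float fb = std::sin(t*theta)/s;
    return {fa*a.w + fb*b.w, fa*a.x + fb*b.x,
            fa*a.y + fb*b.y, fa*a.z + fb*b.z};
}
```

With this variant, interpolating halfway from a 270° Z rotation to the identity gives a 315° (i.e. -45°) rotation, continuing forward instead of unwinding three quarters of a turn.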

But the seed of doubt had already been planted and so I was still unsure if this is the right way to do it — the shortest-path choice basically takes one degree of freedom away from the user. Googling around, I found various people asking how to bypass the shortest path aspect (Unity Forum, Reddit r/Unity3D) and getting extremely dense replies along the lines of “you are asking for the impossible”, “there are infinite ways to do that” or “go through Euler angles instead”.

Give the users a choice

To save Magnum users from such atrocities, I decided to provide support for both shortest-path and non-shortest-path interpolation. I named the functions Math::slerp() and Math::slerpShortestPath() (and not slerp() and slerpNotShortestPath()) to suggest that the one with the longer name does something extra and might not always be the best choice.

Here’s the status of such support in various engines I checked. Having support for basic quaternion lerp next to slerp is crucial, as it’s just a vector interpolation with a renormalization step after — a much faster operation suited for cases where precision is not as important (such as most animations with dense keyframes):

javax.vecmath: Quat4f.interpolate(); I was not able to find any other variants.
javagl: SlerpQuaternionInterpolator.interpolate(), based on the javax.vecmath implementation above; I was not able to find any other variants.
Unity 3D: Quaternion.Lerp() and Quaternion.Slerp(), both shortest-path. Non-shortest-path is reportedly impossible (Unity Forum, Reddit r/Unity3D).
Unreal Engine: FQuat::FastLerp() (shortest-path, but doesn't renormalize), FQuat::Slerp() and FQuat::SlerpFullPath(). A non-shortest-path lerp has to be hidden in there somewhere (probably a plain vector lerp would do, since FastLerp() doesn't renormalize either).
id Tech 4 (Doom 3): idQuat::Slerp(); I was not able to find any other variants.
three.js: Quaternion.slerp(); I was not able to find any other variants.
GLM: glm::lerp() and glm::slerp(); note that even though the names are similar, one does a shortest-path optimization while the other does not, leading to confusing behavior.
Eigen: Eigen::Quaternion::slerp(), the only implementation where you have to do a weird a.slerp(b, t) instead of slerp(a, b, t). I was not able to find any other variants; even this one was hard to find.
Magnum::Math: Math::lerp(), Math::lerpShortestPath(), Math::slerp(), Math::slerpShortestPath().

The performance aspect

Besides giving the users more control, there is also the performance side of things. While I originally didn't expect the extra branch to have a significant effect in slerp, my thinking was that it would definitely add something to the basic lerp, since otherwise the dot product wouldn't be needed at all:

\begin{array}{rcl} d & = & q_A \cdot q_B \\[5pt] q'_A & = & \begin{cases} \phantom{-}q_A, & d \ge 0 \\ -q_A, & d < 0 \end{cases} \\[15pt] q_{LERP} & = & \cfrac{(1 - t) q'_A + t q_B}{|(1 - t) q'_A + t q_B|} \end{array}
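The plain lerp can be sketched as follows (same hypothetical `Quat` struct as before, not the Magnum implementation); the d and q'_A steps from the formula above are exactly what is absent here, leaving just the vector interpolation and the renormalization:

```cpp
#include <cassert>
#include <cmath>

/* Hypothetical stand-in for Math::Quaternion */
struct Quat { float w, x, y, z; };

float dot(const Quat& a, const Quat& b) {
    return a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
}

/* Componentwise vector interpolation followed by renormalization;
   no dot product, no branch */
Quat lerp(const Quat& a, const Quat& b, float t) {
    const Quat r{(1.0f - t)*a.w + t*b.w, (1.0f - t)*a.x + t*b.x,
                 (1.0f - t)*a.y + t*b.y, (1.0f - t)*a.z + t*b.z};
    const float len = std::sqrt(dot(r, r));
    return {r.w/len, r.x/len, r.y/len, r.z/len};
}
```

At the midpoint between two unit quaternions this even agrees with slerp exactly; the two diverge only at asymmetric values of t, where lerp doesn't maintain constant angular velocity.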

To verify the above assumption, I benchmarked the Math::lerp(), Math::lerpShortestPath(), Math::slerp() and Math::slerpShortestPath() implementations in latest Magnum master (mosra/magnum@4b7dab1). Benchmark code for reference is here.

Benchmark results (CPU time, Linux x64, GCC 8.1 -O3, Core i7 8th gen); the "faster" percentages compare against the shortest-path variants after subtracting the baseline overhead:

baseline (benchmark overhead)    2.43 ± 0.13 ns
lerpShortestPath()               7.27 ± 0.25 ns
lerp()                           6.53 ± 0.19 ns   (~15% faster)
slerpShortestPath()             48.91 ± 1.91 ns
slerp()                         37.81 ± 1.8 ns    (~24% faster)

The big difference with slerp surprised me, since I assumed the time spent in the \arccos() calculation would hide most of the branching overhead. The gap probably points to speculative execution done by the CPU, where many things get calculated twice and in the end only half of the results are used.

Implementation in Magnum — let’s fix the data instead

In Magnum, users now have the choice to use any interpolation variant they want. Since shortest-path interpolation is used most commonly, Math::slerpShortestPath() is the default interpolator picked when you specify Animation::Interpolation::Linear for quaternion tracks. That’s the least surprising behavior and if you don’t like the choice, simply pass some other interpolator function directly.

But what to do with imported animation data? Since that’s where interpolation will get used most, it would be nice to have some optimization opportunity there too.

Turns out that, unlike the trigonometry in slerp, which is hard to get rid of, the shortest-path flip can be optimized away entirely: just patch the data on import! (Thanks for the hint, @Squareys!) Since mosra/magnum-plugins@bba82bf, the TinyGltfImporter plugin by default patches quaternions in linearly interpolated rotation tracks so that there's always the shortest path from one keyframe to the next. The code that does that is just this:

Containers::ArrayView<Quaternion> values;
Float flip = 1.0f;
/* Flip every quaternion that's on the opposite hemisphere from its
   predecessor; i + 1 < size() also avoids an underflow on empty tracks */
for(std::size_t i = 0; i + 1 < values.size(); ++i) {
    if(Math::dot(values[i], values[i + 1]*flip) < 0) flip = -flip;
    values[i + 1] *= flip;
}
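The same pass can be sketched standalone outside of Magnum, with a hypothetical patchShortestPath() over a plain `Quat` struct mirroring the importer loop above:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

/* Hypothetical stand-in for Math::Quaternion */
struct Quat { float w, x, y, z; };

float dot(const Quat& a, const Quat& b) {
    return a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
}

/* Patch a rotation track in place so each keyframe lies on the same
   hemisphere as the previous one; a plain interpolator then never
   needs the shortest-path flip at runtime */
void patchShortestPath(std::vector<Quat>& values) {
    float flip = 1.0f;
    for(std::size_t i = 0; i + 1 < values.size(); ++i) {
        if(dot(values[i], {flip*values[i + 1].w, flip*values[i + 1].x,
                           flip*values[i + 1].y, flip*values[i + 1].z}) < 0.0f)
            flip = -flip;
        values[i + 1].w *= flip;
        values[i + 1].x *= flip;
        values[i + 1].y *= flip;
        values[i + 1].z *= flip;
    }
}
```

After patching, every adjacent keyframe pair has a non-negative dot product, so plain slerp and shortest-path slerp produce the same result on the track.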

Then, once you have the imported Trade::AnimationData instances, you can supply a different interpolator of your choice to rotation tracks either directly with Animation::TrackView::at() or by adding them to the player using Animation::Player::addRawCallback():

Animation::TrackView<Float, Quaternion> track;
Quaternion rotation = track.at(Math::slerp, frame);

The glTF animation import patching is configurable with a runtime option, so if you don’t want it for some reason, simply flip the switch back to false:

std::unique_ptr<Trade::AbstractImporter> importer =
    manager.loadAndInstantiate("TinyGltfImporter");
importer->configuration().setValue("optimizeQuaternionShortestPath", false);

And that’s it! As you have probably guessed from the above overly terse code snippets, there’s much more to say about the new Animation library. Stay tuned for the next posts, and thank you for reading!