A new Magnum feature provides efficient compile-time and runtime CPU
detection and dispatch on x86, ARM and WebAssembly. The core idea behind it
allows adding new variants without having to write any dispatching code.
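To give a rough idea of the approach, here is a minimal sketch of
inheritance-based tag dispatch with hypothetical names, not the actual
Magnum API: adding a faster variant means writing one more overload, and
overload resolution picks the best one the target supports.

    /* A minimal sketch of inheritance-based tag dispatch; names are
       hypothetical, not the actual Magnum API. Tag types form an
       inheritance chain, so overload resolution picks the most capable
       variant available, and adding a new variant is just one more
       overload with no dispatch table to update. */
    #include <cstddef>

    namespace cpu {
        struct Scalar {};
        struct Sse2: Scalar {};
        struct Avx2: Sse2 {};
    }

    /* Baseline variant, always available */
    int sumImplementation(cpu::Scalar, const int* data, std::size_t size) {
        int result = 0;
        for(std::size_t i = 0; i != size; ++i) result += data[i];
        return result;
    }

    /* Hypothetical AVX2 variant. Real code would use intrinsics here and
       get compiled only when the target supports them. */
    int sumImplementation(cpu::Avx2, const int* data, std::size_t size) {
        /* ... a vectorized loop would go here ... */
        return sumImplementation(cpu::Scalar{}, data, size);
    }

    /* Compile-time dispatch: pick the best tag the compiler is allowed to
       use, overload resolution does the rest. Runtime dispatch would pick
       the tag based on CPUID instead. */
    #ifdef __AVX2__
    typedef cpu::Avx2 DefaultCpu;
    #else
    typedef cpu::Scalar DefaultCpu;
    #endif

    int sum(const int* data, std::size_t size) {
        return sumImplementation(DefaultCpu{}, data, size);
    }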
Rectangle packing doesn’t actually have to be an NP-hard problem if we
don’t need to solve the most general case. In this post I present a simple
yet optimal algorithm for packing power-of-two textures into a texture
array.
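To illustrate why the power-of-two case is easy, below is a sketch of one
possible packing routine, assuming square power-of-two sizes no larger than
the (also power-of-two) layer size; it isn't necessarily the exact
algorithm from the article. Processing sizes from largest to smallest and
walking slots in Morton order leaves every layer except possibly the last
completely full, so the layer count can't be improved.

    /* A sketch of one possible packing routine for square power-of-two
       sizes into square power-of-two layers, not necessarily the exact
       algorithm from the article. Sizes are processed from largest to
       smallest and slots are walked in Morton (Z-)order, so nothing
       overlaps and every layer except possibly the last is fully used. */
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Placement {
        std::uint32_t layer, x, y;
    };

    /* Morton decode helper: keeps the even-indexed bits of x and packs
       them into the low half */
    std::uint32_t compactBits(std::uint32_t x) {
        x &= 0x55555555u;
        x = (x | (x >> 1)) & 0x33333333u;
        x = (x | (x >> 2)) & 0x0f0f0f0fu;
        x = (x | (x >> 4)) & 0x00ff00ffu;
        x = (x | (x >> 8)) & 0x0000ffffu;
        return x;
    }

    std::vector<Placement> packPowerOfTwo(
        const std::vector<std::uint32_t>& sizes, std::uint32_t layerSize)
    {
        /* Process from largest to smallest, remembering original order */
        std::vector<std::size_t> order(sizes.size());
        for(std::size_t i = 0; i != order.size(); ++i) order[i] = i;
        std::sort(order.begin(), order.end(),
            [&](std::size_t a, std::size_t b) { return sizes[a] > sizes[b]; });

        std::vector<Placement> out(sizes.size());
        std::uint32_t layer = 0, freeSlot = 0, slotSize = layerSize;
        for(std::size_t i: order) {
            /* Halving the slot size turns every remaining slot into four */
            while(slotSize > sizes[i]) {
                slotSize >>= 1;
                freeSlot <<= 2;
            }

            /* Layer full, start a new one */
            const std::uint32_t slotsPerLayer =
                (layerSize/slotSize)*(layerSize/slotSize);
            if(freeSlot == slotsPerLayer) {
                ++layer;
                freeSlot = 0;
            }

            /* Morton decode of the slot index gives the position inside
               the layer */
            out[i] = {layer,
                      compactBits(freeSlot)*slotSize,
                      compactBits(freeSlot >> 1)*slotSize};
            ++freeSlot;
        }
        return out;
    }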
Redesigned geometry pipeline together with massive additions to
importer plugins, new debugging, visualization and profiling tools, new
examples including fluid simulation and raytracing, instancing in builtin
shaders and a gallery of cool projects to get inspired by.
Flexible and efficient mesh representation, custom attributes, new
data types and a ton of new processing, visualization and analysis
tools. GPU-friendly geometry storage as it should be in the 21st century.
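As a rough sketch of what such GPU-friendly storage with custom attributes
can boil down to (illustrative names only, not the actual Magnum API): a
single interleaved buffer plus self-describing attribute metadata that both
CPU-side tools and the GPU vertex setup can consume.

    /* A minimal sketch of the idea behind flexible, GPU-friendly mesh
       storage, with illustrative names rather than the actual Magnum API */
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    /* Interleaved vertex layout, including a custom per-vertex attribute */
    struct Vertex {
        float position[3];
        float normal[3];
        std::uint32_t objectId; /* custom attribute, lives right next to
                                   the builtin ones */
    };

    /* Generic attribute description, enough for CPU-side processing as
       well as a glVertexAttribPointer()-style GPU setup */
    struct AttributeInfo {
        const char* name;
        std::size_t offset; /* offset of the attribute inside a vertex */
        std::size_t stride; /* distance between two consecutive vertices */
    };

    struct MeshBlob {
        std::vector<Vertex> vertices;          /* one interleaved buffer */
        std::vector<std::uint16_t> indices;
        std::vector<AttributeInfo> attributes; /* self-describing layout */
    };

    MeshBlob makeTriangle() {
        MeshBlob mesh;
        mesh.vertices = {
            {{-1.0f, -1.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, 7},
            {{ 1.0f, -1.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, 7},
            {{ 0.0f,  1.0f, 0.0f}, {0.0f, 0.0f, 1.0f}, 7}
        };
        mesh.indices = {0, 1, 2};
        mesh.attributes = {
            {"position", offsetof(Vertex, position), sizeof(Vertex)},
            {"normal",   offsetof(Vertex, normal),   sizeof(Vertex)},
            {"objectId", offsetof(Vertex, objectId), sizeof(Vertex)}
        };
        return mesh;
    }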
The new release brings Python bindings, Basis Universal texture
compression, improved STL interoperability, better Unicode experience for
Windows users, a more efficient Emscripten application implementation,
single-header libraries, new OpenGL driver workarounds and much more.
During the past four months, Magnum began its adventure into the
Python world. Not just with some autogenerated bindings and not just with
some autogenerated Sphinx docs — that simply wouldn’t be Magnum enough.
Brace yourselves, this article will show you everything.
A new example showcases the capabilities of the DART integration. Let’s
dive into robotics and explain what DART is able to do and how it compares
to Bullet.
Magnum is developed with a “Zen Garden” philosophy
in mind, focusing on productivity, predictability and ease of use. Let’s
see how that can extend beyond just the library itself — into your daily
workflow.
Magnum recently gained a new data structure usable for easy data
description, transformation and inspection, opening lots of new
possibilities for more efficient workflows with pixel, vertex and animation
data.
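For illustration, here is a heavily simplified sketch of what a strided
view along these lines could look like; the class and its interface are
hypothetical, and the real data structure offers much more, such as
multiple dimensions and slicing.

    /* A heavily simplified, hypothetical sketch of a strided view, meant
       only to illustrate the concept */
    #include <cstddef>
    #include <cstdint>

    template<class T> class StridedView {
        public:
            StridedView(T* data, std::size_t size, std::ptrdiff_t stride):
                _data{reinterpret_cast<char*>(data)}, _size{size},
                _stride{stride} {}

            std::size_t size() const { return _size; }

            /* Element access jumps by a byte stride, so the view can step
               over interleaved vertex data, image rows, keyframe structs
               and similar without copying anything */
            T& operator[](std::size_t i) const {
                return *reinterpret_cast<T*>(_data + std::ptrdiff_t(i)*_stride);
            }

        private:
            char* _data;
            std::size_t _size;
            std::ptrdiff_t _stride;
    };

    void usage() {
        /* Poking at just the green channel of four interleaved RGBA8
           pixels, without touching the rest */
        std::uint8_t pixels[4*4]{};
        StridedView<std::uint8_t> green{pixels + 1, 4, 4};
        for(std::size_t i = 0; i != green.size(); ++i) green[i] = 255;
    }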