How Magnum does GFX API enum mapping
Engines supporting more than one graphics backend very often need to translate various enum values — pixel formats, primitive types etc. — from a generic API-agnostic value to the one used by a particular implementation, in the fastest possible and most robust way.
Previous work
Historically, before the design of Magnum got reworked to support more than just one graphics API for the 2018.04 release, things were simple. There was just OpenGL and thus the engine could afford to directly hardcode the OpenGL-specific values — so the then-named PixelFormat::RGBA was GL_RGBA and so on. This is also the fastest possible way: no big mapping tables, no problems with slow inverse mapping, just directly aliasing the values.
Second fastest is the approach suggested by @g_truc in Robust and efficient translations in C++ using tables with zero-based enumerations — having zero-based enums and a one-way mapping table, into which you index. Apart from the mapping table, which needs a linear amount of memory scaling with the number of values, such a way has O(1) time complexity, so pretty good. However the proposed solution involves adding ugly sentinel values to the enums and, as the article itself already points out, adding values anywhere else than at the end of the enum is very error-prone, not to mention value reordering, and the only way to avoid that is testing every value. And you can forget about easy inverse mapping.
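To illustrate, here is a minimal sketch of that table-based approach; the names, values and the Count sentinel below are purely illustrative, not taken from Magnum or the linked article:

```cpp
#include <cstddef>

/* Illustrative names and values only. GL values are written as plain hex
   constants to keep the sketch self-contained. */
enum class PixelFormat: unsigned {
    RGBA8, /* = 0, the enum value doubles as a table index */
    RGB8,
    /* new values may only be appended here, never inserted or reordered */
    Count  /* the "ugly sentinel value" mentioned above */
};

constexpr unsigned PixelFormatTable[] {
    0x8058, /* GL_RGBA8 */
    0x8051  /* GL_RGB8 */
};

/* Catches a table that's too short or too long, but not reordered values */
static_assert(sizeof(PixelFormatTable)/sizeof(PixelFormatTable[0]) ==
    std::size_t(PixelFormat::Count), "mapping table out of sync");

unsigned toGL(const PixelFormat format) {
    return PixelFormatTable[unsigned(format)]; /* O(1) lookup */
}
```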
Enter the preprocessor
One potential solution could be to have the mapping table generated by an external tool (written in Python, let's say) and invoke it as a part of the build. However, much like Our Machinery, I don't really want to introduce other languages into the build process, as that raises the barrier for external contributors and users building from source. The only exception is flextGL, because if there's one thing you don't want to do in C++, it's parsing XML. (And even in that case, the generated files are checked into source control, so it doesn't affect the build process in any way.)
In an ideal language, both the enum definition and the mapping to all underlying APIs would be defined in a single place. However, since for C++ the enum definition should be put in a documented human-readable header and it's not feasible to have the header depend on all corresponding Vulkan, OpenGL, D3D etc. enum mappings, a single place is not possible. But, since we have the right to abuse the preprocessor, two places are enough:
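(The first place is the documented PixelFormat enum in its header; the second is a shared mapping file. What follows is only a sketch of such a file — the actual pixelFormatMapping.hpp in Magnum differs in naming and coverage.)

```cpp
/* pixelFormatMapping.hpp -- a sketch only. Each entry pairs a generic
   PixelFormat name with the Vulkan format name (without the VK_FORMAT_
   prefix). The _c() macro is deliberately left undefined here -- every
   place that #includes this file defines it to whatever it needs. */
#ifdef _c
_c(R8Unorm, R8_UNORM)
_c(RG8Unorm, R8G8_UNORM)
_c(RGB8Unorm, R8G8B8_UNORM)
_c(RGBA8Unorm, R8G8B8A8_UNORM)
// …
#endif
```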
And now, the actual magic preprocessor abuse — creating the mapping table by including the above file inside a C array definition. After that, the mapping function simply indexes into it to return the corresponding VkFormat:
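(A sketch, assuming the mapping file above and a zero-based enum; namespace qualifiers are omitted and the actual indexing offset in Magnum may differ.)

```cpp
/* The second argument of each _c() entry expands to a VK_FORMAT_* value;
   the table order matches the enum order, so the enum value itself is the
   index. Assumes a zero-based enum -- an offset would be needed otherwise. */
constexpr VkFormat FormatMapping[] {
    #define _c(generic, vulkan) VK_FORMAT_ ## vulkan,
    #include "pixelFormatMapping.hpp"
    #undef _c
};

VkFormat vkFormat(const PixelFormat format) {
    return FormatMapping[UnsignedInt(format)];
}
```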
Note that the FormatMapping table is filled only using the second argument of the _c() macro. The first is in this case unused, but will get used for testing.
Testing
As you have probably guessed, the above would work correctly only if pixelFormatMapping.hpp lists the values in the same order as the enum — and so we seem to be arriving back at the core problem. To solve this, Magnum reuses the same mapping file to test the correct mapping, by abusing the preprocessor again and #include-ing the file in a different context. The essence of the test is in the following snippet:
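(A simplified sketch for the Vulkan mapping, assuming the mapping file and vkFormat() shape from the earlier sketches; the casts are there only so the values print nicely on a comparison failure.)

```cpp
/* Basic sanity check on a single value first */
CORRADE_COMPARE(UnsignedInt(Vk::vkFormat(PixelFormat::RGBA8Unorm)),
    UnsignedInt(VK_FORMAT_R8G8B8A8_UNORM));

for(UnsignedInt i = 0; i != 65536; ++i) {
    const PixelFormat format = PixelFormat(i);
    switch(format) {
        /* This time both macro arguments get used -- the first to name the
           case label, the second to name the expected Vulkan value */
        #define _c(generic, vulkan) \
            case PixelFormat::generic: \
                CORRADE_COMPARE(UnsignedInt(Vk::vkFormat(format)), \
                    UnsignedInt(VK_FORMAT_ ## vulkan)); \
                break;
        #include "pixelFormatMapping.hpp"
        #undef _c

        /* Deliberately no default: -- with -Werror=switch, any PixelFormat
           value missing from the mapping file becomes a compile error.
           Values of i that aren't part of the enum match no case and are
           simply skipped. */
    }
}
```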
The CORRADE_COMPARE() macros are part of the TestSuite library. Let’s go through the rest:
- First, basic sanity is checked for a single value, in the simplest way possible. This ensures the test is still able to detect serious cases of the mapping being broken even if the following loop were giving false positives by accident.
- Second, it goes through the first 65536 numbers. The PixelFormat enum has considerably fewer values and will never grow so big, but this is a good tradeoff — going through the whole 32-bit range would take too long, while going just through 8 bits might become dangerous when more formats get added.
- For every value that's a part of the mapping table, one case will get hit, verifying that the resulting value corresponds to the expectation. This is the first time where both the first and the second argument of the _c() macro get used.
- Values that are not part of the mapping table get ignored — in this case, that'll be the remaining ~65430 values, since the table currently has only about 50 values.
- PixelFormat values that were accidentally not added to the pixelFormatMapping.hpp table will cause an error at compile time, thanks to -Werror=switch enabled for the switch on GCC and Clang. I'm not aware of a similar compiler warning on MSVC, but usually projects are tested on more than one CI and so any error will get caught early on.
The actual test code linked above is slightly more complex, mainly to provide better diagnostics in case values got ordered incorrectly — but nothing that would make this simplified version less thorough.
Separate pixel format and type in OpenGL
OpenGL, with its historic decision to have pixel formats described by two values instead of just one, makes things slightly more complicated. There are separate GL::pixelFormat() and GL::pixelType() functions, returning either GL::PixelFormat or GL::PixelType for a given generic PixelFormat. The mapping data and the table definition look like this, in comparison:
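(Again only a sketch; the actual entries and the macro arity in Magnum may differ.)

```cpp
/* pixelFormatMapping.hpp, GL flavor -- generic name first, then the
   GL::PixelFormat and GL::PixelType names */
_c(R8Unorm, Red, UnsignedByte)
_c(RG8Unorm, RG, UnsignedByte)
_c(RGB8Unorm, RGB, UnsignedByte)
_c(RGBA8Unorm, RGBA, UnsignedByte)
// …

/* The table then stores a format/type pair for each entry */
constexpr struct {
    GL::PixelFormat format;
    GL::PixelType type;
} FormatMapping[] {
    #define _c(input, format, type) {GL::PixelFormat::format, GL::PixelType::type},
    #include "pixelFormatMapping.hpp"
    #undef _c
};
```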
Handling unsupported values
While not the case for Vulkan, not all OpenGL editions support everything from the PixelFormat enum — in particular, OpenGL ES 2.0 and WebGL 1 have no support for integer formats like PixelFormat::RGBA8UI. To handle this correctly, the mapping table provides specific dummy entries for unsupported formats:
```cpp
#ifndef MAGNUM_TARGET_GLES2
_c(R8UI, RedInteger, UnsignedByte)
_c(RG8UI, RGInteger, UnsignedByte)
_c(RGB8UI, RGBInteger, UnsignedByte)
_c(RGBA8UI, RGBAInteger, UnsignedByte)
…
#else
_s(R8UI)
_s(RG8UI)
_s(RGB8UI)
_s(RGBA8UI)
…
#endif
```
Then, the mapping table defines the _s() macro as follows — no OpenGL format has a value of 0, so we use it to denote an "invalid" value.
```cpp
constexpr struct {
    GL::PixelFormat format;
    GL::PixelType type;
} FormatMapping[] {
    #define _c(input, format, type) {GL::PixelFormat::format, GL::PixelType::type},
    #define _s(input) {GL::PixelFormat{}, GL::PixelType{}},
    #include "pixelFormatMapping.hpp"
    #undef _s
    #undef _c
};
```
From the API perspective, the GL::pixelFormat() / GL::pixelType() APIs assert when encountering unsupported formats (i.e., when the mapping table gives 0 back) and the user is supposed to check for the format's presence on the given OpenGL edition using GL::hasPixelFormat() beforehand.
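To make the intended usage concrete, a small sketch using the names mentioned above:

```cpp
/* Check the format's availability first, otherwise GL::pixelFormat() /
   GL::pixelType() would assert on an unsupported format */
if(GL::hasPixelFormat(format)) {
    const GL::PixelFormat glFormat = GL::pixelFormat(format);
    const GL::PixelType glType = GL::pixelType(format);
    // … use glFormat / glType for the texture or framebuffer operation
} else {
    // … fall back to a different format or report an error
}
```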
Implementation-specific enum values
It wouldn't be Magnum if it forced users to use just the defined set of generic formats and the existing mapping to OpenGL or Vulkan. What if the user needs to express the intent to use GL_RGB565 data? Or use Magnum together with Apple Metal, for which the mapping is not implemented at the moment?

Since the 32 bits of the PixelFormat are far from being fully used (even 16 bits would be more than enough, as noted above), the remaining bits can be used to wrap an implementation-specific format. None of the common GFX APIs use the upper bit of the 32-bit format value, so it's used to denote storage of an implementation-specific value. Magnum provides pixelFormatWrap() and pixelFormatUnwrap() that wrap and unwrap an implementation-specific value into and from the PixelFormat, and such values are handled specially when going through the GL::pixelFormat() / Vk::vkFormat() APIs, so the API gets a correct value in any case.
```cpp
PixelFormat generic = pixelFormatWrap(VK_FORMAT_R10X6_UNORM_PACK16_KHR);
VkFormat vulkan = Vk::vkFormat(generic); // VK_FORMAT_R10X6_UNORM_PACK16_KHR
```
Since the implementation-specific enum value is opaque to the implementation, you need to ensure that you pass a correct value (and not, for example, a GL-specific enum to Vulkan).
Inverse mapping
While mapping from the generic format to an implementation-specific one is enough in 90% of cases, sometimes the inverse mapping is needed as well. That's the case for the recently introduced DebugTools::screenshot(), which queries a pair of GL::AbstractFramebuffer::implementationColorReadFormat() and implementationColorReadType() and then needs to figure out the corresponding generic format for them, because that's what the image converters understand. Otherwise each *ImageConverter would need to depend on GL, Vulkan and others, and that's not a sane design decision for a multitude of reasons, as I painfully realized myself in the past.
Solution? Abuse the pixelFormatMapping.hpp one more time, and turn each entry into an if() that returns the corresponding generic value for a matching pair and a null Containers::Optional otherwise:
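(Again a sketch, assuming the GL flavor of the mapping file from above; the genericPixelFormat() name is hypothetical here.)

```cpp
Containers::Optional<PixelFormat> genericPixelFormat(
    const GL::PixelFormat format, const GL::PixelType type)
{
    /* Each mapping entry becomes one if(); unsupported-format entries
       expand to nothing */
    #define _c(generic, glFormat, glType) \
        if(format == GL::PixelFormat::glFormat && type == GL::PixelType::glType) \
            return PixelFormat::generic;
    #define _s(generic)
    #include "pixelFormatMapping.hpp"
    #undef _s
    #undef _c

    /* No matching pair found */
    return {};
}
```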
This, in particular, is by no means a fast implementation — compared to the O(1) forward mapping it's O(n) — but good enough in this case. And there's nothing preventing anybody from filling a hash map in a similar way.
Enums elsewhere
A similar approach to the one used for PixelFormat is also used for other API-specific enums such as SamplerFilter (corresponding to GL::SamplerFilter or VkFilter) or MeshPrimitive (corresponding to GL::MeshPrimitive or VkPrimitiveTopology), however in those cases the mapping is done without the preprocessor magic abuse, as there's just a handful of values in each case.
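That can be as simple as a hand-written switch. The following is not Magnum's actual code, merely an illustration of doing such a mapping without the #include trick:

```cpp
/* Illustrative only -- the function name and the fallback are made up */
VkPrimitiveTopology vkPrimitiveTopology(const MeshPrimitive primitive) {
    switch(primitive) {
        case MeshPrimitive::Points: return VK_PRIMITIVE_TOPOLOGY_POINT_LIST;
        case MeshPrimitive::Lines: return VK_PRIMITIVE_TOPOLOGY_LINE_LIST;
        case MeshPrimitive::LineStrip: return VK_PRIMITIVE_TOPOLOGY_LINE_STRIP;
        case MeshPrimitive::Triangles: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        case MeshPrimitive::TriangleStrip: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP;
        case MeshPrimitive::TriangleFan: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN;
        // … remaining values
        default: return VkPrimitiveTopology(~0u); /* unreachable for valid input */
    }
}
```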
In the case of enums in various application implementations (such as Platform::Sdl2Application::KeyEvent::Key), the enum directly aliases the underlying value — so far, for the applications, there has been no need to have a generic interface to them. Instead, the application APIs are designed with static polymorphism in mind, making it possible to switch from one to another usually just by using a different #include. Interfaces that need to be able to work with any of these (such as the Ui library or ImGuiIntegration) then use simple duck typing, by making the input handlers templated.
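For completeness, here is a sketch of such a duck-typed handler. It's a template, so it compiles against any application's KeyEvent that provides a compatible interface; the concrete names below are illustrative:

```cpp
/* Works with Platform::Sdl2Application::KeyEvent, another application's
   event type, or anything else providing key() and setAccepted() */
template<class KeyEvent> void handleKeyPress(KeyEvent& event) {
    if(event.key() == KeyEvent::Key::Esc) {
        // … react to the key press
        event.setAccepted();
    }
}
```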