How Magnum does GFX API enum mapping

Engines supporting more than one graphics backend very often need to translate various enum values — pixel formats, primitive types etc. — from a generic API-agnostic value to the one used by a particular implementation, in the fastest possible and most robust way.

Previous work

Historically, before the design of Magnum got reworked to support more than just one graphics API for the 2018.04 release, things were simple. There was just OpenGL and thus the engine could afford to directly hardcode the OpenGL-specific values — so the then-named PixelFormat::RGBA was GL_RGBA and so on. This is also the fastest possible way: no big mapping tables, no problems with slow inverse mapping, just directly aliasing the values.
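
For illustration, the direct-aliasing approach boils down to something like the following minimal sketch (indicative names, not the actual pre-2018.04 header; GLenum and the GL_* constants assume a GL header is included):

enum class PixelFormat: GLenum {
    Red = GL_RED,
    RGB = GL_RGB,
    RGBA = GL_RGBA
    /* ... */
};

GLenum format = GLenum(PixelFormat::RGBA); /* directly GL_RGBA, no lookup */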

Second fastest is the approach suggested by @g_truc in Robust and efficient translations in C++ using tables with zero-based enumerations — having zero-based enums and a one-way mapping table that you index into. Apart from the mapping table, which needs an amount of memory scaling linearly with the number of values, this approach has \mathcal{O}(1) time complexity, so pretty good. However the proposed solution involves adding ugly sentinel values to the enums and, as the article itself already points out, adding values anywhere else than at the end of the enum is very error-prone, not to mention value reordering, and the only way to avoid that is testing every value. And you can forget about easy inverse mapping.
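
Spelled out, that approach could look roughly like the following sketch (my illustration, not code taken from the article; GLenum and the GL_* constants again assume a GL header):

#include <cstddef>

enum class PixelFormat {
    RGBA8,
    RGB8,
    /* ..., new values have to go at the end ... */
    Count /* the sentinel value the table size is checked against */
};

constexpr GLenum FormatTable[]{
    GL_RGBA8,
    GL_RGB8,
    /* ... has to match the enum order exactly ... */
};

static_assert(sizeof(FormatTable)/sizeof(FormatTable[0]) ==
    std::size_t(PixelFormat::Count), "mapping table out of sync with the enum");

GLenum glFormat(PixelFormat format) {
    return FormatTable[std::size_t(format)]; /* O(1), but one-way only */
}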

Enter the preprocessor

One potential solution could be to have the mapping table generated by an external tool (written in Python, let's say) and invoke it as a part of the build. However — and as Our Machinery does as well — I don't really want to introduce other languages into the build process, as that raises the barrier for external contributors and users building from source. The only exception is flextGL, because if there's one thing you don't want to do in C++, it's parsing XML. (And even in that case, the generated files are checked in to source control, so it doesn't affect the build process in any way.)

In an ideal language, both the enum definition and the mapping to all underlying APIs would be defined in a single place. However, since in C++ the enum definition has to be put in a documented human-readable header and it's not feasible to have that header depend on all corresponding Vulkan, OpenGL, D3D etc. enum mappings, a single place is not possible. But, since we have the right to abuse the preprocessor, two places are enough:

enum class PixelFormat: UnsignedInt {
    R8Unorm,
    RG8Unorm,
    RGB8Unorm,
    RGBA8Unorm,
    R8Snorm,
    RG8Snorm,
    RGB8Snorm,
    RGBA8Snorm,
    /* remaining values omitted for brevity */
};

One will be defining the PixelFormat enum with zero-based values in a desired order (documentation comments omitted for brevity)…

Full sources here.

#ifdef _c
_c(R8Unorm, R8_UNORM)
_c(RG8Unorm, R8G8_UNORM)
_c(RGB8Unorm, R8G8B8_UNORM)
_c(RGBA8Unorm, R8G8B8A8_UNORM)
_c(R8Snorm, R8_SNORM)
_c(RG8Snorm, R8G8_SNORM)
_c(RGB8Snorm, R8G8B8_SNORM)
_c(RGBA8Snorm, R8G8B8A8_SNORM)

… and the second place is the actual mapping data in pixelFormatMapping.hpp that maps the values to the underlying API, in this case Vulkan.

Full sources here.

And now, the actual magic preprocessor abuse — creating the \mathcal{O}(1) mapping table by including the above file inside a C array definition. After that, the mapping function simply indexes into it to return the corresponding VkFormat:

constexpr VkFormat FormatMapping[] {
    #define _c(input, format) VK_FORMAT_ ## format,
    #include "pixelFormatMapping.hpp"
    #undef _c
};



VkFormat vkFormat(const PixelFormat format) {
    CORRADE_ASSERT(UnsignedInt(format) < Containers::arraySize(FormatMapping),
        "Vk::vkFormat(): invalid format" << format, {});
    const VkFormat out = FormatMapping[UnsignedInt(format)];
    return out;
}

Full source here.
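
For illustration, after the preprocessor is done with the above, the array ends up equivalent to writing the values out by hand, in the same order as the generic enum:

constexpr VkFormat FormatMapping[] {
    VK_FORMAT_R8_UNORM,
    VK_FORMAT_R8G8_UNORM,
    VK_FORMAT_R8G8B8_UNORM,
    VK_FORMAT_R8G8B8A8_UNORM,
    VK_FORMAT_R8_SNORM,
    /* ... */
};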

Note that the FormatMapping table is filled using only the second argument of the _c() macro. The first is unused in this case, but will get used for testing.

Testing

As you have probably guessed, the above would work correctly only if pixelFormatMapping.hpp lists the values in the same order as the enum — and so we seem to be arriving back at the core problem. To solve this, Magnum reuses the same mapping file to test the correct mapping, by abusing the preprocessor again and #include-ing the file in a different context. The essence of the test is in the following snippet:

/* "Touchstone" verification */
CORRADE_COMPARE(vkFormat(PixelFormat::RGBA8Unorm), VK_FORMAT_R8G8B8A8_UNORM);

/* Going through the first 16 bits is enough in this case */
for(UnsignedInt i = 0; i != 0xffff; ++i) {
    PixelFormat format(i);
    #ifdef __GNUC__
    #pragma GCC diagnostic push
    #pragma GCC diagnostic error "-Wswitch"
    #endif
    switch(format) {
        #define _c(format, expectedFormat)                              \
            case PixelFormat::format:                                   \
                CORRADE_COMPARE(vkFormat(PixelFormat::format),          \
                                VK_FORMAT_ ## expectedFormat);          \
                continue;
        #include "pixelFormatMapping.hpp"
        #undef _c
    }
    #ifdef __GNUC__
    #pragma GCC diagnostic pop
    #endif
}

Full source here.

The CORRADE_COMPARE() macros are part of the TestSuite library. Let's go through the rest:

  1. First, basic sanity is checked for a single value, in the simplest way possible. This ensures the test is still able to detect serious cases of the mapping being broken even if the following loop would be giving false positives by accident.
  2. Second, it goes through the first 65536 numbers. The PixelFormat enum has considerably fewer values and will never grow that big, but this is a good tradeoff — going through the whole 32-bit range would take too long, while going through just 8 bits might become dangerous when more formats get added.
  3. For every value that's a part of the mapping table, one case will get hit, verifying that the resulting value corresponds to the expectation. This is the first time both the first and the second argument of the _c() macro get used.
  4. Values that are not part of the mapping table get ignored — in this case, that'll be the remaining ~65430 values, since the table currently has only about 50 entries.
  5. PixelFormat values that were accidentally not added to the pixelFormatMapping.hpp table will cause an error at compile time, thanks to -Werror=switch being enabled for the switch on GCC and Clang. I'm not aware of a similar compiler warning on MSVC, but usually projects are tested on more than one CI and so any error will get caught early on.

The actual test code linked above is slightly more complex, mainly to provide better diagnostics in case values got ordered incorrectly — but nothing that would make this simplified version less thorough.

Separate pixel format and type in OpenGL

OpenGL, with its historic decision to have pixel formats described by two values instead of just one, makes things slightly more complicated. There are separate GL::pixelFormat() and GL::pixelType() functions, returning either GL::PixelFormat or GL::PixelType for a given generic PixelFormat. The mapping data and the table definition look like this, in comparison:

#ifdef _c
_c(R8Unorm, Red, UnsignedByte)
_c(RG8Unorm, RG, UnsignedByte)
_c(RGB8Unorm, RGB, UnsignedByte)
_c(RGBA8Unorm, RGBA, UnsignedByte)
_c(R8Snorm, Red, Byte)
_c(RG8Snorm, RG, Byte)
_c(RGB8Snorm, RGB, Byte)
_c(RGBA8Snorm, RGBA, Byte)

Full source here.

constexpr struct {
    GL::PixelFormat format;
    GL::PixelType type;
} FormatMapping[] {
    #define _c(input, format, type) \
        {GL::PixelFormat::format,   \
         GL::PixelType::type},
    #include "pixelFormatMapping.hpp"
    #undef _c
};

Full source here.
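
The lookup functions then index into this table the same way as the Vulkan variant; a simplified sketch (the actual GL::pixelFormat() / GL::pixelType() additionally deal with the wrapped implementation-specific values described below):

GL::PixelFormat pixelFormat(const PixelFormat format) {
    CORRADE_ASSERT(UnsignedInt(format) < Containers::arraySize(FormatMapping),
        "GL::pixelFormat(): invalid format" << format, {});
    return FormatMapping[UnsignedInt(format)].format;
}

GL::PixelType pixelType(const PixelFormat format) {
    CORRADE_ASSERT(UnsignedInt(format) < Containers::arraySize(FormatMapping),
        "GL::pixelType(): invalid format" << format, {});
    return FormatMapping[UnsignedInt(format)].type;
}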

Handling unsupported values

While not the case for Vulkan, not all OpenGL editions support everything from the PixelFormat enum — in particular, OpenGL ES 2.0 and WebGL 1 have no support for integer formats like PixelFormat::RGBA8UI. To handle this correctly, the mapping file marks unsupported formats with a separate _s() entry:

#ifndef MAGNUM_TARGET_GLES2
_c(R8UI, RedInteger, UnsignedByte)
_c(RG8UI, RGInteger, UnsignedByte)
_c(RGB8UI, RGBInteger, UnsignedByte)
_c(RGBA8UI, RGBAInteger, UnsignedByte)

#else
_s(R8UI)
_s(RG8UI)
_s(RGB8UI)
_s(RGBA8UI)

#endif

The table definition then defines the _s() macro as follows — no OpenGL format has a value of 0, so it's used to denote an “invalid” value.

constexpr struct {
    GL::PixelFormat format;
    GL::PixelType type;
} FormatMapping[] {
    #define _c(input, format, type) {GL::PixelFormat::format, GL::PixelType::type},
    #define _s(input) {GL::PixelFormat{}, GL::PixelType{}},
    #include "pixelFormatMapping.hpp"
    #undef _s
    #undef _c
};

From the API perspective, the GL::pixelFormat() / GL::pixelType() APIs assert when encountering unsupported formats (i.e., when the mapping table gives back 0) and the user is supposed to check for the format's presence on a given OpenGL edition using GL::hasPixelFormat() beforehand.
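
A presence check on top of the same table then boils down to comparing the entry against that zero value; a simplified sketch of what GL::hasPixelFormat() can do (again ignoring the wrapped implementation-specific values discussed below):

bool hasPixelFormat(const PixelFormat format) {
    CORRADE_ASSERT(UnsignedInt(format) < Containers::arraySize(FormatMapping),
        "GL::hasPixelFormat(): invalid format" << format, {});
    /* The dummy entries have both members zero-initialized */
    return UnsignedInt(FormatMapping[UnsignedInt(format)].format) != 0;
}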

Implementation-specific enum values

It wouldn’t be Magnum if it forced its users to use just the defined set of generic formats and the existing mapping to OpenGL or Vulkan. What if the user needs to express the intent to use GL_RGB565 data? Or use Magnum together with Apple Metal, for which the mapping is not implemented at the moment?

Since the 32 bits of the PixelFormat are far from being fully used (even 16 bits were more than enough, as noted above), the remaining bits can be used to wrap an implementation-specific format. None of the common GFX APIs use the upper bit of the 32-bit format value, so it's used to denote storage of an implementation-specific value. Magnum provides pixelFormatWrap() and pixelFormatUnwrap() that wrap and unwrap an implementation-specific value into and from the PixelFormat, and such values are handled specially when going through the GL::pixelFormat() / Vk::vkFormat() APIs, so the API gets a correct value in any case.

PixelFormat generic = pixelFormatWrap(VK_FORMAT_R10X6_UNORM_PACK16_KHR);
VkFormat vulkan = Vk::vkFormat(generic); // VK_FORMAT_R10X6_UNORM_PACK16_KHR

Since the implementation-specific enum value is opaque to the implementation, you need to ensure that you pass a correct value (and not, for example, a GL-specific enum to Vulkan).
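
A rough sketch of how the wrapping can work with the reserved upper bit (not the exact Magnum implementation, which among other things also asserts that the wrapped value doesn't use that bit itself):

template<class T> constexpr PixelFormat pixelFormatWrap(T implementationSpecific) {
    /* The upper bit marks the value as implementation-specific */
    return PixelFormat((1u << 31)|UnsignedInt(implementationSpecific));
}

template<class T = UnsignedInt> constexpr T pixelFormatUnwrap(PixelFormat format) {
    /* Strip the marker bit to get the original value back */
    return T(UnsignedInt(format) & ~(1u << 31));
}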

Inverse mapping

While mapping from the generic format to an implementation-specific one is enough in 90% of cases, sometimes the inverse mapping is needed as well. That's the case for the recently introduced DebugTools::screenshot(), which queries a pair of GL::AbstractFramebuffer::implementationColorReadFormat() and implementationColorReadType() and then needs to figure out the corresponding generic format for them, because that's what the image converters understand. Otherwise each *ImageConverter would need to depend on GL, Vulkan and others, and that's not a sane design decision for a multitude of reasons, as I painfully realized myself in the past.

Solution? Abuse pixelFormatMapping.hpp one more time, and turn each entry into an if() that returns the corresponding generic value for a matching pair and a null Containers::Optional otherwise:

GL::PixelFormat format = framebuffer.implementationColorReadFormat();
GL::PixelType type = framebuffer.implementationColorReadType();
auto genericFormat = [](GL::PixelFormat format, GL::PixelType type)
        -> Containers::Optional<PixelFormat> {
    #define _c(generic, glFormat, glType)                               \
        if(format == GL::PixelFormat::glFormat &&                       \
           type == GL::PixelType::glType) return PixelFormat::generic;
    /* Unsupported formats are simply skipped, falling through to the
       null return below */
    #define _s(generic)
    #include "pixelFormatMapping.hpp"
    #undef _c
    #undef _s
    return {};
}(format, type);

Full source here.

This, in particular, is by no means a fast implementation — compared to the forward mapping it's \mathcal{O}(n) — but good enough in this case. And there's nothing preventing anybody from filling a hash map in a similar way.
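
For instance, a hypothetical sketch (not actual Magnum code) could pack the GL format/type pair into a single 64-bit key and reuse the same mapping file to fill a std::unordered_map, giving average-case \mathcal{O}(1) inverse lookups:

#include <unordered_map>

const std::unordered_map<UnsignedLong, PixelFormat> GenericFormatMapping{
    #define _c(generic, format, type)                                   \
        {(UnsignedLong(GL::PixelFormat::format) << 32)|                 \
          UnsignedLong(GL::PixelType::type), PixelFormat::generic},
    #define _s(generic)
    #include "pixelFormatMapping.hpp"
    #undef _c
    #undef _s
};

A lookup then packs the queried pair into the same 64-bit key and calls find() on the map.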

Enums elsewhere

A similar approach to the one used for PixelFormat is also used for other API-specific enums such as SamplerFilter (corresponding to GL::SamplerFilter or VkFilter) or MeshPrimitive (corresponding to GL::MeshPrimitive or VkPrimitiveTopology), however in those cases the mapping is done without the preprocessor magic abuse, as there's just a handful of values in each case.
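
For such a handful of values, a hand-written mapping might look like this sketch (an illustration only, not the actual Magnum source, which may structure it differently):

VkPrimitiveTopology vkPrimitiveTopology(const MeshPrimitive primitive) {
    switch(primitive) {
        case MeshPrimitive::Points: return VK_PRIMITIVE_TOPOLOGY_POINT_LIST;
        case MeshPrimitive::Lines: return VK_PRIMITIVE_TOPOLOGY_LINE_LIST;
        case MeshPrimitive::LineStrip: return VK_PRIMITIVE_TOPOLOGY_LINE_STRIP;
        case MeshPrimitive::Triangles: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        case MeshPrimitive::TriangleStrip: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP;
        case MeshPrimitive::TriangleFan: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN;
        /* values without a Vulkan equivalent omitted */
        default: break;
    }

    CORRADE_ASSERT(false,
        "Vk::vkPrimitiveTopology(): unsupported primitive" << primitive, {});
    return {}; /* so the compiler doesn't warn about a missing return */
}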

In case of enums in various application implementations (such as Platform::Sdl2Application::KeyEvent::Key), the enum directly aliases the underlying value — so far, for the applications, there was no need to have a generic interface to them. Instead, the application APIs are designed with static polymorphism in mind, allowing a switch from one to another usually just by using a different #include. Interfaces that need to be able to work with any of these (such as the Ui library or ImGuiIntegration) then use simple duck typing, by making the input handlers templated.
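
To make the duck typing concrete, a handler templated on the event type might look like this hypothetical sketch; any application whose KeyEvent exposes a compatible interface will work:

template<class KeyEvent> void handleKeyPress(KeyEvent& event) {
    /* Compiles against Sdl2Application, GlfwApplication and others, as long
       as the event type provides key() and setAccepted() */
    if(event.key() == KeyEvent::Key::Esc) {
        /* ... react to the key ... */
        event.setAccepted();
    }
}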