How Magnum does GFX API enum mapping

Engines supporting more than one graphics backend very often need to translate various enum values — pixel formats, primitive types etc. — from a generic API-agnostic value to the one used by a particular implementation, in the fastest and most robust way possible.

Previous work

Historically, before the design of Magnum got reworked to support more than just one graphics API for the 2018.04 release, things were simple. There was just OpenGL and thus the engine could afford to directly hardcode the OpenGL-specific values — so the then-named PixelFormat::RGBA was GL_RGBA and so on. This is also the fastest possible way: no big mapping tables, no problems with slow inverse mapping, just directly aliasing the values.

Second fastest is the approach suggested by @g_truc in Robust and efficient translations in C++ using tables with zero-based enumerations — having zero-based enums and a one-way mapping table into which you index. Apart from the mapping table, which needs an amount of memory scaling linearly with the number of values, such an approach has O(1) time complexity, so pretty good. However the proposed solution involves adding ugly sentinel values to the enums and, as the article itself already points out, adding values anywhere else than at the end of the enum is very error-prone, not to mention value reordering, and the only way to avoid that is testing every value. And you can forget about easy inverse mapping.
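To make the tradeoffs concrete, here is a minimal sketch of that zero-based-table approach (hypothetical names, with GL values hardcoded for illustration; this is not Magnum's actual code):

```cpp
#include <cassert>
#include <cstdint>

/* Zero-based enum doubling as an index into the table; the Count sentinel
   is the "ugly" part mentioned above */
enum class PrimitiveType: std::uint32_t {
    Points,
    Lines,
    Triangles,
    Count /* sentinel, has to stay last */
};

/* One entry per enum value, in exactly the same order as the enum */
constexpr std::uint32_t GLPrimitiveMapping[] {
    0x0000, /* GL_POINTS */
    0x0001, /* GL_LINES */
    0x0004  /* GL_TRIANGLES */
};

/* The sentinel at least makes the table *size* verifiable at compile time,
   but says nothing about the order of the entries */
static_assert(sizeof(GLPrimitiveMapping)/sizeof(GLPrimitiveMapping[0]) ==
    std::uint32_t(PrimitiveType::Count), "mapping table out of sync");

std::uint32_t glPrimitive(const PrimitiveType type) {
    return GLPrimitiveMapping[std::uint32_t(type)]; /* O(1) lookup */
}
```

The static_assert catches a table that's too short or too long, but a swapped pair of entries still slips through silently, which is exactly the error-proneness being referred to.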

Enter the preprocessor

One potential solution could be to have the mapping table generated by an external tool (written in Python, let’s say) and invoke it as a part of the build. However — and Our Machinery takes the same stance — I don’t really want to introduce other languages into the build process, as that raises the barrier for external contributors and users building from source. The only exception is flextGL, because if there’s one thing you don’t want to do in C++, it’s parsing XML. (And even in that case, the generated files are checked in to source control, so it doesn’t affect the build process in any way.)

In an ideal language, both the enum definition and the mapping to all underlying APIs would be defined in a single place. However, since for C++ the enum definition should be put in a documented human-readable header and it’s not feasible to have the header depend on all corresponding Vulkan, OpenGL, D3D etc. enum mappings, just a single place is not possible. But, since we have the right to abuse the preprocessor, two places are enough. One will be defining the PixelFormat enum with zero-based values in a desired order (documentation comments omitted for brevity):

enum class PixelFormat: UnsignedInt {
    R8Unorm,
    RG8Unorm,
    RGB8Unorm,
    RGBA8Unorm,
    R8Snorm,
    RG8Snorm,
    RGB8Snorm,
    RGBA8Snorm,
    /* ... */
};

Full sources here.

… and the second place is the actual table in pixelFormatMapping.hpp that maps the values to the underlying API, in this case Vulkan:

#ifdef _c
_c(R8Unorm, R8_UNORM)
_c(RG8Unorm, R8G8_UNORM)
_c(RGB8Unorm, R8G8B8_UNORM)
_c(RGBA8Unorm, R8G8B8A8_UNORM)
_c(R8Snorm, R8_SNORM)
_c(RG8Snorm, R8G8_SNORM)
_c(RGB8Snorm, R8G8B8_SNORM)
_c(RGBA8Snorm, R8G8B8A8_SNORM)
/* ... */
#endif

Full sources here.

And now, the actual magic preprocessor abuse — creating the O(1) mapping table by including the above file inside a C array definition. After that, the mapping function simply indexes into it to return the corresponding VkFormat:

constexpr VkFormat FormatMapping[] {
    #define _c(input, format) VK_FORMAT_ ## format,
    #include "pixelFormatMapping.hpp"
    #undef _c
};

VkFormat vkFormat(const PixelFormat format) {
    CORRADE_ASSERT(UnsignedInt(format) < Containers::arraySize(FormatMapping),
        "Vk::vkFormat(): invalid format" << format, {});
    const VkFormat out = FormatMapping[UnsignedInt(format)];
    return out;
}

Full source here.

Note that the FormatMapping table is filled only using the second argument of the _c() macro. The first is in this case unused, but will get used for testing.

Testing

As you have probably guessed, the above would work correctly only if pixelFormatMapping.hpp lists the values in the same order as the enum — and so we seem to be arriving back at the core problem. To solve this, Magnum reuses the same mapping file to test the correct mapping, by abusing the preprocessor again and #include-ing the file in a different context. The essence of the test is in the following snippet:

/* "Touchstone" verification */
CORRADE_COMPARE(vkFormat(PixelFormat::RGBA8Unorm), VK_FORMAT_R8G8B8A8_UNORM);

/* Going through the first 16 bits is enough in this case */
for(UnsignedInt i = 0; i != 0xffff; ++i) {
    PixelFormat format(i);
    #ifdef __GNUC__
    #pragma GCC diagnostic push
    #pragma GCC diagnostic error "-Wswitch"
    #endif
    switch(format) {
        #define _c(format, expectedFormat)                              \
            case PixelFormat::format:                                   \
                CORRADE_COMPARE(vkFormat(PixelFormat::format),          \
                                VK_FORMAT_ ## expectedFormat);          \
                continue;
        #include "pixelFormatMapping.hpp"
        #undef _c
    }
    #ifdef __GNUC__
    #pragma GCC diagnostic pop
    #endif
}

Full source here.

The CORRADE_COMPARE() macros are part of the TestSuite library. Let’s go through the rest:

  1. First, basic sanity is checked for a single value, in the simplest way possible. This ensures the test is still able to detect serious cases of the mapping being broken even if the following loop were giving false positives by accident.
  2. Second, it goes through the first 65536 numbers. The PixelFormat enum has considerably fewer values and will never grow so big, but this is a good tradeoff — going through the whole 32-bit range would take too long, while going through just 8 bits might become dangerous when more formats get added.
  3. For every value that’s a part of the mapping table, one case will get hit, verifying that the resulting value corresponds to the expectation. This is the first place where both the first and the second argument of the _c() macro get used.
  4. Values that are not part of the mapping table get ignored – in this case, that’ll be the remaining ~65430 values, since the table currently has only about 50 values.
  5. PixelFormat values that were accidentally not added to the pixelFormatMapping.hpp table will cause an error at compile time, thanks to -Werror=switch enabled for the switch on GCC and Clang. I’m not aware of a similar compiler warning on MSVC, but usually projects are tested on more than one CI and so any error will get caught early on.

The actual test code linked above is slightly more complex, mainly to provide better diagnostics in case values got ordered incorrectly — but nothing that would make this simplified version less thorough.
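For readers who want to play with the technique without splitting it across files, the same define-once/expand-twice idea can be condensed into a single file with a classic X-macro list. This is a hypothetical sketch with VkFormat values hardcoded as plain integers, not how Magnum lays it out:

```cpp
#include <cassert>
#include <cstdint>

/* The single source of truth: each entry pairs a generic name with the
   (hardcoded, for illustration) Vulkan format value */
#define PIXEL_FORMAT_MAPPING(_c)                                            \
    _c(R8Unorm, 9 /* VK_FORMAT_R8_UNORM */)                                 \
    _c(RG8Unorm, 16 /* VK_FORMAT_R8G8_UNORM */)                             \
    _c(RGBA8Unorm, 37 /* VK_FORMAT_R8G8B8A8_UNORM */)

/* First expansion: the zero-based generic enum */
enum class PixelFormat: std::uint32_t {
    #define _c(input, format) input,
    PIXEL_FORMAT_MAPPING(_c)
    #undef _c
};

/* Second expansion: the O(1) mapping table, in the same order by
   construction because both come from the same list */
constexpr std::uint32_t FormatMapping[] {
    #define _c(input, format) format,
    PIXEL_FORMAT_MAPPING(_c)
    #undef _c
};

std::uint32_t vkFormat(const PixelFormat format) {
    return FormatMapping[std::uint32_t(format)];
}
```

Note the tradeoff: here the enum's definition site has to see the Vulkan values, which is exactly the header dependency the two-file split described above avoids.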

Separate pixel format and type in OpenGL

OpenGL, with its historic decision to have pixel formats described by two values instead of just one, makes things slightly more complicated. There are separate GL::pixelFormat() and GL::pixelType() functions, returning GL::PixelFormat or GL::PixelType, respectively, for a given generic PixelFormat. The mapping data and the table definition look like this, in comparison:

#ifdef _c
_c(R8Unorm, Red, UnsignedByte)
_c(RG8Unorm, RG, UnsignedByte)
_c(RGB8Unorm, RGB, UnsignedByte)
_c(RGBA8Unorm, RGBA, UnsignedByte)
_c(R8Snorm, Red, Byte)
_c(RG8Snorm, RG, Byte)
_c(RGB8Snorm, RGB, Byte)
_c(RGBA8Snorm, RGBA, Byte)

Full source here.

constexpr struct {
    GL::PixelFormat format;
    GL::PixelType type;
} FormatMapping[] {
    #define _c(input, format, type) \
        {GL::PixelFormat::format,   \
         GL::PixelType::type},
    #include "pixelFormatMapping.hpp"
    #undef _c
};

Full source here.
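The corresponding accessors then just index into the struct table and pick one member each. A self-contained sketch with stand-in enums (GL values hardcoded for illustration; the real functions live in Magnum's GL library and additionally assert on invalid input):

```cpp
#include <cassert>
#include <cstdint>

/* Stand-ins for the real types, just for this sketch */
enum class PixelFormat: std::uint32_t { R8Unorm, R8Snorm };
enum class GLPixelFormat: std::uint32_t { Red = 0x1903 /* GL_RED */ };
enum class GLPixelType: std::uint32_t {
    Byte = 0x1400,        /* GL_BYTE */
    UnsignedByte = 0x1401 /* GL_UNSIGNED_BYTE */
};

/* Same table layout as above: one {format, type} pair per generic value */
constexpr struct {
    GLPixelFormat format;
    GLPixelType type;
} FormatMapping[] {
    {GLPixelFormat::Red, GLPixelType::UnsignedByte}, /* R8Unorm */
    {GLPixelFormat::Red, GLPixelType::Byte}          /* R8Snorm */
};

/* Each accessor indexes the same table, returning one half of the pair */
GLPixelFormat pixelFormat(const PixelFormat format) {
    return FormatMapping[std::uint32_t(format)].format;
}
GLPixelType pixelType(const PixelFormat format) {
    return FormatMapping[std::uint32_t(format)].type;
}
```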

Handling unsupported values

While not the case for Vulkan, not all OpenGL editions support everything from the PixelFormat enum — in particular, OpenGL ES 2.0 and WebGL 1 have no support for integer formats like PixelFormat::RGBA8UI. To handle this correctly, the mapping table provides specific dummy entries for unsupported formats:

#ifndef MAGNUM_TARGET_GLES2
_c(R8UI, RedInteger, UnsignedByte)
_c(RG8UI, RGInteger, UnsignedByte)
_c(RGB8UI, RGBInteger, UnsignedByte)
_c(RGBA8UI, RGBAInteger, UnsignedByte)
#else
_s(R8UI)
_s(RG8UI)
_s(RGB8UI)
_s(RGBA8UI)
#endif

The mapping table then defines the _s() macro as follows — no OpenGL format has a value of 0, so it’s used to denote an “invalid” value:

constexpr struct {
    GL::PixelFormat format;
    GL::PixelType type;
} FormatMapping[] {
    #define _c(input, format, type) {GL::PixelFormat::format, GL::PixelType::type},
    #define _s(input) {GL::PixelFormat{}, GL::PixelType{}},
    #include "pixelFormatMapping.hpp"
    #undef _s
    #undef _c
};

From the API perspective, the GL::pixelFormat() / GL::pixelType() APIs assert when encountering unsupported formats (i.e., when the mapping table gives 0 back) and the user is supposed to check for the format’s presence on a given OpenGL edition using GL::hasPixelFormat() beforehand.
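A sketch of how such a zero-as-invalid check can look, again with stand-in enums rather than Magnum's actual types:

```cpp
#include <cassert>
#include <cstdint>

/* Stand-ins for the real types; RGBA8UI plays the role of a format that's
   unsupported on ES2/WebGL 1 */
enum class PixelFormat: std::uint32_t { RGBA8Unorm, RGBA8UI };
enum class GLPixelFormat: std::uint32_t { RGBA = 0x1908 /* GL_RGBA */ };
enum class GLPixelType: std::uint32_t {
    UnsignedByte = 0x1401 /* GL_UNSIGNED_BYTE */
};

constexpr struct {
    GLPixelFormat format;
    GLPixelType type;
} FormatMapping[] {
    {GLPixelFormat::RGBA, GLPixelType::UnsignedByte}, /* RGBA8Unorm */
    {GLPixelFormat{}, GLPixelType{}}  /* RGBA8UI unsupported -> zeros */
};

/* No GL format has the value 0, so it doubles as the "unsupported" marker */
bool hasPixelFormat(const PixelFormat format) {
    return std::uint32_t(FormatMapping[std::uint32_t(format)].format) != 0;
}
```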

Implementation-specific enum values

It wouldn’t be Magnum if it forced users to use just the defined set of generic formats and the existing mapping to OpenGL or Vulkan. What if the user needs to express the intent to use GL_RGB565 data? Or to use Magnum together with Apple Metal, for which the mapping is not implemented at the moment?

Since the 32 bits of PixelFormat are far from being fully used (even 16 bits were more than enough, as noted above), the remaining bits can be used to wrap an implementation-specific format. None of the common GFX APIs use the upper bit of the 32-bit format value, so it’s used to denote storage of an implementation-specific value. Magnum provides pixelFormatWrap() and pixelFormatUnwrap() that wrap and unwrap an implementation-specific value into and from the PixelFormat, and such values are handled specially when going through the GL::pixelFormat() / Vk::vkFormat() APIs, so the API gets a correct value in any case.

PixelFormat generic = pixelFormatWrap(VK_FORMAT_R10X6_UNORM_PACK16_KHR);
VkFormat vulkan = Vk::vkFormat(generic); // VK_FORMAT_R10X6_UNORM_PACK16_KHR

Since the implementation-specific enum value is opaque to the implementation, you need to ensure that you pass a correct value (and not, for example, a GL-specific enum to Vulkan).
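The wrapping scheme itself can be sketched in a few lines; this is a simplified stand-in, without the range checks the real wrapping API would perform:

```cpp
#include <cassert>
#include <cstdint>

enum class PixelFormat: std::uint32_t {}; /* generic values are 0..N */

/* The top bit of the 32-bit value marks an implementation-specific format;
   simplified -- no assertion that the wrapped value leaves that bit free */
constexpr PixelFormat pixelFormatWrap(const std::uint32_t implementationSpecific) {
    return PixelFormat(0x80000000u | implementationSpecific);
}
constexpr bool isPixelFormatImplementationSpecific(const PixelFormat format) {
    return std::uint32_t(format) & 0x80000000u;
}
constexpr std::uint32_t pixelFormatUnwrap(const PixelFormat format) {
    return std::uint32_t(format) & 0x7fffffffu;
}
```

A function like Vk::vkFormat() can then branch on the top bit first: unwrap and pass the value through for wrapped formats, index into the mapping table otherwise.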

Inverse mapping

While mapping from the generic format to an implementation-specific one is enough in 90% of cases, sometimes the inverse mapping is needed as well. That’s the case for the recently introduced DebugTools::screenshot(), which queries a pair of GL::AbstractFramebuffer::implementationColorReadFormat() and implementationColorReadType() and then needs to figure out the corresponding generic format for them, because that’s what the image converters understand. Otherwise each *ImageConverter would need to depend on GL, Vulkan and others, and that’s not a sane design decision for a multitude of reasons, as I painfully realized myself in the past.

Solution? Abuse pixelFormatMapping.hpp one more time, and turn each entry into an if() that returns the corresponding generic value for a matching pair and a null Containers::Optional otherwise:

GL::PixelFormat format = framebuffer.implementationColorReadFormat();
GL::PixelType type = framebuffer.implementationColorReadType();
auto genericFormat = [](GL::PixelFormat format, GL::PixelType type)
        -> Containers::Optional<PixelFormat> {
    #define _c(generic, glFormat, glType)                               \
        if(format == GL::PixelFormat::glFormat &&                       \
           type == GL::PixelType::glType) return PixelFormat::generic;
    /* Unsupported formats have no GL pair to match against, so _s()
       expands to nothing */
    #define _s(generic)
    #include "pixelFormatMapping.hpp"
    #undef _c
    #undef _s
    return {};
}(format, type);

Full source here.

This, in particular, is by no means a fast implementation — compared to the forward mapping it’s O(n) — but good enough in this case. And there’s nothing preventing anybody from filling a hash map in a similar way.
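For completeness, here is a hypothetical sketch of that hash-map variant: the inverse table is built once, giving O(1) average lookups afterwards. VkFormat values are hardcoded as integers purely for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

enum class PixelFormat: std::uint32_t { R8Unorm, RG8Unorm };

/* Built once on first use; in real code the entries would again come from
   expanding the shared mapping file instead of being written out by hand */
const std::unordered_map<std::uint32_t, PixelFormat>& inverseMapping() {
    static const std::unordered_map<std::uint32_t, PixelFormat> map{
        {9, PixelFormat::R8Unorm},   /* VK_FORMAT_R8_UNORM */
        {16, PixelFormat::RG8Unorm}  /* VK_FORMAT_R8G8_UNORM */
    };
    return map;
}

/* Returns false for formats that have no generic equivalent */
bool genericFormat(const std::uint32_t vkFormat, PixelFormat& out) {
    const auto found = inverseMapping().find(vkFormat);
    if(found == inverseMapping().end()) return false;
    out = found->second;
    return true;
}
```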

Enums elsewhere

A similar approach as is used for PixelFormat is also used for other API-specific enums such as SamplerFilter (corresponding to GL::SamplerFilter or VkFilter) or MeshPrimitive (corresponding to GL::MeshPrimitive or VkPrimitiveTopology), however in those cases the mapping is done without the preprocessor magic abuse, as there’s just a handful of values in each case.

In case of enums in various application implementations (such as Platform::Sdl2Application::KeyEvent::Key), the enum directly aliases the underlying value — so far, for the applications, there was no need to have a generic interface to them. Instead, the application APIs are designed with static polymorphism in mind, allowing a switch from one to another usually just by using a different #include. Interfaces that need to be able to work with any of these (such as the Ui library or ImGuiIntegration) then use simple duck typing, by making the input handlers templated.
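As an illustration of that duck typing, a handler templated on the event type compiles against any application whose key event exposes the expected members. The event type below is a made-up stand-in, not an actual Platform::*Application type:

```cpp
#include <cassert>

/* Works with any application's KeyEvent that exposes key() and a nested
   Key enum with an Esc value; no common base class is involved */
template<class KeyEvent> bool handleKeyPress(const KeyEvent& event) {
    return event.key() == KeyEvent::Key::Esc;
}

/* A minimal stand-in mimicking the shape of an application's key event */
struct FakeKeyEvent {
    enum class Key { Esc, Space };
    Key key() const { return _key; }
    Key _key;
};
```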