Huh. This is a strange little posting. It's from people with serious job titles, but there's little real content. It's more like a topic starter.
This is a problem I've had to think about too much. As a heavy user of the Rust graphics stack, I get to see people struggling with how to approach the rendering engine problem. There's a whole stack of components. What each component should do remains an open question, and the APIs between them are difficult. There's even a question of whether there should be "rendering engines" as components at all, as opposed to integrated game engines.
Here's the Rust stack:
- Down at the bottom is the GPU hardware.
- Then there's the GPU driver. This can come from the hardware vendor or from a third party; there's even an open source driver for NVIDIA GPUs. The APIs offered by GPU drivers are usually Vulkan, DirectX, Metal, and/or OpenGL. This level has settled down and works reasonably well. Vulkan's API is more like a parts kit that exposes most of the GPU's functionality without abstracting it much.
- Next is a sanity, safety, and resource allocation layer. This layer doesn't even have a good name. It's built into OpenGL, but it has to be explicitly implemented for Vulkan. The GPU has its own memory, and Vulkan has a limited allocator which lets you request large GPU memory buffers. You need a second allocator on top of that one to sub-allocate arbitrary-sized buffers out of those large blocks (sketched below). There are other resources - queues, bindless descriptor slots, and such - all of which have to be allocated. There are synchronization issues. How the CPU talks to GPU memory is totally different for "integrated" GPUs, as usually seen in laptops, where the GPU lacks dedicated memory. There are various modes where the CPU and GPU access the same memory with a performance penalty, or, alternatively, messages are passed back and forth over queues and the GPU's DMA hardware does the transfer. There are also extra restrictions on phone OSes and in web browsers, which are much more constrained than desktop.
Vulkan offers considerable parallelism, which the higher levels should exploit to get good performance. Vulkan also has very complex locking constraints, and something has to check that they are not violated.
This layer is "WGPU" in the Rust stack. The API WGPU offers to higher levels looks a lot like Vulkan, but there's a lot of machinery in there which exists mostly to resolve the above issues.
- Next step up is the "rendering engine". This is where you put in meshes, textures, lights, and material data. You provide transforms to place the meshes, and camera information. The rendering engine puts the scene on the screen. Rendering engines are straightforward until you get to lighting, shadows, and big dynamic scenes. Then it gets hard. In the Rust world, the easy part of this has been done at least five times, but nobody has done the hard part yet.
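For a feel of the API at this layer, here is its rough shape in Rust. Every type and method name below is invented for illustration - this is the shape of the interface, not any real crate's API:

```rust
// Hypothetical rendering-engine API: the caller hands over assets and
// transforms; the engine owns all the GPU details below this line.

type Mat4 = [[f32; 4]; 4]; // 4x4 transform, as a shader would see it

const IDENTITY: Mat4 = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
];

struct MeshId(u32);
struct MaterialId(u32);

struct Renderer; // would own the wgpu/Vulkan device, pipelines, buffers

impl Renderer {
    fn add_mesh(&mut self, _vertices: &[f32], _indices: &[u32]) -> MeshId {
        MeshId(0) // stub: would upload vertex/index buffers to the GPU
    }
    fn add_material(&mut self, _base_color: [f32; 4], _roughness: f32) -> MaterialId {
        MaterialId(0) // stub: would build textures and bind groups
    }
    fn add_point_light(&mut self, _position: [f32; 3], _intensity: f32) {}
    /// Instance a mesh into the scene at a transform.
    fn place(&mut self, _mesh: &MeshId, _material: &MaterialId, _transform: Mat4) {}
    /// Draw one frame from the given camera.
    fn render(&mut self, _view: Mat4, _projection: Mat4) {}
}

fn main() {
    let mut r = Renderer;
    let quad = r.add_mesh(&[/* positions, normals, uvs */], &[0, 1, 2, 2, 3, 0]);
    let red = r.add_material([0.8, 0.2, 0.2, 1.0], 0.5);
    r.add_point_light([0.0, 5.0, 0.0], 100.0);
    r.place(&quad, &red, IDENTITY);
    r.render(IDENTITY, IDENTITY);
}
```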
There's an argument that you shouldn't even try the hard part. High-performance lighting and shadows require info about which lights can illuminate which meshes. A naive implementation forces the GPU to test every light against every object, which slows rendering to a crawl when there are many lights. So the renderer needs info about what's near what.
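A toy CPU-side version of that culling, assuming each light has a bounded radius of influence (types hypothetical): build, for every object, the short list of lights that can actually reach it, so the shader never loops over the rest.

```rust
// Per-object light lists via bounding-sphere overlap tests. A real engine
// replaces the double loop with a spatial structure (grid, BVH, clusters);
// the point is that the GPU then shades each object against only a short
// list, not against every light in the scene.

struct Sphere {
    center: [f32; 3],
    radius: f32,
}

/// A light with a bounded radius of influence, beyond which its
/// contribution is treated as zero.
struct PointLight {
    bounds: Sphere,
}

fn dist_squared(a: [f32; 3], b: [f32; 3]) -> f32 {
    (0..3).map(|i| (a[i] - b[i]) * (a[i] - b[i])).sum()
}

/// For each object, the indices of the lights that can reach it.
fn build_light_lists(objects: &[Sphere], lights: &[PointLight]) -> Vec<Vec<usize>> {
    objects
        .iter()
        .map(|obj| {
            lights
                .iter()
                .enumerate()
                .filter(|(_, light)| {
                    let reach = obj.radius + light.bounds.radius;
                    dist_squared(obj.center, light.bounds.center) <= reach * reach
                })
                .map(|(i, _)| i)
                .collect()
        })
        .collect()
}

fn main() {
    let objects = [Sphere { center: [0.0, 0.0, 0.0], radius: 1.0 }];
    let lights = [
        PointLight { bounds: Sphere { center: [2.0, 0.0, 0.0], radius: 3.0 } },  // reaches
        PointLight { bounds: Sphere { center: [50.0, 0.0, 0.0], radius: 3.0 } }, // doesn't
    ];
    println!("{:?}", build_light_lists(&objects, &lights)); // [[0]]
}
```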
That kind of spatial information usually belongs to the scene graph, which is part of the next level up, the "game engine". So how does the renderer get that info? Being explicitly told by the game engine? Callbacks to the game engine? Computing this info when the scene changes and caching it? Or should you give up on a general-purpose rendering engine and integrate rendering into the game engine, which usually has better spatial data structures? (One possible renderer-to-engine contract is sketched after the next paragraph.) The big game engines such as UE mostly integrate the rendering and game engines, which allows them to pre-compute much info in the scene editor during level design. That's UE's big trick - do everything possible when building the content, rather than deferring work to run time. Unreal Engine Editor does much of the heavy lifting. The work of the "rendering engine" is divided between the run-time player, the run-time game engine, and the build-time editor and optimizer.
Worlds with highly dynamic user-provided content can't do that kind of precomputation. This is a "metaverse problem", and is one of the reasons metaverses have scaling problems. (Meta spent $40 billion, and Improbable spent $400 million, trying to do a metaverse. This is part of why.)
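Coming back to the interface question: one way to cut that boundary, sketched as a Rust trait, is for the game engine to keep ownership of the spatial structures while the renderer asks narrow questions through a query interface. This is one hypothetical contract among the options above, not an API from any existing crate:

```rust
// The game engine owns the scene graph / BVH and answers spatial queries;
// the renderer stays generic over whoever can answer them, so it is not
// welded to one particular scene-graph implementation.

struct LightHandle(u32);
struct ObjectHandle(u32);
struct Camera; // would hold position, orientation, projection

/// Implemented by the game engine.
trait SpatialQueries {
    /// Lights whose influence volume overlaps this bounding sphere.
    fn lights_near(&self, center: [f32; 3], radius: f32) -> Vec<LightHandle>;
    /// Objects potentially visible from the camera (frustum culling).
    fn visible_objects(&self, camera: &Camera) -> Vec<ObjectHandle>;
}

fn render_frame<Q: SpatialQueries>(world: &Q, camera: &Camera) {
    for object in world.visible_objects(camera) {
        // Ask only for the lights that can reach this object; the dummy
        // bounds here stand in for the object's real bounding sphere.
        let lights = world.lights_near([0.0; 3], 1.0);
        let _ = (object, lights); // ... bind the light list, issue the draw ...
    }
}

// A trivial game-engine side, just to show the contract being satisfied.
struct EmptyWorld;
impl SpatialQueries for EmptyWorld {
    fn lights_near(&self, _c: [f32; 3], _r: f32) -> Vec<LightHandle> { vec![] }
    fn visible_objects(&self, _cam: &Camera) -> Vec<ObjectHandle> { vec![] }
}

fn main() {
    render_frame(&EmptyWorld, &Camera);
}
```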
Now that's a taxonomy issue. I would have liked to see more insight in the paper on where to cut the layers apart and what they need to say to each other. The authors have a point that the industry doesn't talk about this enough. As I noted above, we don't even have good names for some of this stuff.
> But, in many ways, we are still a young industry, we love to talk about shiny things
Well, I would expect nothing less from people who spend all day implementing lighting algorithms.
Def got a soft spot for anyone still in the custom engine grind; the whole scene's slept on if you ask me.
But it's 2025.
There is only UE5.
There are still a few of us out here fighting the custom engine fight :)
Newbies showing off their first triangle are welcome over in r/gameenginedevs/ and r/GraphicsProgramming/ :)
Do you know of any communities outside of reddit? It's not exactly my favorite place to hang out.
There are active Discords for "Vulkan", "DirectX" and "Graphics Programming".
https://gamedev.net/forums/ is the classic hang-out spot.
mastodon.gamedev.place is a thing
I'm sad that Lemmy didn't catch on; I had high hopes for a distributed Reddit replacement.
But I think it's a combination of "people have short memories" and "people don't like change".
I think discoverability and fragmentation are bigger problems that the fediverse needs to solve for adoption to take off.
That's the problem. The generic graphics groups are mostly newbies. More advanced stuff is on engine-specific boards.
The other problem is that modern graphics APIs got so complex that they're intimidating even to experienced developers.
That 'first triangle on screen' code can now look something like this - https://github.com/KhronosGroup/Vulkan-Samples/blob/main/sam... - compared to the simpler old days of OpenGL (https://github.com/gamedev-net/nehe-opengl/blob/master/vc/Le...)
That's the argument for general-purpose rendering engines. These offer an API comparable to three.js. The API at that level is much easier to deal with than the API at the Vulkan level. Usually, about a page of code will put a glTF model on the screen.
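Bevy is probably the best-known Rust example at that level of convenience. Roughly the following puts a glTF model on screen with a light and a camera - a sketch against Bevy's API as of roughly version 0.13 (it shifts between releases), with "model.glb" as a placeholder asset path:

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .run();
}

fn setup(mut commands: Commands, assets: Res<AssetServer>) {
    // Load the default scene out of a glTF binary; the engine handles
    // meshes, materials, and textures from here on.
    commands.spawn(SceneBundle {
        scene: assets.load("model.glb#Scene0"),
        ..default()
    });
    commands.spawn(PointLightBundle {
        transform: Transform::from_xyz(4.0, 8.0, 4.0),
        ..default()
    });
    commands.spawn(Camera3dBundle {
        transform: Transform::from_xyz(2.0, 1.5, 5.0).looking_at(Vec3::ZERO, Vec3::Y),
        ..default()
    });
}
```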
But fast, general purpose rendering engines are hard. Not impossible, just hard. In open source land, we have about a half dozen examples of My First Rendering Engine, and they mostly work. But they don't scale.
Do open source engines count in your custom engine list? I could see that going either way: yes, because you can modify it to your heart's content; or no, because it was built by someone else and not to the exact trade-offs you have in mind.
Keep fighting the good fight, monocultures suck.
This is both somewhat funny, but also sad. Such a mood. :D
I can name several reasonably recent games off the top of my head that I have played and that are built on in-house engines:
- The Witness
- Overwatch (1/2)
- Factorio
- The Last of Us 2
- Baldur's Gate 3
- Stellaris
I can't say anything about the quality of the engines themselves. But I find these games very impressive for different types of reasons.
A few more recent games whose non-UE5 custom in-house engines have demonstrated very impressive results:
* Spider-Man (1/2/Miles Morales) and the PS5 Ratchet & Clank, by Insomniac on their in-house engine.
* Astro’s Playroom and Astro Bot by Team Asobi use an in-house engine and act as the de facto hardware showcases for the PS5.
* Horizon Zero Dawn/Horizon Forbidden West by Guerrilla Games, which share the custom Decima engine with Death Stranding by Kojima Productions.
* The upcoming GTA6 by Rockstar. Not out yet, but the trailers were reportedly captured on a base PS5 and look astounding.
* Doom 2016/Doom Eternal/Doom: The Dark Ages by id, running of course on the latest versions of idTech.
* The Forza racing games continue to use the in-house ForzaTech engine, and Forza Horizon 5 looks amazing visually.
* The Call of Duty series continues to use the in-house IW Engine
* The Battlefield series and EA’s various sports games continue to use the in-house Frostbite engine
* Ubisoft’s various games continue to use in-house engines. AFAIK they actually have three major in-house engines: Dunia, Snowdrop, and Anvil.
* Tiny Glade is a small indie game by a two-person team, running on a completely custom engine that, despite the team's size, has possibly the most advanced light transport of any shipping game engine today.
UE5 is gigantic today, of course, but in-house engines are still alive and well, constantly producing amazing results.
Let's not forget the masterpiece that is Return of the Obra Dinn. I really hope that algorithm becomes open source at some point.