Introduction
Soul is a rendering engine that I have been working on for the past couple of months. Note that it is not a game engine; a game engine is just too unrealistic and ambitious for me right now. In fact, I am developing this engine as a first pass to understand how the graphics pipeline works, and I plan to ditch it as soon as possible.
Godot is one of my reference engines, and reading Godot's rendering progress reports has helped me tremendously in learning new graphics algorithms. I have always wanted to write something similar, so I will post something like this at the end of every month to give an overview of which algorithms I experimented with that month. These posts will only describe each algorithm at a surface level; all of them will be explained in detail once I finish developing the feature wishlist here. One of the things I noticed when learning graphics is the lack of beginner-friendly resources. Most articles I found assume you have written a rendering engine before, unless they cover some ancient, outdated algorithm that isn't used anymore.
Specification
– Metallic-workflow physically based materials (metallic, roughness, albedo, and normal). The material system here is based on this paper.
– Shadow calculation using cascaded shadow map with PCF 3×3 filtering.
– Screen space reflection based on this presentation by EA.
– Real-time global illumination using voxel structure based on this paper.
– Image-based lighting.
– OpenGL 4.5. I am using this version of OpenGL for its compute shader support, so this engine is Windows-only for now.
Cascaded Shadow Map
Shadow mapping techniques are still popular in today's game engines for real-time shadows. The problem with traditional shadow mapping is the amount of resolution required to get unpixelated shadows for nearby objects, due to perspective aliasing. If you pause for a second and look at your surroundings, you will notice that you can see more objects the further they are from your viewing position, so a lot of the shadow map's resolution is wasted on far objects. But if you think about it, we don't really need super-detailed shadows for objects that are far away. This is where cascaded shadow maps come into the picture: we split the view frustum into a number of cascades, and each cascade gets its own shadow map. This way we have more resolution density for near objects.
Note: This visualization is a simplification of the actual shadow map allocation.
Scene with cascade markers: red (C1), green (C2), blue (C3), purple (C4). Shadow map: C1 (bottom-left), C2 (bottom-right), C3 (top-left), C4 (top-right).
Similar to many graphics algorithms, the devil is in the details. How do you blend between cascades? How do you keep the shadows stable and stop them from shimmering when the camera moves?
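To make the cascade setup concrete, here is a minimal sketch of how the split distances can be chosen, blending a logarithmic and a uniform distribution (the "practical split scheme" from the Parallel-Split Shadow Maps paper). The function name and the lambda parameter are illustrative, not Soul's actual code:

```cpp
#include <cmath>
#include <vector>

// Compute the far plane of each cascade by blending a logarithmic split
// (dense near the camera) with a uniform split. lambda = 1 is fully
// logarithmic, lambda = 0 fully uniform. Illustrative sketch only.
std::vector<float> ComputeCascadeSplits(float nearZ, float farZ,
                                        int cascadeCount, float lambda) {
    std::vector<float> splits(cascadeCount);
    for (int i = 0; i < cascadeCount; ++i) {
        float p = static_cast<float>(i + 1) / static_cast<float>(cascadeCount);
        float logSplit = nearZ * std::pow(farZ / nearZ, p);
        float uniformSplit = nearZ + (farZ - nearZ) * p;
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniformSplit;
    }
    return splits;
}
```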
Stochastic Screen Space Reflection
When I read Godot's screen space reflection algorithm, I had no idea why it is written the way it is, and I could not find any explanation of it. After searching for more information about SSR, I came across an EA presentation that explains Frostbite's SSR in detail.
To get a correct PBR reflection, we have to account for every light path that hits the pixel by doing Monte Carlo integration for each pixel. In simple terms, that means taking a lot of samples by shooting a bunch of random rays in every direction, and we need a lot of samples to achieve convergence. One way to reduce the number of samples is importance sampling, but even with importance sampling the sample count is not viable in a real-time setting. So some smart folks at EA had an interesting idea: what if we use the samples of neighboring pixels as samples for the current pixel? This way we can reduce the number of samples per pixel, and later, when we resolve the samples, we weight them based on the local material properties.
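To illustrate the importance sampling step, here is the standard GGX importance-sampling routine (the formulation popularized by Epic's UE4 course notes), written as a generic C++ sketch rather than Soul's shader code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Importance-sample the GGX normal distribution: map two uniform random
// numbers (u1, u2) to a tangent-space half-vector concentrated around the
// surface normal, with the spread controlled by roughness.
Vec3 ImportanceSampleGGX(float u1, float u2, float roughness) {
    const float kPi = 3.14159265358979f;
    float a = roughness * roughness;
    float phi = 2.0f * kPi * u1;
    // Invert the GGX CDF: smooth surfaces pull cosTheta toward 1 (the normal).
    float cosTheta = std::sqrt((1.0f - u2) / (1.0f + (a * a - 1.0f) * u2));
    float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}
```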
In my engine, I split this algorithm into two parts. The first is the screen space ray tracing part: I generate one ray for each pixel using importance sampling, then trace the ray in fixed steps and do a binary refinement to find the reflected pixel. In the second part, I resolve the reflection value using the sample from the previous part along with the neighboring samples. Like every screen space algorithm, the problem with SSR is that it can only reflect surfaces that are visible on screen. In my engine I augment this technique with Voxel GI; some other engines use reflection probes as a fallback.
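Here is a minimal sketch of the ray marching part, with fixed steps and binary refinement. SampleSceneDepth() is a hypothetical stand-in for the depth-buffer lookup (stubbed out so the sketch compiles), and the sketch assumes view-space depth that grows away from the camera; in the engine this logic runs in a shader:

```cpp
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Placeholder for the depth-buffer lookup at the ray's screen position;
// the engine reads the real depth buffer here.
float SampleSceneDepth(Vec3 /*pos*/) { return 1.0e9f; }

struct Hit { bool found; Vec3 position; };

// March the reflection ray in fixed steps until it goes behind the depth
// buffer, then bisect the last step to refine the intersection point.
Hit TraceScreenSpaceRay(Vec3 origin, Vec3 dir, float stepSize,
                        int maxSteps, int refineSteps) {
    Vec3 pos = origin;
    for (int i = 0; i < maxSteps; ++i) {
        pos = pos + dir * stepSize;
        if (pos.z > SampleSceneDepth(pos)) {        // ray passed behind geometry
            float step = stepSize * 0.5f;
            for (int j = 0; j < refineSteps; ++j) { // binary refinement
                pos = (pos.z > SampleSceneDepth(pos)) ? pos - dir * step
                                                      : pos + dir * step;
                step *= 0.5f;
            }
            return { true, pos };
        }
    }
    return { false, pos };
}
```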
Voxel Global Illumination
When I saw the Voxel GI demo by CryEngine, I was completely hooked by this algorithm. Real-time dynamic GI has always been an interesting open problem in computer graphics. Real-time means it has to be computed at 30 fps or more. Dynamic means the lights and geometry in the scene can change position, direction, and so on at any moment. Global illumination is a solution to the question 'how do you simulate light bouncing around the scene?'. In the old days, we used a flat ambient value across the scene, a really cheap method that did make scenes a little more realistic at the time. Nowadays most engines offer a precomputed static GI solution. Currently, I don't implement any form of precomputed GI at all besides skybox image-based lighting.
Comparison between direct lighting, flat ambient, and Voxel GI techniques:

Direct lighting + flat ambient: the color of surfaces that don't receive direct lighting stays the same.
Direct lighting + Voxel GI: notice the bounce lighting in the top area of the scene.
The general idea of this algorithm is to create a voxel structure where each voxel stores the light information of everything inside it. Previously, we talked about Monte Carlo integration, a technique to calculate the integral of all incoming light by shooting thousands of random rays in every direction; this is the method offline path tracers use to create animated films. It is impossible to do that in a real-time scenario (maybe NVIDIA's real-time ray tracing could do it; I am not sure, as I haven't touched it yet). Instead, we replace all those ray tracing operations with a much smaller number of cone tracing operations that sample the voxel structure we constructed. It will not be as correct as ray tracing, but as shown in the demo, it gives incredible results with reasonable performance on the current hardware generation.
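To give a feel for what a single cone trace does, here is a sketch that assumes a hypothetical SampleVoxel() fetching from the mipmapped voxel 3D texture (stubbed out here so it compiles); the real version lives in a shader:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

struct Vec4 { float r, g, b, a; };

// Placeholder for a trilinear fetch from the mipmapped voxel 3D texture;
// the engine samples the real texture here.
Vec4 SampleVoxel(Vec3 /*worldPos*/, float /*mipLevel*/) { return { 0, 0, 0, 0 }; }

// Trace one cone through the voxel structure. The cone's diameter grows
// with distance, so far samples come from coarser mips; color and opacity
// are accumulated front-to-back until the cone is fully occluded.
Vec4 TraceCone(Vec3 origin, Vec3 dir, float coneAngle,
               float maxDistance, float voxelSize) {
    Vec3 color = { 0.0f, 0.0f, 0.0f };
    float alpha = 0.0f;
    float dist = voxelSize;                       // offset to avoid self-sampling
    while (dist < maxDistance && alpha < 1.0f) {
        float diameter = 2.0f * dist * std::tan(coneAngle * 0.5f);
        float mip = std::max(0.0f, std::log2(diameter / voxelSize));
        Vec4 s = SampleVoxel(origin + dir * dist, mip);
        color = color + Vec3{ s.r, s.g, s.b } * ((1.0f - alpha) * s.a);
        alpha += (1.0f - alpha) * s.a;            // front-to-back alpha blending
        dist += std::max(diameter * 0.5f, voxelSize * 0.5f); // step with cone width
    }
    return { color.x, color.y, color.z, alpha };
}
```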
Another great thing about Voxel GI is that we can get rough reflections even when the reflected surface is not on screen. We can use this as a fallback for the SSSR technique.
Comparison: SSSR only vs. SSSR + Voxel GI.
Compared to the original paper, there are two changes in my implementation. First, I don't use an octree to store the voxel information; similar to Godot, I use a 3D texture to store the voxel structure. It is faster to access a 3D texture on the GPU than to traverse an octree. The problem with a 3D texture is the amount of memory it requires, even for a really small scene. An interesting middle ground is a 3D clipmap: the idea is similar to cascaded shadow maps in that faraway objects don't need as much detail as closer ones, so we allocate more resolution for closer objects. Second, I don't implement anisotropy in my voxel structure, meaning I don't store any directional information. Having directional information in the voxel structure takes six times more space, and with eight times more space I could double the dimensions of the 3D texture. This makes me unsure whether anisotropic voxelization is worth the cost; I need to research this further.
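To make the space trade-off concrete, here is the back-of-the-envelope arithmetic behind the six-versus-eight comparison, assuming an RGBA8 voxel format and illustrative resolutions (not necessarily Soul's actual configuration):

```cpp
#include <cstdio>

int main() {
    // RGBA8 = 4 bytes per voxel. Anisotropic storage costs 6x the base,
    // while doubling the resolution costs 8x. Resolutions are illustrative.
    const long long iso256 = 256LL * 256 * 256 * 4;   //  64 MiB isotropic
    const long long aniso256 = iso256 * 6;            // 384 MiB, 6 directions
    const long long iso512 = 512LL * 512 * 512 * 4;   // 512 MiB, 8x the base
    std::printf("256^3 isotropic:   %lld MiB\n", iso256 >> 20);
    std::printf("256^3 anisotropic: %lld MiB\n", aniso256 >> 20);
    std::printf("512^3 isotropic:   %lld MiB\n", iso512 >> 20);
    return 0;
}
```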
Render Sequence
- Generate shadow maps.
- Z-pass.
- Generate the G-buffer and calculate lighting.
- Gaussian-blur the diffuse lighting information.
- Screen space ray tracing.
- Inject light into the voxel structure.
- Generate voxel mipmaps.
- Final gather + tonemap.
- Render the skybox.
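Expressed as code, one frame looks roughly like this; every name below is a hypothetical stand-in for the corresponding pass, not Soul's actual API:

```cpp
// Empty stand-ins so the sketch compiles; the engine's real passes go here.
struct Scene {};
struct Camera {};
void GenerateShadowMaps(const Scene&) {}
void DepthPrepass(const Scene&, const Camera&) {}
void GBufferAndLighting(const Scene&, const Camera&) {}
void BlurDiffuseLighting() {}
void ScreenSpaceRayTrace(const Camera&) {}
void InjectLightIntoVoxels(const Scene&) {}
void GenerateVoxelMipmaps() {}
void FinalGatherAndTonemap(const Camera&) {}
void RenderSkybox(const Camera&) {}

// One frame of the render sequence, in order.
void RenderFrame(const Scene& scene, const Camera& camera) {
    GenerateShadowMaps(scene);          // one map per cascade
    DepthPrepass(scene, camera);        // z-pass
    GBufferAndLighting(scene, camera);  // g-buffer + direct lighting
    BlurDiffuseLighting();              // Gaussian blur on diffuse term
    ScreenSpaceRayTrace(camera);        // SSSR, part one
    InjectLightIntoVoxels(scene);       // light into voxel structure
    GenerateVoxelMipmaps();             // coarser mips for cone tracing
    FinalGatherAndTonemap(camera);      // resolve reflections + GI, tonemap
    RenderSkybox(camera);
}
```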
Known Artefacts

A lot of aliasing problems. Currently, Soul doesn't implement any kind of anti-aliasing solution, which is especially noticeable when the camera moves.
When screen space reflection is activated, it sometimes creates an annoying artifact: a sort of wave on the reflection that moves as the camera moves. The part of the scene that cannot be reflected changes whenever the camera changes, which causes the screen space ray tracing pass to give inconsistent results from frame to frame.
Next Milestones
- Temporal anti-aliasing.
- Point lights, spot lights, and area lights.
- More material properties.
References
- https://blog.selfshadow.com/publications/s2013-shading-course/hoffman/s2013_pbs_physics_math_notes.pdf — I really recommend reading this if you are new to photorealistic rendering; I find it to be the easiest explanation of how light works and how its behavior is abstracted in computer graphics.
- https://github.com/google/filament — Read this if you want to extend your material model. They provide a really comprehensive explanation of their material model and list a lot of the formulas they use, but it is still better to read the theory and understand how the formulas work.
- https://www.ea.com/frostbite/news/stochastic-screen-space-reflections
- https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf
- https://research.nvidia.com/sites/default/files/pubs/2011-09_Interactive-Indirect-Illumination/GIVoxels-pg2011-authors.pdf
- https://github.com/jose-villegas/VCTRenderer — I find this easier to understand than the original paper on how Voxel GI works, so definitely read it if you want to implement Voxel GI in your work.
- https://github.com/godotengine/godot
- https://learnopengl.com (If you don't know how to use OpenGL, this is the best place to learn it for free. I find the PBR part quite difficult to follow if you want to understand the nitty-gritty details, so I recommend reading the PBR theory from the references above first.)