Soul Engine Devlog #Jan2019

Introduction

Soul is a rendering engine that I have been working on for the past couple of months. Note that it is not a game engine; a game engine is just too unrealistic and ambitious for me right now. In fact, I am developing this engine as a first pass to understand how the graphics pipeline works, and I plan to ditch it as soon as possible.

Godot is one of my reference engines, and reading Godot’s rendering progress reports has helped me tremendously in learning new graphics algorithms. Thus, I have always wanted to write something like this. I will post something like this at the end of every month to give an overview of what algorithms I am experimenting with that month. These posts will only describe the algorithms at a surface level; every algorithm will be explained in detail once I finish developing everything on the feature wishlist here. One of the things I noticed when learning graphics is the lack of beginner-friendly resources. Most articles I found assume you have written a rendering engine before, unless you are reading about some ancient, outdated algorithm that isn’t used anymore.

Specification

– Metallic-workflow physically based material (metallic, roughness, albedo and normal). The material system here is based on this paper.
– Shadow calculation using cascaded shadow maps with 3×3 PCF filtering.
– Screen space reflection based on this presentation by EA.
– Real-time global illumination using a voxel structure, based on this paper.
– Image-based lighting.
– OpenGL 4.5. I am using this version of OpenGL for compute shaders, so this engine is Windows only.

Cascaded Shadow Map

Shadow map techniques are still popular in today’s game engines for getting real-time shadows. The problem with traditional shadow mapping is the amount of resolution required to get unpixelated shadows on near objects, due to perspective aliasing. If you pause for a second and look at your surroundings, you will notice that you can see more objects the further they are from your viewing position. A lot of the shadow map’s resolution is therefore wasted on far objects. But if you think about it, we don’t really need super detailed shadows for objects that are far away. This is where cascaded shadow maps come into the picture: we split the view frustum into a number of cascades, and each cascade gets its own shadow map. This way we can spend more resolution density on near objects.

Similar to many graphics algorithms, the devil is in the details. How do you blend between cascades? How do you keep the shadows stable and stop them from flickering? If you want to implement this, Microsoft has a really nice article that discusses the technique in detail.
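To make the splitting step concrete, here is a minimal sketch of computing the cascade split distances with the usual blend of uniform and logarithmic schemes (the function name and the lambda parameter are my own illustration, not Soul’s actual code):

    #include <cmath>
    #include <vector>

    // Returns the far distance of each cascade along [nearZ, farZ].
    // lambda = 0 gives uniform splits, lambda = 1 gives logarithmic
    // splits; values in between trade near detail against far coverage.
    static std::vector<float> ComputeCascadeSplits(int cascadeCount, float nearZ,
                                                   float farZ, float lambda)
    {
        std::vector<float> splits(cascadeCount);
        for (int i = 0; i < cascadeCount; ++i) {
            float p = static_cast<float>(i + 1) / cascadeCount;
            float logSplit = nearZ * std::pow(farZ / nearZ, p);
            float uniSplit = nearZ + (farZ - nearZ) * p;
            splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
        }
        return splits;
    }

A lambda somewhere around 0.5 to 0.75 is a common starting point; the Microsoft article above covers picking the splits and keeping each cascade’s projection stable.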

Stochastic Screen Space Reflection

When I read Godot’s screen space reflection code, I had no idea why it is the way it is, and I could not find any explanation of the algorithm. So after digging for information about SSR, I came across an EA presentation that explains Frostbite’s SSR in detail.

To get a correct PBR reflection, we have to integrate all the light that hits a pixel, which we approximate with Monte Carlo integration. In simple terms, that means taking a lot of samples by shooting a bunch of random rays in every direction, and we need a lot of samples to achieve convergence. One way to reduce the number of samples is importance sampling, but even with importance sampling, the sample count is not viable in a real-time setting. So some smart people at EA had an interesting idea: what if we reuse the neighboring pixels’ samples as samples for the current pixel? This way we can reduce the number of samples per pixel, and later, when we resolve the samples, we weight them based on the local material characteristics.
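As a concrete sketch of the importance-sampling step, this is roughly how a GGX half-vector is drawn in tangent space so that rays cluster around the specular lobe (plain C++ rather than shader code, and the names are mine):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Draws a GGX microfacet half-vector around the +Z axis in tangent
    // space. u1 and u2 are uniform random numbers in [0, 1); roughness
    // comes from the gbuffer. Rougher surfaces give wider lobes, i.e.
    // blurrier reflections.
    static Vec3 SampleGGX(float u1, float u2, float roughness)
    {
        const float a = roughness * roughness;
        // Inverting the GGX distribution's CDF concentrates samples where
        // the lobe is strongest, which is what importance sampling means.
        const float cosTheta = std::sqrt((1.0f - u1) / (1.0f + (a * a - 1.0f) * u1));
        const float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
        const float phi = 2.0f * 3.14159265f * u2;
        return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
    }

The reflection ray is then the view vector mirrored about this half-vector, so rough surfaces spread their rays wider than smooth ones.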

In my engine, I split this algorithm into two parts. The first is the screen space ray tracing part: I generate one ray per pixel using importance sampling, then trace the ray in fixed steps and do binary refinement to find the reflected pixel. In the second part, I resolve the reflection value using the sample from the previous part along with the neighboring samples. Like every screen space algorithm, the problem with SSR is that it can only reflect surfaces that are visible on screen. In my engine I augment this technique with voxel GI; some other engines use reflection probes as a fallback.
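A minimal sketch of that first part, assuming screen-space positions with depth stored in z (SampleDepth is a stand-in for the depth-buffer fetch, and my real version differs in detail):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // Stand-in for a depth-buffer fetch at screen position (x, y);
    // a real implementation samples the depth texture instead.
    static float SampleDepth(float x, float y) { return 1.0f; }

    // Marches a reflection ray in fixed steps through screen space, then
    // binary-refines between the last two positions once the ray dips
    // below the stored depth, which locates the reflected pixel.
    static bool TraceRay(Vec3 origin, Vec3 dir, float stepSize, int maxSteps, Vec3& hit)
    {
        Vec3 pos = origin;
        for (int i = 0; i < maxSteps; ++i) {
            Vec3 prev = pos;
            pos = pos + dir * stepSize;                   // fixed-step march
            if (pos.z >= SampleDepth(pos.x, pos.y)) {     // crossed the surface
                for (int j = 0; j < 8; ++j) {             // binary refinement
                    Vec3 mid = (prev + pos) * 0.5f;
                    if (mid.z >= SampleDepth(mid.x, mid.y)) pos = mid;
                    else prev = mid;
                }
                hit = pos;
                return true;
            }
        }
        return false; // ray left the screen or hit nothing
    }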

Voxel Global Illumination

When I saw the Voxel GI demo by CryEngine, I was completely hooked by this algorithm. Real-time dynamic GI has always been an interesting open problem in computer graphics. Real-time means it has to be computed at 30 fps or more. Dynamic means the lights and geometry in the scene can change position, direction, etc. at any moment. Global illumination is a solution to the question ‘how do you simulate light bouncing around the scene?’. In the old days, we used a flat ambient value across the whole scene, a really cheap method that did make scenes a little more realistic back then. Nowadays most engines offer a precomputed static GI solution. Currently, I don’t implement any form of precomputed GI at all besides skybox image-based lighting.

Comparison between direct lighting, flat ambient and Voxel GI techniques:

Direct lighting only
Direct lighting + Flat ambient
Direct lighting + Voxel diffuse + Voxel specular

The general idea of this algorithm is to create a voxel structure where each voxel stores the light information of everything inside it. Previously we talked about Monte Carlo integration, a technique to calculate the integral of all incoming light by shooting thousands of random rays in every direction. This is actually the method used by offline path tracers to create animated films. It is impossible to do that in a real-time scenario (maybe NVIDIA real-time ray tracing could do it; I am not sure, as I haven’t touched it yet). Instead, what we do here is replace all those ray tracing operations with a much smaller number of cone tracing operations that sample the voxel structure we construct. It will not be as correct as ray tracing, but as shown in the demo, it gives incredible results with reasonable performance on the current hardware generation.
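A minimal sketch of a single cone trace, assuming the voxels live in a mipmapped 3D texture (SampleVoxels and the step sizes are illustrative, not my exact implementation):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec4 { float r, g, b, a; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // Stand-in for a trilinear fetch from the mipmapped 3D voxel texture.
    static Vec4 SampleVoxels(Vec3 pos, float mip) { return { 0, 0, 0, 0 }; }

    // Marches one cone through the voxel grid. The sample footprint grows
    // with distance, so farther samples read coarser mips; front-to-back
    // alpha compositing lets lit voxels near the origin occlude ones behind.
    static Vec4 ConeTrace(Vec3 origin, Vec3 dir, float coneAngle,
                          float maxDist, float voxelSize)
    {
        Vec4 result = { 0, 0, 0, 0 };
        float dist = voxelSize; // start one voxel out to avoid self-sampling
        while (dist < maxDist && result.a < 1.0f) {
            float diameter = 2.0f * dist * std::tan(coneAngle * 0.5f);
            float mip = std::log2(std::max(diameter / voxelSize, 1.0f));
            Vec4 s = SampleVoxels(origin + dir * dist, mip);
            float w = (1.0f - result.a) * s.a; // front-to-back blend weight
            result.r += w * s.r;
            result.g += w * s.g;
            result.b += w * s.b;
            result.a += w;
            dist += std::max(diameter * 0.5f, voxelSize * 0.25f);
        }
        return result;
    }

For diffuse GI you trace a handful of such cones over the hemisphere of the surface normal and weight them by cosine; a single tighter cone along the reflection vector gives the rough specular shown above.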

Another great thing about voxel GI is that we can get rough reflections even when the reflected surface is not on screen, so we can use it as a fallback for the SSSR technique.

Compared to the original paper, there are two changes I made in my implementation. First, I don’t use an octree to store the voxel information; similar to Godot, I use a 3D texture to store the voxel structure. It is faster to access a 3D texture on the GPU than to traverse an octree. The problem with a 3D texture is the amount of space it requires even for a really small scene. An interesting middle ground is a 3D clipmap. It is a similar idea to cascaded shadow maps: far away objects don’t need as much detail as closer objects, so we allocate more resolution for closer objects. Second, I don’t implement anisotropy in my voxel structure, which means I don’t store any directional information. Having directional information in the voxel structure takes six times more space, and with eight times more space I could double the dimensions of the 3D texture instead. This makes me unsure whether anisotropic voxelization is worth the cost. I need to research this further to see whether it is really worth it.
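To put rough numbers on that tradeoff (illustrative figures, not Soul’s actual allocation): a 128³ RGBA8 voxel texture takes 128³ × 4 bytes = 8 MB, so six anisotropic directional copies cost 48 MB, while spending eight times the memory (64 MB) on an isotropic 256³ texture instead doubles the resolution along every axis.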

Render Sequence

  1. Generate shadow maps.
  2. Z-pass.
  3. Generate gbuffer and calculate lighting.
  4. Do Gaussian blur on diffuse lighting information.
  5. Screen space ray tracing.
  6. Inject light into the voxel structure.
  7. Generate voxel mipmap.
  8. Final gather + tonemap.
  9. Render skybox.
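Roughly, one frame looks like the sketch below. The pass functions are placeholder stubs mirroring the list above, not the engine’s real API:

    // Hypothetical pass stubs; the real engine's functions differ.
    static void GenerateShadowMaps() {}
    static void DepthPrepass() {}
    static void RenderGBufferAndLighting() {}
    static void BlurDiffuseLighting() {}
    static void ScreenSpaceRayTrace() {}
    static void InjectLightIntoVoxels() {}
    static void GenerateVoxelMipmaps() {}
    static void FinalGatherAndTonemap() {}
    static void RenderSkybox() {}

    // One frame of the render sequence, mirroring the numbered list above.
    static void RenderFrame()
    {
        GenerateShadowMaps();        // 1. cascaded shadow maps
        DepthPrepass();              // 2. Z-pass
        RenderGBufferAndLighting();  // 3. gbuffer + direct lighting
        BlurDiffuseLighting();       // 4. Gaussian blur on diffuse
        ScreenSpaceRayTrace();       // 5. SSR ray tracing pass
        InjectLightIntoVoxels();     // 6. voxel light injection
        GenerateVoxelMipmaps();      // 7. voxel mip chain
        FinalGatherAndTonemap();     // 8. resolve + tonemap
        RenderSkybox();              // 9. skybox last, where depth is still empty
    }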

Known Artefacts

Light leaks caused by Voxel GI. One of the problems with voxel GI is that light can go through thin geometry. In my engine, though, I suspect that my lack of an anisotropy implementation contributes to some of these light leaks, which is why I have decided to check whether anisotropy improves the problem. Other than anisotropy, it seems like implementing some sort of screen space ambient occlusion would also help a little bit.

Notice how the part of the wall just below the roof weirdly receives more light than its surroundings.

A lot of aliasing problems. Currently, Soul doesn’t implement any kind of anti-aliasing solution. This problem is especially noticeable when the camera moves.

Specular aliasing. See how the reflective parts of the curtain glitter and flicker when the camera moves.

When screen space reflection is activated, it sometimes creates an annoying artifact: a sort of wave on the reflection that moves as the camera moves. The part of the scene that cannot be reflected changes whenever the camera moves, which causes my screen space ray tracing pass to produce a really view-dependent result. I need to go back to this part and see whether the problem is inherent to the algorithm itself or there is some sort of bug in my implementation. I do use a different ray tracing algorithm than the one presented by EA.

Notice part of the floor below the vase.

Next milestone

  • Temporal anti-aliasing.
  • Point light, spot light and area light.
  • More material properties.
