This post is not in Old English :).
A nice thing about having a toy path tracer is that you can play with it. Another nice thing about path tracing (and all ray tracing-like rendering methods) is that adding in reflections and panoramic cameras is generally very easy:
As a reminder, the only light in this scene is from the sky. The sky itself is a uniform emitter of white light, somewhat like a bright but overcast day.
But this post is not about panoramas or reflections, although it is nice to note that the denoiser copes correctly with non-diffuse lighting; I want to discuss ways to ‘optimize’ a path tracer.
Note the nice use of scare quotes there. In this case I don’t mean make the code faster (although that is certainly possible). Instead, I mean we want to make the code smarter.
Up until yesterday, when I was in need of something to distract my fevered brain, the path tracer repeatedly iterated over the image, drawing one output sample per pixel. These samples were averaged over a number of iterations and the result written out.
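That baseline loop can be sketched roughly as follows. This is only an illustration of the scheme described above, not the post's actual code; `trace_pixel` is a hypothetical stand-in for firing an eye ray and path tracing it.

```python
import numpy as np

def render_uniform(trace_pixel, height, width, n_iterations):
    """One sample per pixel per iteration, averaged at the end.

    `trace_pixel(y, x)` is a hypothetical callback returning one
    radiance sample for the given pixel.
    """
    accum = np.zeros((height, width))
    for _ in range(n_iterations):
        for y in range(height):
            for x in range(width):
                # Every pixel gets exactly one new sample, interesting or not.
                accum[y, x] += trace_pixel(y, x)
    return accum / n_iterations
```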
This is all fine and dandy but something of a waste. Consider the sky in the image above. Every time I fire an eye ray through a sky pixel, I'll hit sky. And since the sky is uniform I won't get a terribly interesting result. Nor will that result change much as I draw more samples. Surely it would be better to concentrate my efforts on the areas under the arches where more samples are required to get a meaningful output.
Of course it is easy to say this but hard to implement. How does the path tracer ‘know’ where the pixels it should concentrate on are? Aha! This is where keeping track of the sample statistics suddenly becomes useful.
We want to come up with a measure which is high when the samples we draw from a pixel differ greatly from each other — implying we need to draw a large number to get a good estimate for the mean — and low when the samples are similar.
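One cheap way to keep such per-pixel statistics is Welford's online algorithm, which maintains a running mean and variance without storing every sample. This is a sketch under my own naming, not the post's code, and it assumes each pixel appears at most once per update call:

```python
import numpy as np

class PixelStats:
    """Running per-pixel sample statistics via Welford's online update."""

    def __init__(self, height, width):
        self.count = np.zeros((height, width))
        self.mean = np.zeros((height, width))
        self.m2 = np.zeros((height, width))  # sum of squared deviations

    def add_samples(self, ys, xs, values):
        """Fold new luminance samples at pixel coords (ys, xs) into the stats.

        Assumes each (y, x) pair occurs at most once per call, so the
        fancy-indexed updates don't collide.
        """
        self.count[ys, xs] += 1
        delta = values - self.mean[ys, xs]
        self.mean[ys, xs] += delta / self.count[ys, xs]
        self.m2[ys, xs] += delta * (values - self.mean[ys, xs])

    def relative_variance(self, eps=1e-6):
        """Sample variance divided by the mean: high where samples disagree,
        low where they agree (e.g. the uniform sky)."""
        var = np.where(self.count > 1,
                       self.m2 / np.maximum(self.count - 1, 1), 0.0)
        return var / (self.mean + eps)
```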
Of course the variance (or at least the variance relative to the mean) gives a measure with these properties. Yesterday I modified the path tracer to, after each iteration, assign a sampling likelihood to each pixel based on its relative variance — those with higher relative variance were more likely to be sampled in the next iteration. I then drew a set of pixel locations to be sampled from these likelihoods and iterated.
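Drawing the next iteration's pixel locations from those likelihoods might look something like this. Again a hedged sketch, not the actual implementation: the `floor` parameter is my own addition, mixing in a small uniform component so no pixel is ever starved of samples entirely.

```python
import numpy as np

def sample_pixel_locations(rel_variance, n_samples, rng=None, floor=0.05):
    """Draw pixel coordinates with probability proportional to relative
    variance, blended with a small uniform floor."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = rel_variance.shape
    weights = np.maximum(rel_variance.ravel().astype(float), 0.0)
    total = weights.sum()
    uniform = np.full(h * w, 1.0 / (h * w))
    if total > 0:
        # (1 - floor) of the effort follows the variance, floor is uniform.
        p = (1.0 - floor) * weights / total + floor * uniform
    else:
        p = uniform  # first iteration: no statistics yet, sample uniformly
    flat = rng.choice(h * w, size=n_samples, p=p)
    return np.unravel_index(flat, (h, w))  # (ys, xs) arrays
```

The uniform floor also lets pixels whose variance estimate happens to be misleadingly low (from an unlucky early sample or two) eventually be revisited.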
The result? After a few iterations, there was more sampling effort being directed to the ‘dark’ areas under the arches with complex lighting and less to the more easily calculated areas.
The following image shows the sample counts per pixel, white being the most samples per pixel and black being the least. Notice how the path tracer concentrates on 'interesting' areas like the under-arch area and areas of fine detail. The tracer quickly realises the sky and strong reflections are relatively uninteresting.
So does this speed up the renderer? Well, it depends, of course, on how you measure speed. It generates exactly as many samples per second as before, but now the level of noise in the image is more uniform, meaning that one isn't left running iteration after iteration 'waiting for the sodding arches to fill in'.