More pretty pictures


May. 8th, 2009 @ 09:11 am

This post is not in Old English :).

A nice thing about having a toy path tracer is that you can play with it. Another nice thing about path tracing (and all ray tracing-like rendering methods) is that adding in reflections and panoramic cameras is generally very easy:

Denoised output

As a reminder, the only light in this scene is from the sky. The sky itself is a uniform emitter of white light, somewhat like a bright but overcast day.

But this post is not about panoramas or reflections, although it is nice to note that the denoiser copes correctly with non-diffuse lighting; I want to discuss ways to ‘optimize’ a path tracer.

Note the nice use of scare quotes there. In this case I don’t mean make the code faster (although that is certainly possible). Instead, I mean we want to make the code smarter.

Up until yesterday, when I was in need of something to distract my fevered brain, the path tracer repeatedly iterated over the image, drawing one output sample per pixel. These samples were averaged over a number of iterations and the result written out.

This is all fine and dandy but something of a waste. Consider the sky in the image above. Every time I fire an eye ray through a sky pixel, I'll hit sky, and since the sky is uniform I won't get a terribly interesting result. This result also won't change much as I draw more samples. Surely it would be better to concentrate my efforts on the areas under the arches where more samples are required to get a meaningful output.

Of course it is easy to say this but hard to implement. How does the path tracer ‘know’ where the pixels it should concentrate on are? Aha! This is where keeping track of the sample statistics suddenly becomes useful.
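The original post doesn't show the tracer's internals, but 'keeping track of the sample statistics' can be done incrementally without storing every sample. A minimal sketch using Welford's online algorithm (the class name, shapes, and NumPy implementation are my assumptions, not the original code):

```python
import numpy as np

class PixelStats:
    """Running per-pixel mean and variance via Welford's online algorithm.

    Illustrative only: the original tracer's data layout is unknown.
    """
    def __init__(self, height, width):
        self.count = np.zeros((height, width))
        self.mean = np.zeros((height, width))
        self.m2 = np.zeros((height, width))  # running sum of squared deviations

    def add_sample(self, y, x, value):
        # Update count, mean, and m2 in one pass per new sample.
        self.count[y, x] += 1
        delta = value - self.mean[y, x]
        self.mean[y, x] += delta / self.count[y, x]
        self.m2[y, x] += delta * (value - self.mean[y, x])

    def variance(self):
        # Unbiased sample variance; pixels with fewer than 2 samples report 0.
        return np.where(self.count > 1,
                        self.m2 / np.maximum(self.count - 1, 1), 0.0)
```

The point of the one-pass update is that the per-pixel memory cost stays constant no matter how many iterations have run.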

We want to come up with a measure which is high when the samples we draw from a pixel differ greatly from each other — implying we need to draw a large number to get a good estimate for the mean — and low when the samples are similar.

Of course the variance (or at least the relative variance when compared to the mean) gives a measure with these properties. Yesterday I modified the path tracer to, after each iteration, assign a sampling likelihood to each pixel based on its relative variance: those with higher relative variance were more likely to be sampled in the next iteration. I then drew a set of pixel locations to be sampled from these likelihoods and iterated.
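The likelihood-then-draw step above can be sketched as follows. This is a guess at the scheme, not the original code: the `eps` floor (which keeps every pixel reachable and guards against division by zero on black pixels) is my own assumption.

```python
import numpy as np

def sampling_probabilities(mean, variance, eps=1e-6):
    """Turn per-pixel relative variance into a normalised sampling distribution."""
    rel_var = variance / (mean + eps)  # variance relative to the mean
    p = rel_var + eps                  # floor so no pixel is starved entirely
    return p / p.sum()

def draw_pixel_locations(probs, n_samples, rng):
    """Draw (y, x) pixel locations according to the likelihood map."""
    flat = probs.ravel()
    idx = rng.choice(flat.size, size=n_samples, p=flat)
    return np.unravel_index(idx, probs.shape)
```

With this in place, each iteration draws its budget of samples from `draw_pixel_locations` instead of visiting every pixel once.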

The result? After a few iterations, there was more sampling effort being directed to the ‘dark’ areas under the arches with complex lighting and less to the more easily calculated areas.

The following image shows the sample counts per pixel, white being the most samples per pixel and black being the least. Notice how the path tracer concentrates on ‘interesting’ areas like the region under the arches and areas of fine detail. The tracer quickly realises the sky and strong reflections are relatively uninteresting.

Sample counts

So does this speed up the renderer? Well, it depends of course on how you measure speed. It generates exactly as many samples/second as before but now the level of noise in the image is more uniform meaning that one isn’t left running iteration after iteration ‘waiting for the sodding arches to fill in’.

From: mas90, May 8th, 2009 11:03 am (UTC):
Is it always going to be the case that the areas where more samples are needed are the darker areas, or is that a peculiarity of this particular lighting arrangement?
From: filecoreinuse, May 8th, 2009 12:09 pm (UTC):
The simplest counterexample is a dark brown ball next to a white ball, both with identical lighting. The brown ball will be darker but will need the same level of sampling as the white one. The level of sampling required depends on the geometry of the scene more than the colour of the objects.

Similarly, areas of high detail (like the border of the sphere in the image above) may have the same brightness as the rest of the sphere but need more sampling, since there is more 'interest' happening at the edge: one pixel can cover both the edge of the sphere and the back wall.