Experimental Adaptive Sampling

We wanted to try our adaptive sampler in Arnold. Arnold is a ‘brute force’ renderer that relies mostly on importance and multiple importance sampling techniques, without any need for an adaptive sampler: the renderer itself is already faster than most renderers out there that make use, in one way or another, of adaptive sampling. We believe Arnold gets its speed mainly from its underlying geometry intersection routines, high-performance ray tracing kernels, and careful data structure alignment that fully exploits the power of your rendering machine.

So why not see if this still holds in the presence of an adaptive sampler? Technically it means: how far can we push adaptive sampling data structures and implementations against ‘brute force’ ray tracing code that fits better with, and in a way fully coalesces with, your hardware? Or in other words, is MIS (i.e. weighting the samples) more transparent to ray tracing than adaptive sampling(+IS) (i.e. distributing the samples)?
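To make that distinction concrete, here is a minimal C++ sketch of the two ideas (our own toy functions, not Arnold code): MIS keeps the sample count fixed and re-weights each sample, while adaptive sampling keeps the weighting fixed and varies the sample count.

```cpp
// Toy sketch, not Arnold code: the two strategies side by side.

// MIS answers "how much does each sample count?": samples drawn from two
// strategies are re-weighted, here with the balance heuristic.
float balanceHeuristic(float pdfA, float pdfB)
{
    return pdfA / (pdfA + pdfB);
}

// Adaptive sampling answers "how many samples do we take here?": the
// budget itself depends on a running error estimate.
int samplesNeeded(float errorEstimate, float threshold, int minSpp, int maxSpp)
{
    return (errorEstimate > threshold) ? maxSpp : minSpp;
}
```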

It may look like an innocent scene to render.. I mean, it’s just a simple sphere with an HDR found on the internet used as a dome light.. but it’s very difficult to sample, because the overall environment remains dim while there are a lot of tiny, super bright lights.. and one can crank up the sample settings as much as one wants and it will hardly get resolved with plain importance-based brute force sampling. For example, before giving up, I tried with 6400 samples (8×8 (camera) × 10×10 (specular)). Those are already prohibitive settings if one has a real scene to render, and not just a simple sphere. The problem is that they are still not enough.
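A quick back-of-the-envelope calculation shows why cranking up the samples barely helps here. Assume (our toy numbers, not measured from the HDR) that one of the tiny lights covers a fraction eps = 1e-5 of the sampled domain: the relative noise of a plain estimator decays only as 1/sqrt(N·eps), so even 6400 samples leave roughly 400% noise on that light’s contribution, i.e. fireflies.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Toy model (assumed numbers): a single tiny light covering a fraction
    // eps of the sampled domain.  For a uniform estimator the relative
    // noise of the pixel mean is ~ 1/sqrt(N * eps).
    const double eps = 1e-5;   // fraction of the domain the light covers
    const double N   = 6400.0; // samples per pixel, as in the test above
    std::printf("relative noise ~ %.0f%%\n", 100.0 / std::sqrt(N * eps));
    return 0;   // prints "relative noise ~ 395%"
}
```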

Not bad, but if we zoom in we can see how much still has to be done to get a fully converged image.

That will flicker like hell in animation. And that’s not all: if you take a look at the first raw image above, you may see that the tiny bright lights should spread across the metallic surface and create a kind of halo (typical of metals), instead of looking kind of clamped. Let’s see where adaptive sampling may help.

Not only are we 2x faster, we also better resemble the metallic appearance: instead of kind of clamping the overall reflection, the sampler just fills it with more samples where needed, creating a nice metallic halo. And if we zoom in, we can see that the image is almost fully converged.

Here we shoot just 4×4 camera rays, while the adaptive shading sampler is set to max: 2^8, min: 2^4 (threshold 0.001).. that means we shoot at most 4096 (4×4 × 256) and at least 256 (4×4 × 16) samples per pixel.
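As a rough idea of how such a shading-level adaptive loop could look under these settings, here is a minimal sketch using Welford’s online variance for the convergence test; shadeOnce() is a stand-in for one specular sample, and nothing below is Arnold API.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// shadeOnce() fakes the hard case above: mostly dim, rare very bright hits.
static std::mt19937 rng{42};
float shadeOnce()
{
    static std::uniform_real_distribution<float> u(0.f, 1.f);
    return u(rng) < 0.001f ? 500.f : 0.05f;
}

// Spend at least minSpp samples, then keep going until the variance of the
// running mean drops below the threshold or we hit maxSpp.
float adaptiveShade(int minSpp, int maxSpp, float threshold)
{
    float mean = 0.f, m2 = 0.f;          // Welford's online mean/variance
    int n = 0;
    while (n < maxSpp) {
        const float x = shadeOnce();
        ++n;
        const float d = x - mean;
        mean += d / n;
        m2   += d * (x - mean);
        if (n >= minSpp && m2 / (n * (n - 1.f)) < threshold)
            break;                       // converged early: save the budget
    }
    std::printf("used %d samples\n", n);
    return mean;
}

int main() { adaptiveShade(16, 256, 0.001f); return 0; }
```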

This was a comparison between brute force importance sampling and adaptive importance-based sampling.

Now let’s see if we gain something with multiple importance sampling (in the IS test above, the dome was a simple sphere with an environment texture wrapped on it). In the following we use an aiSkyDomeLight, which, afaik, is importance sampled and thus should provide MIS to the scene.
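For reference, this is the standard two-strategy MIS combination a dome light typically enables; a hedged sketch with the balance heuristic, not the aiSkyDomeLight internals.

```cpp
// Sketch only, not aiSkyDomeLight internals: one light sample plus one
// BSDF sample, combined with the balance heuristic.  'f' is BSDF * cos,
// 'L' is incoming radiance; each pdf is one strategy's density for the
// direction the corresponding sample was drawn along.
float misEstimate(float fL, float LL, float pdfLight_L, float pdfBsdf_L,
                  float fB, float LB, float pdfBsdf_B, float pdfLight_B)
{
    const float wL = pdfLight_L / (pdfLight_L + pdfBsdf_L); // light-sample weight
    const float wB = pdfBsdf_B  / (pdfBsdf_B  + pdfLight_B); // BSDF-sample weight
    return wL * fL * LL / pdfLight_L
         + wB * fB * LB / pdfBsdf_B;
}
```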

Here we lowered both the camera and spec samples a bit, to 6×6 and 8×8 respectively, but we had to crank the DomeLight samples up to 20 to begin to get some smooth highlights. Render time is 3x what we had with adaptive sampling.

However, as soon as we zoom in, so that the sphere fills the image and everything needs to be properly sampled, render times explode (18x !!) and we’re still a bit less smooth than with adaptive sampling.

So do we have a winner then? Yes and no.

In the example above, we used adaptive sampling to sample the first bounce of reflections. In this case (where we also have challenging illumination), adaptive sampling works better than brute force sampling. However, we’re still not really testing the impact of the adaptive sampler implementation itself. Our sampler is not a screen-based sampler; it is a dimensional adaptive sampler, so we actually need to track the various samples as we get them from reflection bounces, and it’s here that we really engage data structures, thread-local storage, different code paths, comparisons, etc. So it’s exactly here that we become slower than a state-of-the-art brute force approach like the one in Arnold.
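To give a feel for the kind of bookkeeping involved, here is a very rough illustrative sketch of per-bounce state such a dimensional sampler has to drag along; all names are ours, not Arnold internals.

```cpp
#include <vector>

// Illustrative only: the per-bounce state a dimensional adaptive sampler
// carries, which a fixed-count brute-force loop never pays for.
struct BounceStats {
    int   depth;       // which reflection bounce the samples belong to
    float mean, m2;    // running moments for the convergence test
    int   count;       // samples taken at this depth so far
};

struct SamplerState {
    std::vector<BounceStats> perBounce;  // grows/shrinks as rays recurse
};

// One state per render thread: avoids locking, but costs memory traffic,
// branching and per-sample comparisons on every shading call.
thread_local SamplerState g_samplerState;
```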

This image, for example, took almost 20 minutes to render with adaptive sampling in Arnold with 5 reflection bounces, where with the factory brute force approach it takes exactly 4:30 minutes.

So, to answer our original question: do adaptive sampling data structures and such have a worse overall impact than plain brute force code when ray tracing? Yes, they do. In fact, we have shown that overall quality and speed are much better with adaptive sampling for a first reflection bounce in the presence of hard-to-sample lighting, but that for common scenes with multiple bounces we really don’t need adaptive sampling in Arnold.

We will probably leave AS exposed in our upcoming materials, so end users may still have a chance to go for it where it makes sense: being ‘shader based’, we can use it just for the object or material that really needs it, where otherwise we would have to crank up global sampling, affecting the whole image even where we don’t need more samples, or, even worse, use sample clamping.
