The image plane of a rendering system can be thought of as the sensor plane, i.e. the place where light hits the sensitive photosites and the signal is then represented digitally by a certain number of bits (for example, 12-bit RAW for reflex cameras). However, the RGB channels of a sensor have different sensitivities. A consequence of these different spectral sensitivities is that a theoretical constant spectrum (one where every wavelength has the same intensity) would yield three different RGB values.
Now, all the standard RGB color spaces (such as sRGB, AdobeRGB, etc.) assume that a neutral reflectance object (i.e., an object that reflects every wavelength equally) has equal values on the three channels. However, the RAW values on the sensor depend on the spectrum of the light illuminating the neutral object and on the spectral responses of the sensor, so the three channels usually differ. That means that to go from RAW to RGB we cannot just pick a color space: we first have to apply a sensor response function to the RAW data.
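A minimal numerical sketch of the idea above. The three Gaussian sensitivity curves and their amplitudes are purely illustrative assumptions (real sensor curves are measured, not Gaussian); the point is that a flat spectrum integrated against three different curves gives three different RAW values, and normalizing each channel against that neutral reference is, in effect, the first step of the sensor response correction:

```python
import numpy as np

# Hypothetical Gaussian spectral sensitivity curves for the R, G, B
# photosites (illustrative only; real curves are measured).
wavelengths = np.linspace(400.0, 700.0, 301)   # nm
dlam = wavelengths[1] - wavelengths[0]

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

sensitivity = {
    "R": 0.9 * gaussian(600.0, 40.0),
    "G": 1.0 * gaussian(540.0, 45.0),
    "B": 0.7 * gaussian(460.0, 35.0),
}

# A theoretical constant spectrum: every wavelength has the same intensity.
flat = np.ones_like(wavelengths)

# RAW channel value = integral of (spectrum x sensitivity) over wavelength.
raw = {ch: float(np.sum(flat * s) * dlam) for ch, s in sensitivity.items()}
# The three values differ even though the stimulus is neutral.

# Per-channel gains computed from the neutral reference (essentially a
# white balance) make the neutral stimulus read equal on all channels.
gains = {ch: raw["G"] / raw[ch] for ch in raw}
balanced = {ch: raw[ch] * gains[ch] for ch in raw}
```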
This doesn’t mean Maya or mental ray are wrong in letting us select a color space for our final image. We can just say they assume the renderer has equal sensitivities on all its channels. But how does the resulting image look? Well, it looks neutral. Flat. Like the images that come out of any digital camera.
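For reference, the "select a color space" step amounts to applying an encoding curve such as the standard sRGB transfer function (the constants below are the ones from the sRGB specification); note it is the same curve for all three channels, which is why it cannot by itself reproduce a sensor's unequal responses:

```python
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB encoding: linear segment near black, then a
    2.4-exponent power segment (constants per the sRGB spec)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)
```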
Were film cameras better, then? Not necessarily. It’s just that film stocks were manufactured so that their different sensitivities would yield characteristic images. If we still want that kind of rich image from digital cameras, we need to apply LUTs or do color grading on the RAW files.
However, if we want to simulate a true sensor, we can apply sensor response functions directly to our pixels, treating them as the sensor’s photosites.
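A sketch of what that looks like in practice, with one big caveat: the Reinhard-style curve and the per-channel parameters below are illustrative stand-ins, not the actual SRFs used here. The point is only the mechanism, namely a different response curve applied to each channel of the linear image:

```python
import numpy as np

# Toy linear image: 4x4 pixels, 3 channels, values in [0, 1).
rng = np.random.default_rng(0)
linear = rng.random((4, 4, 3))

# Hypothetical per-channel curve parameters; unequal values per channel
# mimic a sensor whose R, G, B responses differ.
k = np.array([0.45, 0.40, 0.55])

# Reinhard-style response curve f(x) = x*(1+k)/(x+k): maps 0 -> 0 and
# 1 -> 1, lifts midtones and compresses highlights, per channel via
# broadcasting over the last axis.
graded = (linear * (1.0 + k)) / (linear + k)
```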
Here are some results. The first is the linear image, the second the gamma-corrected image. All the others have sensor response functions applied. And they just look better… that’s the whole point of using SRFs. You don’t need to be a colorist or a veteran grader to get state-of-the-art color-corrected images.
An additional note: here we’re using Rombo.Camera in ThickLens mode, so vignetting and distortions come from the lens-system simulation, which together with the sensor simulation gives us a full virtual camera system.
Base model by Matkovski Dragos.