Material UI and Usage

Rombo.Material is a monolithic material shader developed to support the material types commonly found in architecture and product-design renderings. It covers most hard-surface materials such as metal, wood, plastic, concrete and glass. Fine-tuned for glossy reflections and refractions, with built-in energy conservation, it outperforms the mental ray factory materials (mia and mila) in both features and rendering speed.
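To illustrate what "energy conserving" means here, this is a minimal conceptual sketch (not Rombo.Material's actual implementation): the diffuse layer is scaled by whatever energy the specular layer did not reflect, so the total never exceeds 1. The Schlick Fresnel approximation stands in for whatever Fresnel model the shader actually uses.

```python
def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation of Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def conserve_energy(diffuse_albedo, f0, cos_theta):
    """Scale the diffuse term by the energy left over after specular
    reflection, so specular + diffuse stays bounded by 1."""
    spec = schlick_fresnel(f0, cos_theta)
    return spec + (1.0 - spec) * diffuse_albedo

# At grazing angles the specular term grows and the diffuse term shrinks
# accordingly, so the sum never blows past 1 (no "glowing" edges):
for cos_t in (1.0, 0.5, 0.1):
    assert conserve_energy(0.8, 0.04, cos_t) <= 1.0
```

A non-conserving material would simply add the two terms, which is what produces the overly bright grazing edges of many legacy shaders.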

Major features are:

  • flexible and easy to use.
  • physically accurate and energy conserving.
  • advanced sampling with adaptive, fixed and importance-based modes.
  • Lambert, mia and Oren-Nayar diffuse models.
  • various BSDF models (Beckmann, GGX etc.).
  • all-in-one material with built-in shadow and photon shaders.
  • performance knobs to better suit rendering workflows.
  • PBS paradigm support.

 

Material UI:

[Image: rombogui]

 

Reflection glossiness with GGX vs Beckmann BRDF:

[Image: GGXvsBeckmann]
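The visual difference between the two BRDF models above comes down to their microfacet normal distributions. As a hedged illustration (standard textbook formulas, not Rombo's internal code), the two distributions agree at the highlight peak but GGX keeps far more energy in the tail, which is what gives it its characteristically softer highlight falloff:

```python
import math

def d_beckmann(cos_theta, alpha):
    """Beckmann normal distribution: falls off sharply past the roughness cone."""
    c2 = cos_theta * cos_theta
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * c2 * c2)

def d_ggx(cos_theta, alpha):
    """GGX (Trowbridge-Reitz) distribution: heavier tail, softer falloff."""
    a2 = alpha * alpha
    denom = cos_theta * cos_theta * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

alpha = 0.3
# At the peak (half-vector aligned with the normal) the two match exactly;
# away from the peak GGX is orders of magnitude larger than Beckmann.
assert abs(d_ggx(1.0, alpha) - d_beckmann(1.0, alpha)) < 1e-6
assert d_ggx(0.5, alpha) > 100.0 * d_beckmann(0.5, alpha)
```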

 

Refraction glossiness with Beckmann vs GGX BTDF:

[Image: refr_MILAvsGGX]

 

Solid vs Thick glass (and built-in shadow shader):

[Image: refr_solid_vs_thick]
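What a built-in shadow shader buys you for glass can be sketched conceptually (this is an illustration of the idea, not Rombo's shadow shader code): instead of a binary occlusion test, a shadow ray through transmissive geometry is attenuated by each occluder's transmission color, so glass casts tinted shadows rather than black ones.

```python
def shadowed_light(light_rgb, occluders):
    """Attenuate a shadow ray through a list of transmissive occluders.
    Each occluder contributes its RGB transmission color multiplicatively."""
    r, g, b = light_rgb
    for tr, tg, tb in occluders:
        r, g, b = r * tr, g * tg, b * tb
    return (r, g, b)

# A green glass pane casts a green-tinted shadow instead of a black one:
tinted = shadowed_light((1.0, 1.0, 1.0), [(0.2, 0.8, 0.3)])
assert tinted == (0.2, 0.8, 0.3)
```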

 

Complex IOR and adaptive sampling:

[Image: complexIOR_aluminum]

 

Cutout opacity with shadow support:

[Image: cutoutopacity]
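"Shadow support" here means the same opacity value drives both camera-ray transparency and how much light a shadow ray lets through. A minimal conceptual sketch (illustrative only, not the shader's actual code):

```python
def shade_with_cutout(opacity, surface_rgb, background_rgb):
    """Composite the surface over the background by the cutout opacity."""
    return tuple(opacity * s + (1.0 - opacity) * b
                 for s, b in zip(surface_rgb, background_rgb))

def shadow_through_cutout(opacity, light_intensity):
    """Shadow rays use the same opacity: a fully cut-out texel
    (opacity 0) casts no shadow at all."""
    return (1.0 - opacity) * light_intensity

# A fully transparent texel shows the background and passes all light:
assert shade_with_cutout(0.0, (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)) == (0.0, 0.0, 1.0)
assert shadow_through_cutout(0.0, 2.0) == 2.0
```

Keeping the two in sync is what prevents the classic artifact of a leaf texture that looks cut out to the camera but still casts a solid rectangular shadow.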

8 thoughts on “Material UI and Usage”

  • Nice tools… I didn’t even know this exists..

    I have one request: I would really like to see a 0–1 float slider for the “Is Metal” attribute, so you can blend the amount instead of a boolean. I really didn’t like this about mia either.

  • *finally some details surface. 😉 I don’t see anything related to ‘sss’ or translucency, though. Will you provide a separate shader for that? Will a demo version be available? Thanks.

  • Translucency is diffuse transmission, under the diffuse rollout. SSS is planned as a separate shader. We’ll add a download section asap, where we’re going to release a free shader package so you’ll be able to test and fully work with our adaptive sampler in the form of a reflection shader. cheers

  • btw, ‘smart’ is really smart only on the first reflection level. It suffers a bit (in terms of render time) on secondary reflections (reflections of reflections), because it tries to reconstruct and render them as full blurry reflections, whereas mila, for example, does that only on first reflections and renders specular (single-ray) reflections on secondary rays.

    I’m introducing a mechanism right now to render full blurry secondary reflections much faster. In fact, with our adaptive sampler you may notice that the more samples you supply, the faster the render. Why? Because if we supply few samples, the adaptive sampler has to struggle to recreate the missing ones, while a fixed or importance-based sampler would simply return with lower quality.
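The behaviour described in that comment (more budget can mean less work) is characteristic of variance-driven adaptive samplers in general. A hedged generic sketch, not Rombo's sampler: keep drawing samples until the estimated error falls below a tolerance or the budget runs out, so a low-variance region converges far below its budget.

```python
import random
import statistics

def adaptive_estimate(sample_fn, max_samples, tolerance, min_samples=8):
    """Draw samples until the standard error of the mean drops below
    `tolerance` (relative to the mean), up to a budget of `max_samples`."""
    values = [sample_fn() for _ in range(min_samples)]
    while len(values) < max_samples:
        mean = statistics.fmean(values)
        stderr = statistics.stdev(values) / len(values) ** 0.5
        if stderr < tolerance * max(abs(mean), 1e-6):
            break  # converged: stop long before the budget is exhausted
        values.append(sample_fn())
    return statistics.fmean(values), len(values)

random.seed(1)
# A low-variance integrand stops almost immediately, regardless of budget:
mean, used = adaptive_estimate(lambda: 0.5 + random.uniform(-0.01, 0.01),
                               max_samples=1024, tolerance=0.01)
assert used < 100 and abs(mean - 0.5) < 0.01
```

This also shows the failure mode the comment alludes to: with too few initial samples the error estimate itself is noisy, so the sampler can spend effort "reconstructing" detail a fixed sampler would simply skip.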
