The case of the noisy shadows


In this case, a client reported a lot of noise in the shadows of his forest of opacity-mapped trees. In a progressive render, he saw lots of fireflies at low AA levels, which seemed out of the ordinary, since he’d never had this problem in other, similar scenes.

He managed to get rid of the noise by cranking up the light samples, the AA, and the transparency depth, at the cost of extremely long render times.

But sampling wasn’t the problem, or the solution, in this case. The problem was a large “sky” sphere with the skydome HDR mapped to it. The sphere was there only to make the sky visible, but it was visible to all ray types. The solution was to make the sphere visible to camera rays only.

Unlike a Skydome light, a textured sphere isn’t importance-sampled intelligently by Arnold. So you’ll get noise and fireflies from the random diffuse rays that happen to hit a super bright pixel in the sky texture.
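
If you’re scripting Arnold directly, the fix is one line on the sphere’s visibility bitmask. Here’s a minimal sketch with the Arnold 4.x Python bindings (the scene file and node name are hypothetical):

from arnold import *

AiBegin()
AiASSLoad("scene.ass", AI_NODE_ALL)       # hypothetical scene file
sphere = AiNodeLookUpByName("skySphere")  # hypothetical node name
# visibility is a ray-type bitmask; AI_RAY_CAMERA alone hides the sphere
# from diffuse, glossy, reflection, refraction, and shadow rays
AiNodeSetByte(sphere, "visibility", AI_RAY_CAMERA)
AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()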

hat tip: TI

Vertices and the maximum valence limit


The valence of a vertex is the number of edges connected to that vertex. In Arnold, the maximum valence is 255: the valence is stored per-vertex, so we use a single-byte data type to keep the memory requirements down.
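
For intuition, here’s a rough Python sketch of the check (illustrative only, not Arnold’s actual code): count the unique edges incident to each vertex and compare against 255.

from collections import defaultdict

def max_valence(faces):
    """faces: list of per-face vertex index lists"""
    edges = set()
    for face in faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            edges.add((min(a, b), max(a, b)))  # undirected edge, deduplicated
    valence = defaultdict(int)
    for a, b in edges:
        valence[a] += 1
        valence[b] += 1
    return max(valence.values())

# a valence above 255 won't fit in the per-vertex byte,
# so Arnold disables subdivision for that mesh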

If a mesh in the scene has a vertex with more than 255 edges, you’ll get a WARNING like this:

 [polymesh] example_mesh: mesh has at least a vertex with valence higher than 255, disabling subdivision

But if the adaptive subdivision results in a vertex with too many edges, you’ll get an ERROR:

 ERROR: [arnold] [subdiv] example_mesh: edge (144578,287741) in face 263433 has a vertex that exceeded the max valence limit of 255 

Sampling after the first bounce


With the default Diffuse = 2 sampling, you’ll get four diffuse rays for each camera ray (sample settings are squared, so 2 × 2 = 4). Those four diffuse rays are the first bounce after the camera ray “hits” a shape.

[image: diffuse_one_diffuse_bounce]

After the first bounce, Arnold sets all the sampling settings back to 1, so for each of those four diffuse rays, you get one second-bounce diffuse ray. This prevents an exponential explosion of rays as secondary rays like diffuse rays bounce around a scene.

[image: diffuse_two_diffuse_bounces]

The same thing is true for the other secondary rays such as glossy, refraction, and shadow rays.

So, in summary:

  • For a camera ray you can get multiple secondary rays (for example, multiple diffuse rays or multiple glossy rays, and for light sampling, multiple shadow rays)
  • But for a secondary ray, you’ll get just one ray: one diffuse ray, one glossy ray (if any), one shadow ray, and so on (see the ray-count sketch below).
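
Here’s a back-of-the-envelope ray count in Python to show what the reset buys you (the numbers are illustrative):

AA = 3        # camera (AA) samples: AA x AA rays per pixel
diffuse = 2   # diffuse samples: diffuse x diffuse first-bounce rays per camera ray
depth = 3     # diffuse ray depth

camera_rays = AA * AA                          # 9 camera rays per pixel
first_bounce = camera_rays * diffuse ** 2      # 9 * 4 = 36 first-bounce diffuse rays
# after the first bounce, each path spawns exactly one ray per extra bounce:
deeper_bounces = first_bounce * (depth - 1)    # 36 * 2 = 72
total_diffuse = first_bounce + deeper_bounces  # 108 rays, linear in depth

# without the reset, the ray count would grow exponentially; the third bounce
# alone would cost camera_rays * (diffuse ** 2) ** depth = 9 * 64 = 576 rays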

Overriding shader parameters in procedurals


Procedurals (aka standins) don’t expose the parameters of objects and shaders inside the procedural.

For example, you cannot override shader parameters like standard.Kd (diffuse color) or standard.emission (emission scale) by setting those parameters on a procedural node.

What you can do instead is use “user data parameters”: inside the procedural, use user data shaders to drive the shader parameters, but don’t define the user data parameters on the shape.

For example, if I wanted to be able to override the emission scale, then I would set up a shader like this:

[image: user_data_procedurals]

A userDataFloat shader sets the Emission. The important thing is that the custom attribute emission_scale is not defined on the shape, so the shader falls back to its default value unless the procedural provides an override.

By doing that, I can put an emission_scale attribute on the procedural node, and the aiUserDataFloat shader picks up that value.

[image: user_data_procedurals2]

In the above example, I’ve added an mtoa_constant_emission_scale attribute to the standinShape, and that allows me to set the Emission for that specific standin.

In Arnold scene source (ASS) format, I have this in my main scene file:

procedural
{
 name ArnoldStandIn1Shape
 dso "standin_torus_w_emission.ass"
 ...
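 #
 # per-object user data, read by aiUserDataFloat1 inside the standin
 #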
 declare emission_scale constant FLOAT
 emission_scale 0.649999976
}

And in the ASS file loaded by the procedural, I have this:

polymesh
{
 name pTorusShape1
 ...
 shader "someshader"
}

standard
{
 name someshader
 ...
 emission aiUserDataFloat1
 ...
 #
 # NO user data declared here
 #
}

userDataFloat
{
 name aiUserDataFloat1
 floatAttrName "emission_scale"
 defaultValue 0.100000001
}
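
In Maya, the declare line corresponds to adding a custom attribute with the mtoa_constant_ prefix to the standin shape. A quick sketch with maya.cmds, assuming the node names from the ASS above:

import maya.cmds as cmds

# MtoA exports mtoa_constant_* attributes as constant user data
cmds.addAttr("ArnoldStandIn1Shape",
             longName="mtoa_constant_emission_scale",
             attributeType="float")
cmds.setAttr("ArnoldStandIn1Shape.mtoa_constant_emission_scale", 0.65)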


[MtoA] Hiding the image file statistics in the Arnold log


Here’s a good example of why you might use the User Options, and how it lets you set Arnold parameters that aren’t exposed by MtoA in Maya.

If you set the verbosity level to Warnings + Info, you’ll get some detailed image statistics in the Arnold log:

[image: image_file_statistics]

But what if you’ve got thousands of image files (for example, UDIMs)? You might not want all that in your log. There used to be a check box in the Arnold Render Settings that allowed you to turn off the detailed image file statistics, but we removed that a few years ago.

But the Arnold options.texture_per_file_stats parameter is still there, so you can use the User Options field to set that parameter to false. Just enter the parameter name “texture_per_file_stats” and the parameter value, like this:

[image: render_settings_user_options]
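
If you’re working at the Arnold API level instead (kick, or your own pipeline tools), the same parameter can be set on the options node. A sketch with the Python bindings (the scene file is hypothetical):

from arnold import *

AiBegin()
AiASSLoad("scene.ass", AI_NODE_ALL)   # hypothetical scene file
# same effect as the User Options entry above
AiNodeSetBool(AiUniverseGetOptions(), "texture_per_file_stats", False)
AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()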

Updating MtoA 1.2.7.3 with Arnold 4.2.14.0


MtoA 1.2.7.3 ships with Arnold 4.2.13.

If you want to take advantage of the improvements in Arnold 4.2.14.0 (like the increase in the maximum number of threads from 128 to 256), here’s what you need to do:

  • Download Arnold 4.2.14.0 and extract the archive
  • Replace Arnold (libai.so, ai.dll, libai.dylib), kick, and maketx in the MtoA bin folder with the versions from the Arnold 4.2.14.0 download
  • Replace the Arnold Python bindings in the MtoA scripts/arnold folder with the Python bindings from the Arnold python/arnold folder. For example, replace this folder:
       C:\solidangle\mtoadeploy\2016-1.2.7.3\scripts\arnold

    with the python\arnold folder from the Arnold 4.2.14.0 download. For example:

       C:\solidangle\arnold\Arnold-4.2.14.0-windows\python\arnold

You must update the Arnold Python bindings, otherwise MtoA won’t load. That’s because Arnold 4.2.14.0 included a number of API changes, including the removal of some API calls (like AiLicenseSetServer). The older Python bindings still refer to the removed API, so there will be Python errors that prevent MtoA from loading.
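
On Windows, for example, the copy steps could be scripted like this (a sketch only; the paths are the examples from above, and the .exe file names are assumptions):

import os
import shutil

arnold = r"C:\solidangle\arnold\Arnold-4.2.14.0-windows"
mtoa = r"C:\solidangle\mtoadeploy\2016-1.2.7.3"

# replace the core library and the command-line tools
for f in ("ai.dll", "kick.exe", "maketx.exe"):
    shutil.copy2(os.path.join(arnold, "bin", f), os.path.join(mtoa, "bin", f))

# replace the Python bindings (MtoA won't load with the old ones)
dst = os.path.join(mtoa, "scripts", "arnold")
shutil.rmtree(dst)
shutil.copytree(os.path.join(arnold, "python", "arnold"), dst)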

[Arnold] Understanding the texture cache


Arnold uses the OpenImageIO texture cache. From the OIIO Programmer Documentation:

In short, if you have an application that will need to read pixels from many large image files, you can rely on ImageCache to manage all the resources for you. It is reasonable to access thousands of image files totalling hundreds of GB of pixels, efficiently and using a memory footprint on the order of 50 MB.

So, if you’re using tx files (tiled and mipmapped textures), then Arnold can read tiles as required, and use the texture cache to keep memory usage under control.

The reason we don’t read all the textures at once is that many of our customers have (literally) 100+GB of textures, so we use a texture cache that constantly loads small bits of texture data as required, and unloads old data.

By default, the size of the texture cache is 2048 MB (as of Arnold 4.2.13.0). The size of the texture cache is set in the Arnold Render Settings.

Max Cache Size
The maximum amount of memory to be used for texture caching. Arnold uses a tile-based cache with an LRU (Least Recently Used) algorithm, where the least recently used tiles are discarded when the texture cache is full.
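
Outside of Maya, the same control is a parameter on the options node; a sketch with the Arnold 4.x Python bindings, where the parameter is texture_max_memory_MB:

from arnold import *

AiBegin()
# ... load or create the scene ...
# Max Cache Size in the Render Settings maps to this options parameter (MB):
AiNodeSetFlt(AiUniverseGetOptions(), "texture_max_memory_MB", 2048)
# ... render ...
AiEnd()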

Note: If we get an error reading a texture, we mark that texture as bad and never try to read it again. This makes the renderer a lot faster when you have a missing texture, since we won’t ask the file server millions of times to read from a nonexistent file. The downside is that this applies to transient network errors too: one failed read, and the texture is missing for the rest of your render.