The case of the 15% rendering utilization


Or, “Understanding the Arnold log, part 23”

In this case, a client had very low (15%) CPU usage for a render. We got the Arnold log, and here’s the interesting part:

00:01:44  856MB   | OpenImageIO ImageCache statistics (0000000025850F20) ver 1.5.24
00:01:44  856MB   |   Images : 3 unique
00:01:44  856MB   |     ImageInputs : 299 created, 2 current, 3 peak
00:01:44  856MB   |     Total size of all images referenced : 192.4 MB
00:01:44  856MB   |     Read from disk : 12.0 GB
00:01:44  856MB   |     File I/O time : 45m 41.5s (48.1s average per thread)
00:01:44  856MB   |     File open time only : 0.0s
00:01:44  856MB   |   Tiles: 711523 created, 559 current, 640 peak
00:01:44  856MB   |     total tile requests : 249379431
00:01:44  856MB   |     micro-cache misses : 3353694 (1.34482%)
00:01:44  856MB   |     main cache misses : 711523 (0.285317%)
00:01:44  856MB   |     Peak cache memory : 10.0 MB
00:01:44  856MB   |   1 not tiled, 1 not MIP-mapped
00:01:44  856MB   | -----------------------------------------------------------------------------------
00:01:44  856MB   | performance warnings:
00:01:44  856MB   | Rendering utilization was only 15%. Your render may be bound by a single threaded process or I/O.
00:01:44  856MB   | -----------------------------------------------------------------------------------
  • File I/O time seems rather high for three textures and a render that took less than 2 minutes.
  • main cache* misses is pretty high. Normally you expect something less than 0.01%.

    0.285% means that texture tiles are loaded from disk (instead of from the in-memory texture cache) once out of every 350 texture lookups. That’s a little high.

    * The main cache is the in-memory cache of 64×64 pixel tiles read from the texture files on disk.

  • Peak cache memory is 10.0 MB!!! That explains the main cache misses: the texture cache is really, really small.

Other clues to the too-small texture cache size:

  • Read from disk is 12 GB, but the total size of all images referenced is just 192.4 MB, and the peak cache memory was just 10.0 MB.
    So the same texture data is constantly being evicted from the cache and reloaded from disk: 12 GB of reads for 192.4 MB of textures means the same data was read roughly 60 times over.

The solution? Increase the size of the texture cache (the texture_max_memory_MB render option). The current default is 2048 MB, which should be good in most cases.
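
For example, here’s a minimal sketch of bumping up the texture cache with the Arnold Python API. The parameter type has varied between Arnold versions, so treat this as an illustration rather than copy-paste code:

from arnold import *

AiBegin()
options = AiUniverseGetOptions()
# The default is 2048 MB; raise it when the log shows the cache thrashing.
# Note: texture_max_memory_MB is set as an int here, but it may be a
# float parameter in your Arnold version.
AiNodeSetInt(options, "texture_max_memory_MB", 4096)
# ... load the scene and render here ...
AiEnd()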

Ignoring self-occlusion


Question: how do you set up the Arnold ambient_occlusion shader so that it ignores self-occlusion?

Let’s start with the default ambient occlusion. Here’s a sphere and a plane, each with its own Ambient Occlusion shader. For the sphere, Ambient Occlusion.Black is set to red (so occluded areas render red).

[Image 1: default ambient occlusion on the sphere and plane]

Now, let’s disable Receives Shadows on the sphere.
That means no occlusion based on the plane (shadow rays from the sphere ignore the plane); but there’s still self-occlusion.

[Image 2: Receives Shadows disabled on the sphere]

Next, Cast Shadows disabled. No self-occlusion, but no ambient occlusion on the plane either.

[Image 3: Cast Shadows disabled on the sphere]

And finally, Self Shadows disabled. Now there’s no self-occlusion: the ambient occlusion on the sphere comes solely from the plane.

[Image 4: Self Shadows disabled on the sphere]
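
For reference, here’s a rough sketch of those switches set directly on an Arnold polymesh with the Python API. The scene file and node name are hypothetical, and the UI checkboxes only roughly correspond to the core receive_shadows, self_shadows, and visibility parameters shown here:

from arnold import *

AiBegin()
AiASSLoad("scene.ass", AI_NODE_ALL)  # hypothetical scene file
sphere = AiNodeLookUpByName("sphere")

# "Receives Shadows" off: shadow/occlusion rays fired from the sphere
# ignore occluders.
AiNodeSetBool(sphere, "receive_shadows", False)

# "Self Shadows" off: the sphere's own shadow rays ignore the sphere itself.
AiNodeSetBool(sphere, "self_shadows", False)

# "Cast Shadows" off: clear AI_RAY_SHADOW from the visibility mask so the
# sphere doesn't occlude anything else.
visibility = AiNodeGetByte(sphere, "visibility")
AiNodeSetByte(sphere, "visibility", visibility & ~AI_RAY_SHADOW)

AiEnd()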

The case of the noisy shadows


In this case, a client reported a lot of noise in the shadows of his forest of opacity-mapped trees. In a progressive render, he saw lots of fireflies at the low AA levels, which seemed out of the ordinary, since he’d never had this problem in other, similar scenes.

He managed to get rid of the noise by cranking up the light samples, the AA samples, and the transparency depth, at the cost of extremely long render times.

But sampling wasn’t the problem (or the solution) in this case. The problem was a large “sky” sphere with the skydome HDR mapped to it. The sphere was there just to make the sky visible, but it was visible to all ray types. The solution was to make the sphere visible to camera rays only.

Unlike a Skydome light, a textured sphere isn’t importance-sampled intelligently by Arnold. So you’ll get noise and fireflies from the random diffuse rays that happen to hit a super bright pixel in the sky texture.
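
In Arnold terms, the fix looks something like this (a sketch with the Arnold Python API; the scene file and node name are hypothetical):

from arnold import *

AiBegin()
AiASSLoad("forest.ass", AI_NODE_ALL)  # hypothetical scene file
sky = AiNodeLookUpByName("sky_sphere")
# Restrict the sky sphere to camera rays so diffuse rays never hit it.
AiNodeSetByte(sky, "visibility", AI_RAY_CAMERA)
AiEnd()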

hat tip: TI

Vertices and the maximum valence limit


The valence of a vertex is the number of edges connected to that vertex. In Arnold, the maximum valence is 255: valence is stored per-vertex, so to minimize memory we pack it into a single unsigned byte, and 255 is the largest value a byte can hold.

If a mesh in the scene has a vertex with more than 255 edges, you’ll get a WARNING like this:

 [polymesh] example_mesh: mesh has at least a vertex with valence higher than 255, disabling subdivision

But if the adaptive subdivision results in a vertex with too many edges, you’ll get an ERROR:

 ERROR: [arnold] [subdiv] example_mesh: edge (144578,287741) in face 263433 has a vertex that exceeded the max valence limit of 255 
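
If you want to check a mesh before Arnold does, counting distinct neighbors per vertex is straightforward. Here’s a self-contained sketch in plain Python (the face-list format is hypothetical, not an Arnold API):

from collections import defaultdict

def max_valence(faces):
    """faces: a list of polygons, each a tuple of vertex indices."""
    neighbors = defaultdict(set)
    for face in faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            neighbors[a].add(b)
            neighbors[b].add(a)
    return max(len(s) for s in neighbors.values())

# A triangle fan with 300 spokes around vertex 0 trips the limit:
fan = [(0, i, i + 1) for i in range(1, 300)]
print(max_valence(fan))  # 300, which is > 255, so subdivision is disabled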

Sampling after the first bounce


With the default Diffuse = 2 sampling, you’ll get four diffuse rays for each camera ray (sample counts are squared, so 2 × 2 = 4). Those four diffuse rays are the first bounce after the camera ray “hits” a shape.

[Image: one diffuse bounce, with four diffuse rays per camera ray]

After the first bounce, Arnold sets all the sampling settings back to 1, so for each of those four diffuse rays, you get one second-bounce diffuse ray. This prevents an exponential explosion of rays as secondary rays like diffuse rays bounce around a scene.

[Image: two diffuse bounces, with one second-bounce ray per first-bounce ray]

The same thing is true for the other secondary rays such as glossy, refraction, and shadow rays.

So, in summary:

  • For a camera ray you can get multiple secondary rays (for example, multiple diffuse rays or multiple glossy rays, and for light sampling, multiple shadow rays)
  • But for a secondary ray, you’ll get just one ray of each type: one diffuse ray, one glossy ray (if any), one shadow ray, and so on.
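
To make the bookkeeping concrete, here’s a back-of-the-envelope sketch (plain Python, just arithmetic based on the description above) of how diffuse ray counts stay flat after the first bounce instead of exploding:

def diffuse_rays_per_pixel(aa_samples, diffuse_samples, max_depth):
    """Return the number of diffuse rays per pixel at each bounce."""
    camera_rays = aa_samples ** 2              # sample settings are squared
    counts = []
    for bounce in range(1, max_depth + 1):
        if bounce == 1:
            # First bounce: diffuse_samples^2 rays per camera ray.
            counts.append(camera_rays * diffuse_samples ** 2)
        else:
            # Later bounces: exactly 1 ray per incoming ray, so no growth.
            counts.append(counts[-1])
    return counts

print(diffuse_rays_per_pixel(3, 2, 3))  # [36, 36, 36], not [36, 144, 576]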

[MtoA] Unable to dynamically load : mtoa.mll The specified module could not be found.


I have another, more general, version of this post here. This one is for new Arnold users with Maya 2017.

Here’s what to do if you get errors like this:

// Error: file: C:/Program Files/Autodesk/Maya2017/scripts/startup/autoLoadPlugin.mel line 32: Unable to dynamically load : C:/solidangle/mtoadeploy/2017/plug-ins/mtoa.mll
The specified module could not be found.
//
// Error: file: C:/Program Files/Autodesk/Maya2017/scripts/startup/autoLoadPlugin.mel line 32: The specified module could not be found.

Check if your processor supports SSE4.1

As of Arnold 4.2.16.2, the SSE requirement is now SSE4.1.

If this is the first time you’ve tried to use the Maya to Arnold (MtoA) plugin, then check whether your processor supports SSE4.1.
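
On Linux, a quick way to check is to look for the sse4_1 CPU flag; here’s a one-off Python snippet (on Windows, a tool such as CPU-Z or Microsoft’s Coreinfo reports the same information):

# Linux only: check /proc/cpuinfo for the sse4_1 flag.
with open("/proc/cpuinfo") as f:
    flags = next((line for line in f if line.startswith("flags")), "")
print("SSE4.1 supported" if "sse4_1" in flags else "SSE4.1 NOT supported")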

Reinstall MtoA

If MtoA used to load, but now it doesn’t, then something happened to the MtoA installation. I’ve seen several cases where DLLs were missing from the MtoA bin folder; most importantly, Arnold itself was missing (Arnold is ai.dll on Windows, or libai.so on Linux, or libai.dylib on OSX).

If you make a backup copy of the broken MtoA install folder, we can investigate what happened after you get up and running again with a fresh install.

Get a Process Monitor log

If a clean install of MtoA doesn’t work (and the computer does support SSE4.1), then “The specified module could not be found.” usually means there’s a missing dependency. Dependency Walker is a decent, if aging, tool for checking dependencies, but for leaving no stone unturned, I prefer Process Monitor.

The MtoA plugin (mtoa.mll) depends on only a handful of files. Here’s the log of loaded DLLs for a working MtoA:

[Image: Process Monitor log of the DLLs loaded by a working mtoa.mll]

Here’s a quick walkthrough (no audio) of how to get a Process Monitor log:

[Video: walkthrough of capturing a Process Monitor log]