Q100380: Reading images as scanlines and network access
This article explains how Nuke's scanline-based image architecture processes and reads image data, how this can affect performance when reading files across a network, and how Nuke's Localization functionality can be used to improve performance.
Nuke is a scanline-based image compositing system, which means that it processes and reads images one line of pixels at a time until it reaches the end of the image. (These scanlines are referred to as rows in the NDK plug-in development terminology.)
For example, if the Viewer displays a 640x480 resolution image, Nuke splits it into 480 rows, requests one row at a time, applies any required processing, and displays it. If you're displaying a full-aperture 4K image (4096x3112), that means 3112 row requests.
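The arithmetic above can be sketched in a few lines of Python. This is purely illustrative and not part of the Nuke API: the point is that the number of row requests depends only on the image height, not its width.

```python
# Illustrative sketch (not Nuke API): how many row requests a scanline
# renderer issues for an image of a given size.
def row_requests(width: int, height: int) -> int:
    """One request per row of pixels, regardless of row width."""
    return height

# The examples from the article:
print(row_requests(640, 480))    # 480 row requests
print(row_requests(4096, 3112))  # 3112 row requests
```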
There are two big advantages to scanline rendering. Firstly, processing images in row-sized chunks means that the whole image does not need to reside in memory at the same time, so Nuke can handle an almost unlimited range of image sizes. Secondly, the number of times visible pixels are processed is kept to a minimum. This behaviour is designed to give good, scalable performance, so Nuke takes the best advantage of machine resources.
More information on Nuke's architecture and an extensive explanation of how the Viewer reads data can be found in the NDK Developer Guide.
READING FROM THE NETWORK
The way Nuke's scanline architecture reads in files can affect performance when reading files across a network.
An image is read through input/output (I/O) file access requests that get data from the source file to the requesting application (Nuke). If the source image is local to the machine sending the request then the I/O request will be quick, especially if you have fast storage. However, if the source image is on a network storage system, then network read/write speeds and bandwidth can add latency to file reading. For example, reading a 4K (4096x3112) image from the network into Nuke requires 3112 network access requests. The number of network file I/O requests that can be serviced is affected by network bandwidth, and each individual request is affected by network read/write speeds.
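A back-of-the-envelope sketch shows why per-request latency matters so much when every scanline is a separate I/O request. The latency figures below are hypothetical, not measurements; the multiplication is the point.

```python
# Hypothetical figures, for illustration only: total read time scales
# with per-request latency because each scanline is a separate request.
def total_read_time_ms(rows: int, per_request_latency_ms: float) -> float:
    """Approximate total I/O time, ignoring bandwidth and caching."""
    return rows * per_request_latency_ms

# 3112 rows (a 4096x3112 image) at two hypothetical latencies:
print(total_read_time_ms(3112, 0.1))  # roughly 311 ms on fast local storage
print(total_read_time_ms(3112, 2.0))  # 6224 ms over a slow network
```

The same image becomes many times slower to read purely because each of its 3112 requests pays the network's round-trip cost.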
NOTE: If your network storage system does not scale well when handling a large number of small reads, you may experience latency because the high number of file I/O requests Nuke issues is limited by the available network bandwidth.
You can test the read/write speeds of your network storage system following the steps outlined in this article: Q100296: How to check the network speed
You can improve Nuke's interactive performance when working with footage on a network by using the Localization functionality. More information on this is below.
READING EXR IMAGES AND COMPRESSION TYPES
The EXR image type is the exception to the rule above that Nuke reads images one scanline at a time. The compression of an EXR image determines how much data Nuke has to unpack at a time before it can load it into scanlines. ZIP (1) is the fastest compression to read and write, since Nuke can unpack directly in and out of scanlines rather than waiting for larger chunks of data to be unpacked before it can begin reading them. Uncompressed files are read faster than ZIP (1), though.
For certain EXR compression options, Nuke reads larger chunks rather than one scanline at a time. The following compression options are interpreted and read in blocks of 64 scanlines at a time, cutting down the overall number of accesses to the image: ZIP (16), PIZ, PXR24, B44, B44A.
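The effect of block reads on request counts can be sketched as follows. This is an illustration of the 64-scanline figure quoted above, not Nuke's actual reading code:

```python
import math

# Sketch: with block-based EXR compressions (ZIP (16), PIZ, PXR24,
# B44, B44A), reads happen per block of scanlines instead of per row.
def reads_for(height: int, block_size: int = 1) -> int:
    """Number of read requests to cover `height` scanlines."""
    return math.ceil(height / block_size)

print(reads_for(3112))      # 3112 reads, one per scanline
print(reads_for(3112, 64))  # 49 reads in 64-scanline blocks
```

Cutting 3112 requests down to 49 is why block-compressed EXRs can behave much better on high-latency storage.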
The EXR compression types Nuke supports are listed in the Nuke Online Help under the Appendix C: File Formats > Supported File Formats > EXR Compression section.
If you're concerned about the network access required to retrieve the full input image, using EXR data compressed with one of the above compressions may minimise the overall number of network accesses. However, this is dependent on the read/write speeds of the network storage system used, which may still show slowdowns when dealing with large resolutions.
USING LOCALIZATION
One way to counter any performance hit from reading files across the network is to use Nuke's Localization functionality, which speeds up file I/O in Nuke GUI sessions and reduces network interactions.
When Localization is enabled, Nuke stores a local cache of the images the script reads in. The files copied to create the local cache are still read from the network location as scanlines the first time, but after that Nuke uses the local versions, rather than the original network files, whilst you work on your script in the Nuke GUI.
Nuke keeps the reference to the network files within the script so they can be found when the script is sent to another user or to a render farm, but you reduce the strain on your network by avoiding using many small file I/O requests to repeatedly read in image data across the network as you work on your script.
More information on localising files and media can be found in our Online Help.
If you are still seeing performance issues with reading and writing files after using the suggestions outlined in this article, then please open a Support ticket and let us know the issue you are encountering and the troubleshooting steps you have taken so far.
For more information on how to open a Support ticket, please refer to the Q100064: Using the Support Portal article.
If you're having trouble with real time playback please review the information available in this article: Q100297: Real-time playback troubleshooting