Visually Evaluating Image Quality
Digitization Program Planning
Visual comparison of an object to its digitized version is a practical and common, but surprisingly limited and imprecise, way of judging color and tonal accuracy. Whether the comparison is made against a monitor or a print, inherent limitations of the hardware and of human color perception make it fraught with difficulty.
“Obtaining an accurate visual match of a complex scene or object under dual-stimulus (i.e., both rendered image and object are at hand and can be compared side by side) is nearly impossible. While a colorimetric match might be possible, differing spectral content (metamerism), human sensitivities and capture/display variabilities are always present. For instance, the term “daylight” illumination is often mentioned as some solid gold standard. In fact it can have several meanings (D50, D55, etc.) and types of spectral content. All of these make it quite difficult to make exact perceptual color matches, especially when multiple people are trying to agree on the match.”
– Don Williams, Image Science Associates
Visual Comparison to a Monitor
Visually comparing a digital capture on a monitor to the original object can be useful, provided due consideration is given to the environment in which the comparison is performed. If it is made under uncontrolled ambient lighting, on a consumer-grade monitor, or within an uncalibrated workflow, the comparison will be meaningless.
- Monitor Hardware: Most computer monitors available for purchase are designed with a relatively low priority assigned to color/tone accuracy. Consumer-grade models, such as those produced by Apple, are targeted at buyers who want their monitors to be bright, contrasty, and saturated. The many technical elements that go into creating a monitor intended for color proofing are beyond the scope of this document. In general, all engineering is a set of compromises and priorities; a monitor engineered to sell to consumers will not perform as well for color-critical uses as one designed specifically for such use.
- Monitor Calibration Method: Normal monitors accomplish “calibration” via software that limits the output of the graphics card. Such software calibration can improve the accuracy of the colors the monitor displays, but can also lead to artifacts such as banding and gamut instability. Professional color-critical monitors provide hardware calibration, in which the monitor's own output is calibrated, allowing the full output of the graphics card to be used. Note that pairing a hardware-calibratable monitor such as an Eizo CG series with a software-calibration program such as X-Rite i1Profiler still results in a software calibration. For best results, always use software that supports hardware calibration; for an Eizo CG series, an industry standard for color accuracy, that means Eizo's ColorNavigator software.
- Monitor Calibration Settings: A high-quality monitor carefully calibrated to the wrong standards will be very precise, but inaccurate. When comparing a monitor to a physical object, it is important that the brightness and white point be closely matched. Placing the object inside a proofing booth makes this especially straightforward, as booths are built to a specific white point. For example, when using a D50 proofing booth, it would be appropriate to calibrate your monitor to a D50 standard; a D65 standard is appropriate when using light from a north-facing window while the sun is high in the sky.
- Ambient Illumination: The human visual system automatically adapts to the predominant ambient illumination. The most obvious adaptation is the opening and closing of the pupil as the brightness of one's surroundings shifts; more subtle adaptation occurs as the color temperature of the illumination changes. For this reason, it is important to measure and control the ambient lighting in any room used for color evaluation.
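The standard illuminants named above have fixed CIE chromaticity coordinates (D50 ≈ x 0.3457, y 0.3585; D65 ≈ x 0.3127, y 0.3290). As a minimal sketch of the white-point-matching step, assuming a colorimeter that reports the monitor white as xy chromaticity, one could check a measured white against the booth's target; the tolerance value and measured reading below are illustrative, not published standards:

```python
# Compare a measured monitor white point against a viewing-booth target.
# The D50/D65 chromaticities are the standard CIE values; the tolerance
# and the "measured" reading are invented for illustration.

D50 = (0.3457, 0.3585)  # CIE D50 chromaticity (x, y)
D65 = (0.3127, 0.3290)  # CIE D65 chromaticity (x, y)

def xy_distance(a, b):
    """Euclidean distance in xy chromaticity space (a rough proxy;
    a perceptual metric such as delta-u'v' would be preferable)."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def white_point_matches(measured_xy, target_xy, tol=0.005):
    """True if the measured white sits within `tol` of the target."""
    return xy_distance(measured_xy, target_xy) <= tol

measured = (0.3449, 0.3581)  # hypothetical colorimeter reading
print(white_point_matches(measured, D50))  # → True (close to D50)
print(white_point_matches(measured, D65))  # → False (far from D65)
```

A monitor passing this check against D50 would still look wrong next to a D65 booth, which is the point of matching the calibration standard to the viewing environment.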
Visual Assessment as a Safeguard
Visual assessment is especially useful as a guard against user error or equipment failure. For instance, if a test target is dusty, numerical analysis may indicate excessive noise in the image; the software has no way of knowing that the target itself is dusty. Likewise, any manner of software or user error can produce numerical evaluations that are grossly incorrect. When numerical analysis shows a significant and sudden change from past results, the first step in troubleshooting is usually a visual assessment to determine whether the digital object or the evaluation of the digital object is in error.
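The safeguard described here amounts to a simple regression check: flag any measurement that departs sharply from its recent history, then fall back to visual assessment. A minimal sketch, with a hypothetical noise metric and an illustrative 30% threshold:

```python
# Flag a sudden departure of a quality metric (e.g., measured noise)
# from its recent history. The threshold is illustrative only; a
# flagged result prompts visual assessment, not automatic rejection.

def needs_visual_check(history, latest, rel_tol=0.30):
    """Return True if `latest` deviates from the mean of `history`
    by more than `rel_tol` (as a fraction of the historical mean)."""
    if not history:
        return True  # no baseline yet: inspect visually
    baseline = sum(history) / len(history)
    return abs(latest - baseline) > rel_tol * baseline

noise_history = [1.02, 0.98, 1.05, 1.00]  # hypothetical past readings
print(needs_visual_check(noise_history, 1.04))  # → False (within tolerance)
print(needs_visual_check(noise_history, 2.10))  # → True (e.g., a dusty target)
```

A flagged capture is then inspected by eye to decide whether the object, the target, or the analysis itself is at fault.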
Visual Assessment for Subjective Rendering
In some cases, objective accuracy comes at the cost of a rendering that would be generally considered pleasant and representative of the intention of the original artist or the original presentation of the object. This is especially true of transmissive and three-dimensional objects. A skilled technician can bring their visual vocabulary, experience, and understanding of the context and intent of the physical object to bear on the way it is rendered. For example, the technician might need to render the object relative to the weight of the image content: when imaging white paper that bears a light graphite drawing, the exposure and tone need to be darker than the target indicates to make the content readable for the viewer. It is the experienced technician's job, in consultation with curators and other stakeholders, to ensure that the rendering selected is optimal for the intended purpose.
Visual Comparison to a Print
A carefully calibrated print workflow can be used to compare a digital capture to the original physical object. This method has obvious limitations, chiefly the increased number of variables to control, since the printing workflow itself must be performed with high-quality, meticulously calibrated hardware. This makes it less than ideal for truly critical color evaluation.
There are also significant benefits that cannot easily be achieved with a monitor-only comparison. First and foremost, a print can be transported easily, allowing it to be brought to the object for comparison if, for instance, the object has been returned to storage or is now with the restoration team. Less obviously, a print, like the original physical object, is itself reflective, so the perceptual disparity between transmissive and reflective objects is not present.
It should never be assumed that a print and an object will match under all lighting conditions simply because both are physical objects. Color is a fickle perception: two objects that appear the same color under one light source may appear as different colors under a source with a different spectral output, an effect called metameric failure that limits the utility of print-to-object visual comparison.
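Metameric failure can be demonstrated numerically. The toy model below uses a hypothetical four-band world seen by a three-channel observer; the sensitivity matrix, illuminants, and reflectances are all invented for illustration (real spectra have many more bands). Two surfaces with different reflectances produce identical sensor responses under one illuminant, but diverge when the illuminant's spectral output changes:

```python
import numpy as np

# Toy metamerism demo: a 3-channel "observer" viewing a 4-band world.
# All numbers are invented for illustration.
M = np.array([[1.0, 0.0, 0.0, 1.0],   # hypothetical channel sensitivities
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])

R1 = np.array([0.4, 0.3, 0.2, 0.5])   # reflectance of surface 1
R2 = np.array([0.5, 0.2, 0.3, 0.4])   # a different reflectance

flat   = np.array([1.0, 1.0, 1.0, 1.0])  # equal-energy illuminant
skewed = np.array([1.2, 1.0, 0.8, 0.6])  # different spectral output

def response(M, illuminant, reflectance):
    """Sensor response: sensitivities applied to the reflected light."""
    return M @ (illuminant * reflectance)

# Under the flat illuminant the two surfaces match exactly...
print(np.allclose(response(M, flat, R1), response(M, flat, R2)))      # → True
# ...but under the skewed illuminant they no longer do: metameric failure.
print(np.allclose(response(M, skewed, R1), response(M, skewed, R2)))  # → False
```

This is why a print that matches the object in the proofing booth may visibly mismatch it under gallery or storage lighting.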