There is a widespread belief among members of the cultural heritage community that high-quality digitization is too expensive to achieve and not truly required for some digitization projects. In this blog post we look deeper into this topic, and at why groups like the Federal Agencies Digitization Guidelines Initiative (FADGI) have been advocating for the importance of following imaging standards guidelines.
The Unforeseen Costs of Imaging with a Short-Term Mentality
Many digitization programs are inappropriately built around short-term needs. Collections are scanned on a flatbed scanner, a general-purpose camera (e.g. a Canon or Nikon DSLR), or a planetary device, without any verification of the resulting image quality beyond a cursory visual comparison. The institution justifies using these lower-quality solutions because they meet the requirements of a particular immediate use; they are “good enough.”
This philosophy has two unforeseen repercussions:
- Cost of reimaging
- Cost of maintaining the Preservation Digital Object (the TIFF or JP2 archival digital asset created at the end of the process).
Consequences of Reimaging: Cost, Time & Condition
If, at any point after creation, the image quality of a Preservation Digital Object is found to be insufficient for a particular use, the object must be reimaged. Moreover, most of the digitization chain must be repeated: object retrieval, object preparation, digitization, quality control, object return, and redepositing the digital asset and metadata into an asset management system. This means additional handling of the object with the inherent risk of degrading its condition, additional internal resources required of the institution, and an additional period when the object is unavailable to other interested parties (patrons, researchers, conservation staff, etc.). In general, it makes sense to do it once, and do it right: it’s safer for the object, it’s more efficient, and it’s simpler.
Cost of Maintaining a Preservation Digital Object
A Preservation Digital Object is meant to be kept indefinitely. However, the cost of maintaining and migrating an archive of Preservation Digital Objects is high, especially when that cost is added up over an indefinite future. Maintaining a Preservation Digital Object with limited applicability (e.g. good enough for patron web access, but not for research or unforeseen future needs) often costs approximately the same as maintaining one created for a broad scope of use. This is especially true of systems which claim to produce a specific PPI but which, beneath the post-process sharpening they apply, actually achieve very low Sampling Efficiency (that is, the detail genuinely resolved relative to the nominal sampling rate). It is our view that the only type of Preservation Digital Object worth creating and indefinitely maintaining is one which will meet even the most quality-demanding future use.
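To make the PPI-versus-sharpness distinction concrete, here is a minimal sketch of how sampling efficiency is commonly computed in imaging metrology: the spatial frequency at which the system stops resolving detail, divided by the Nyquist limit of the nominal sampling rate. The specific numbers below are illustrative assumptions, not measurements of any particular device.

```python
def sampling_efficiency(limiting_resolution_cyc_per_in: float,
                        sampling_rate_ppi: float) -> float:
    """Return sampling efficiency as a percentage.

    limiting_resolution_cyc_per_in: spatial frequency (cycles/inch)
        at which the system stops resolving real detail.
    sampling_rate_ppi: nominal sampling rate in pixels per inch.
    """
    nyquist = sampling_rate_ppi / 2.0  # highest frequency the sampling can represent
    return 100.0 * limiting_resolution_cyc_per_in / nyquist

# A scanner labeled "600 ppi" has a Nyquist limit of 300 cycles/inch.
# If its optics only resolve detail out to 180 cycles/inch (an assumed
# figure), the file carries 600-ppi storage cost with far less real
# information:
print(sampling_efficiency(180, 600))  # 60.0
```

In other words, two files can both claim 600 PPI while one holds substantially less usable detail; the storage and maintenance burden, however, is the same for both.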
The Rest of the Digitization Chain
Imaging entire collections is a Herculean task in terms of labor, costs and mindshare. It involves the executive arm of the institution, preservation staff, conservation teams, cataloguing personnel, imaging specialists and information technology departments – all part of the process of reformatting collections digitally for long-term access and preservation. The scope is so wide and the chain so long that most of the time, resources are spent in areas other than the actual imaging of the object. The majority of the process is administrative planning, internal communication and project assignment, object retrieval, metadata entry, quality control and file management. For instance, a DT RGC180 Capture Cradle with a DT RCam could be used by a single technician to image thousands of photographic prints per hour, if that were the only step in the digitization process. In reality, staff will spend more time retrieving, organizing and returning the boxes/containers of materials than actively imaging – and the imaging technician is only one part of the total digitization chain. Thus, it is imperative that the digitization itself is at the best possible quality; otherwise the rest of the institutional resources involved in the digitization project will have been squandered.
In years past, slow scanning systems and multi-shot systems meant that high-quality images took significantly more time to produce, and increasing resolution and quality with these legacy systems required a direct compromise on capture speed. As a result, a slow workflow was, for many years, the only viable way to produce high-resolution, high-color-fidelity, sharp Preservation Digital Objects. However, in the last several years the advent of high-resolution single-shot capture systems has revolutionized digital capture; it now takes the same time to digitize to preservation-grade image quality standards as it does to produce access/patron-quality images. A DT BC100 Book Capture System, for instance, can handle bound material up to an A2 page spread at 600ppi with FADGI 4-Star sharpness; its capture rate of approximately 30 pages per minute is the same whether capturing A2@600ppi@FADGI-4 or A2@150ppi@FADGI-2. There is no longer a need to compromise between high quality and high productivity.
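The difference in information captured at those two settings is worth spelling out, even though the capture time is now identical. The arithmetic below is illustrative (standard A2 sheet dimensions and an assumed uncompressed 8-bit RGB encoding, not vendor file-size specs):

```python
# Pixel dimensions and uncompressed data volume for an A2 page spread
# (420 x 594 mm, i.e. about 16.54 x 23.39 in) at preservation-grade
# 600 ppi versus access-grade 150 ppi.
A2_IN = (16.54, 23.39)   # A2 sheet, inches
BYTES_PER_PIXEL = 3      # assumed 8-bit RGB, uncompressed

def capture_stats(ppi: int):
    """Return (width_px, height_px, size_mb) for an A2 capture at ppi."""
    w = round(A2_IN[0] * ppi)
    h = round(A2_IN[1] * ppi)
    return w, h, w * h * BYTES_PER_PIXEL / 1e6

for ppi in (600, 150):
    w, h, mb = capture_stats(ppi)
    print(f"{ppi} ppi: {w} x {h} px, ~{mb:.0f} MB uncompressed")
```

The 600ppi master carries sixteen times the pixel count of the 150ppi access file, yet on a single-shot system both take the same moment to capture – which is precisely why settling for the lower-quality file no longer buys any throughput.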
There is No Room for Short-Term Thinking in Preservation Imaging
In summary, preservation imaging is inherently long-term in nature – there is no room for compromises made because of short-term thinking. We believe firmly that following high image-quality guidelines, using dedicated, high-quality capture equipment, is essential for creating Preservation Digital Objects that will stand the test of time and justify the cost of their creation and indefinite maintenance. Cultural heritage professionals often focus on quantity when taking on the task of mass digitization, but quality is just as important.
Interested in a similar solution for your program or have a comment? Please contact us.
Division of Cultural Heritage
1.877.f/ortless (367-8537) x2280