Digitization

Statement of Purpose

Recognizing the importance of collections [1][2][3], SPNHC sees a need to collaborate to develop, discover, disseminate, and update best (better, current, recommended) practices for creating digital collections resources and publishing them for global access. Materials linked here represent the efforts of many collections data mobilization projects worldwide. All in the collections and standards community are encouraged to contribute.

Defining Digitization

In the context of the SPNHC wiki, 'digitize' means converting ALL analog data to digital data according to standard vocabularies such as Darwin Core and Audubon Core. That is, we start with the concept of a specimen that has been accessioned in a collection. We envision these digital data eventually including the entirety of the analog data associated with a particular specimen. This may include but is not limited to:

  • Text data from labels and ledgers associated with specimens
  • Images of specimens
  • DNA and other 'omics
  • Field notes, drawings and images
  • Tomographic imaging data
  • Specimen history (including preservation)
  • Specimen-associated literature and media
  • Collection-level metadata

Digitizing might be accomplished by collections managers, technicians, contractors, volunteers, and other entities, with the results incorporated into the institution's collection management system. In many instances these data may be generated off site by investigators.

Nelson et al. (2012)[4] analyzed the process of digitization and identified five task clusters that comprise the work leading up to data publication:

  1. Pre-digitization curation and staging
  2. Specimen image capture
  3. Specimen image processing
  4. Electronic data capture
  5. Georeferencing locality descriptions

We expect these groupings to change over time as standards of practice for digitization processes and procedures evolve. For example, the following should likely be added:

  • Data mobilization (aka data publishing) as a task cluster: after data are captured in a local database, they need to be shared beyond that database.
  • Feedback re-integration: after collections data are published, feedback from others needs to be re-integrated into the local collection. This re-integration step requires vetting and usually some policy decision-making; at some point, changes to the local collection management system may be desired or needed.
  • Pro-active data capture: policies and procedures for capturing new specimen data in the field (i.e. "born-digital" data), already mapped to relevant data standards and formatted accordingly.

Digitization Resources

Data Aggregation

Data mobilization (getting the data out of your local collection management database) involves contributing data and media to one or more designated aggregators. These data are then integrated with data from other institutions to provide access to a greater volume of datasets. The scope of an aggregation resource may be taxonomic (e.g. SCAN), organization- or institution-based (e.g. C. V. Starr Virtual Herbarium), regional (e.g. SEINet), national (e.g. the Atlas of Living Australia), global (e.g. GBIF), or otherwise. Aggregating data offers collections unique opportunities to enhance collections data, facilitate discovery, and increase re-use. The following resources introduce the aggregator's point of view and what to expect.

Getting collections data to an aggregator is a multi-step and often cyclic process. See also: data standards, data management, data mobilization, and workflows.

Data Aggregators

Natural history collections commonly contribute to these data aggregators:

Data Management

Data Mobilization

Consider what needs to be done to get data out of a local collections database and into one or more other online resources. Some of the other categories on this wiki page that relate to this topic are data standards, data management, data aggregation, and workflows. Sharing data is often a cyclic process: once data are shared, aggregators provide feedback, and collections staff need to evaluate which items to address and how. After updates, the data can be published again with the enhancements.

Data aggregators often differ somewhat in what they expect collections data to look like in order to simplify aggregation. Overall, the community is moving toward shared aggregation practices. For example, most aggregators today accept Darwin Core Archives (i.e. zipped text files in a specific format) for ingestion.
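
The sketch below illustrates the basic shape of a Darwin Core Archive: a data file of occurrence records plus a meta.xml descriptor, zipped together. It is a minimal illustration only; the occurrence file is assumed to exist already, the file names are arbitrary, and most aggregators will also expect an EML metadata document describing the dataset.

  # A minimal sketch of packaging a Darwin Core Archive (assumes occurrence.txt,
  # a tab-delimited file with a header row, already exists in the working directory).
  import zipfile

  META_XML = """<archive xmlns="http://rs.tdwg.org/dwc/text/">
    <core encoding="UTF-8" fieldsTerminatedBy="\\t" linesTerminatedBy="\\n"
          ignoreHeaderLines="1" rowType="http://rs.tdwg.org/dwc/terms/Occurrence">
      <files><location>occurrence.txt</location></files>
      <id index="0"/>
      <field index="0" term="http://rs.tdwg.org/dwc/terms/occurrenceID"/>
      <field index="1" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
      <field index="2" term="http://rs.tdwg.org/dwc/terms/eventDate"/>
    </core>
  </archive>
  """

  with open("meta.xml", "w", encoding="utf-8") as f:
      f.write(META_XML)

  # The archive itself is just a zip file containing the descriptor and the data file(s).
  with zipfile.ZipFile("dwca-occurrence.zip", "w", zipfile.ZIP_DEFLATED) as zf:
      zf.write("meta.xml")
      zf.write("occurrence.txt")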

In the process of preparing to share data, there are many known issues to consider. Have a look at the iDigBio Data Ingestion Guidance for an idea of the scope of the issues. Some overall topics that will come up include:

  • Globally unique identifiers
  • Collection-level metadata
  • Data standard and format issues (e.g. date formats, missing higher taxonomy, geo-coordinate issues, ...); a sketch of simple checks for these follows this list
  • Rights information (e.g. Creative Commons licenses for images)
  • Licensing: check out the VertNet Norms for Data Use and Publication for a thorough introduction to the licensing issues pertinent to collections data and media.
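
As a concrete illustration of the kinds of checks these issues imply, the following sketch validates a few fields of a single occurrence record before publication. The record structure and field names (standard Darwin Core terms) are assumed for illustration, and the date check is deliberately stricter than Darwin Core itself, which also allows partial dates and date ranges.

  # A minimal pre-publication check, assuming records are dicts keyed by Darwin Core terms.
  from datetime import date

  def check_record(rec):
      """Return a list of issues found in one occurrence record."""
      issues = []
      # Globally unique identifiers: every record should carry a stable occurrenceID.
      if not rec.get("occurrenceID"):
          issues.append("missing occurrenceID")
      # Date formats: Darwin Core recommends ISO 8601 for eventDate (e.g. 1987-06-05).
      event_date = rec.get("eventDate", "")
      if event_date:
          try:
              date.fromisoformat(event_date)
          except ValueError:
              issues.append(f"eventDate not a simple ISO 8601 date: {event_date!r}")
      # Geo-coordinates: decimal degrees in valid ranges, with the datum stated.
      lat, lon = rec.get("decimalLatitude"), rec.get("decimalLongitude")
      if lat is not None and lon is not None:
          if not (-90 <= float(lat) <= 90 and -180 <= float(lon) <= 180):
              issues.append("coordinates out of range")
          if not rec.get("geodeticDatum"):
              issues.append("coordinates given without a geodeticDatum")
      return issues

  # Example (invented record): flags the non-ISO date and the missing datum.
  print(check_record({"occurrenceID": "urn:uuid:0f8fad5b-d9cb-469f-a165-70867728950e",
                      "eventDate": "5 June 1987",
                      "decimalLatitude": 42.38, "decimalLongitude": -71.12}))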

Data Standards and Mobilization

To share our respective datasets, the data must be mapped to a single set of terms and concepts; by doing this, we can aggregate data into one searchable resource. It is rather like agreeing on a common language. Our collections community currently uses both Darwin Core and Access to Biological Collections Data (ABCD) to share biodiversity data; European collections use ABCD more often than Darwin Core, and discussions are underway about merging these standards. The Audubon Core (AC) standard provides a common language for sharing information about media (2D, 3D, etc.). Darwin Core, a widely adopted standard for biodiversity data sharing, was developed by the organization Biodiversity Information Standards (TDWG; historically known as the Taxonomic Databases Working Group) and ratified in 2009. A number of resources exist for its use:

More and more, the process of mapping collections data to Darwin Core or other standards is simplified by the collections software itself.
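
Where the software does not do this for you, the mapping itself is usually a straightforward rename of local fields to standard terms. In the sketch below, the local column names and file names are invented for illustration; the terms on the right-hand side are standard Darwin Core.

  # Hypothetical mapping from local export columns to Darwin Core terms.
  import csv

  FIELD_MAP = {
      "catalog_no":     "catalogNumber",
      "genus_species":  "scientificName",
      "collector":      "recordedBy",
      "date_collected": "eventDate",
      "latitude":       "decimalLatitude",
      "longitude":      "decimalLongitude",
      "locality_text":  "locality",
  }

  def to_darwin_core(row):
      """Rename one exported row (a dict) from local column names to Darwin Core terms."""
      return {dwc: row[local] for local, dwc in FIELD_MAP.items() if local in row}

  # Read a hypothetical local export and write a tab-delimited Darwin Core occurrence file.
  with open("local_export.csv", newline="", encoding="utf-8") as src, \
       open("occurrence.txt", "w", newline="", encoding="utf-8") as dst:
      writer = csv.DictWriter(dst, fieldnames=list(FIELD_MAP.values()), delimiter="\t")
      writer.writeheader()
      for row in csv.DictReader(src):
          writer.writerow(to_darwin_core(row))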

Data Transcription

Transcription, aka data capture or data entry, is an essential part of the digitization process but can pose a number of challenges. Many institutions enlist the services of 'Transcription Portals', also known as 'Volunteer or Citizen Science Portals', for help in transcribing collections records, whether specimen labels, field notes and diaries, or some other form of media (e.g. describing and annotating animals appearing in camera trap images). Search iDigBio for materials tagged transcription.

Database Software

Those curating natural history collections are currently using a number of different platforms to capture, track, and share data. Below are a few of the more common database systems:

Features may vary widely, including:

  • Ability to customize
  • Ability to easily map to data standards
  • Ability to store and track identifiers (e.g. for people, specimens, identifications, ...)
  • Assignment of globally unique identifiers (see the sketch after this list)
  • Available fields
  • Cost
  • Ease of publishing data to aggregators
  • Georeferencing (built-in tools, or not)
  • Linking to media resources (e.g. label images, 2D, 3D media)
  • Ways to batch update records
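
On the globally unique identifier point, one common convention (not the only one) is to mint a UUID once when a record is created and store it permanently, so the same occurrenceID travels with the record to every aggregator. The record and values below are invented for illustration.

  # Minting a stable occurrenceID once per record; never regenerate an existing identifier.
  import uuid

  def mint_occurrence_id(existing=None):
      """Return the stored identifier if there is one; otherwise mint a new UUID URN."""
      return existing or f"urn:uuid:{uuid.uuid4()}"

  record = {"catalogNumber": "FISH-12345", "occurrenceID": None}   # illustrative record
  record["occurrenceID"] = mint_occurrence_id(record["occurrenceID"])
  print(record["occurrenceID"])   # e.g. urn:uuid:2f9c0c1e-...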

Georeferencing

A number of resources pertaining to georeferencing, the process of defining a location using map coordinates and specifying the coordinate system in which those coordinates are expressed, are available:

It is logical to separate georeferencing of collections locality data into two categories.

  1. Georeferencing legacy data from the text-based locality descriptions of specimens collected before the global positioning system (GPS) made in-field coordinate capture routine. The references and examples in the list above give many hints on best/better practices for georeferencing this type of data.
  2. For new specimens entering collections, best practice would be for the coordinate data and metadata for each item to be included at the time of collection. This keeps the legacy data backlog from growing, increases the accuracy of the coordinates, and speeds access to the data needed for scientific research. As a best practice, a collection/institution would have a policy in place about what geospatial information is expected to be submitted with a specimen. See the current Biogeomancer Guide to Best Practices for guidance on this topic, and the sketch after this list for the kinds of fields such a policy might cover.
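
A sketch of such a born-digital georeference follows. The field names are standard Darwin Core terms; the values, and the suggestion of which fields to require, are illustrative only and would be set by the institution's own policy.

  # Illustrative born-digital georeference for a new specimen (values are invented).
  georeference = {
      "decimalLatitude": -35.2813,
      "decimalLongitude": 149.1287,
      "geodeticDatum": "WGS84",                      # datum reported by the GPS unit
      "coordinateUncertaintyInMeters": 30,           # device accuracy plus any offset
      "georeferencedBy": "J. Collector",
      "georeferencedDate": "2020-02-13",
      "georeferenceProtocol": "GPS reading at the collection site",
      "georeferenceSources": "handheld GPS",
      "verbatimLocality": "ridge 2 km NE of trailhead, Black Mountain",
  }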

If your institution has such a policy and guidance in place, it would be good to share it here.

iDigBio Digitization Resources Wiki

The iDigBio Digitization Resources wiki page provides resources and information regarding digitization, including training workshops conducted by iDigBio and links to documents, websites, videos, presentations, and other material related to biological collection digitization.

Imaging and Media

A number of techniques are available for two-dimensional (2D) and three-dimensional (3D) digitization, including X-ray computed tomography (CT):

Key References and Further Reading

  • Nelson, G., D. Paul, G. Riccardi, and A.R. Mast. 2012. Five task clusters that enable efficient and effective digitization of biological collections. ZooKeys 209:19–45.
  • Vollmar, A., J.A. Macklin, and L.S. Ford. 2010. Natural History Specimen Digitization: Challenges and Concerns. Biodiversity Informatics 7:93–112.
  • ZooKeys Special Issue: No specimen left behind: mass digitization of natural history collections (2012)
  • Search iDigBio for all available digitization materials
  • The Atlas of Living Australia (ALA) digitisation guide. Includes key guidance material such as the 'digitisation maturity model'.

Webinars

Access to various webinars (and select recorded presentations) regarding digitization is available:

Workflows

Various general and discipline-specific materials regarding digitization are available via iDigBio:

Workshops and Symposia

A number of workshops and conference symposia have focused on the subject of digitization:

Contributors

Current content contributors: SPNHC members Breda Zimkus, Jessica Cundiff, Genevieve Tocci, Nicole Fisher, and Deborah Paul. We hope that others will add their names to this list as information is added and updated.

The original digitization page content now found here was generated during an iDigBio-sponsored workshop at the American Society of Ichthyologists and Herpetologists (ASIH) Annual Joint Meeting in 2016, by the following individuals participating in the workshop's "Digitization" working group: Gil Nelson (Florida State University, Courtesy Faculty), Larry Page (The Florida Museum of Natural History, Ichthyology Curator), Cristina Cox-Fernandes (UMass Amherst Biology, Adjunct Research Associate Professor), Mark Sabaj (ANSP, Ichthyology Collection Manager), Adam Summers (University of Washington, Professor - Friday Harbor Labs), Kevin Love (iDigBio, IT Expert), Ken Thompson (Lock Haven University, Professor; Retired), Randy Singer (Florida Museum of Natural History), and Gregory Watkins-Colwell (Yale Peabody Museum, Herps and Fishes, Collection Manager).

References

  1. Page, L.M., B.J. MacFadden, J.A. Fortes, P.S. Soltis, and G. Riccardi. 2015. Digitization of Biodiversity Collections Reveals Biggest Data on Biodiversity. BioScience 65(9):841–842. https://doi.org/10.1093/biosci/biv104
  2. Nelson, G., and S. Ellis. 2019. The history and impact of digitization and digital data mobilization on biodiversity research. Philosophical Transactions of the Royal Society B: Biological Sciences. https://doi.org/10.1098/rstb.2017.0391
  3. Monfils, A.K., K.E. Powers, C.J. Marshall, C.T. Martine, J.F. Smith, and L.A. Prather. 2017. Natural History Collections: Teaching about Biodiversity Across Time, Space, and Digital Platforms. Southeastern Naturalist 16(sp10):47–57. https://doi.org/10.1656/058.016.0sp1008
  4. Nelson, G., D. Paul, G. Riccardi, and A.R. Mast. 2012. Five task clusters that enable efficient and effective digitization of biological collections. ZooKeys 209:19–45. https://doi.org/10.3897/zookeys.209.3135