Digital Archivist at the Archaeology Data Service, York

2b or not 2b – Aiming for PDF/A

It doesn’t seem five minutes since I was working on the Ipswich backlog archive and writing my 2015 Day of Archaeology post and yet, here I am again – a year later, relating the ins and outs of a day in the life of a Digital Archivist at the Archaeology Data Service.

Being in the mood for reminiscing, having just finished reading my colleague Tim Evans’ blog about his ten-year anniversary working for the ADS, I was about to launch into a review of everything I’ve done in the year since my last DoA post.  However, remembering in the nick of time that this is in fact the ‘DAY of Archaeology’, I’ll spare you that and turn instead to a tale of two tasks.

My first task of the day will be continuing the process of accessioning and archiving the 482 files that have been submitted to the ADS via OASIS.

Along with providing information about archaeological events, archaeologists are encouraged to upload fieldwork reports to the OASIS data capture form; these are then archived with the ADS and added to the Library of Unpublished Fieldwork Reports (or Grey Literature Library) for wider re-use. The transfer process is a fairly complex and involved one, described in the blog article ‘Opening up the Grey Literature Library’, so I won’t go into it here.  What I can tell you is that much of my day will be spent chasing after the precious and often elusive validating PDF/A.

Of the 482 files that form this month’s OASIS transfer batch, 462 have been submitted as PDF files, though the PDF versions, and the software and methods used to create them, vary. This batch, for instance, contained the following (see the tally sketch after the list):

  • 14 PDF 1.3 files;
  • 125 PDF 1.4 files;
  • 99 PDF 1.5 files;
  • 155 PDF 1.6 files;
  • 47 PDF 1.7 files;
  • 1 PDF/A-1a file and
  • 21 PDF/A-1b files
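As it happens, a tally like this is easy to pull out of a DROID CSV export. Here is a minimal sketch, assuming DROID’s usual FORMAT_NAME and FORMAT_VERSION column headings (these can differ between DROID versions) and a hypothetical export file name:

```python
import csv
from collections import Counter

def tally_pdf_versions(droid_csv_path):
    """Tally PDF versions from a DROID CSV export.

    Assumes DROID's usual FORMAT_NAME / FORMAT_VERSION columns;
    exact headings can differ between DROID versions."""
    counts = Counter()
    with open(droid_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if "PDF" in row.get("FORMAT_NAME", "").upper():
                version = row.get("FORMAT_VERSION") or "unknown"
                counts[version] += 1
    return counts

# 'oasis_batch.csv' is a hypothetical export file name.
for version, n in sorted(tally_pdf_versions("oasis_batch.csv").items()):
    print(f"{n:4d}  PDF {version}")
```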

As our preservation format is PDF/A, all files that are not already PDF/A need to be migrated, so any that are submitted as PDF/A already can be preserved as they are, saving us time…but are they really PDF/A?

The file profiling tool DROID, and many validating tools, will tell you they are, because they identify a PDF/A tag in the file’s XMP metadata. In fact, when you open a PDF file purporting to be PDF/A you also get a helpful blue banner at the top stating that the file complies with the PDF/A standard, but this may be deceptive and the file may still not verify in Adobe Acrobat:

[Screenshot: the blue PDF/A banner in Adobe Acrobat]
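To see what that tag-based identification amounts to, here is a minimal sketch of the kind of check involved: scanning a file’s XMP metadata for the pdfaid:part and pdfaid:conformance entries. The file name is hypothetical, and a raw byte scan is a simplification of what DROID actually does, but it makes the point that the tag records only a claim of conformance, not conformance itself:

```python
import re

def claimed_pdfa_level(path):
    """Return the PDF/A level a file claims in its XMP metadata, or None.

    XMP packets are stored uncompressed precisely so that tools can
    find them by scanning; the pdfaid entries may appear in attribute
    form (pdfaid:part="1") or element form (<pdfaid:part>1</pdfaid:part>)."""
    data = open(path, "rb").read()
    part = re.search(rb'pdfaid:part(?:="|>)(\d)', data)
    conformance = re.search(rb'pdfaid:conformance(?:="|>)([ABUabu])', data)
    if not part:
        return None  # no claim at all
    level = part.group(1).decode()
    if conformance:
        level += conformance.group(1).decode().lower()
    return "PDF/A-" + level

# 'report.pdf' is a hypothetical file name.
print(claimed_pdfa_level("report.pdf"))  # e.g. 'PDF/A-1b', or None
```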

According to Adobe Acrobat, only 15 of the 22 files purporting to be PDF/A were verified as such. As our procedure is to create PDF/A files that are verified by Adobe, the other 7 will need to be migrated along with the remaining 440 PDFs, making 447 migrations in all.  Each migration attempt is followed by a PDF/A validation check, even though our migration software states whether the migration has succeeded or failed.  Where files won’t migrate to PDF/A-1b and we migrate them to PDF/A-2b instead, for example, we still have to repeat the Adobe ‘Verify Conformance’ check:

[Screenshot: Adobe Acrobat’s ‘Verify Conformance’ reporting success]

2b…

[Screenshot: Adobe Acrobat’s ‘Verify Conformance’ reporting failure]

…not 2b

This is likely to take a while.
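For anyone wondering how such a repetitive validation step might be scripted, below is a sketch that batch-checks a folder of migrated files using the open-source veraPDF command-line validator. To be clear, this is not the Adobe ‘Verify Conformance’ workflow described above, and the folder name, CLI flags (--flavour, --format) and PASS/FAIL text output are assumptions worth checking against your installed veraPDF version:

```python
import pathlib
import subprocess

def passes_pdfa(pdf_path, flavour="2b"):
    """Check one file against a PDF/A flavour with the veraPDF CLI.

    Assumes 'verapdf' is on the PATH; the --flavour/--format flags and
    the PASS/FAIL text output should be confirmed against your version."""
    result = subprocess.run(
        ["verapdf", "--flavour", flavour, "--format", "text", str(pdf_path)],
        capture_output=True,
        text=True,
    )
    return result.stdout.lstrip().startswith("PASS")

# 'migrated' is a hypothetical folder of freshly migrated files.
failures = [p.name for p in pathlib.Path("migrated").glob("*.pdf")
            if not passes_pdfa(p)]
print(f"{len(failures)} file(s) still refusing to be 2b")
```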

So, my second task, which will be done in and amongst the grey literature archiving, is to release the dozen or so Southampton archives that I have been working on – these form part of ‘Southampton’s Designated Archaeology Collections’.

These archives have gone through several stages of work and processes, from accessioning (which involves checking that the file formats and contents are readable and suitable for archiving, and that the documentation provided with the data is sufficient to allow discovery, reuse and curation) to interface creation (which involves working to a template created for the Southampton archives, creating thumbnails and images, adding introductory text and ensuring that the files can be downloaded from our file system).  Once the files have been accessioned and migrated, the documentation added to our databases and the interface completed, the work is checked by another Digital Archivist prior to release.

The release stage itself involves a few separate tasks including:

  • a final running of the file profiling tool (DROID) on the collection to ensure that all of the file format information we need is added to our database;
  • adding a ‘release date’ to our Collections Management System;
  • assigning (or minting) a Digital Object Identifier (DOI) for each collection via DataCite (see the sketch after this list);
  • updating the ADS ‘Collections History’ page to include the new release;
  • checking the Dublin Core metadata and transferring that metadata to Archsearch;
  • creating links, where relevant, to/from the Grey Literature Library and The Geophysical Survey Database.
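On the DOI step: minting via DataCite boils down to a POST against their REST API. The sketch below uses placeholder credentials, prefix and metadata (a real record also needs creators, publisher, publication year and so on per the DataCite metadata schema), so treat it as an outline rather than our actual release script:

```python
import requests

def mint_doi(repo_id, password, prefix, suffix, title, landing_url):
    """Mint and publish a DOI via the DataCite REST API.

    All arguments here are placeholders; a real record also needs
    creators, publisher, publicationYear and so on per the DataCite
    metadata schema."""
    payload = {
        "data": {
            "type": "dois",
            "attributes": {
                "doi": f"{prefix}/{suffix}",
                "event": "publish",  # register and make findable in one step
                "titles": [{"title": title}],
                "url": landing_url,  # the collection's landing page
                "types": {"resourceTypeGeneral": "Dataset"},
            },
        }
    }
    response = requests.post(
        "https://api.datacite.org/dois",
        json=payload,
        auth=(repo_id, password),
        headers={"Content-Type": "application/vnd.api+json"},
    )
    response.raise_for_status()
    return response.json()["data"]["id"]
```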

By the end of the day, therefore, if all goes to plan, there should be around a dozen more collections added to the 987 already available on our website!


Archiving Ipswich

Two years after posting about my work on the Silbury Hill digital archive in ‘An ADS Day of Archaeology’, I’m still busy working as a Digital Archivist with the ADS!

For the past few months, I have been working on the Ipswich Backlog Excavation Archive, deposited by Suffolk County Council, which covers 34 sites, excavated between 1974 and 1990.


Excavation at St Stephen’s Lane, Ipswich 1987-1988

To give a quick summary of the work so far: the data first needed to be accessioned into our systems, which involved all of the usual checks (scanning for viruses, removing spaces from file names, sorting the data into 34 separate collections, sifting out duplicates and so on).  The archive packages were then created, which involved migrating the files to their preservation and dissemination formats and creating file-level metadata using DROID.  The different representations of the files were linked together using object IDs in our database, and all of the archiving processes were documented before the coverage and location metadata were added to the individual site collections.

Though time-consuming, due to the quantity of data, this process was fairly simple, as most of the file names had been created consistently and contained the site code.  Those that didn’t have descriptive file names could be found in the site database and sorted according to the information there.
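As an illustration of that sorting step, here is a small sketch that files everything into per-site folders based on a site code at the start of the file name. The code pattern is entirely hypothetical (the real Ipswich codes may look quite different), and anything that doesn’t match is set aside for manual checking against the site database:

```python
import re
import shutil
from pathlib import Path

# Hypothetical pattern: a three-letter, three-digit site code such as
# IPS123 at the start of the file name. The real codes may differ.
SITE_CODE = re.compile(r"^([A-Z]{3}\d{3})")

def sort_into_collections(incoming, sorted_root):
    """Move files into one folder per site based on the site code in
    the file name; return the names that need manual sorting."""
    unmatched = []
    for item in Path(incoming).iterdir():
        if not item.is_file():
            continue
        match = SITE_CODE.match(item.name)
        if match:
            dest = Path(sorted_root) / match.group(1)
            dest.mkdir(parents=True, exist_ok=True)
            shutil.move(str(item), str(dest / item.name))
        else:
            unmatched.append(item.name)  # check against the site database
    return unmatched

leftovers = sort_into_collections("incoming", "collections")
print(f"{len(leftovers)} file(s) need sorting via the site database")
```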

The next job was to create the interfaces; again, this was fairly simple for the individual sites as they were made using a template which retrieves the relevant information from our database allowing the pages to be consistent and easily updateable.

The Ipswich Backlog Excavation Archive called for a more innovative approach, however, to give users greater flexibility when searching: the depositors requested a map interface as well as a way to query information from their core database.  The map interface was the most complex part of the process and involved a steep learning curve for me, requiring applications, software and code that I had not previously used, such as JavaScript, OpenLayers, GeoServer and QGIS.  The resulting map allows the user to view the features excavated on the 34 sites and retrieve information such as feature type and period, as well as linking through to the project archive for each site.
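Behind a map like this, OpenLayers typically fetches its vector data from GeoServer with standard WFS requests. The sketch below shows the sort of GetFeature call involved; the endpoint URL, layer name and attribute names are all made up for illustration, not the real ADS service:

```python
import requests

# Endpoint and layer name are made up for illustration.
WFS_URL = "https://example.org/geoserver/ows"

def fetch_features(type_name="ipswich:features", count=50):
    """Fetch excavated features as GeoJSON using a standard WFS 2.0
    GetFeature request, the kind of call an OpenLayers vector layer
    makes behind the scenes."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        "outputFormat": "application/json",
        "count": count,
    }
    response = requests.get(WFS_URL, params=params)
    response.raise_for_status()
    return response.json()["features"]

for feature in fetch_features():
    props = feature["properties"]
    # 'feature_type' and 'period' are assumed attribute names.
    print(props.get("feature_type"), props.get("period"))
```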

OpenLayers map of Ipswich excavation sites.

So, as to what I’m up to today…

The next, and final, step is to create the page that queries the database.  For the past couple of weeks I have been sorting the data from the core database into a form that will fit into the ADS object tables, cleaning and consolidating period, monument and subject terms and, where possible, matching them to recognised thesauri such as the English Heritage Monument Type Thesaurus.
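The term-matching part of that work lends itself to partial automation. Here is a minimal sketch, with a tiny stand-in vocabulary in place of the real Monument Type Thesaurus, that tries an exact match first and falls back to a fuzzy suggestion for manual review:

```python
import difflib

# A tiny stand-in vocabulary; the real Monument Type Thesaurus is
# far larger.
MONUMENT_TERMS = ["PIT", "POST HOLE", "DITCH", "HEARTH", "WELL", "KILN"]

def match_term(raw, vocabulary, cutoff=0.8):
    """Match a raw database term to a controlled vocabulary term.

    Exact (case-insensitive) matches are accepted outright; near
    misses are returned as fuzzy suggestions for manual review."""
    term = raw.strip().upper()
    if term in vocabulary:
        return term, "exact"
    close = difflib.get_close_matches(term, vocabulary, n=1, cutoff=cutoff)
    if close:
        return close[0], "fuzzy"
    return None, "unmatched"

for raw in ["posthole", " Ditch", "plough scar"]:
    print(raw, "->", match_term(raw, MONUMENT_TERMS))
# posthole -> fuzzy POST HOLE; Ditch -> exact; plough scar -> unmatched
```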

Today will be a continuation of that process and hopefully, by the end of the day, all of the information required by the query pages will have been added to our database tables so that I can begin to build that part of the interface next week.  If all goes to plan, users should be able to view specific files based on searches by period, monument/feature type, find type, context, site location and so on, with more specialist information, such as pottery identification, being available directly from the core database tables, which will be downloadable in their entirety.  Fingers crossed that it does all go to plan!

So, that’s my Day of Archaeology 2015. Keep a lookout for ADS announcements regarding the release of the Ipswich Backlog Excavation Archive over the next few weeks, and check out the posts from my ADS colleagues Jo Gilham and Georgie Field!

An ADS Day of Archaeology

Here it is, my Day of Archaeology 2013 and after a routine check of my emails and the daily news I’m ready to begin!

Silbury Hill ©English Heritage

I am currently approaching the end of a year-long contract as a Digital Archivist at the Archaeology Data Service in York on an EH-funded project to prepare the Silbury Hill digital archive for deposition.

For a summary of the project, see the ADS newsletter and for a more in-depth account of my work so far check out my blog from a couple of weeks ago: “The Silbury Hill Archive: the light at the end of the tunnel”

Very briefly, though, my work has involved sifting through the digital data to retain only the information which is useful for the future, discarding duplicates or superfluous data; sorting the archive into a coherent structure and documenting every step of the process.

The data will be deposited with two archives: the images and graphics will go to English Heritage, while the more technical data will be deposited with the ADS. As the English Heritage portion of the archive has been completed, it is now time for the more technical stuff!

So, the plan for today is to continue with the work I have been doing for the past few days: sorting through the Silbury Hill database (created in Microsoft Access).

Originally, I had thought that the database would just need to be documented but, like the rest of the archive, it seems to have grown fairly organically; though the overall structure seems sound, it needs a bit of work to make it as functional, and therefore as useful, as possible.

The main issue with the database is that there are a fair number of gaps in the data tables. The database seems to have been set up from a standard template, with tables for site photography, contexts, drawings, samples, skeletal remains, artefact data and so on, but some of these tables have not been populated and some are not relevant.  The site photography and drawing records have not been entered, for example, meaning that any links to or from these tables would be worthless.  The missing data for the 2007 works are present in the archive, just in separate Excel spreadsheets.  There are also 2001 data files; these are in simple text format, as the information was downloaded as text reports from English Heritage’s old archaeological database, DELILAH.  The data has since been exported into Excel so, again to make the information more accessible, I’m adding the 2001 data to the 2007 database.

My work today, therefore, as it has been for the past couple of days, is to populate the empty database tables with the information from these spreadsheets and text files, and to resolve any errors or issues that cause the tables to lose their ‘referential integrity’, for example where a context number is referred to in one table but is missing from a linking table.
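The same referential integrity check can be expressed as a query: find foreign-key values in a child or linking table that have no matching row in the parent table. The sketch below uses SQLite with made-up table and column names (the Silbury database is in Access, so this is an analogy, not the actual schema):

```python
import sqlite3

def find_orphans(con, child_table, fk_col, parent_table, pk_col):
    """List values of child_table.fk_col with no matching row in
    parent_table, e.g. context numbers used by a linking table but
    absent from the contexts table. Names are trusted here because
    this is an illustrative sketch, not user-facing code."""
    sql = f"""
        SELECT DISTINCT c.{fk_col}
        FROM {child_table} AS c
        LEFT JOIN {parent_table} AS p ON c.{fk_col} = p.{pk_col}
        WHERE p.{pk_col} IS NULL AND c.{fk_col} IS NOT NULL
    """
    return [row[0] for row in con.execute(sql)]

# Database file, table and column names are all made up.
con = sqlite3.connect("silbury.db")
for context_no in find_orphans(con, "drawing_contexts", "context_no",
                               "contexts", "context_no"):
    print("No matching context record for:", context_no)
```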

Silbury database relationship diagram ©English Heritage

So, this morning I started with the 2001 drawing records. Entering the data itself was fairly straightforward, just copying and pasting from the Excel spreadsheet into the Access tables, correcting spelling errors as I went.  Some of the fields were controlled vocabulary fields, however, which meant going to the relevant glossary table and entering a new term so that the site data could be entered as it was recorded in the field.

Once the main drawing table was completed, the linking table needed to be populated; again, this was done fairly simply through cutting and pasting from Excel.

The next step was the most time-consuming: checking the links between the tables.  To do this, I went to the relationship diagram, clicked on the relevant link and ticked the box marked ‘enforce referential integrity’.  Where this didn’t work, it meant that a reference in one table was not matched in the linking table, and I had to go through the relevant fields searching for entries that were not correct.  The most common cause of these error messages was that an entry had been mistyped in one of the tables.

That took me up to lunchtime, so what about the afternoon?  More of the same: starting work on the sample records with the odd break for tea or a walk outside to save my eyes!

As much as the process of updating the database has been fairly routine, it’s been an interesting and valuable piece of work for me, as it is the first time I’ve really delved into the structure of a database and looked at the logic behind its design.  I was fortunate to have attended the Database Design and Implementation module taught by Jo Gilham as part of the University of York MSc in Archaeological Information Systems, which gave me a firm foundation for this work.  Also very helpful was the support provided by Vicky Crosby from English Heritage, who created the database and provided a lot of documentation in the first instance.

The next step, once the data has been entered, will be to remove any blank fields and tables and to document the database using the ADS’ Guidelines for Depositors, before moving on to the survey data and reports.

I’m looking forward to seeing it all deposited and released to a wider world for, hopefully, extensive re-use and research!