Aberdeen Harbour Board Arrivals Transcription Project

A blog post by Mollie Horne, Project Archivist at Aberdeen City and Aberdeenshire Archives and Ian Watt of Code The City.

The arrivals transcription project is an ongoing partnership between Code the City and Aberdeen City & Aberdeenshire Archives. It forms part of a wider project, funded by The National Archives’ Archives Revealed initiative, which aims to improve the accessibility of records.

The arrival registers are a small part of a much larger collection which was transferred to Aberdeen City and Aberdeenshire Archives as a result of a partnership with the Aberdeen Harbour Board.

The project was originally intended to be part of the physical Code the City 19 event in April 2020 but in anticipation of the nationwide restrictions, it was decided to move entirely online. In the week before we were told to work from home, Mollie photographed each individual page (all 649 of them) from the arrival registers from 1914-1920 and uploaded them to the Google Sheets system which had been set up by Ian. This meant that we had a large amount of material which could be worked on for an extended period.

After creating a set of guidelines and helpful links, we invited the public to work on transcribing and checking entries from March 27th onwards. As the online CTC19 event was scheduled for 11-12th April, this allowed us two weeks to create enough data to be useful to the coders over the official weekend.

Transcribers accessed two Google sheets. The first was to log their participation and note which photograph they were transcribing.

The second sheet was the one into which they transcribed the data.

Example of 1916 transcription

We also set up an open Slack group where transcribers could chat, ask questions, get help etc.

Progress was rapid: by the end of the weekend almost 4,000 records had been transcribed and checked. At the time of writing (2nd May 2020) that has now grown to over 7,000 records transcribed.

When an image has been transcribed and checked, we lock off the entries to preserve them from change.

The data which had been transcribed was used to create a website, set up by Andrew Sage of CTC, where we could see the information in a collated and organised way – this was extremely useful in informing other transcriptions. So far we have managed to fully complete 1914 and are working through the rest of the years.

The arrivals transcription project started as a great way to highlight an important time in the history of the Harbour, which has always been a big part of Aberdeen. However, given current circumstances, it has also become a great opportunity to give people something to focus on.

The project remains open – and you can still get involved by contributing just an hour or two of your time. Start here.

You can lead a horse to water but Covid19 data must be scraped

Just over a month ago I wrote about the lack of Covid19 Open Data for Scotland.

I showed how the Italian health authorities were doing just what was needed in the most difficult of circumstances. I explained how, in the absence of an official publication of open data (in an openly-licensed, neutral, machine-readable format), I’d taken it on myself on 15th March 2020 to gather and publish the data. My hope was that I’d have to do it for a week or two before the Scottish Government took over.

And here we are, five weeks and one day later, and I am still having to do it. Meantime a growing list of websites and applications has developed to use the data, which is great but adds to the pressure.

So what’s happened in the intervening time and, more importantly, what hasn’t?

From manual to coded scraping

Originally I was gathering the data manually: going to the Scottish Government web page and retyping the data into CSVs. This is a terrible practice – open to errors and demanding double and triple checking before pushing to GitHub. While it looked like my publication was going to be short-term, that hardly mattered.

But as the weeks dragged by I had to concede that there was no rescue coming from Scottish Government any time soon. Initially it appeared that the daily data was going to be published openly via their statistics platform. That eventually morphed into an additional but different set of data (from National Records of Scotland, not HPS).

So, I resolved to build a scraper – a piece of code that reads the HTML of a webpage and extracts the data from it. Sounds easy – but in practice it can be far from it. And when all is said and done it is the most brittle of solutions, as any small change to the page can break the code.

Nested Span Tags on the SG web page

Given how poorly the page was structured (endless nested blank span tags being just one crime against HTML) I didn’t have a great deal of confidence that it could be kept working.

I built it and tested it each day, but it wasn’t until 14th April that I was confident enough that it would work daily. Even then it wouldn’t take much to derail it. At that point it was 360 lines of code just to get a few dozen numeric values from a single page.

There is probably some law, named after someone wiser than me, that says that once you launch a piece of software it will break the very next day – and so it did, the very next morning. The scraper relies on knowing the structure of a page: finding bulleted lists and tables, iterating through those structures looking for patterns to match, and grabbing the numbers.
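To give a flavour of the approach, here is a minimal sketch of that kind of structure-walking in Python – not the actual scraper. The URL and the wording of the list items are assumptions for illustration only.

import re
import requests
from bs4 import BeautifulSoup

# Illustrative URL only – the real page (and its address) kept changing.
URL = "https://www.gov.scot/coronavirus-covid-19/"

soup = BeautifulSoup(requests.get(URL).text, "html.parser")

# Walk every bulleted list, looking for items matching a known phrase.
for li in soup.select("ul li"):
    text = li.get_text(" ", strip=True)
    match = re.search(r"([\d,]+) people .* tested positive", text)
    if match:
        # Strip any commas, which can appear in the numbers without warning.
        print("positive cases:", int(match.group(1).replace(",", "")))

Every pattern in a scraper like this is a bet that the page will keep its current shape – which is exactly why it breaks so easily.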

Since then the Scottish Government have changed the structure of the page as many as six times, including

  • making the final item in a bulleted list into a new paragraph in its own right,
  • removing a table completely,
  • and today changing the format of numbers in a table to include commas where none were used before.
Text all in bullets
Final list item now a para

If you are interested I have archived the page contents for each day (minus the styling).

A breakthrough?

Last week it looked like we might have an easier solution on our hands: not only did they change the URL of the page with the data, they then, without fanfare, added a new XLSX spreadsheet containing the daily data, updated each day. While not a CSV file, it appeared that it would be very useful.

So yesterday I started to code up a routine to

  • grab the XLSX file and download it,
  • save it as a reference copy, then
  • figure out which worksheets have data, not charts,
  • go to those worksheets,
  • find the ranges with the data (ignoring comments in rows above the data, to the right of the data, and below the data – see image below),
  • extract that data and write it back to plain CSV files as I was doing with my original scraper – a sketch of this follows the image.
A screenshot of one worksheet showing one area of data (green) and three of non-data (red)
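Here is a rough sketch of that routine, assuming the file has already been downloaded as trends.xlsx. Treating a row as data when its first cell is date-like is my assumption about the layout, not the finished code.

import csv
from openpyxl import load_workbook

wb = load_workbook("trends.xlsx", data_only=True)

for ws in wb.worksheets:
    # Keep only rows that look like data: something date-like in the first
    # column, skipping the free-text commentary around the data range.
    rows = [row for row in ws.iter_rows(values_only=True)
            if row and hasattr(row[0], "year")]
    if rows:  # sheets holding only charts or notes produce nothing
        with open(f"{ws.title}.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)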

Having tested the first part of it yesterday, I re-ran it today and it broke. It turns out the URL at which the spreadsheet is published changes from day to day. I suspect that this is a result of some sort of Content Management System.

All of which means I now have to write another scraper to identify each day’s URL before I can do any of the above.
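That extra step is at least a small one: find the first link on the page that ends in .xlsx. Something along these lines should do it – the page address is an assumption, since the real one has already changed once.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Assumed landing page for the daily-trends publication.
PAGE = "https://www.gov.scot/publications/coronavirus-covid-19-trends-in-daily-data/"

soup = BeautifulSoup(requests.get(PAGE).text, "html.parser")

# Take the first link that points at an XLSX file (raises StopIteration
# if the page changes shape yet again and no such link is found).
link = next(a["href"] for a in soup.find_all("a", href=True)
            if a["href"].lower().endswith(".xlsx"))
print(urljoin(PAGE, link))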

Why are we in this position?

The current position defies logic. There are so many factors that should have meant that Scottish Government would have this sorted out by now.

  1. I’d identified the need for plain CSV publishing previously, and very publicly, giving good examples.
  2. I’d had an email from a contact in SG mentioning two of my CSV files (in the context of the forthcoming NRS data publication).
  3. I work as part of the Civic Society group of the Open Government Partnership, and I am the lead for open data.
  4. So people know where to find me and how to interact with me.
  5. I have blogged extensively about what we need – and have emailed contacts at SG.
  6. As part of the planning for the Scottish Open Data Unconference I set up a Slack group to which SG contacts were invited – and I believe some signed up.
  7. So there are forums through which, if there was any uncertainty, anyone at SG could ask “what does the data community want?” or “we’re thinking about doing ‘x’ – would that work for you?” But there has been no such approach.

Meantime, I’ve spent many tens of hours for no financial reward doing what Peter Drucker called ‘doing the wrong thing righter’ – i.e. allowing the SG to continue to publish data wrongly, on the basis that the effort is transferred onto the community to create work-arounds.

After five weeks of doing this daily, I’m absolutely fed up of it. I’m going to formally raise it through the Open Government Partnership with a view to getting the right data, in the right format, published daily.

 

Header picture cropped from a photo by Nathan Dumlao on Unsplash

SODU2020 – a guest post by Sarah Roberts of Swirrl

Scottish Open Data Unconference

It’s all going on in Scotland in March. As we spring into Spring (nearly there!), we’re very excited to be sponsoring, and going to, the Scottish Open Data Unconference in Aberdeen on 14th and 15th March. Topics are pitched on the morning of each day, an agenda is created, and participants talk as much as the chair.

Our colleague Jamie Whyte is lucky enough to have a ticket, so if you spot him do say hi! Here are some recent open data happenings we’ve picked up on our radar…

Scottish Index of Multiple Deprivation

The Scottish Index of Multiple Deprivation was released in late January and we loved the accompanying briefing document, which put the numbers into context (find it here). The data’s also available on the Scottish Government’s Open Data site, where you can use the Atlas section to find key data zones and see key facts about them. The screenshot below is of the data zone ranked as the most deprived in the 2020 SIMD.

SIMD – Greenock Town centre

People are already making stuff with the data – below is a screenshot of Jamie’s lava lamp visualisation.

Commentary, explanation and analysis from others includes: Alasdair Rae’s summary matrix of the SIMD data by council area, a story graphic of the data, an interactive mapping tool, an analysis blog post from the Scottish Parliament Information Centre, and news articles like this one from the BBC.

Jamie Whyte – King of the Lava Lamp

W3C Community Group

Another thing we’ve noticed is that there’s preliminary work happening on GraphQL and RDF, which aims to serve as a basis for possible future standardisation. More on this here, where you can send a request to join the group if this is your bag. It’s definitely ours!

Collaborative work with data

Last, but not least, collaboration. This is a wide concept but it’s also a trend that’s cropping up in different aspects of working with open data. Here are some we’ve noticed:


  • The Office for National Statistics is publishing data in a collaborative project across a spread of organisations including HMRC, MHCLG, DWP and DIT. The Connected Open Government Statistics (COGS) project involves a lot of technical collaborative work in harmonising codelists, as well as harmonising a data model and all the processes that go into it. More on this project here on the GSS blog site.
  • 2019 saw a growing, collaborative API community, with API events involving government and people working with government. We went to one in Newcastle and another one’s arranged for March 16th (if you’re still hungry for more after the unconference!) 
  • The Open Data Institute have been busy, busy, busy. Jeni Tennison spoke about how collaboration is key for new institutions of the data age at our Power of Data conference in October (catch that video here). The ODI have also been working on a data and public services toolkit, and there’s an introductory event for this in Edinburgh just a few days before the Scottish Open Data Unconference.

Thanks for reading! If you’d like to find out a bit more about who we are and what we do, take a look at our website, our blog, our latest newsletter and/or our Twitter stream. We’ve just been named as one of the FT1000 fastest growing companies in Europe and we’re still hiring, so if you think you can help us we’d love to hear from you.

We love data and we’re delighted to be sponsoring the Scottish Open Data Unconference. See you there. 

 

Joining the dots between Britain’s historical railways using Wikidata – Part One

A bit of background

The evening before Code The City 18 I started to think about what fun project to spend the day on at our one-day mini-hack event. After reading Ian Watt’s blogpost about Wikidata and spending 10 minutes or so playing around with it, I decided a topic for further experimentation was required.

At the time of writing, I’m just over a third of the way through my very interesting part-time online MA in Railway Studies at the University of York. Looking at Britain’s railways from their very beginning, there have been many railway companies from 1821 onwards. Some of these companies merged, some were taken over, others just disappeared, whilst others were replaced by new companies. All these amalgamations eventually led to the “Big Four” groupings in 1923 and then on to British Railways in 1948’s railway nationalisation. British Railways rebranded as British Rail in 1965 and then splintered into numerous companies as a result of the denationalisation of the 1990s.

With the railway companies appearing in some form or another in Wikipedia, I thought it would be useful to be able to pick any railway company and view the chain of companies that led to it and those that followed. The ultimate goal would be to be able to bring up the data for British Rail and then see the whole past unfold to the left and the future unravel to the right. In theory at least, Wikidata should allow me to do that. 

No software coding skills are required to see the results of my experimentation: by clicking on the links provided (usually directly after the code) it is possible to run the queries and see what happens. However, using the code provided as a start, it is possible to build on the examples to find out things for yourself.

Understanding Wikidata and SPARQL

SPARQL is the query language used to retrieve various data sets from Wikidata via the Wikidata Query Service.

As is always the case with anything software related, the examples and tutorials never seem to handle those edge cases that you hit within the first five minutes. Maybe I hit these cases so soon because I jumped straight from the “hello world” of requesting all the railway companies formed in the UK to trying to build the more complex web of railway companies, rather than working my way through all the simpler steps? However, my belief is to fail quickly, leaving plenty of time to fail some more before succeeding; after all, you never see a young child plan out a strategy when they are learning to get the different shaped blocks through the correct holes.

At the time of writing…

Comments about the state of certain items of data were relevant at the time I wrote this article. As one of the big features of Wikidata is that it is constantly being updated, expanded and corrected, the data referenced may have changed by the time you read this. Some of the changes are ones I’ve made in reaction to my discoveries, but I have left some out there for others to fix.

A simple list

First off, I created a simple SPARQL query to request all the railway companies that were formed in the UK.

SELECT ?company ?companyLabel
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

The output of this query can be seen by running it yourself here by clicking on the white-on-blue arrow displayed on the Wikidata Query Service console. It is safe to modify the query in the console without messing up my query as any changes cause a new bookmarked query to be created. So please experiment as that’s the only way to learn.

Now what does the query mean and where do all those magic numbers come from?

  • wdt:P31 means get me all Wikidata triples (wdt) that have the property instance of (P31) with a value of railway company (Q249556).
  • wdt:P17 means get me all of the results so far that have the property country (P17) set to United Kingdom (Q145).

Where did I get those numbers from? First, I went to Wikipedia and searched for a railway company, LMS Railway, and got to the page for London, Midland and Scottish Railway. From here I went to the Wikidata item for the page.

Wikipedia page for LMSR, showing how to get to the Wikidata item

From here I hovered my pointer over instance of, railway company, country and United Kingdom to find out those magic numbers.

Wikidata page for LMSR

Some unexpected results

Some unexpected companies turned up in the results list due to my query not being specific enough. For example, the Algeciras Gibraltar Railway Company is located in Gibraltar, but because its headquarters were registered in the UK the data has its country as United Kingdom. To filter my results down to just those that are located in the UK, I tried searching for those that had the located in the administrative territorial entity (P131) property with any of the following values:

  • England (Q21) 
  • Northern Ireland (Q26)
  • Scotland (Q22)
  • Wales (Q25)
  • Ireland (Q57695350) (covering 1801 – 1922)

using this query:

SELECT ?company ?companyLabel ?countryLabel
WHERE {
  VALUES ?country { wd:Q21 wd:Q26 wd:Q22 wd:Q25 wd:Q57695350 }
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145; wdt:P131 ?country.
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

However, that dropped my result set from 228 to 25, because not all the companies have that property set.

Note: when trying to find out what values to use, it is often quick and easy to run a simple query to ask Wikidata itself. To find out the values for all the UK countries, I wrote the following, which asks for everything that has an instance of value of country within the United Kingdom (Q3336843):

SELECT ?country ?countryLabel
WHERE {
  ?country wdt:P31 wd:Q3336843 .
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}

Run the query

Dates

In order to see what other information could easily be displayed for the companies, I looked at the list of properties on the London, Midland and Scottish Railway. I saw several dates listed so decided that would be my next area of investigation. There is an inception (P571) date that shows when something came into being, so I tried a query with that:

SELECT ?company ?companyLabel ?inception
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  ?company wdt:P571 ?inception .
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query.

This demonstrated two big issues with the data. Firstly, the result set had dropped from 228 to 106, indicating that not all the company entries have the inception property set. Secondly, only one company, Scottish North Eastern Railway, had a full date (29th July 1856) specified; the rest had only a year, which was displayed as 1st January of that year. Adding the OPTIONAL clause to the inception request returns the full data set, with blanks where there is no inception date specified.

SELECT ?company ?companyLabel ?inception
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  SERVICE wikibase:label {
   bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

Railway companies are not a straightforward case when it comes to a start date, as there is no one single start date. Each railway company required an Act of Parliament to officially enable it to be formed and grant permission to build the railway line(s). This raises the question: is it the date that Act was passed, the date the company was actually formed, or the date that the company commenced operating its service that should be used for the start date? Here is a revised query that gets both the start time (P580) and end time (P582) of the company, if they have been set:

SELECT ?company ?companyLabel ?inception ?startTime ?endTime
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  OPTIONAL { ?company wdt:P580 ?startTime. }
  OPTIONAL { ?company wdt:P582 ?endTime. }
  SERVICE wikibase:label {
   bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

Unfortunately, of the 228 results only one, London, Midland and Scottish Railway, has a startTime and endTime, and London and North Eastern Railway is the only one with an endTime. Based on these results it looks like startTime and endTime are not generally used for railway companies. Looking through the data for Scottish North Eastern Railway did turn up a new source of end dates in the form of the dissolved, abolished or demolished (P576) property. Adding a search for this resulted in 9 companies with dissolved dates.

SELECT ?company ?companyLabel ?inception ?startTime ?endTime ?dissolved
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  OPTIONAL { ?company wdt:P580 ?startTime. }
  OPTIONAL { ?company wdt:P582 ?endTime. }
  OPTIONAL { ?company wdt:P576 ?dissolved. }
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

There is no logic to which companies have this property: they range from Scottish North Eastern Railway, dissolved on 10th August 1866, to several that ended with the formation of British Railways, the more recent British Rail ending on 1st January 2001, and the short-lived National Express East Coast (1st January 2007 – 1st January 2009). However, once again, the dates are at times misleading: in the case of National Express East Coast, it is only the year, rather than the full date, that is held in the inception and dissolved, abolished or demolished properties.

Some of the railway companies, such as Underground Electric Railways Company of London, have another source of dates: qualifiers attached to the railway company value of their instance of statement. It is possible to extract the start and end dates, if they are present, by making use of nested conditional queries. In the line:

OPTIONAL {?company p:P31 [ pq:P580 ?companyStart]. }

the start time (P580) qualifier is extracted from the instance of statement, if it exists.

SELECT ?company ?companyLabel ?inception ?startTime ?endTime ?dissolved ?companyStart ?companyEnd
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  OPTIONAL { ?company wdt:P580 ?startTime. }
  OPTIONAL { ?company wdt:P582 ?endTime. }
  OPTIONAL { ?company wdt:P576 ?dissolved. }
  OPTIONAL { ?company p:P31 [ pq:P580 ?companyStart] . }
  OPTIONAL { ?company p:P31 [ pq:P582 ?companyEnd] . }
  SERVICE wikibase:label {
   bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

Another date that can be used to work out the start and end of the companies can be found hanging off the values of a very useful pair of properties: replaced by (P1366) and replaces (P1365). This conveniently connects into the next part of my exploration, which will follow in Part Two. Although, as with many railway-related things, the exact time of arrival of Part Two cannot be confirmed.

[Header photograph taken by Andrew Sage]

Aberdeen Plaques – Part One

On Saturday 14th December 2019 we ran a one-day mini hack event. The idea behind it was for people to come along for a day to work on their side projects and, if they needed support, attempt to persuade others to assist them.

That’s what I did with my Aberdeen Plaques project: something I’d had on the back burner for more than a year.

Why do it?

The commemorative plaques which are dotted around the city are a perfect candidate for open data. They have a subject, usually some dates, a location, a type, and so on. Making all of that available as open data would open up a whole range of possibilities.

Some Aberdeen plaques

If we captured all of that well, then we could do analysis on the data (the ratio of women to men, the most represented professions), create walking routes (maybe one for the arts, one for the sciences and so on), and create timelines to see which periods are best represented.

Having recently trained as a Wikimedia UK trainer – and having experimented with some of the tools (Wiki Commons, Wiki Data, Wikipedia, Histropedia) – I was convinced that these were the right way to go.

Pre-event prep

So, I’d done a bit of prep in the two weeks running up to the hack day itself.

I’d created a spreadsheet which recorded the:
* subject (person or ‘thing’)
* gender, if known
* link to the now-retired city council plaques system (hidden from public view)
* location, if known
* geo coordinates (to be determined)
* whether the subject had a Wikipedia page (tbd)
* whether there was an image of the plaque on Wiki Commons (tbd)
* whether the subject of the plaque was represented on Wiki Data (tbd)
* any identifiers on Open Plaques (tbd)
* any external links (e.g. to Flickr for photos)

I’d then populated some of the data (e.g. whether there were images of the plaque on Wiki Commons) as well as some other bits. But most cells were blank.

Pre-event spreadsheet

As a keen walker and photographer I had also photographed and uploaded seventeen plaque images to Wiki Commons in the lead up, so that we would have some images to work with.

How to use our time most effectively on the day?

Our aim for the day was to find out what data / info / images existed, fill in the gaps, explore how to use WikiData to store and retrieve data, and see how we could potentially create maps, timelines and similar new products.

What we did on the day

At the start of the event we pitched our project ideas, and I managed to persuade five others (Angela, Mike, Stephen, James and Steve) to join me in working on the plaques project.

Angela and Mike, and later Angela and Stephen, would go out and take photographs. Steve, James and I would work on the data capture: completing research on what existed, creating new entries for the data on Wiki Data, and testing queries on the Wiki Data query service.

How we did it

We used the spreadsheet that I had set up to capture all of the data we’d gathered – and as it evolved it would show progress as well as what was still lacking. We had no expectation that we would do it all on the day, but we could pick away at it in future weeks and months.

In the run-up to the event I’d discovered The Pingus’ album of plaques photographs on Flickr. Sadly these had not been published with a licence that would allow us to use them. I’d sent a request, a few days before CTC18, for them to change the licence on the Aberdeen plaques pictures to a CC-SA one, which would have allowed us to republish them on Wiki Commons, but it didn’t elicit a response. The album did show, though, that there were many more plaques than the old ACC system listed, and it was possible to get co-ordinates from them. So the number of plaques to deal with kept growing.

During the day James filled in loads of gaps, identifying which subjects were on Wikipedia and which on Wikidata.

Steve and I experimented with capturing and querying the data. Structuring it in a way that aids recall through the Wiki Data Query Service was an iterative process. Firstly I tried adding a commemorative plaque image (P1801) statement to the Wikidata record for the subject, as you can see in this first example https://www.wikidata.org/wiki/Q2095630. But that limited what we could do.

So, we discovered that we could create a new object which was an instance of commemorative plaque. Our first attempt was https://www.wikidata.org/wiki/Q78438703 and we evolved what we captured there, adding statements as we went. Steve discovered the OpenPlaques plaque ID (P1893). Incidentally we also tried OpenPlaques subject ID (P1430), but adding that to the plaque object throws an error: the latter should be added to the person record, not the plaque.
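Once plaques exist as items in their own right, they can be pulled back out with a query. Here is a sketch of one such test query, run from Python against the Wikidata Query Service; the identifiers for commemorative plaque (Q721747) and Aberdeen (Q36405) are my assumptions from memory, so do verify them before relying on this.

import requests

# Assumed Q-IDs: Q721747 = commemorative plaque, Q36405 = Aberdeen.
QUERY = """
SELECT ?plaque ?plaqueLabel WHERE {
  ?plaque wdt:P31 wd:Q721747;
          wdt:P131 wd:Q36405.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "plaques-sketch/0.1"},  # WDQS asks for a UA
)
for row in response.json()["results"]["bindings"]:
    print(row["plaqueLabel"]["value"])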

At the end of CTC18

We ended the day with

  • 138 plaques listed,
  • 57 sets of co-ordinates identified,
  • 68 Wikipedia articles identified as matching plaque subjects (and eleven plaque subjects who had NO Wikipedia page),
  • 36 images in Wiki Commons,
  • 77 Wikidata entries for the subjects of the plaques (existing or created),
  • 11 new Wikidata entries for the plaques themselves.

This was a great leap forward in one day and would pave the way for future work.

What next?

Since CTC18 ended, I’ve got firmly stuck into this project over the Christmas break. Over the last three weeks I have photographed over a hundred plaques (plenty of walking) and have created Wikidata entries for most of the plaques and for their subjects.

I’ll cover all of that, and how we can now use the data, in part two – coming soon.