Aberdeen Provosts

In the run-up to Code The City 19 we had several suggestions of potential projects that we could work on over the weekend. One was to add all of the Provosts of Aberdeen to Wikidata. This appealed to me, so I volunteered to work on it in a team with Wikimedia UK's Scotland Programme Coordinator, Dr Sara Thomas, with whom I have worked on other projects.

In preparation for CTC19 I’d been reading up on the history of the City’s provosts and discovered that up to 1863 the official title was Provost, and from that point it was Lord Provost. I’d made changes to the Wikipedia page to reflect that, and I’d added an extra item to Wikidata so that we could create statements that properly reflected which position the people held.

Sara and I began by agreeing an approach and sharing resources. We made full use of Google Docs and Google Sheets.

We had two main sources of information on Provosts: the List of provosts of Aberdeen page on Wikipedia, and a printed reference book on the city's provosts from which we harvested data about the individuals.

Running the project

I started by setting up a Google Sheet to pull data from Wikipedia as a first attempt to import a list to work with. The importHTML function in Google Sheets is a useful way to retrieve data in list or table format.

I entered the formula in the top left cell (A1):

=importhtml("https://en.wikipedia.org/wiki/List_of_provosts_of_Aberdeen", "list", 27)

and repeated the formula for each of the lists – one per century. This populated our sheet with the numerous lists of provosts.

That state didn't last very long. importHTML queries are dynamic, and the structure of the Wikipedia page was, it appeared, being edited with extra lists added – so groups of former provosts kept disappearing from our sheet as the list indices shifted.

I decided to create a list manually – copying the HTML of the Wikipedia page and running some regex find and replace commands in a text editor to leave only the text we needed, which I then pasted into sheets.
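As a rough illustration – these are not necessarily the exact patterns I used – a couple of regex find-and-replace passes along these lines will reduce an HTML list to plain text:

Find: <[^>]+>     Replace with: (nothing)   – strips all the HTML tags
Find: ^\s*\n      Replace with: (nothing)   – removes the blank lines left behind

What should remain is one "(dates) name" entry per line, ready to paste into Sheets.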

Partial list of Provosts
Partial list of Lord Provosts

Once we had that in the Google Sheet we got to work with some formulae to clean and arrange the data. Our entries were in the form “(1410–1411) Robert Davidson”, so we had to do the following (a sketch of possible formulae is shown after the list):

    • split names from dates,
    • split the start dates from end dates, and
    • split names into family names and given names.
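For illustration only – assuming the raw entry sits in cell A2, and not claiming these are the exact formulae we used – the splitting can be done with standard Google Sheets functions along these lines:

Start year (B2):   =REGEXEXTRACT(A2, "^\((\d{4})")
End year (C2):     =REGEXEXTRACT(A2, "(\d{4})\)")
Name (D2):         =TRIM(REGEXEXTRACT(A2, "\)\s*(.*)$"))
Given name (E2):   =INDEX(SPLIT(D2, " "), 1, 1)
Family name (F2):  =INDEX(SPLIT(D2, " "), 1, COLUMNS(SPLIT(D2, " ")))

The family name formula simply takes the last word of the name, so multi-word surnames will need manual attention.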

Having got that working (albeit with a few odd results to fix manually), Sara identified a Chrome plugin called “Wikipedia and Wikidata Tools” which proved really useful. For example, we could query the term in a cell (e.g. “Hadden”) and get back the QID of the first matching item. We could then point another query at that QID and ask what it was an instance of. If it was a family name or a given name we could use those codes, and only look up the others manually. That saved quite a bit of time.

Identifying QIDs for Given and Family Names

Our aim in all of this was to prepare a bulk upload to Wikidata with as little manual entry as possible. To do that, Sara had identified Quickstatements, a bulk-upload tool for Wikidata which allows you to make large numbers of edits through a relatively simple interface.

Sara created a model for what each item in Quickstatements should contain:

A model of a Quickstatements entry

There are a few quirks – for example, how you format a date – but once you’ve got the basics down it’s an incredibly powerful tool. The help page is really very useful.

Where dates were concerned, I created a formula to look up the date in another cell and surround it with the formatting needed:

="+"&Sheet1!J99&"-00-00T00:00:00Z/9"

This gave +1515-00-00T00:00:00Z/9 as the output (the /9 suffix indicates year-level precision).

You can also bulk-create items, which is what we did here. We found that it worked best in Firefox, after a few stumbles.
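To give a flavour of the format – with the position shown as a placeholder QID (QXXXXXXX) rather than the real Provost of Aberdeen item, and using the example entry from earlier – a single created item might look something like this:

CREATE
LAST|Len|"Robert Davidson"
LAST|Den|"Provost of Aberdeen, 1410–1411"
LAST|P31|Q5
LAST|P39|QXXXXXXX|P580|+1410-00-00T00:00:00Z/9|P582|+1411-00-00T00:00:00Z/9

CREATE makes a new item, LAST refers back to it, Len and Den set the English label and description, P31|Q5 marks it as a human, and the position held (P39) statement carries start time (P580) and end time (P582) qualifiers in the date format described above.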

Data harvesting

As mentioned above, we used a printed source, from which we harvested the data about the individual Provosts. It's easy to get very detailed very quickly, but we decided on a basic upload covering:

  • Name
  • First name
  • Last name
  • Position held (qualified by the dates)
  • Date of birth, and death (where available).

Some of our provosts held the position three or four times, often with breaks between. We attempted to work out a way to add the same role held twice with different date qualifiers, but ultimately this had to be done manually.

The first upload

We made a few test batches – five or six entries to see how the process worked.

A test batch to upload via Quickstatements

When that worked we created larger batches. We concluded the weekend with all of the Provosts and Lord Provosts added to Wikidata, which was very satisfying. We also had a list of further tasks to carry out to enhance the data. These included:

  • Add multiple terms of office – now complete,
  • Add statements for Replaces (P1365) and Replaced By (P1366) – partly done,
  • Add honorific titles – partly done,
  • Add images of signatures (partly done) and portraits (completed) from the reference book,
  • Add biographical details from the book – hardly started,
  • Source images for Wikimedia Commons from the collection of portraits at AAGM – request sent,
  • Add places of burial, identifiers from Find A Grave, and photographs of gravestones,
  • Add streets named after provosts and link them.

You can see the results in this Wikidata query: https://w.wiki/PsF

A Wikidata Query showing Provosts' Terms of Office, and their replacements
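For anyone curious about what sits behind that link, a query along these lines will pull out each person's terms of office – a sketch only, not necessarily the saved query itself, and with placeholder QIDs standing in for the Provost of Aberdeen and Lord Provost of Aberdeen position items (the saved query also pulls in the Replaces / Replaced By statements):

SELECT ?person ?personLabel ?start ?end
WHERE {
  VALUES ?position { wd:QXXXXXXX wd:QYYYYYYY }   # placeholders for the two position items
  ?person p:P39 ?statement .
  ?statement ps:P39 ?position .
  OPTIONAL { ?statement pq:P580 ?start. }
  OPTIONAL { ?statement pq:P582 ?end. }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY ?start

The p:P39 / pq: pattern reaches into the qualifiers on each position held statement, which is where the term-of-office dates live.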

This was a very interesting project to work on – and there is still more to do to improve the data, which you can help with.

Aberdeen Harbour Board Arrivals Transcription Project

A blog post by Mollie Horne, Project Archivist at Aberdeen City and Aberdeenshire Archives and Ian Watt of Code The City.

The arrivals transcription project is an ongoing partnership between Code the City and Aberdeen City & Aberdeenshire Archives. It forms part of a wider project, supported by the Archives Revealed initiative funded by The National Archives, which aims to improve the accessibility of records.

The arrival registers are a small part of a much larger collection which was transferred to Aberdeen City and Aberdeenshire Archives as a result of a partnership with the Aberdeen Harbour Board.

The project was originally intended to be part of the physical Code the City 19 event in April 2020 but, in anticipation of the nationwide restrictions, it was decided to move it entirely online. In the week before we were told to work from home, Mollie photographed each individual page (all 649 of them) from the 1914–1920 arrival registers and uploaded them to the Google Sheets system which had been set up by Ian. This meant that we had a large amount of material which could be worked on for an extended period.

After creating a set of guidelines and helpful links, we invited the public to work on transcribing and checking entries from March 27th onwards. As the online CTC19 event was scheduled for 11-12th April this allowed us two weeks to create enough data to be useful to the coders over the official weekend.

Transcribers accessed two Google sheets. The first was to log their participation and note what photograph they were transcribing.

The second sheet was the one into which they transcribed the data.

Example of 1916 transcription

We also set up an open Slack group where transcribers could chat, ask questions, get help etc.

Progress was rapid: by the end of the weekend almost 4,000 records had been transcribed and checked. At the time of writing (2nd May 2020) that has now grown to over 7,000 records transcribed.

When an image has been transcribed and checked, we lock off the entries to preserve them from change.

The data which had been transcribed was used to create a website, set up by Andrew Sage of CTC, where we could see information in a collated and organised way – this was extremely useful for informing other transcriptions. So far we have managed to fully complete 1914 and are working through the rest of the years.

The arrivals transcription project started as a great way to highlight an important time in the history of the Harbour, which has always been a big part of Aberdeen. However, given current circumstances, it has also become a great opportunity to give people something to focus on.

The project remains open – and you can still get involved by contributing just an hour or two of your time. Start here.

Joining the dots between Britain’s historical railways using Wikidata – Part One

A bit of background

The evening before Code The City 18 I started to think about what fun project to spend the day on at our one-day mini-hack event. After reading Ian Watt's blog post about Wikidata and spending 10 minutes or so playing around with it, I decided a topic for further experimentation was required.

At the time of writing, I'm just over a third of the way through my very interesting part-time online MA in Railway Studies at the University of York. Looking at Britain's railways from their very beginning, there have been many railway companies from 1821 onwards. Some of these companies merged, some were taken over, some just disappeared, whilst others were replaced by new companies. All these amalgamations eventually led to the “Big Four” groupings in 1923 and then on to British Railways with the railway nationalisation of 1948. British Railways rebranded as British Rail in 1965 and then splintered into numerous companies as a result of the denationalisation of the 1990s.

With the railway companies appearing in some form or another in Wikipedia, I thought it would be useful to be able to pick any railway company and view the chain of companies that led to it and those that followed. The ultimate goal would be to be able to bring up the data for British Rail and then see the whole past unfold to the left and the future unravel to the right. In theory at least, Wikidata should allow me to do that. 

No software coding skills are required to see the results of my experimentation: by clicking on the links provided (usually directly after the code) it is possible to run the queries and see what happens. However, using the code provided as a start, it is possible to build on the examples to find out things for yourself.

Understanding Wikidata and SPARQL

SPARQL is the query language used to retrieve various data sets from Wikidata via the Wikidata Query Service.

As is always the case with anything software related, the examples and tutorials never seem to handle the edge cases that you hit within the first five minutes. Maybe I hit these cases so soon because I jumped straight from the “hello world” of requesting all the railway companies formed in the UK to trying to build the more complex web of railway companies, rather than working my way through all the simpler steps. However, my belief is to fail quickly, leaving plenty of time to fail some more before succeeding; after all, you never see a young child plan out a strategy when they are learning to get the different shaped blocks through the correct holes.

At the time of writing…

Comments about the state of certain items of data were relevant at the time I wrote this article. As one of the big features of Wikidata is it constantly being updated, expanded and corrected, the data referenced may have changed by the time you read this. Some of the changes are those I’ve made in reaction to my discoveries, but I have left some out there for others to fix.

A simple list

First off, I created a simple SPARQL query to request all the railway companies that were formed in the UK.

SELECT ?company ?companyLabel
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

You can see the output of this query by running it yourself here: click the white-on-blue arrow displayed on the Wikidata Query Service console. It is safe to modify the query in the console without messing up my query, as any changes cause a new bookmarked query to be created. So please experiment – that's the only way to learn.

Now what does the query mean and where do all those magic numbers come from?

  • wdt:P31 means get me all items with a direct Wikidata triple (wdt:) for the property instance of (P31) that has a value of railway company (Q249556).
  • wdt:P17 means get me all of the results so far that have the property country (P17) set to United Kingdom (Q145).

Where did I get those numbers from? First, I went to Wikipedia and searched for a railway company, LMS Railway, and got to the page for London, Midland and Scottish Railway. From here I went to the Wikidata item for the page.

Screen grab of the Wikipedia page for LMSR, showing how to get to the Wikidata item

From here I hovered my pointer over instance of, railway company, country, and United Kingdom to find out those magic numbers.

Screen grab of the Wikidata page for LMSR

Some unexpected results

Some unexpected companies turned up in the results list because my query was not specific enough. For example, the Algeciras Gibraltar Railway Company is located in Gibraltar, but because its headquarters were registered in the UK the data has its country as United Kingdom. To filter my results down to just those located in the UK, I tried searching for those that had the property located in the administrative territorial entity (P131) with any of the following values:

  • England (Q21) 
  • Northern Ireland (Q26)
  • Scotland (Q22)
  • Wales (Q25)
  • Ireland (Q57695350) (covering 1801 – 1922)

using this query:

SELECT ?company ?companyLabel ?countryLabel
WHERE {
  VALUES ?country { wd:Q21 wd:Q26 wd:Q22 wd:Q25 wd:Q57695350 }
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145; wdt:P131 ?country.
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

However, that dropped my result set from 228 to 25 due to not all the companies having that property set.
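As an aside, a quick way to see which companies are missing that property – and could therefore be improved in Wikidata – is to filter for the absence of any P131 statement. This is a sketch, not one of my original queries:

SELECT ?company ?companyLabel
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  FILTER NOT EXISTS { ?company wdt:P131 [] . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY (lcase(?companyLabel))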

Note: when trying to find out what values to use, it is often quick and easy to run a simple query to ask Wikidata itself. To find out the values for the UK's countries, I wrote the following, which asks for everything that has an instance of value of country within the United Kingdom (Q3336843):

SELECT ?country ?countryLabel
WHERE {
  ?country wdt:P31 wd:Q3336843 .
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}

Run the query

Dates

In order to see what other information could easily be displayed for the companies, I looked at the list of properties on the London, Midland and Scottish Railway. I saw several dates listed so decided that would be my next area of investigation. There is an inception (P571) date that shows when something came into being, so I tried a query with that:

SELECT ?company ?companyLabel ?inception
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  ?company wdt:P571 ?inception .
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query.

This demonstrated two big issues with the data. Firstly, the result set dropped from 228 to 106, indicating that not all the company entries have the inception property set. Secondly, only one company, Scottish North Eastern Railway, had a full date (29th July 1856) specified; the rest only had a year, which was being displayed as 1st January of that year. Adding the OPTIONAL clause to the inception request returns the full data set, with blanks where there is no inception date specified.

SELECT ?company ?companyLabel ?inception
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  SERVICE wikibase:label {
   bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

Railway companies are not a straightforward case when it comes to a start date, as there is no one single start date. Each railway company required an Act of Parliament to officially enable it to be formed and to grant permission to build the railway line(s). This raises the question: should the start date be the date that Act was passed, the date the company was actually formed, or the date the company commenced operating its service? Here is a revised query that gets both the start time (P580) and end time (P582) of the company, if they have been set:

SELECT ?company ?companyLabel ?inception ?startTime ?endTime
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  OPTIONAL { ?company wdt:P580 ?startTime. }
  OPTIONAL { ?company wdt:P582 ?endTime. }
  SERVICE wikibase:label {
   bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

Unfortunately, of the 228 results only one, London, Midland and Scottish Railway, has both a startTime and an endTime, and London and North Eastern Railway is the only one with an endTime. Based on these results it looks like startTime and endTime are not generally used for railway companies. Looking through the data for Scottish North Eastern Railway did turn up a new source of end dates in the form of the dissolved, abolished or demolished (P576) property. Adding a search for this resulted in 9 companies with dissolved dates.

SELECT ?company ?companyLabel ?inception ?startTime ?endTime ?dissolved
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  OPTIONAL { ?company wdt:P580 ?startTime. }
  OPTIONAL { ?company wdt:P582 ?endTime. }
  OPTIONAL { ?company wdt:P576 ?dissolved. }
  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

There is no logic to which companies have this property: they range from Scottish North Eastern Railway, dissolved on 10th August 1866, to several that ended with the formation of British Railways, the more recent British Rail ending on 1st January 2001, and the short-lived National Express East Coast (1st January 2007 – 1st January 2009). However, once again the dates are at times misleading as, in the case of National Express East Coast, only the year rather than the full date is recorded in the inception and dissolved, abolished or demolished properties.

Some of the railway companies, such as Underground Electric Railways Company of London, have another source of dates: qualifiers attached to the railway company value of their instance of statement. It is possible to extract the start and end dates, if they are present, by making use of nested conditional queries. In the line:

OPTIONAL {?company p:P31 [ pq:P580 ?companyStart]. }

the start time is extracted from the instance of statement if it exists: p:P31 points at the full instance of statement, and pq:P580 picks up the start time qualifier attached to it.

SELECT ?company ?companyLabel ?inception ?startTime ?endTime ?dissolved ?companyStart ?companyEnd
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P571 ?inception. }
  OPTIONAL { ?company wdt:P580 ?startTime. }
  OPTIONAL { ?company wdt:P582 ?endTime. }
  OPTIONAL { ?company wdt:P576 ?dissolved. }
  OPTIONAL { ?company p:P31 [ pq:P580 ?companyStart] . }
  OPTIONAL { ?company p:P31 [ pq:P582 ?companyEnd] . }
  SERVICE wikibase:label {
   bd:serviceParam wikibase:language "en".
  }
}
ORDER BY (lcase(?companyLabel))

Run the query

Another date that can be used to work out the start and end of the companies can be found hanging off the values of a very useful pair of properties: replaced by (P1366) and replaces (P1365). This conveniently connects into the next part of my exploration, which will follow in Part Two – although, as with many railway related things, the exact time of arrival of Part Two cannot be confirmed.
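As a small taster of that exploration – a sketch of the general shape rather than anything from Part Two – each company can be listed alongside whatever it replaced and whatever replaced it:

SELECT ?company ?companyLabel ?replacesLabel ?replacedByLabel
WHERE {
  ?company wdt:P31 wd:Q249556; wdt:P17 wd:Q145 .
  OPTIONAL { ?company wdt:P1365 ?replaces. }
  OPTIONAL { ?company wdt:P1366 ?replacedBy. }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY (lcase(?companyLabel))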

[Header photograph taken by Andrew Sage]

2019 – the year in review

Intro

The year just past has been a pivotal one for Code The City: we've moved into a new home, expanded our operations, engaged with new communities of people, and started to put in place solid planning which will be underpinned by expansion and better governance.

Here are some of the highlights from 2019.

Sponsors, volunteers and attendees

We couldn’t do what we do without the help of some amazing people. With just three trustees (Bruce, Steve and Andrew) and Ian our CEO, we couldn’t cover such a range of activities without serious help. Whether you come to our events, volunteer, or your company sponsors our work, you are making a difference in Aberdeen. 

Listing things is always dangerous as the potential to miss people out is huge. But here we go! 

Thank you to The Data Lab, MBN Solutions, Scotland IS, InoApps and Forty-Two Studio, who all provided very generous financial support; to H2O AI, who donated to our charity in lieu of sponsorship of a meet-up; to the James Hutton Institute and InoApps, who also donated laptops for us to re-use at our code clubs; and to Codify, IFB and Converged Comms, who provided specific funding for projects, including buying kit for code club and paying for new air quality devices – some of which we have still to build.

Our regular volunteers – Vanessa, Zoe, Attakrit, Charlotte, and Shibo –  plus the several parents who stay to help too, all help mentor the kids at Young City Coders club. 

Lee, Carlos, Scott and Rob, who are on the steering group of the Python User Group meetup.

Naomi, Ian N, David, and Gavin, who are on the steering group for Air Aberdeen, along with Kevin from 57 North, who supervises the building of new sensor devices.

The ONE Tech Hub, and ONE Codebase have created a great space not only for us to work in, but also in which to run our public-facing events. 

Everyone who stays behind to help us clear away plates, cups and uneaten food – or nips out to the shops when we run out of milk.

Apologies to anyone we have missed!


And finally YOU – everyone who has attended one of our sessions – you've helped make Aberdeen a little better place to live in. Thank you!

Hack weekends

We ran four hack events this year. Here is a quick run-down. 

Air Quality 1

We kicked off 2019 with the CTC15 Air Quality hack in February. This saw us create fourteen new devices which people took home to install and start gathering data. We also had a number of teams looking at the data coming from the sensors, and some looking at how we could use LoRaWAN as a data transport network. We set some targets for sensor numbers which were, in retrospect, perhaps a little ambitious. We set up a website (https://airaberdeen.org).

Air Quality 2

Unusually for us we had a second event on the same theme in quick succession: CTC16 in June. Attendees created another fourteen devices. We developed a better model for the data, improved on the website and governance of the project. We got great coverage on TV, on radio and in local newspapers. 

Make Aberdeen Better

CTC17 came along in November. The theme was a broad one – what would you do to make Aberdeen a better place to live, work or play? Attendees chose four projects to work on: public transport, improved methods of monitoring air quality, how we might match IT volunteers to charities needing IT help, and the open data around recycling.

Xmas mini-hack

CTC18, our final hack of the year was another themeless one, timed to fit into a single day. We asked participants to come and work on a pet side-project, or to help someone else with theirs. Despite a lower turnout in the run-up to Christmas, we still had eight projects being worked on during the day.

New home, new services

In the late summer the ONE Tech Hub opened and we moved in as one of the first tenants. So far we rent a single desk in the co-working space, but we aim to expand that next year. The building is great, which is why we run all of our events there now, and as numbers grow it promises to become the bustling centre of Aberdeen's tech community.

Having started a new Data Meet-up in 2018, we moved that to the ONE Tech Hub along with our hack events. We also kicked off a new Python User Group in September this year, the same year in which we started to deliver Young City Coders sessions to encourage youngsters to get into coding, primarily using Scratch and Python.

We also ran our first Wikimedia editathon in August – using Wikipedia, Wiki Commons and Wikidata to capture and share some of the history of Aberdeen's cinemas on these platforms. We are really supportive of making better use of all of the Wikimedia tools. Ian recently attended a three-day course to become a Wikimedia trainer, and at CTC18 there were two projects using Wikidata and Wiki Commons too. Expect much more of this next year!

Some recognition and some numbers

We’ve been monitoring our reach and impact this year.  

In March we were delighted to see that Code The City made it onto the Digital Social Innovation For Europe platform. This project identifies organisations and projects across the EU that are making an impact using tech and data for civic good.

In July we appeared for the first time in an academic journal – in an article about using a hackathon to bring together health professionals, data scientists and others to address health challenges.

We will be launching our  dashboard in the New Year. Meantime, here are some numbers to chew on. 

Hack events

We ran four sessions, detailed above. We had 102 attendees and 15 facilitators, who put in a total of 1,872 hours of effort across 20 projects. All of this was for civic benefit.

Young City Coders

We ran six sessions of our Young City Coders which started in September. The sessions had a total of 114 kids attending and 28 mentors giving up two hours or more. 

Data Meet-ups

In 2019 we had 12 data meet-ups with 28 speakers and 575 attendees! This is becoming a really strong local community of practitioners and researchers from academia and local industry. 

Python Meet-ups

Each of our four sessions from September to December had a speaker, and attracted a total of 112 attendees who were set small project tasks. 

The year ahead

2020 is going to see CTC accelerate its expansion. We’re recruiting two new board members, and we have drawn up a business plan which we will share soon. That should see us expand the team and strengthen our ability to drive positive societal change through tech, data and volunteering. We have two large companies considering providing sponsorship for new activities next year.  We’ll also be looking at improving our fundraising – widening the range of sources that we approach for funding, and allowing us to hire staff for the first time. 

Open Data

We're long-term champions of open data, as many of you will have read in previous posts. We've identified the need to strengthen the open data community in Scotland and to contribute beyond our own activities. Not only has Ian joined the civic side of the Open Government Partnership, where he is leading on Commitment three to improve open data provision, but he has also joined the board of the Data Commons Scotland programme at Stirling University.

Scottish Open Data Unconference

Beyond that we have created, and we are going to run, the Scottish Open Data Unconference in March. This promises to be a great coming together of the data community, including academia, government, developers, and publishers. If you haven't yet signed up please do so now – only 11 of the 90 tickets are still available. We'll also need volunteers to help run it: scribes for sessions, helping to orientate new visitors, covering reception, photography, blogging etc. Let us know how you could help.

We look forward to working with you all in the New Year and wish you all a peaceful and relaxing time over the festive period. 

 

Ian, Steve, Bruce and Andrew

[Photo by Eric Rothermel on Unsplash]