Opening the data on Aberdeen Convicts – part 1

A guest blog-post by Sara Mazzoli, a post-graduate student at Edinburgh University, who has been interning at Code The City for the last three months. During this project she has worked closely with us and with the Aberdeen City and Aberdeenshire Archives.

Introduction: what is the Register of Returned Convicts?

Historical context, use and description

The Register of Returned Convicts of Aberdeen (1869-1939) is a fascinating, “small-but-chunky” (Astley, 2021) volume held by the Aberdeen City and Aberdeenshire Archives, comprising a total of 279 entries. It forms part of the Grampian Police collection within the Archives. Of these entries, about sixty feature mug shots – which can be seen here.

As its title suggests, the register was used to record convicts’ addresses upon release. Phil Astley – Aberdeen’s Archivist – explained to us that it contains information on convicts who were sentenced to Penal Servitude (often noted in the register as P.S.).

The Penal Servitude Act of 1857 was intended to substitute transportation with a prison sentence. This sentence consisted of three parts: solitary confinement, labour, and release on licence. The last element meant that individuals sentenced to P.S. had to report monthly to the police during their licence period, and had to report any change of address within 48 hours.

A typical page of the Register looks like this:

As can be seen, at the top of each page of the register, information was noted on convicts’ physical traits and age upon release, as well as their conviction and sentence. In the “Marks” section, anything noteworthy – such as tattoos, scars, deformities and moles – was written down. According to Phil Astley, industrialisation brought a high incidence of accidents in factories, so disfigurements were common amongst workers.

In the bottom half of the page, the register recorded the convicts’ addresses after their sentence ended. Most of the people noted in the register stayed in Aberdeen, but some moved elsewhere – to towns such as Dundee, or to bigger cities such as Edinburgh and Glasgow.

Moreover, Phil suggested that two other acts in particular shaped the register:

  • The Habitual Criminals Act 1869.
  • The Prevention of Crimes Act 1871.

Simply put, these two acts restricted former convicts’ liberties and strengthened police monitoring of these individuals. These laws were specifically crafted to fight habitual criminals (Radzinowicz and Hood, 1980): with increasing urbanisation, authorities were concerned with what they labelled the “criminal classes”, an expression referring to individuals who mainly lived off criminal activity. The Register can ultimately be understood as an example of the attempt to monitor the movements of these repeat offenders.

The mugshots and the “habitual criminal”

The camera was developed in the first half of the Nineteenth century, and was initially seen as a tool to represent bodies in a realistic manner. Photography was depicted as an objective and neutral representation of reality, and authorities began using it for law enforcement from the 1840s: “Given its material features and its cultural value as an objective form of representation, the camera provided the perfect tool for the documentation, classification, and regulation of the body within the carceral network” (Finn, 2009, p. 29).

At first, while the concept of the “mug shot” was still developing, the photographs of individuals in the Register lacked a consistent format; a standard format only began to appear in the 1890s. As Finn (2009) argues, mug shots developed from the Nineteenth-century portrait, which typically featured a seated individual with no facial expression, usually photographed from the front. As can be seen, the first few mug shots look more like portraits than the later ones do. For example:

 Fig 1: two mug shots from the Register of Returned Convicts (1869-1939). The first depicts Ann Mc Govern, released in 1872; the second is the mug shot of John Proctor, discharged in 1893.

According to Holligan and Maitra (2018, p. 173), mug shots were established and developed in a milieu of “pessimism about classes of society”. Moreover, the development of criminal anthropology led to a more widespread use of photography in carceral settings. Scholars in this field, such as Cesare Lombroso, believed that certain physical characteristics could enable the identification of criminals. The presumed objectivity of photography meant that mug shots could further inform these studies, since the characteristics recorded in them could be analysed by criminal anthropologists. At the same time, the popularity of criminal anthropology meant the field in turn shaped law-enforcement practice, first and foremost the taking of mug shots and the noting of distinguishing marks.

Specifically, mug shots were introduced in the UK through the above-mentioned Prevention of Crimes Act 1871: “Under the section 7 of the Prevention of Crime Act 1871 it was recommended that convicted prisoners be photographed before release, full and side face, measurements in millimetres and feet and inches to be made of length and width of head, and lengths of arms, feet and left middle finger including the papillary ridges of the ten fingers as well as distinctive marks by position on body” (Holligan & Maitra, 2018, p. 177). Indeed, Holligan and Maitra (2018) contend that the development of criminal anthropology led to the belief that “habitual criminals” could be identified by specific marks, such as the length of limbs. Some of these marks were collected and published by the British Registry of Distinctive Marks, which regulated and influenced the ways in which authorities saw and noted distinguishing marks on prisoners.

Ultimately, we aim to argue that this Victorian construction of crime and of the criminal also influenced the way in which the register is composed, and that the meanings of “crime” and “criminal” are dictated by moral and social standards. Indeed, many were arrested on charges of Theft “Habit & Repute” which, according to Dr Darby, means considering someone as having a “bad character, a bad name for theft specifically, and that other witnesses considered him a bad person”. Analysing the register means considering the social rules that shaped the way in which it was written.

It is, in our opinion, fundamental to acknowledge these dimensions of the register as we open its data. It is important to recognise that “Registers are political” (Ziegler, 2020), and that the categories of the register are therefore constructed. However, it must also be acknowledged that their constructed nature does not make these categories any less impactful on individuals’ lives. Indeed, this is why we embrace attempts such as Phil’s, who has tried to retrieve the humanity of the individuals in the register by associating their mug shots with their stories, as we shall discuss in the next section.

Why this project is important: how did everything start?

Phil Astley explained that interest in the register built up during the 2019 and 2020 Granite Noir festival exhibitions, to which the Archives contributed 19th-century wanted posters, photos of 1930s crime scenes, as well as mug shots contained in the Register.

The mug shots attracted a positive response, and Phil started the Criminal Portraits blog, in which he explores the stories of returned convicts whose mug shots are contained in the register. Since then, Phil has published more than 50 blog posts, drawing on heterogeneous sources such as newspapers of the time and censuses. The blog has attracted more than 20,000 views.

In discussing the plans for this project with Phil and Ian Watt of Code The City we agreed that opening up the data contained in the register – making it available as Open Data – would have social and other benefits. 

According to the Open Data Handbook, open data is data that is freely available and re-usable by anyone. Open data has many benefits. It can allow for more transparency, and therefore greater accountability of institutions and organisations. It can also prompt economic participation and investment by private companies. Finally, open data can enable citizen participation and engagement, as with this project.

In this specific case, we decided to open data from the register precisely because of the public interest it has attracted. Not only are the lives of the individuals in the register fascinating in themselves, but we would argue that opening up this data also has a greater social value. For example, it allows people with a genealogical interest to find out more about their possible ancestors, and it could be useful for researchers working on criminality in Scotland.

In any case, opening up data from the Archives could lead to more interest in their rich collections, as well as to a more thorough understanding of these collections’ value to the community.

It was agreed that we would use Wikidata as the place to host the data, given Code The City’s and Ian’s knowledge of, and enthusiasm for, this platform.

How we made the data available

In the second part of this blog we will detail how we transcribed the data, prepared it for Wikidata, uploaded it in bulk, published the mugshot photos and linked them to the data.

References

Astley, P. [Aberdeen Performing Arts]. (2021, February 23). Phil Astley – Criminal Portraits Webinar – Granite Noir 2021 [Video]. YouTube.
https://www.youtube.com/watch?v=UFcOG_7Cv0I&t=2346s

Darby, N. (2014, November 28). The ‘habit and repute’ thief in Scottish law. Dr Nelly Darby, Criminal Historian:
www.criminalhistorian.com/the-habit-and-reute-thief-in-scottish-law/

Finn, J. M. (2009). Capturing the criminal image: From mug shot to surveillance society. University of Minnesota Press.

McLean, R., Maitra, D. E. V., & Holligan, C. (2017). Voices of quiet desistance in UK prisons: Exploring emergence of new identities under desistance constraint. The Howard Journal of Crime and Justice, 56(4), 437-453.

Open Knowledge Foundation. (n.d.). Open Definition: Defining Open in Open Data, Open Content and Open Knowledge. Retrieved from Open Knowledge Foundation:
https://opendefinition.org/od/2.1/en/ 

Radzinowicz, L., & Hood, R. (1980). Incapacitating the habitual criminal: The English
experience. Michigan Law Review, 78(8), 1305-1389.

Ziegler, S. L. (2020). Open Data in Cultural Heritage Institutions: Can We Be Better Than Data Brokers? Digital Humanities Quarterly, 14(2).

Waste Wizards at CTC22

A write-up of progress at the March 2021 Environment-themed hack weekend.

What problem were we addressing?


The public have access to two free, easily accessible waste recycling and disposal methods. The first is “kerbside collection”, where a bin lorry will drive close to almost every abode in the UK and crews will (in a variety of different ways) empty the various bins, receptacles, boxes and bags. The second is access to recycling centres, officially named Household Waste Recycling Centres (HWRCs) but more commonly known as the tip or the dump. These HWRCs are owned by councils or local authorities, and information about them is available on local government websites.


However, knowledge about this second option – the tips, the dumps, the HWRCs – is limited. One of the reasons for that is poor standardisation. Council A will label, map, or describe a centre one way; Council B will do it in a different way. There is also a lot of assumed knowledge – “well, everybody just looks at their council’s website, and everybody knows you can only use your council’s centres”. This is why at CTC22 we wanted to get all the data about HWRCs into a standard format, and release it into the open for communities to keep it current and up to date. Then we’d use that data to produce a modern UI so that residents can actually get the information they require:

  • Which tips can they use?
  • When are these dumps open?
  • What can they take to these HWRCs?
  • “I have item x – where can I dispose of it?”

Our approach


There were six main tasks to complete:

  1. Get together a list of all the HWRCs in the UK
  2. Build an open data community page to be the centre point
  3. Bulk upload the HWRCs’ data to Wikidata
  4. Manually enter the HWRCs into OpenStreetMap
  5. Create a website to show all the data
  6. Create a connection with OpenStreetMap so that users could use the website to update OSM.

What we built / did

All HWRCs are regulated by each nation’s environmental regulator:

  • For Scotland it is SEPA
  • For Northern Ireland it is NIEA
  • For Wales it is NRW
  • For England it is EA

A list of over 1,000 centres was collated from these four agencies. The data was of variable quality and inconsistently formatted.


This information was added to a wiki page on OpenStreetMap – Household waste in the United Kingdom – along with some definitions to help the community navigate the overly complex nature of the waste industry.


From that, the lists for Scotland, Wales and England were bulk uploaded to Wikidata. This was achieved by processing the data in Jupyter Notebooks, from which formatted data was exported to be bulk uploaded via the QuickStatements tool. The NIEA dataset did not include geolocation information, so further work will be needed to add this before those centres too can be uploaded. A Wikidata query has been created to show progress on a map. At the time of writing, 922 HWRCs are now in Wikidata.
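To give a flavour of that notebook step, here is a minimal sketch (not the actual CTC22 notebook) of turning a cleaned CSV of centres into QuickStatements V1 commands; the column names and the Q-id used for “instance of” are placeholders, not the real identifiers used in the upload.

import csv

# Hypothetical column names and class Q-id; the real notebook and identifiers differ.
HWRC_CLASS = "Q000000"  # placeholder for a "household waste recycling centre" item

with open("hwrcs.csv", newline="", encoding="utf-8") as src, \
     open("quickstatements.tsv", "w", encoding="utf-8") as out:
    for row in csv.DictReader(src):
        out.write("CREATE\n")
        out.write(f'LAST\tLen\t"{row["name"]}"\n')                              # English label
        out.write(f'LAST\tDen\t"household waste recycling centre in {row["council"]}"\n')
        out.write(f"LAST\tP31\t{HWRC_CLASS}\n")                                 # instance of
        out.write(f'LAST\tP625\t@{row["lat"]}/{row["lon"]}\n')                  # coordinate location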

Then the never-ending task of locating, updating, and committing changes for each of the OSM locations began.

To represent this data the team built a front-end UI with .NET Core and Leaflet.js that used Overpass Turbo to query OSM. Local Authority boundary polygons were added to highlight the sites that a member of the public could access. By further querying the accepted waste streams, the website is able to indicate which of the centres they can visit will accept the items they want to recycle.
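For a flavour of that querying, here is a rough sketch (in Python rather than the site’s .NET code) of an Overpass query for recycling centres. HWRCs are typically tagged amenity=recycling with recycling_type=centre in OSM; the bounding box shown is illustrative.

import requests

# Illustrative bounding box around Aberdeen; the real site queried per Local Authority area.
query = """
[out:json][timeout:25];
nwr["amenity"="recycling"]["recycling_type"="centre"](57.0,-2.4,57.3,-1.9);
out center;
"""
response = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
for element in response.json()["elements"]:
    tags = element.get("tags", {})
    # recycling:* sub-tags describe which waste streams a centre accepts
    accepted = [key for key, value in tags.items() if key.startswith("recycling:") and value == "yes"]
    print(tags.get("name", "unnamed centre"), accepted)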

However, the tool is only as good as the data, so to close the loop we added a “suggest a change” button that allows users to post a note on that location on OpenStreetMap so the wider community can update the data.

We named the website OpenWasteMap and released it into the wild.

The GitHub repo from CTC22 is open and available to access.

Pull requests are also welcome on the repo for OpenWasteMap.

What we will do next (or would do with more time/ funding etc)

The next task is to get all the data up to date and to keep it up to date; we are confident that we can do this because of the wonderful open data community. It would also be great if we could improve the current frontend interface for users to edit existing waste sites. Adding a single note to a map when suggesting a change could be replaced with an edit form listing the fields we would like to see populated for HWRCs. Existing examples of excellent editing interfaces in the wild include healthsites.io, which provides an element of gamification and completionism with a progress bar showing how much data is populated for a particular location.

An example entry from Healthsites.io

Source: https://healthsites.io/map#!/locality/way/26794119

While working through the council websites it became apparent that there is no standard set of terms for household items, and the lists are not machine friendly. For example, a household fridge can be called:

  • Fridge
  • Fridge Freezer
  • WEEE
  • Large Domestic Electrical Appliance
  • Electric Appliance
  • White Good

A “fun” next task would be to come up with a taxonomy of terms that allows easier classification and understanding for both the user and the machine. Part of this would include matching “human readable” names to relevant OpenStreetMap tags; for example, “glass” as an OSM tag would be “recycling:glass”.
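As a starting point, such a taxonomy might begin as a simple look-up table. The sketch below is illustrative only: the synonym list is invented, and the exact recycling:* keys would need checking against the OSM wiki before use.

# Illustrative synonym table; the OSM recycling:* keys used here are assumptions.
SYNONYM_TO_OSM_TAG = {
    "glass": "recycling:glass",
    "paper": "recycling:paper",
    "cardboard": "recycling:cardboard",
    "fridge": "recycling:fridge_and_freezer",
    "fridge freezer": "recycling:fridge_and_freezer",
    "white good": "recycling:white_goods",
    "electric appliance": "recycling:small_appliances",
}

def osm_tag_for(item: str):
    """Return the OSM tag for a household item name, or None if it is unmapped."""
    return SYNONYM_TO_OSM_TAG.get(item.strip().lower())

print(osm_tag_for("Fridge Freezer"))  # recycling:fridge_and_freezer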


There are other waste sites that the public can use, called Bring Banks or Recycling Points, which are not run by Local Authorities and are more informal locations for recycling. These too should be added, but some consideration is needed of how this information will be maintained, as their number could be tenfold that of HWRCs.

As we look to the future we must also anticipate the volume of data we may be able to get out of sources like OpenStreetMap and Wikidata once they are well populated by the community. Starting out with a response time of mere milliseconds when querying a dozen points you created in a hackathon is a great start; but as a project grows the data size can spiral into megabytes and response times into seconds. With around 1,000 recycling centres in the UK and thousands more of the aforementioned Bring Banks, this could be a lot of data to handle and serve up to the public in a presentable manner.

Using Wikidata to model Aberdeen’s Industrial Heritage

Saturday 6th March, 2021 was World Open Data Day. To mark this international event CTC ran a Wikidata Taster session. The objectives were to introduce attendees to Wikidata and how it works, and give them a few hours to familiarise themselves with how to add items, link items, and add images.

Presentation title screen

The theme of the session (to give it some structure and focus) was the Industrial Heritage of Aberdeen; more specifically the bygone industries of Aberdeen and, more specific still, the many iron foundries that once existed. I chose this topic as it is still relatively easy to spot the products of the industry on streets and pavements as we walk around the city, photograph them and add them to Wikimedia Commons, as I have been doing.

We had thirteen people book and eight turn up. After I gave a short presentation on how Wikidata operates we divided ourselves into three groups in breakout rooms. This was all on Zoom, of course, while we were still under lockdown.

The teams of attendees chose a foundry each: Barry, Henry & Cook Limited; Blaikie Brothers, and William McKinnon & Company Ltd. I’d already created an entry for John Duffus and Company in preparation for the event and to use as a model.

I’d also created a Google Sheet with a tab for each of the other thirteen foundries I’d identified (including those selected by the groups). I’d also spent quite a while trying to figure out how to access and search the old business and Post Office Directories for the city, which have been digitised for 1824 to 1941. Eventually I built myself a tool, which I shared with the teams, that generated a URL for a specific search term in a certain directory. They used this, as well as other sources, to identify key dates, addresses and name changes of businesses.

By the end of the session our teams had created items for each of their chosen foundries.

They had also created items for foundry buildings – linked to Canmore records etc. – as well as for the founders. We enhanced these with their places of burial, portraits and images of gravestones. I took further photos which I uploaded to Commons and linked the following Monday. I created two Wikidata queries: one to show the businesses added, and one for the founders who created those businesses.

The statistics for the three-hour session (although some worked into the afternoon and even the next day) are impressive. You can see more detail on the event dashboard.

We received positive feedback from the attendees, who have been able to take their first steps towards using Wikidata as public linked open data for heritage items.

I hope that the attendees will keep working on the iron founders until we have all of these represented on Wikidata. Next we can tackle shipbuilders and the granite industry!

Nautical Wrecks

This project started as part of CTC21: Put Your City on the Map, which ran Saturday 28th Nov 2020 and Sunday 29th Nov 2020. You can find our code on GitHub.

There are thousands of shipwrecks off the coast of Scotland, which can be seen on Marine Scotland’s website.

Marine Scotland map of wrecks

In Wikidata the position was quite different, with only a few wrecks being logged. The image below was derived by running the following query in Wikidata: https://w.wiki/nDt

Initial map of Wikidata shipwrecks

Day one – sourcing the information on the wrecks.

The project started by researching various websites to obtain the raw data required. Maps with shipwrecks plotted were found, but finding the underlying data source was not so easy.

Data from Marine Scotland, Aberdeenshire Council’s website and the Canmore website was considered.

Once data was found, the next stage was establishing the licensing rights and whether or not the data could be downloaded and legitimately reused. The data found on Canmore’s website indicated that it was provided under an Open Government Licence and hence could be uploaded to Wikidata. This is the data source which was then used on day two of the project.

A training session on how to use Wikidata was also required on day one, to allow the team to understand how to upload the data to Wikidata and how the identifiers etc. worked.

Day two – cleaning and uploading the data to Wikidata.

Deciding on the identifiers to use in Wikidata was the starting point; then the data had to be cleaned and manipulated. This involved translating easting and northing coordinates to latitude and longitude, matching the ship types between the Canmore file and Wikidata, extracting the reference to the ship from Canmore’s URL, and a general common-sense review of the data. To aid with this work a Python script was created. It produced a tab-separated file with the necessary statements to upload to Wikidata via QuickStatements.

A screenshot of the output text file.
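The sketch below gives a flavour of that processing. It is not the team’s actual script: the column names, the Q-id used for “shipwreck”, the assumption that the Canmore record ID is the first run of digits in the URL, and the use of P718 as the Canmore ID property are all illustrative and would need checking before a real upload.

import csv
import re
from pyproj import Transformer

# Assumed Q-id for "shipwreck"; verify before a real upload.
SHIPWRECK = "Q852190"
# British National Grid (EPSG:27700) to WGS84 latitude/longitude (EPSG:4326).
to_wgs84 = Transformer.from_crs("EPSG:27700", "EPSG:4326", always_xy=True)

with open("canmore_wrecks.csv", newline="", encoding="utf-8") as src, \
     open("output.txt", "w", encoding="utf-8") as out:
    for row in csv.DictReader(src):
        lon, lat = to_wgs84.transform(float(row["easting"]), float(row["northing"]))
        # Assumes the Canmore record ID is the first run of digits in the URL.
        canmore_id = re.search(r"\d+", row["url"]).group(0)
        out.write("CREATE\n")
        out.write(f'LAST\tLen\t"{row["name"]}"\n')                   # English label
        out.write(f'LAST\tDen\t"shipwreck (Canmore record {canmore_id})"\n')
        out.write(f"LAST\tP31\t{SHIPWRECK}\n")                       # instance of
        out.write(f"LAST\tP625\t@{lat:.5f}/{lon:.5f}\n")             # coordinate location
        out.write(f'LAST\tP718\t"{canmore_id}"\n')                   # Canmore ID (assumed property)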

The team members were new to Wikidata and were unable to create batch uploads, as their accounts were less than four days old and had fewer than 50 manual edits to their credit – a safeguard to stop new accounts creating scripts to do damage.

We asked Ian from Code The City to assist, as he has a long editing history. He continues this blog post. 

Next steps

I downloaded the output.txt file and checked whether it could be uploaded straight to QuickStatements. It looked like there were minor problems with the text encoding of strings, so I imported the file into Google Docs. There, I ensured that the Label, Description and Canmore links were surrounded by double quotation marks; a quick find and replace did this.

I tested an upload of five or six entries and these all ran smoothly. I then did several hundred, which turned up some errors. I spotted loads of ships with the label “unknown”, and every wreck had the same description. I returned to the Python script and tweaked it to concatenate the word “Unknown” with the Canmore ID, which fixed the problem. I also needed a way of checking whether a ship had already been uploaded. I did this by downloading the Canmore IDs of all successfully uploaded ships, then filtering these out before re-creating the output.txt file.

I then generated the bulk of the 24,185 records to be uploaded. I noticed a fairly high error rate, due to an issue similar to the Unknown-named ships: the output.txt file was trying to upload multiple ships with the same name (e.g. over 50 ships named Hope). I solved this in the same manner as the Unknown-named wrecks, concatenating ship names with “Canmore nnnnnn”.
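A minimal sketch of that disambiguation step (with made-up example rows and an assumed label format) might look like this:

from collections import Counter

# Illustrative rows only – not real Canmore records.
rows = [
    {"name": "Hope", "canmore_id": "111111"},
    {"name": "Hope", "canmore_id": "222222"},
    {"name": "", "canmore_id": "333333"},
]
counts = Counter(row["name"] for row in rows)

def unique_label(name, canmore_id):
    """Append the Canmore ID when a name is missing, 'unknown' or shared by several wrecks."""
    if not name or name.lower() == "unknown" or counts[name] > 1:
        return f"{name or 'Unknown'} Canmore {canmore_id}"
    return name

for row in rows:
    print(unique_label(row["name"], row["canmore_id"]))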

I prepared this even as the bulk upload was running. Filtering out the recently uploaded ships and re-running the creation of the output.txt file meant that within a few minutes I had the corrective upload ready. Running this a final time resulted in all shipwrecks being added to Wikidata, albeit with some issues still to fix. This had taken about a day to run, refine and rerun.

The following day I set out to refine the quality of the data. The names of shipwrecks had been left in sentence case: an initial capital and everything else in lower case. I downloaded a CSV of the records we’d created and changed the Labels to Proper Case. I also took the opportunity to amend the descriptions of each record to reflect their provenance from Canmore. I set one browser the task of changing Labels, and another the changes to descriptions. This was 24,185 changes each, and took many hours to run. I noticed several hundred failed updates, which appeared to be plain “The save has failed” messages; I checked those and reran them. Having no means of exporting errors from QuickStatements (that I know of) makes fixing errors more difficult than it should be.

Finally, I noticed by chance that a good number of records (an estimated 400) are not shipwrecks at all but wrecks of aircraft. Most, if not all, are prefixed “A/C” in the label.

I created a batch to remove statements for ships and shipwrecks and to add statements saying that these are instances of crash sites. I also scripted the change to descriptions identifying these as aircraft wrecks rather than ship wrecks.

This query https://w.wiki/pjA now identifies and maps all aircraft wrecks.

aircraft wrecks uploaded from Canmore
All aircraft wrecks uploaded from Canmore

This query https://w.wiki/pSy maps all shipwrecks

the location of all shipwrecks uploaded to Wikidata from Canmore.
The location of all shipwrecks uploaded to Wikidata from Canmore.

Next steps?

I’ve noted the following things that the team could do to enhance and refine the data further:

  • Check what other data is available by download or scraping from Canmore (such as date of sinking, depth, dimensions) and add that to the Wikidata records
  • Attempt to reconcile the data on Aberdeen-built ships uploaded at CTC19 with these wrecks – there may be quite a few to be merged

Finally, in the process of cleaning this uploaded data I noticed that the data model on Wikidata supporting it is not well structured.

This was what I sketched out as I attempted to understand it.

The confusing data model in Wikidata
A confusing data model

Before I changed the aircraft wrecks to “crash site” I merged the two items, which works with the queries above. But this needs more work.

  • Should the remains of a crashed aircraft be something other than a crash site? The latter could be cleared of debris and still be the crash site. The term shipwreck more clearly describes where a wreck is, whether buried, on land, or beneath the sea.
  • Why is a shipwreck a facet of a ship, but a crash site a subclass of aircraft?
  • Disaster Remains seems like the wrong term for what might be a non-disastrous event (say, if a ship from the Middle Ages gently settled into mud over the centuries and was forgotten about), and it certainly isn’t a subclass of Conservation Status, anyway.

I’d be happy to work with anyone else on better working out an ontology for this.

Swift use of Doric Place Names

Introduction

One of the Code the City 21 projects looked at providing Scots translations of Aberdeenshire place names for display on an OpenStreetMap map. The intended outcomes for that project included a list of translated place names and, potentially, audio versions of the names to guide pronunciation.

I’m a firm believer that Open Data shouldn’t just become “dusty data left on the digital shelf” and that we should “show, don’t tell”. This led me to decide to show just how easy it is to do something with the data created as part of the weekend’s activities, and to make use of the outcomes of a previous CTC event (Aberdeenshire Settlements on Wikidata and Wikipedia), thus taking that data off the digital shelf.

My plan was to build a simple iOS app, using SwiftUI, that would allow the following:

  • Listing of place names in English and their Scots translation
  • View details about a place including its translation, location and photo
  • Map showing all the places and indicating if a translation exists or not

I used SwiftUI as it is fun (always an important consideration) to play with and quick to get visible results. It also provides the future option to run the app as a Mac desktop app.

Playing along at home

Anyone with a Mac running at least Catalina (macOS 10.15) can install Xcode 12 and run the app on the Simulator. The source code can be found on GitHub.

Getting the source data

Knowing that work had previously been done on populating Wikidata with a list of Aberdeenshire Settlements and providing photos for them, I turned to Wikidata for sourcing the data to use in the app.

# Get list of places in Aberdeenshire, name in English and Scots, single image, lat and long

 
SELECT  ?place (SAMPLE(?place_EN) as ?place_EN) (SAMPLE(?place_SCO) as ?place_SCO) (SAMPLE(?image) as ?image) (SAMPLE(?longitude) as ?longitude)  (SAMPLE(?latitude) as ?latitude)
  WHERE {
    ?place wdt:P31/wdt:P279* wd:Q486972 .
    ?place wdt:P131 wd:Q189912 .
    ?place p:P625 ?coordinate.
    ?coordinate psv:P625 ?coordinate_node .
    ?coordinate_node wikibase:geoLongitude ?longitude .
    ?coordinate_node wikibase:geoLatitude ?latitude .
    OPTIONAL { ?place wdt:P18 ?image }.
    OPTIONAL { ?place rdfs:label ?place_EN filter (lang(?place_EN) = "en" )}.
    OPTIONAL { ?place rdfs:label ?place_SCO filter (lang(?place_SCO) = "sco" )}.
    }
GROUP BY ?place
ORDER By ?place_EN

The query can be found in the CTC21 Doric Tiles GitHub repository and run via the Wikidata Query Service.

The query returned a dataset that consisted of:

  • Place name in English
  • Place name in Scots (if it exists)
  • Single image for the place (some places have multiple images, so the query had to be restricted to a single image)
  • Latitude of place
  • Longitude of place

Just requesting the coordinates for each place returned a text string, such as Point(-2.63004 57.5583), which complicated their use later on. Adding the relevant code

?coordinate psv:P625 ?coordinate_node .
?coordinate_node wikibase:geoLongitude ?longitude .
?coordinate_node wikibase:geoLatitude ?latitude .

to the query to generate latitude and longitude values simplified the data reuse at the next stage.

The results returned by the query were exported as a JSON file that could be dropped straight into the Xcode project.
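For what it’s worth, the same export could be scripted outside the app. The rough sketch below is not part of the project: it runs the query against the public Wikidata Query Service endpoint and flattens the standard SPARQL-results JSON into flat records of the kind the app decodes; the query string itself is the one shown above.

import json
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = "..."  # the SPARQL query shown above

# Ask the query service for SPARQL-results JSON, then flatten each binding
# into a simple dictionary of variable name -> value.
response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "DoricPlacesExport/0.1 (example)"},
)
places = [
    {name: binding[name]["value"] for name in binding}
    for binding in response.json()["results"]["bindings"]
]

with open("places.json", "w", encoding="utf-8") as f:
    json.dump(places, f, ensure_ascii=False, indent=2)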

The App

SwiftUI allows data driven apps to be quickly pulled together. The data powering the app was a collection of Place structures populated with the contents of the JSON exported from Wikidata.

import CoreLocation

struct Place: Codable, Identifiable {
    let place: String
    let place_EN: String
    let place_SCO: String?
    let image: String?
    var latitude: String
    var longitude: String

    // Computed properties
    var id: String { return place }
    var location: CLLocationCoordinate2D {
        // Coordinates arrive as strings in the exported JSON, hence the conversion
        CLLocationCoordinate2D(latitude: Double(latitude)!, longitude: Double(longitude)!)
    }
}

The app itself was split into three parts: Places list, Map, Settings. The Places list drills down to a Place details view.

List view of Places showing English and Scots translation.
List of places in English and their Scots translation if included in the data
Details view showing place name, photo, translation and map.
Details screen about a place
Map showing places and indication if they have been translated into Scots or not.
Map showing places and indicating if they have Scots translation (yellow) or not (red)

The Settings screen just displays some information about the app and where the data came from. It acts partly as a placeholder for now, with room to expand as the app evolves.

Next Steps

The app created over the weekend was very much a proof of concept and so has room for many improvements. The list includes:

  • Caching the location photos on the device
  • Displaying additional information about the place
  • Adding search to the list and map
  • Adding audio pronunciation of names (the related Doric Tiles project did not get as far as adding audio during the CTC21 event)
  • Modifying the app to run on the Mac desktop
  • Adding the ability to request an updated list of places and translations

The final item on the above list, the ability to request an updated list of places, is in theory straightforward. All that would be required is to send the query to the Wikidata Query Service and process the results within the app. The problem is that the query takes a long time to run (nearly 45 seconds) and there may be timeout issues before the results arrive.