PLEASE NOTE – Due to low take-up this event has been postponed. We are sorry for any inconvenience this will cause.
Perth wants to boost its tourism offer and needs some help! They want to see whether some well-developed apps could help the city and its wider area bring attractions, trails, events, culture, accommodation, eateries and activities to life.
They are also interested in bringing the quirky and interesting aspects of the city together, using great images and interesting user generated content through social media.
Data sources added on GitHub
They have developed the website http://www.perthcity.co.uk/ and there is an app (http://www.mi-perthshire.co.uk/) but they want some creative minds to take a fresh look at the city and surrounding area, and to generate new ideas that they could then develop into new apps, open data or other projects.
As always we’re looking for coders, designers, data wranglers, service users and providers, bloggers – in fact anyone with an interest – to join us for a weekend of ideation, creation, open data and rapid prototyping.
We’ll feed you, keep you stimulated, and provide good wifi. You will leave with a sense of accomplishment, new skills and potentially new friends.
In addition there is a cluster of B&Bs on Dunkeld Road.
Also, just outside the city itself, The Lodge at the Perth Racecourse is offering a flat rate of £90 per night in a double or twin-bedded room (£45 per person), which also includes a full breakfast. See http://perthlodge.co.uk/dining
On 19th and 20th March we found ourselves back at Aberdeen Uni with 35 or so eager hackers looking to bring to life a 3D Virtual Reality historic model of Aberdeen city centre using new open data. So how did it go?
This time we were more prescriptive than at any previous Code The City event. In the run up to the weekend we’d identified several sub-team roles.
Locating, identifying and curating historic content
Transcribing, formatting and creating valid open data
Building the 3D model, fixing and importing images
Integrating and visualising the new data in the model.
After some breakfast, an intro and a quick tutorial on Open Data, delivered by Andrew Sage, we got stuck into the work in teams.
Old Books into Open Data
We were lucky to have a bunch (or should that be a shelf-ful?) of city librarians, an archivist and a gaggle of other volunteers working on sourcing and transcribing data into some templates we’d set up in Google Sheets.
Since we’d been given scanned photos of all the shop frontages of Union Street, starting in 1937 (of which more below), we settled on that as the main period to work from.
The librarians and helpers quickly got stuck into transcribing the records they’d identified – particularly the 1937–38 Post Office Directory of Aberdeen. If my arithmetic is correct they completely captured the details of 1,100+ businesses in the area around Union Street.
At present these are sitting in a Google Spreadsheet, and we will be working out with the librarians how to present this as well-structured, licensed Open Data. It is also a work in progress, so there are decisions to be made: do we complete the transcription of the whole of Aberdeen, or do we move on to another year – e.g. 1953, which is when we have the next set of shopfront photos?
Music, pictures and sound
At the same time as this transcription was ongoing, we had someone sourcing and capturing music such as might have been heard in 1937, and sounds that you might have heard on the street – including various tram sounds – which could be imported into the model.
And three of us did some work on beginning an open list of gigs for Aberdeen since the city had both the Capitol Theatre (Queen, AC/DC, Hawkwind) and the Music Hall (Led Zeppelin, David Bowie, Elton John) on Union Street. This currently stands at 735 gigs and growing. Again, we need to figure out when to make it live and how.
The 3D Model
At CTC5 back in November 2015, Andrew Sage had started to build a 3D model of the city centre in Unity. That relied heavily on manually creating the buildings. Andrew’s idea for CTC6 was to use OpenStreetMap data as a base for the model, and to use some scripting to pull the building footprints into the model.
This proved to be more challenging than expected. Steven Milne has written a great post on his site. I suggest that you read that then come back to this article.
As you’ve hopefully just read, Steve has identified the challenge of using OpenStreetMap data for a project such as this: the data just isn’t complete enough or accurate enough to be the sole source for the model.
While we could update data – and push it back to OSM, that isn’t necessarily the best use of time at a workshop such as this.
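For what it’s worth, the geometric half of the scripted approach is straightforward once usable footprints exist. Below is a minimal Python sketch – not the script Andrew wrote – of converting an OSM footprint’s lat/lon nodes into flat x/z metres around a local origin, which is roughly the shape a Unity import needs. The origin and footprint coordinates here are invented for illustration.

```python
import math

# Equirectangular approximation: at city scale (a square mile or so)
# the error versus a proper projection is negligible for this purpose.
EARTH_RADIUS_M = 6_371_000

def latlon_to_local(lat, lon, origin_lat, origin_lon):
    """Project a lat/lon pair to flat (x, z) metres around an origin."""
    lat_r, lon_r = math.radians(lat), math.radians(lon)
    o_lat_r, o_lon_r = math.radians(origin_lat), math.radians(origin_lon)
    x = (lon_r - o_lon_r) * math.cos(o_lat_r) * EARTH_RADIUS_M  # east–west
    z = (lat_r - o_lat_r) * EARTH_RADIUS_M                      # north–south
    return x, z

# Hypothetical origin somewhere near Union Street.
ORIGIN = (57.1482, -2.0928)

# An OSM building footprint is a closed way: an ordered list of nodes.
footprint = [(57.1480, -2.0950), (57.1480, -2.0945),
             (57.1478, -2.0945), (57.1478, -2.0950)]

local = [latlon_to_local(lat, lon, *ORIGIN) for lat, lon in footprint]
for x, z in local:
    print(f"x={x:8.1f} m  z={z:8.1f} m")
```

The harder problems Steve describes – incomplete and inaccurate footprints – are exactly the part no amount of scripting like this fixes.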
There is an alternative to some of that. All 32 local authorities in Scotland maintain a gazetteer of all properties in their area. These are highly accurate, constantly updated, and have Unique Property Reference Numbers (UPRNs) and coordinates for all buildings. This data (if it were open) would make projects such as this so much easier. While we would still need building shapes to be created in the 3D model, we would have accurate geo-location of all addresses, and so could tie the transcribed data to the 3D model very easily.
By using UPRNs as the master data across each transcribed year’s data, we could match the change in use of individual buildings through time much more easily. There is a real need to get this data released by authorities as open data, or at least under a licence allowing generous re-use. ODI Aberdeen are exploring this with Aberdeen City Council and the Scottish Government.
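To illustrate why UPRNs would help: with a stable key per building, matching a premises’ use across two transcribed years reduces to a simple keyed join. A sketch in Python, with invented UPRNs and records:

```python
import csv, io

# Hypothetical extracts of two transcribed years, keyed on UPRN.
data_1937 = """uprn,address,business
9051000001,1 Union Street,Grocer
9051000002,3 Union Street,Draper
"""
data_1953 = """uprn,address,business
9051000001,1 Union Street,Grocer
9051000002,3 Union Street,Cafe
"""

def by_uprn(text):
    """Index a year's records by their UPRN."""
    return {row["uprn"]: row for row in csv.DictReader(io.StringIO(text))}

a, b = by_uprn(data_1937), by_uprn(data_1953)

# Buildings whose recorded use changed between the two snapshots.
changed = {u: (a[u]["business"], b[u]["business"])
           for u in a.keys() & b.keys()
           if a[u]["business"] != b[u]["business"]}
print(changed)  # {'9051000002': ('Draper', 'Cafe')}
```

Without UPRNs the join has to be done on free-text addresses, which change spelling and numbering between directories – which is exactly the pain the gazetteer would remove.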
The city’s Planning Service gave us scans of photos of the shopfronts of Union Street from a number of decades: 1937, 1953 and on to the present. Generally the photos are very good, but there are issues: seams between photos run down the centre of buildings, binding tape shows through, and so on.
These issues are not very difficult to fix, but they do need someone with competence in Photoshop, some standard guidance, and a workflow to follow.
We started fixing some photos so that they could provide the textures for the buildings of Union Street in the model. But given the problems we were having with the model, and a lack of dedicated Photoshop resource, we parked this for now.
Taking this project forward, while still posing some challenges, is far from impossible. We’ve shown that the data for the entire city centre for any year can be crowd-transcribed in just 36 hours. But there are some decisions to be made.
Picking up on the points above, these can be broken down as follows.
Licensing model to be agreed
Publishing platform to be identified
Do we widen geographically (across the city as a whole) or temporally (same area, different years)?
Creating volunteer transcribing teams, with guidance, supervision and perhaps a physical space to carry out the work.
Identify new data sources (e.g. the Archives were able to offer valuation roll data for the same period – would these add extra data for buildings, addresses, businesses?)
Set up a means for the general public to get involved – gamifying the transcription process, perhaps?
The Photos
Similar to the data above, there are steps to take:
We need clear CC licences to be generated for the pictures
Crowdsource the fixing of the photos
Create workflow, identify places for the pictures to be stored
Look at how we gamify or induce skilled Photoshop users to get involved
Set up a repository of republished, fixed pictures, licensed for reuse, with proper addressing system and naming – so that individual pictures can be tied to the map and data sources
The 3D Model
Build the model
Extend the coverage (geographically and through time)
Establish how best to display the transcribed data – and to allow someone in the 3D environment to move forward and back in time.
Look at how we can import other data such as a forthcoming 3D scan of the city centre to shortcut some development work
Look at how we can reuse the data in other formats and platforms (such as Minecraft) with minimum rework.
Speed up the 3D modelling by identifying funding streams that could be used to progress this more quickly. If you have suggestions please let us know as a comment below.
Taking all of this forward is quite an undertaking, but it is also achievable if we break the work down into streams and work on those. Some aspects would benefit from CTC’s involvement, but some could be done without us. So, libraries could use the experience gained here to set up transcribing teams of volunteers, creating proper open data with real re-use value. That data could then easily be used by anyone who wants to reuse it – e.g. to create a city centre mobile app which allows you to see any premises on Union Street, call up photos from different periods, find out which businesses operated there, and so on.
As the model takes shape and we experiment with how we present the data we can hopefully get more attention and interest (and funding?) to support its development. It would be good to get some students on placements working on some aspects of this too.
Aberdeen City Council is working with the Scottish Cities Alliance to replace and improve the Open Data platforms for all seven Scottish cities later this year – and that will provide a robust means of presenting and storing all this open data once in place. In the meantime we will need to find some temporary alternatives (perhaps on GitHub) until we are ready.
We welcome your input on this – how could you or your organisation help, what is your interest, how could you assist with taking this forward? Please leave comments below.
Code The City 6 – The History Jam was funded by Aberdeen City Council’s Libraries service and generously supported by Eventifier, who provided us with free use of their Social Media platform and its LiveWall for the sixth consecutive time!
The History Jam (or Code The City #6 if you are counting) will take place on 19-20 March 2016 at Aberdeen University. You can get one of the remaining tickets here.
As a participant, you’ll be bringing history to life, creating a 3D virtual reality map of a square mile of Aberdeen’s city centre. You’ll be gathering data from a variety of historical sources, transcribing it and creating new open data. You’ll import that into the 3D model.
And there will also be the opportunity to re-use that data in imaginative new ways. So, if you are a Minecraft fan, why not use the data to start building Minecraft Aberdeen?
This is not one of our usual hacks – whatever that is! This time around, instead of you proposing problems to be worked on, we’ve set the agenda; we’ll help form the teams, and provide you with more guidance and support.
If you come along you’ll learn open data skills. And you’ll get a year’s free membership of the Open Data Institute!
Saturday’s Running Order
09:00 Arrive in time for fruit juices, coffee, pastries, or a rowie.
09:30 Introduction to the day
09:45 Briefing of teams and, if you are new to Open Data, a quick training session
10:15 Split into three streams:
Sourcing and curation of data, and structuring capture mechanisms
Transcribing, cleaning, and publishing open data
Creating the 3D map, importing and visualising the data
Throughout the day we’ll have feedback sessions, presenting back to the room on progress. We’ll write blog posts, create videos, photograph progress.
13:00 Lunch (the best sandwiches in Aberdeen)
More workstream sessions with feedback and questions.
17:30 (or so) Pizza and a drink
We’ll wind up about 8pm or so if you can stay until then
Sunday’s Running Order
09:30 Arrive for breakfast
10:00 Kick off
16:00 Show and Tell sessions – demonstrate to the room, and a wider audience, and preserve for posterity what you’ve produced in less than 36 hours. You’ll be amazed!
We are the first node in Scotland (and the only one north of Leeds) and will be working with all sectors to ensure that we raise awareness of best practice in Open Data through training, networking, and organising events across the country.
They’re taking two tables of data about community organisations, merging and exporting those from a closed-source system, processing and cleaning the data, then uploading it both as Open Data and to a mapping site.
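That merge, clean and export pipeline can be sketched in a few lines of Python. All organisation names and fields here are invented; the real work of course depends on what the closed system exports.

```python
import csv, io

# Two hypothetical table exports from the closed system.
table_a = [{"name": "Riverside Community Hall ", "postcode": "AB11 1AA"},
           {"name": "Harbour Youth Group", "postcode": "AB11 2BB"}]
table_b = [{"name": "Harbour Youth Group", "postcode": "AB11 2BB"},
           {"name": "Old Town Allotments", "postcode": "AB11 3CC"}]

def clean(row):
    # Basic cleaning: trim stray whitespace from every field.
    return {k: v.strip() for k, v in row.items()}

# Merge the two tables and de-duplicate on (name, postcode).
merged, seen = [], set()
for row in map(clean, table_a + table_b):
    key = (row["name"].lower(), row["postcode"])
    if key not in seen:
        seen.add(key)
        merged.append(row)

# Export as CSV, ready to publish as open data or load into a mapping site.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "postcode"])
writer.writeheader()
writer.writerows(merged)
print(out.getvalue())
```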