CTC23 – The OD Bods

Introduction

This blog post was written to accompany the work of The OD Bods team at Code the City 23 – The Future of The City.

Open data has the power to bring about economic, social, environmental, and other benefits for everyone. It should be the fuel of innovation and entrepreneurship, and provide trust and transparency in government.

But there are barriers to delivering those benefits. These include:

  • Knowing who publishes data, and where,
  • Knowing what data is being published – and when that happens, and
  • Knowing under what licence the data is made available, so that you can use it, or join it together with other agencies’ data.

In a perfect world we’d have local and national portals publishing or sign-posting data that we all could use. These portals would be easy to use, rich with metadata and would use open standards at their core. And they would be federated so that data and metadata added at any level could be found further up the tree. They’d use common data schemas with a fixed vocabulary which would be used as a standard across the public sector. There would be unique identifiers for all identifiable things, and these would be used without exception. 

You could start at your child’s school’s open data presence and get an open data timetable of events, or its own-published data on air quality in the vicinity of the school (and the computing science teacher would be using that data in classes). You could move up to a web presence at the city or shire level and find the same school data alongside other schools’ data, and an aggregation or comparison of each of their data. That council would publish the budget that they spend on each school in the area, and how it is spent. It would provide all of the local authority’s schools’ catchment areas or other LA-level education-specific data sets. And if you went up to a national level you’d see all of that data gathered upwards: all Scottish schools, along with national data such as SQA results and school inspection reports – all as open data.

But this is Scotland, and it’s only six years since the Scottish Government published a national Open Data Strategy – one which committed to data publication being open by default.

Looking at the lowest units – the 32 local authorities – only 10, fewer than a third, have any open data at all. Beyond local government, none of the fourteen health boards publishes open data, and of the thirty Health and Social Care Partnerships only one has open data. Further, in 2020 it was found that of an assumed 147 business units comprising the Scottish Government (just try getting data on what the Scottish Government actually comprises), 120 had published no data.

And, of course, there are no regional or national open data portals. Why would the Scottish Government bother? Apart, that is, from that six-year-old national strategy, and an EU report in 2020 which made clear that open data done well would benefit the Scottish economy by around £2.21bn per annum. Both of these are referred to in the Digital Strategy for Scotland 2021.

Why there is no national clamour around this is baffling. 

And despite there being a clear remit at Scottish Government for implementing the OD Strategy, no-one, we are told, measures or counts performance nationally. Because if you were doing this poorly, you’d want to hide that too, wouldn’t you?

And, for now, there is no national portal. There isn’t even one for the seven cities, let alone all 32 councils. Which means there is:

  • no facility to aggregate open data on, say, planning, across all 32 councils. 
  • no way to download all of the bits of the national cycle paths from their custodians. 
  • no way to find out how much each council spends on taxis, or the amount spent per pupil on school meals. 

There is, of course, the Spatial Hub for Scotland, the very business model of which is designed (as a perfect example of the law of unintended consequences) to stifle the publication of open data by local government. 

So, if we don’t have these things, what do we have?

What might we expect?

What should we expect from our councils – or even our cities? 

Here are some comparators.

Remember, back in about 2013, both Aberdeen and Edinburgh councils received funding from Nesta Scotland to be part of Code for Europe, where they learned from those cities above. One might have expected that by now they’d have reached the same publication levels as these great European cities. We’ll see soon. 

But let’s be generous. Assume that each local authority in Scotland could produce somewhere between 100 and 200 open data sets. 

  • Scotland has 32 local authorities 
  • Each should be able to produce 100–200 datasets – say 150 on average

So: 150 × 32 = 4,800 datasets.

The status quo

Over the weekend our aim was to look in detail at each of Scotland’s 32 local authorities and see which were publishing their data openly, in conformance with the 2015 Open Data Strategy for Scotland. What did we find?

Our approach

As we’ve noted above, there is no national portal. And no-one in Scottish Government is counting or publishing this data. So, following the good old adage, “if you want something done, do it yourself”, a few of us set about trying to pull together a list of all the open datasets for Scotland’s 7 cities and the other 25 authorities. To the naive amongst us, it sounded like an easy thing to do. But even getting started proved problematic. Why?

  1. Only some councils had any open data – but which?
  2. Only some of those had a landing page for Open Data. Some had a portal. Some used their GIS systems. 
  3. Those that did provide data used different categories. There was no standardised schema. 
  4. Some had a landing page, but additional datasets turned up elsewhere on their websites.
  5. Pages carried contradictory licence references – was the data open or not?

We also looked to see if there was already a central hub of sorts upon which we could build. We found a reference to open data on the Scottish Cities Alliance website but couldn’t find any links to actual open data. 

Curiosity then came into play: why were some councils prepared to publish some data and others so reluctant? What was causing the reluctance? And for those publishing, why were not all datasets made open – what was the reason for selecting the ones they had chosen?

What we did

Our starting point was to create a file in which to log the source of each dataset found. As a group, we decided upon headers for the file, such as the type of file and the date last updated, to name but a few.

From previous CTC events which we had attended, we knew that Ian had already put a lot of effort into creating a list of council datasets – IW’s work of 2019 and 2020 – which became our starting source. We also knew that Glasgow and Edinburgh were famous for having large, but very out of date, open data portals which were at some point simply switched off. 


We were also made aware of another previous attempt from the end of 2020 to map out the cities’ open data. The screenshot below (Fig 1) is from a PDF by Frank Kelly of DDI Edinburgh which compared datasets across cities in Scotland. You can view the full file here.

Fig 1: From an analysis of Scottish cities’ open data by Frank Kelly of DDI Edinburgh, late 2020 or early 2021

For some councils, we were able to pull in a list of datasets using the CKAN API. That worked best of all: a quick bit of scripting gathered the info we needed. If all cities and other authorities did the same, we’d have cracked it all in a few hours! But it appears that there is no joined-up thinking, no sharing of best practices, no pooling of resources at play in Scotland. Surely COSLA, SCA, SOCITM and other groups could get their heads together and tackle this? 
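
As an illustration, here is a minimal Python sketch of the sort of scripting involved. The CKAN action API genuinely exposes package_list and package_show endpoints; the portal URL below is a placeholder, not any particular council’s.

# A minimal sketch of listing datasets from a CKAN portal.
import requests

PORTAL = "https://data.example-council.gov.uk"  # placeholder portal URL

def list_ckan_datasets(portal):
    """Yield title, licence and file formats for every dataset on a CKAN portal."""
    # package_list returns the machine names of all datasets
    names = requests.get(f"{portal}/api/3/action/package_list").json()["result"]
    for name in names:
        # package_show returns the full metadata record for one dataset
        meta = requests.get(f"{portal}/api/3/action/package_show",
                            params={"id": name}).json()["result"]
        yield {
            "name": name,
            "title": meta.get("title"),
            "licence": meta.get("license_title"),
            "formats": sorted({r.get("format", "") for r in meta.get("resources", [])}),
        }

for dataset in list_ckan_datasets(PORTAL):
    print(dataset)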

For others there were varying degrees of friction. We could use the ArcGIS API to gather a list of datasets, but it tied us up in knots trying to get past the sign-in process – did we need an account, or could we use it anonymously? It was difficult to tell. Luckily, with an experienced coder in our team we were able to make calls to the API and get responses – even if these were verbose and needed manual processing afterwards. This post from Terence Eden, “What’s your API’s “Time To 200”?”, is really relevant here! 
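
One lower-friction route, where a council’s ArcGIS Open Data site exposes it, is the DCAT catalogue such sites commonly publish at /data.json – no sign-in required. A hedged sketch, with a hypothetical site URL:

# Fetch a DCAT catalogue from a (hypothetical) ArcGIS Open Data site.
import requests

SITE = "https://data.example-council.opendata.arcgis.com"  # placeholder URL

catalogue = requests.get(f"{SITE}/data.json").json()
for entry in catalogue.get("dataset", []):
    # Each DCAT dataset entry carries a title, a licence and its download formats
    formats = [d.get("format") for d in entry.get("distribution", [])]
    print(entry.get("title"), "|", entry.get("license"), "|", formats)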

For the rest it was a manual process of going into each city or council website and listing files. With three of us working on it for several hours, we succeeded in pulling together the datasets from the different sources into our CSV file.

One council was trying to publish open data – but the quality, and how up to date it was, were questionable.

Ultimately, the sources were so varied and difficult to navigate that it took five digitally-skilled individuals a full day – some 30 person-hours – to pull this data together. And if we have missed any datasets, as we are sure to have done, it may be because they have moved or are hidden away. Let us know if there are more. 

From this output it became clear that there was no consistency in the types of files in which the data was being provided and no consistency in the refresh frequency. This makes it difficult to see a comprehensive view in a particular subject across Scotland (because there are huge gaps) and makes it difficult for someone not well versed in data manipulation to aggregate datasets, hence reducing usability and accessibility. After all, we want everyone to be able to use the data and not put barriers in the way.

We have a list – now what?

We now had a list of datasets in a CSV file, so it was time to work on understanding what was in it. Using Python in Jupyter Notebooks, we produced graphs to analyse the available datasets by file type, by the councils which provided them, and by how the data is accessed. This made it clear that even among the few councils which provide any data, there is huge variation in how they do it. There is so much to say about the findings of this analysis that we are going to follow it up with a blog post of its own.

Unique Datasets by Council
Unique datasets by council and filetype
Average filetypes provided for each data set by Council
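
A minimal sketch of the kind of notebook analysis behind these graphs – the column and file names here are illustrative stand-ins for the headers we agreed as a team:

# Sketch of the Jupyter analysis; column and file names are illustrative.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("datasets.csv")  # the CSV file we assembled

# How many distinct datasets does each council publish?
df.groupby("council")["dataset"].nunique().sort_values().plot(
    kind="barh", title="Unique datasets by council")
plt.show()

# Which file types dominate across all councils?
df["filetype"].value_counts().plot(kind="bar", title="Datasets by file type")
plt.show()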

One of our team also worked on creating a webpage (not currently publicly-accessible) to show the data listings and the graphs from the analysis. It also includes a progress bar to show the number of datasets found against an estimated number of datasets which could be made available – this figure was arbitrary but based on a modest expectation of what any local authority could produce. As you saw above, we set this figure much lower than we see from major cities on the continent.

What did we hope to achieve?

A one stop location where links to all council datasets could be found. 

Consistent categories and tags, such that similar datasets could be found together. 

But importantly we wanted to take action – no need for plans and strategies, instead we took the first step.

What next?

As we noted at the start of this blog post, Scotland’s approach to open data is not working. There is a widely-ignored national strategy. There is no responsibility for delivery, no measure of ongoing progress, no penalty for doing nothing, and there are some initiatives which actually work against the drive to get data open. 

Despite the recognised economic value of open data – which is highlighted in the 2021 Digital Strategy but was also a driver for the 2015 strategy – we still have those in government asking why they should publish, and looking specifically to Scotland (a failed state for OD) rather than overseas for success stories. 

We’ve seen APIs being closed off – to try, we assume, to measure use. We suspect the thinking goes something like this:

A common circular argument

In order for open data to be a success in Scotland we need it to be useful, usable, and used. 

Useful

That means the data needs to be geared towards those who will be using it: students, lecturers, developers, entrepreneurs, data journalists, infomediaries. Think of the campaign in 2020, led by Ian, to get the Scottish Government to publish Covid data as open data – and what Travelling Tabby and others have made of it, turning raw data into something of use to the public.

Usable

The data needs to be findable, accessible, and well structured. It needs to follow common standards for data and the metadata. Publishers need to collaborate – coordinate data releases across all cities, all local authorities. ‘Things’ in the data need to use common identifiers across data sets so that they can be joined together, but the data needs to be usable by humans too. 

Used

The data will only be used if the foregoing conditions are met. But government needs to do much more to stimulate its use: to encourage, advertise, train, fund, and invest in potential users. 

The potential GDP rewards for Scotland are huge (an estimated £2.21bn per annum) if this is done well. But that will not happen by chance. If the same lacklustre, uninterested, unimaginative mindsets are allowed to persist, and no coordination is applied to cities and other authorities, then we’ll see no more progress in the next six years than we’ve seen in the last. 

While the OGP process is useful, bringing a transparency lens to government, it is too limited. Government needs to see this as the economic issue it is – one which the current hands-off approach is failing to address. We also need civic society to get behind this: to be active, visible, militant, and to hold government to account. What we’ve seen so far from civic society is at best complacent apathy. 

Scotland could be great at this – but the signs, so far, are far from encouraging!

Team OD Bods (Karen, Pauline, Rob, Jack, Stephen and Ian)

Waste Wizards at CTC22

A write-up of progress at the March 2021 Environment-themed hack weekend.

What problem were we addressing?


The public have access to two free, easily accessible waste recycling and disposal methods. The first is “kerbside collection”, where a bin lorry will drive close to almost every abode in the UK and crews will (in a variety of different ways) empty the various bins, receptacles, boxes and bags. The second is access to recycling centres, officially named Household Waste Recycling Centres (HWRCs) but more commonly known as the tip or the dump. These HWRCs are owned by councils or local authorities, and the information about them is available on local government websites.


However, knowledge about this second option – the tips, the dumps, the HWRCs – is limited. One of the reasons for that is poor standardisation: Council A will label, map, or describe a centre one way; Council B will do it in a different way. There is a lot of received wisdom – “well, everybody just looks at their council’s website, and everybody knows you can only use your own council’s centres”. This is why at CTC22 we wanted to get all the data about HWRCs into one standard format (sketched after the list below), and release it into the open for communities to keep it current and up to date. Then we’d use that data to produce a modern UI so that residents can actually get the information they require:

  • Which tips can they use?
  • When are these dumps open?
  • What can they take to these HWRCs?
  • “I have item x – where can I dispose of it?”
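
As a rough sketch of what that standard format might look like for one centre – the fields here are our own suggestion, not an official schema:

# A sketch of a standard HWRC record; the fields are our suggestion only.
from dataclasses import dataclass, field

@dataclass
class RecyclingCentre:
    name: str
    operator: str                 # the council or local authority
    latitude: float
    longitude: float
    opening_hours: str            # ideally in OSM opening_hours syntax
    accepted_waste: list[str] = field(default_factory=list)
    residents_only: bool = True   # many HWRCs restrict access to residents

tip = RecyclingCentre(
    name="Example HWRC", operator="Example Council",
    latitude=57.15, longitude=-2.09,
    opening_hours="Mo-Su 09:00-17:00",
    accepted_waste=["glass", "cardboard", "white_goods"],
)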

Our approach


There were six main tasks to complete:

  1. Get together a list of all the HWRCs in the UK
  2. Build an open data community page to be the centre point
  3. Bulk upload the HWRCs’ data to Wikidata
  4. Manually enter the HWRCs into OpenStreetMap
  5. Create a website to show all the data
  6. Create a connection with OpenStreetMap so that users could use the website to update OSM.

What we built / did

All HWRCs are regulated by a nation’s environmental regulator:

  • For Scotland it is SEPA
  • For Northern Ireland it is NIEA
  • For Wales it is NRW
  • For England it is EA

A list of over 1,000 centres was collated from these four agencies. The data was of variable quality and inconsistent.


This information was added to a wiki page on OpenStreetMap – Household waste in the United Kingdom – along with some definitions to help the community navigate the overly complex nature of the waste industry.


From that, the lists for Scotland, Wales and England were bulk uploaded to Wikidata. This was achieved by processing the data in Jupyter Notebooks, from which formatted data was exported to be bulk uploaded via the QuickStatements tool. The NIEA dataset did not include geolocation information, so further investigation will be needed to add it before those centres too can be uploaded. A Wikidata query has been created to show progress on a map. At the time of writing, 922 HWRCs are now in Wikidata.
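
To give a flavour, here is a minimal sketch of generating QuickStatements (v1) commands from one processed record. P31 (instance of) and P625 (coordinate location) are real Wikidata properties; the class Q-id is left as a placeholder, since we are not asserting here which item was used.

# Sketch: turn one HWRC record into QuickStatements v1 commands.
HWRC_CLASS = "Q000000"  # placeholder Q-id for the centre's class

def quickstatements_rows(centre):
    """Yield tab-separated QuickStatements commands for one centre."""
    yield "CREATE"
    yield f'LAST\tLen\t"{centre["name"]}"'                 # English label
    yield f"LAST\tP31\t{HWRC_CLASS}"                       # instance of
    yield f"LAST\tP625\t@{centre['lat']}/{centre['lon']}"  # coordinates

for row in quickstatements_rows({"name": "Example HWRC", "lat": 57.15, "lon": -2.09}):
    print(row)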

Then the never-ending task of locating, updating, and committing the changes of each of the OSM locations was started.

To present this data the team built a front-end UI, with .NET Core and Leaflet.js, that used Overpass Turbo to query OSM. Local authority boundary polygons were added to highlight the sites that a member of the public can access. By further querying the accepted waste streams, the website is able to indicate which of the centres they can visit will accept the items they want to recycle.
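
The sort of Overpass query involved can be sketched as follows – amenity=recycling with recycling_type=centre is the established OSM tagging for HWRCs, though the bounding box here is illustrative:

# Sketch: fetch recycling centres and their waste streams from Overpass.
import requests

query = """
[out:json][timeout:25];
nwr["amenity"="recycling"]["recycling_type"="centre"](57.0,-2.3,57.3,-2.0);
out center;
"""
resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
for element in resp.json()["elements"]:
    tags = element.get("tags", {})
    # recycling:* tags record accepted waste streams, e.g. recycling:glass=yes
    streams = [k.split(":", 1)[1] for k, v in tags.items()
               if k.startswith("recycling:") and v == "yes"]
    print(tags.get("name", "unnamed"), streams)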

However, the tool is only as good as the data, so to close the loop we added a “suggest a change” button that allows users to post a note on that location on OpenStreetMap, so the wider community can update the data.

We named the website OpenWasteMap and released it into the wild.

The GitHub repo from CTC22 is open and available to access.

Pull requests are also welcome on the repo for OpenWasteMap.

What we will do next (or would do with more time/ funding etc)

The next task is to get all the data up to date, and to keep it up to date; we are confident that we can do this because of the wonderful open data community. It would also be great if we could improve the current front-end interface for users to edit existing waste sites. Adding a single note to a map when suggesting a change could be replaced with an edit form listing the fields we would like to see populated for HWRCs. Existing examples of excellent editing interfaces in the wild include healthsites.io, which provides an element of gamification and completionism, with a progress bar showing how much data is populated for a particular location.

An example entry from Healthsites.io

Source: https://healthsites.io/map#!/locality/way/26794119

While working through the council websites it became clear that there is no standard set of terms for household items, and the lists are not machine friendly. For example, a household fridge can be called:

  • Fridge
  • Fridge Freezer
  • WEEE
  • Large Domestic Electrical Appliance
  • Electric Appliance
  • White Good

A “fun” next task would be to come up with a taxonomy of terms that allows easier classification and understanding for both the user and the machine. Part of this would include matching “human readable” names to relevant OpenStreetMap tags; for example, “glass” as an OSM tag would be “recycling:glass”.
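
A toy sketch of the kind of synonym-to-tag mapping such a taxonomy would formalise – “recycling:glass” comes from the example above, while mapping the fridge synonyms to “white_goods” is our own illustrative assumption:

# Toy synonym-to-OSM-tag mapping; the white_goods value is illustrative.
SYNONYMS = {
    "fridge": "white_goods",
    "fridge freezer": "white_goods",
    "weee": "white_goods",
    "large domestic electrical appliance": "white_goods",
    "electric appliance": "white_goods",
    "white good": "white_goods",
    "glass": "glass",
}

def to_osm_tag(household_term: str) -> str:
    """Map a human-readable item name to an OSM recycling:* tag."""
    return f"recycling:{SYNONYMS[household_term.strip().lower()]}"

print(to_osm_tag("Fridge Freezer"))  # -> recycling:white_goods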


There are other waste sites that the public can use, called Bring Banks or Recycling Points, which are not run by local authorities and are more informal locations for recycling. These too should be added, but some consideration is needed of how that information would be maintained, as their number could be tenfold that of HWRCs.

As we look to the future we must also anticipate the volume of data we may be able to get out of sources like OpenStreetMap and Wikidata once they are well populated by the community. Starting out with a response time of mere milliseconds when querying a dozen points you created in a hackathon is a great start; but as a project grows, the data size can spiral into megabytes and response times into seconds. With around 1,000 recycling centres in the UK, and thousands more of the aforementioned Bring Banks, this could be a lot of data to handle and serve up to the public in a presentable manner.

Swift use of Doric Place Names

Introduction

One of the Code the City 21 projects looked at providing Scots translations of Aberdeenshire place names for display on an OpenStreetMap map. The intended outcomes for that project included a list of translated place names and, potentially, an audio version of each name to guide pronunciation.

I’m a firm believer that open data shouldn’t just become “dusty data left on the digital shelf”, and in “show, don’t tell”. This led me to decide to show just how easy it is to do something with the data created as part of the weekend’s activities, and to make use of outcomes from a previous CTC event (Aberdeenshire Settlements on Wikidata and Wikipedia) – thus taking that data off the digital shelf.

My plan was to build a simple iOS app, using SwiftUI, that would allow the following:

  • Listing of place names in English and their Scots translation
  • View details about a place including its translation, location and photo
  • Map showing all the places and indicating if a translation exists or not

I used SwiftUI as it is fun (always an important consideration) to play with and quick to get visible results. It also provides the future option to run the app as a Mac desktop app.

Playing along at home

Anyone with a Mac running at least Catalina (macOS 10.15) can install Xcode 12 and run the app in the Simulator. The source code can be found on GitHub.

Getting the source data

Knowing that work had previously been done on populating Wikidata with a list of Aberdeenshire Settlements and providing photos for them, I turned to Wikidata for sourcing the data to use in the app.

# Get list of places in Aberdeenshire, name in English and Scots, single image, lat and long

 
SELECT  ?place (SAMPLE(?place_EN) as ?place_EN) (SAMPLE(?place_SCO) as ?place_SCO) (SAMPLE(?image) as ?image) (SAMPLE(?longitude) as ?longitude)  (SAMPLE(?latitude) as ?latitude)
  WHERE {
    ?place wdt:P31/wdt:P279* wd:Q486972 .
    ?place wdt:P131 wd:Q189912 .
    ?place p:P625 ?coordinate.
    ?coordinate psv:P625 ?coordinate_node .
    ?coordinate_node wikibase:geoLongitude ?longitude .
    ?coordinate_node wikibase:geoLatitude ?latitude .
    OPTIONAL { ?place wdt:P18 ?image }.
    OPTIONAL { ?place rdfs:label ?place_EN filter (lang(?place_EN) = "en" )}.
    OPTIONAL { ?place rdfs:label ?place_SCO filter (lang(?place_SCO) = "sco" )}.
    }
GROUP BY ?place
ORDER By ?place_EN

The query can be found in the CTC21 Doric Tiles GitHub repository and run via the Wikidata Query Service.

The query returned a dataset that consisted of:

  • Place name in English
  • Place name in Scots (if it exists)
  • A single image for the place (some places have multiple images, so this had to be restricted to one)
  • Latitude of place
  • Longitude of place

Just requesting the coordinates for each place returned a text string, such as Point(-2.63004 57.5583), which complicated their use later on. Adding the relevant code

?coordinate psv:P625 ?coordinate_node .
?coordinate_node wikibase:geoLongitude ?longitude .
?coordinate_node wikibase:geoLatitude ?latitude .

to the query to generate latitude and longitude values simplified the data reuse at the next stage.

The results returned by the query were exported as a JSON file that could be dropped straight into the Xcode project.

The App

SwiftUI allows data-driven apps to be pulled together quickly. The data powering the app was a collection of Place structures populated with the contents of the JSON exported from Wikidata.

import CoreLocation

struct Place: Codable, Identifiable {
    let place: String
    let place_EN: String
    let place_SCO: String?   // nil where no Scots translation exists
    let image: String?       // nil where no photo exists
    var latitude: String
    var longitude: String

    // Computed properties
    var id: String { return place }
    var location: CLLocationCoordinate2D {
        // Force-unwrapping relies on the exported JSON always carrying
        // numeric latitude/longitude strings
        CLLocationCoordinate2D(latitude: Double(latitude)!, longitude: Double(longitude)!)
    }
}

The app itself was split into three parts: Places list, Map, Settings. The Places list drills down to a Place details view.

List view of Places showing English and Scots translation.
List of places in English and their Scots translation if included in the data
Details view showing place name, photo, translation and map.
Details screen about a place
Map showing places and indication if they have been translated into Scots or not.
Map showing places and indicating if they have Scots translation (yellow) or not (red)

The Settings screen just displays some about information and where the data came from. It acts partially as a placeholder for now with the room to expand as the app evolves.

Next Steps

The app created over the weekend was very much a proof of concept, and so has room for many improvements. The list includes:

  • Caching the location photos on the device
  • Displaying additional information about the place
  • Adding search to the list and map
  • Adding audio pronunciation of each name (the related Doric Tiles project did not manage to add audio during the CTC21 event)
  • Modifying the app to run on the Mac desktop
  • Adding the ability to request an updated list of places and translations

The final item on the above list, the ability to request an updated list of places, is in theory straightforward: all that would be required is to send the query to the Wikidata Query Service and process the results within the app. The problem is that the query takes a long time to run (nearly 45 seconds), and there may be timeout issues before the results arrive.
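
To make the concern concrete, here is a sketch – in Python rather than Swift, just to illustrate the shape of the call – of querying the Wikidata Query Service with a generous timeout and a fallback; the query filename is hypothetical:

# Sketch of an update call against WDQS, with the timeout concern in mind.
import requests

WDQS = "https://query.wikidata.org/sparql"
query = open("places.rq").read()  # the SPARQL query shown earlier

try:
    # The query takes ~45s, so a short client timeout would always fail;
    # an in-app refresh would need a generous timeout and a fallback.
    resp = requests.get(WDQS, params={"query": query, "format": "json"},
                        timeout=60)
    places = resp.json()["results"]["bindings"]
    print(f"fetched {len(places)} places")
except requests.Timeout:
    print("WDQS timed out – fall back to the bundled JSON")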

Aberdeen Built Ships

This project was one of several initiated at the fully-online Code the City 19 History and Data event.

Its purpose is to gather data on Aberdeen-built ships, with the permission of the site’s owners, and to push that refined bulk data, with added structure, onto Wikidata as open data, with links back to the Aberdeen Ships site via a new identifier.

By adding the data for the Aberdeen Built Ships to Wikidata we will be able to do several things, including:

  • Create a timeline of ship building
  • Create maps, charts and graphs of the data (e.g. showing the change in sizes and types of ships over time)
  • Show the relative activity of the many shipbuilders and how that changed
  • Link ship data to external data sources
  • Improve the data quality
  • Increase engagement with the ships database.

The description below is largely borrowed from the ReadMe file of the project’s GitHub repo.

Progress to date

So far the following has been accomplished, mainly during the course of the weekend.

Next Steps?

To complete the project, the following needs to be done:

  • Ensure that the request for an identifier for ABS is completed, for use by us in adding ships to Wikidata. A request to create an identifier for Aberdeen Ships is currently pending.
  • Create Wikidata entities for all shipbuilders and note the QID for each. We’ve already loaded nine of these into Wikidata.
  • Decide how to deal with the list of ships that MAY already be in Wikidata. This may have to be a manual process. Think about how we reconcile this – name, year and tonnage may all be useful (see the sketch after this list).
  • Decide on the best route to bulk upload – e.g. QuickStatements. This may be useful: Wikidata Import Guide
  • Agree a core set of data for each ship that will be parsed from ships.json to be added to Wikidata – e.g. name, year, builder, tonnage, length etc.
  • Create a script to output text that can be dropped into a CSV or other file to be used by QuickStatements (assuming that to be the right tool) for bulk input, ensuring links to shipbuilder QIDs and ABS identifiers are used.
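
On the reconciliation point above, a rough sketch of a first pass – searching Wikidata for items matching a ship’s name via the wbsearchentities API (a real Wikidata API action), with year and tonnage then checked by hand:

# Sketch: find candidate Wikidata matches for a ship by name.
import requests

def wikidata_candidates(ship_name):
    """Search Wikidata for items whose label matches a ship name."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={"action": "wbsearchentities", "search": ship_name,
                "language": "en", "type": "item", "format": "json"},
    )
    return [(hit["id"], hit.get("description", "")) for hit in resp.json()["search"]]

# Manual step: eyeball the descriptions, then check year/tonnage on each Q-id
print(wikidata_candidates("Thermopylae"))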

We will also be looking to get pictures of the ships published on Wikimedia Commons with permissive licences, link these to the Wikidata items, and, in the longer term, increase the number and quality of Wikipedia articles on Aberdeen ships.

Header Image of a Scale Model of Thermopylae at Aberdeen Maritime Museum By Stephencdickson – Own work, CC BY-SA 4.0

2019 – the year in review

Intro

The year just past has been a pivotal one for Code The City: we’ve moved into a new home, expanded our operations, engaged with new communities of people, and started to put in place solid planning which will be underpinned by expansion and better governance. 

Here are some of the highlights from 2019.

Sponsors, volunteers and attendees

We couldn’t do what we do without the help of some amazing people. With just three trustees (Bruce, Steve and Andrew) and Ian our CEO, we couldn’t cover such a range of activities without serious help. Whether you come to our events, volunteer, or your company sponsors our work, you are making a difference in Aberdeen. 

Listing things is always dangerous as the potential to miss people out is huge. But here we go! 

The Data Lab, MBN Solutions, Scotland IS, InoApps and Forty-Two Studio all provided very generous financial support; H2O AI donated to our charity in lieu of sponsorship of a meet-up; and the James Hutton Institute and InoApps also donated laptops for us to re-use at our code clubs. Codify, IFB and Converged Comms provided specific funding for projects, including buying kit for code club and paying for new air quality devices – some of which we have still to build.

Our regular volunteers – Vanessa, Zoe, Attakrit, Charlotte, and Shibo – plus the several parents who stay to help too, all mentor the kids at Young City Coders club. 

Lee, Carlos, Scott, Rob who are on the steering group of the Python User Group meetup. 

Naomi, Ian N, David, and Gavin who are on the steering group for Air Aberdeen along with Kevin from 57 North who supervises the building of new sensor devices. 

The ONE Tech Hub, and ONE Codebase have created a great space not only for us to work in, but also in which to run our public-facing events. 

Everyone who stays behind to help us clear away plates, cups and uneaten food – or nips out to the shops when we run out of milk.

Apologies to anyone we have missed!


And finally YOU – everyone who has attended one of our sessions – you’ve helped make Aberdeen a little better place to live in. Thank you!

Hack weekends

We ran four hack events this year. Here is a quick run-down. 

Air Quality 1

We kicked off 2019 with the CTC15 Air Quality hack in February. This saw us create fourteen new devices, which people took home to install and start gathering data. We also had a number of teams looking at the data coming from the sensors, and some looking at how we could use LoRaWAN as a data-transport network. We set some targets for sensor numbers which were, in retrospect, perhaps a little ambitious. We set up a website (https://airaberdeen.org).

Air Quality 2

Unusually for us we had a second event on the same theme in quick succession: CTC16 in June. Attendees created another fourteen devices. We developed a better model for the data, improved on the website and governance of the project. We got great coverage on TV, on radio and in local newspapers. 

Make Aberdeen Better

CTC17 came along in November. The theme was a broad one – what would you do to make Aberdeen a better place to live, work or play? Attendees chose four projects to work on: public transport, improved methods of monitoring air quality, how we might match IT volunteers to charities needing IT help, and the open data around recycling.

Xmas mini-hack

CTC18, our final hack of the year, again had no fixed theme and was timed to fit into a single day. We asked participants to come and work on a pet side-project, or to help someone else with theirs. Despite a lower turnout in the run-up to Christmas, we still had eight projects being worked on during the day.

New home, new services

In the late summer the ONE Tech Hub opened and we moved in as one of the first tenants. So far we rent a single desk in the co-working space, but we aim to expand that next year. The building is great – which is why we now run all of our events there – and as numbers grow it promises to become the bustling centre of Aberdeen’s tech community. 

Having started a new Data Meet-up in 2018, we moved that to ONE Tech Hub along with our hack events. We also kicked off a new Python User Group in September this year, when we also started delivering Young City Coders sessions to encourage youngsters to get into coding, primarily using Scratch and Python. 

We also ran our first Wikimedia editathon in August – using Wikipedia, Wikimedia Commons and Wikidata to capture and share some of the history of Aberdeen’s cinemas. We are really supportive of making better use of all the Wikimedia tools; Ian recently attended a three-day course to become a Wikimedia trainer, and at CTC18 there were two projects using Wikidata and Wikimedia Commons too. Expect much more of this next year! 

Some recognition and some numbers

We’ve been monitoring our reach and impact this year.  

In March we were delighted to see that Code The City made it onto the Digital Social Innovation For Europe platform – a project to identify organisations and projects across the EU making an impact using tech and data for civic good. 

In July we appeared for the first time in an academic journal – in an article about using a hackathon to bring together health professionals, data scientists and others to address health challenges. 

We will be launching our  dashboard in the New Year. Meantime, here are some numbers to chew on. 

Hack events

We ran four sessions, detailed above. We had 102 attendees and 15 facilitators who put in a total of 1,872 hours of effort on a total of 20 projects. All of this was for civic benefit. 

Young City Coders

We ran six sessions of our Young City Coders club, which started in September. The sessions had a total of 114 kids attending, with 28 mentors giving up two hours or more each time. 

Data Meet-ups

In 2019 we had 12 data meet-ups with 28 speakers and 575 attendees! This is becoming a really strong local community of practitioners and researchers from academia and local industry. 

Python Meet-ups

Each of our four sessions from September to December had a speaker, and attracted a total of 112 attendees who were set small project tasks. 

The year ahead

2020 is going to see CTC accelerate its expansion. We’re recruiting two new board members, and we have drawn up a business plan which we will share soon. That should see us expand the team and strengthen our ability to drive positive societal change through tech, data and volunteering. We have two large companies considering providing sponsorship for new activities next year.  We’ll also be looking at improving our fundraising – widening the range of sources that we approach for funding, and allowing us to hire staff for the first time. 

Open Data

We’re long-term champions of open data, as many of you will have read in previous posts. We’ve identified the need to strengthen the open data community in Scotland and to contribute beyond our own activities. Not only has Ian joined the civic side of the Open Government Partnership, where he is leading on Commitment 3 to improve open data provision, but he has also joined the board of the Data Commons Scotland programme at Stirling University. 

Scottish Open Data Unconference

Beyond that, we have created, and are going to run, the Scottish Open Data Unconference in March. This promises to be a great coming together of the data community, including academia, government, developers, and publishers. If you haven’t yet signed up please do so now – only 11 of 90 tickets are still available. We’ll also need volunteers to help run it: scribes for sessions, helping to orientate new visitors, covering reception, photography, blogging etc. Let us know how you could help. 

We look forward to working with you all in the New Year and wish you all a peaceful and relaxing time over the festive period. 

 

Ian, Steve, Bruce and Andrew

[Photo by Eric Rothermel on Unsplash]