AQ – what’s next?

For more background read this post and this one. 

Last weekend we hosted the second Aberdeen Air Quality hack weekend in recent months. Coming out of it, there are a number of tasks we need to work on next. While some of these fall to the community to deliver, there are also significant opportunities for us to work with partners.

The Website

While the Air Aberdeen website is better, we still need to apply the styling that was created at the weekend.

Draft web design

Humidity Measurement

We’ve established that the DHT22 chips we use in the standard Luftdaten device model struggle in our maritime climate: they get saturated and stop reporting meaningful values. The fix is to use BME280 chips in their place. These will continue to give humidity and temperature readings, plus pressure, and thanks to the different sensing technology they handle the humidity better. Knowing local humidity is important (see weather data below). So, we need to adapt the design of all new devices to use these chips, and retrofit the existing devices with them.

Placement of new devices

We launched in February with a target of 50 sensors by the end of June and 100 by the end of the year. So far attendees have built 55 devices, of which 34 are currently, or have recently been, live. That leaves 21 in people’s hands that are still to be registered and turned on. We’re offering help to those hosts to make them live.

Further, with the generous sponsorship of Converged, Codify, and now IFB, we will shortly build 30 more devices, taking us to a total of 85. We’ve also had an approach from a local company who may be able to sponsor another 40. So, it looks like we will soon exceed the 100 target. Where do we locate these new ones? We need a plan to place them strategically around the city where they will be most useful, which is where the map, above, comes in.

Community plus council?

We really want to work with the local authority on several aspects of the project. It’s not them versus us. We all gain by working together. There are several areas that we could collaborate on, in addition to the strategic placement of future devices.

For example, we’ve been in discussions with the local authority’s education service with a view to siting a box on every one of the 60 schools in the city. That would take us to about 185 devices – far in excess of the target. Doing that needs funding, and while getting the devices onto the network is technically trivial, ensuring that they survive on the exterior of the buildings might be a challenge.

Also, we’ve asked, but had no response to, our request to co-locate one of our devices at a roadside monitoring station, which would allow us to check the correlation between the outputs of the two. We need to pursue that again.
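
If that co-location happens, the comparison itself is straightforward. Here’s a minimal sketch, assuming both feeds have been exported to CSV files with timestamp and PM2.5 columns (the filenames and column names are hypothetical):

```python
import pandas as pd

# Load PM2.5 series from both sources (filenames and columns are hypothetical).
ours = pd.read_csv("community_sensor.csv", parse_dates=["timestamp"])
official = pd.read_csv("roadside_station.csv", parse_dates=["timestamp"])

# Resample both to hourly means so the two series are directly comparable.
ours = ours.set_index("timestamp").resample("1H").mean()
official = official.set_index("timestamp").resample("1H").mean()

# Join on the shared timestamps and compute the Pearson correlation.
joined = ours.join(official, lsuffix="_ours", rsuffix="_official").dropna()
print(joined["pm25_ours"].corr(joined["pm25_official"]))
```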

Comparing our data suggests that we can more than fill in gaps in the local council’s data. The map of the central part of Aberdeen in the image above shows all six official sensors (green) and 12 of the 24 community sensors we have in the city (red). You can also see large gaps where there are no sensors, which again shows the need for strategic placement of the new ones.

We’ve calculated that with a hundred sensors we’d have 84,096,000 data observations per year for the city, all as open data. The local authority, with six sensors each publishing three items of data hourly, has 157,680 readings per annum – about 0.19% of the community readings (and if we reach 185 devices then ACC’s data is about 0.10%, or 1/1000th, of the community data). The community data, of course, besides being properly open-licensed, also has much greater granularity and geographic spread.
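
For transparency, here is the arithmetic behind those figures as a quick sketch (the community reporting rate of four values roughly every 150 seconds is an assumption inferred from the totals):

```python
# Community network: 100 sensors, each reporting 4 values (PM2.5, PM10,
# temperature, humidity) roughly every 150 seconds (assumed rate).
community = 100 * 4 * (24 * 3600 // 150) * 365   # = 84,096,000 per year

# Council network: 6 sensors, each publishing 3 items of data hourly.
council = 6 * 3 * 24 * 365                       # = 157,680 per year

print(f"{council / community:.2%}")              # -> 0.19%
```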

Weather data

We need to ensure that we gather historic and new weather data and use it to check whether adjustments are needed to PM values. Given that the one-person team who was going to work on this at CTC16 disappeared, we first need to set up that weather data gathering, then apply algorithms to adjust the data where needed, then make the adjusted data available.
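
For the adjustment step, one approach used in the low-cost-sensor literature is a κ-Köhler-style humidity growth-factor correction, since optical sensors over-read when particles swell in humid air. A minimal sketch – the κ value here is purely illustrative and would need to be fitted against reference data:

```python
def humidity_corrected_pm(pm_raw: float, rh: float, kappa: float = 0.4) -> float:
    """Correct a raw optical PM reading for hygroscopic particle growth.

    kappa=0.4 is an illustrative placeholder, not a fitted value.
    """
    aw = min(rh, 95.0) / 100.0  # water activity, capped so the factor stays finite
    growth = 1.0 + (kappa / 1.65) / (1.0 / aw - 1.0)
    return pm_raw / growth

# Example: a raw 40 µg/m³ reading at 90% relative humidity.
print(humidity_corrected_pm(40.0, 90.0))  # noticeably lower corrected value
```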

Engagement with Academia

We need to get the two local universities aboard, particularly on the data science work. We have some academics and post-grads who attend our events, but how do we get the data used in classes and projects? How do we attract more students to work with us? And, again, we need to get schools not only hosting the devices but the pupils using the data to understand their local environment.

The cool stuff

Finally, when we have the data collected, cleaned, and curated, and APIs in place (from the green up through orange to red layers below), we can start to build some cool things (the blue layers).

AQA Data Layers

These might include, but are not limited to:

  • data science-driven predictive models of forecast AQ in local areas,
  • public health alerts,
  • mobile apps to guide you where it is safe to walk, cycle, jog or suggest cleaner routes to school for children,
  • logging AQ over time and measuring changes,
  • correlating local AQ with hospital admissions for COPD and other health conditions,
  • informing debate and the formulation of local government strategy and policy.

As we saw at CTC16, we could also provide the basis for people to innovate using the data. One great example was the hacked LED table-top lamp which changes colour depending on the AQ outside. Others want to develop personalised dashboards.

The possibilities, as they say, are endless.

Aberdeen Air Quality

Update: A write-up of this event, which took place on 16-17 February 2019, is available on this page.

How much do you care about the quality of the air you breathe as you walk to work or university, take the kids to school, cycle or jog, or open your bedroom window?

How good is the air you are breathing? How do you know? What are the levels of particulates (PM2.5 or PM10) and why is this important?

PM2.5 comparison

When do these levels go up or down? What does that mean?

Who warns you? Where do they get their data, and how good is it?

Where do you get information, or alerts that you can trust?

We aim to sort this in Aberdeen

Partnering with community groups, Aberdeen University and 57 North Hacklab, we are working on a long-term project to build and deploy community-built and hosted sensors for PM2.5 and PM10. We aim to have fifty of these in place across Aberdeen in the next few months. You can see some early ones in place and generating data here.

The first significant milestone will be the community workshop we are holding on 16-17 February 2019. If you want to be part of it, you can get a ticket here. But be quick – they are going fast.

Weekend activities

There are loads of things you can do if you attend.

Sensor Building

For a small cost, you can come along and build your own sensor with someone to help you, and take it home to plug into your home wifi. It will then contribute data for your part of the city.

But we will be doing much more than that.

Working with the data

If you have experience in data science or data analysis, or if you want to work with those who do, there are loads of options to work with the data from existing and future sensors.

These include:

  • Allowing historical readings to be analysed against the official government sensors for comparison.
  • Using the data (wind speed, humidity, …) to build live maps of readings and identify sources of emissions.
  • Compensating sensor readings for factors which affect pollution levels, to attempt to understand the emissions of pollutants in a given area.
  • Building predictive models of future pollution.
  • Fixing a minor issue with the existing collected data (see https://github.com/opendata-stuttgart/madavi-api/issues/8).
  • Building an API to allow querying of the Luftdaten sensor data (a fetch sketch follows this list).
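
As a starting point for several of these, readings are already retrievable over HTTP. A minimal sketch against the public Luftdaten API as it existed at the time of writing (the sensor ID is a placeholder):

```python
import requests

SENSOR_ID = 12345  # placeholder: substitute your own device's API ID

# The v1 endpoint returns the last few minutes of readings as JSON.
url = f"https://api.luftdaten.info/v1/sensor/{SENSOR_ID}/"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

for reading in resp.json():
    # Each reading carries a timestamp and a list of value_type/value pairs.
    values = {v["value_type"]: v["value"] for v in reading["sensordatavalues"]}
    print(reading["timestamp"], values.get("P1"), values.get("P2"))  # P1=PM10, P2=PM2.5
```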

Software development

If you are a software developer or studying to be one, you could

  • Create alert systems to warn of anticipated spikes in pollutants, perhaps using Twitter or email (a rough sketch follows this list).
  • Add to the code for the Luftdaten sensors to allow connection over a LoRaWAN interface.
  • Create LoRaWAN server code to allow sensors to feed up to the Luftdaten website.
  • Security-test the IoT code used by the Luftdaten sensors.
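
To give a flavour of the first item, here is a rough alert sketch using email; the threshold, addresses and SMTP host are all placeholders, and the latest reading would come from the API call sketched above:

```python
import smtplib
from email.message import EmailMessage

PM25_ALERT_THRESHOLD = 25.0  # µg/m³ – illustrative, not an official limit

def send_alert(pm25: float) -> None:
    """Email a warning when PM2.5 exceeds the threshold (SMTP details are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = f"Air quality alert: PM2.5 at {pm25:.1f} µg/m³"
    msg["From"] = "alerts@example.org"
    msg["To"] = "subscribers@example.org"
    msg.set_content("PM2.5 has exceeded the alert threshold in your area.")
    with smtplib.SMTP("smtp.example.org") as smtp:
        smtp.send_message(msg)

latest_pm25 = 31.2  # in practice, parsed from the sensor API response
if latest_pm25 > PM25_ALERT_THRESHOLD:
    send_alert(latest_pm25)
```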

Community Groups / Educators / Activists / Journalists

You don’t have to be a techie! If you are a concerned citizen, a community activist, a teacher, or a journalist, there is so much you could do. For example:

  • Learn how to understand the data.
  • Identify how this could assist with local issues, campaigns, and educational activities.
  • Help us capture the weekend by blogging or creating digital content.

Even if you just want to be part of the buzz and keep the coffees and teas flowing, that is a great contribution.

See you there!

Ian, Bruce, Andrew and Steve

Header image by Jaroslav Devia on Unsplash

2018 – A year in review

2018 has been a really busy year for us. Here are all the things that we delivered.

Open Data Camp

We hosted UK Open Data Camp’s first ever visit north of the border in November. Over a hundred people travelled to Aberdeen for two days of unconferencing, with 44 sessions run on a variety of data-related topics. Some people went for an Aberdeen version of the Joy Diversion walk around old Aberdeen, and others discovered the pleasure of logging Open Benches. The feedback was overwhelmingly positive and there were loads of write-ups.

Code the City Hack Weekends

We had two great Code The City events: CTC13 – Hacking our Relationship with Alcohol, and CTC14 – Archaeology. Both were well attended and produced some very interesting results. The first saw us tackling some interesting real-world problems, helping people to overcome them, and building a machine learning model to predict whether a beer can design would be more likely to be perceived as alcoholic or not. A report of the weekend is being written as an academic paper for a forthcoming health conference!


The second weekend saw us scanning and creating 3D renders of six real skeletons with mobile phones. We also began to create a 3D model of the church in which the dig took place, and to generate data from written logs to populate it.

Well done to all who participated. We got some great feedback on each event.

Data Meetups

Wearing our ODI hats, we launched the new monthly Data Meetups in April – and managed to squeeze in nine of them this year. These are really well attended, and saw over 300 people in total coming out on a Tuesday night to hear speakers from across the country on a diverse range of data topics. These ranged from Creating a Data Culture in your business, to public Open Data; from the data of Scottish Football to the use of blockchain in Oil and Gas; and from the use of IoT in Agriculture to extracting data from photos published on Flickr in order to assist conservation.

Open Data

We’ve also been lobbying the Scottish Government and the city council on Open Data, as Ian has been writing on our sister site. That is starting to bear fruit. Aberdeen City Council have soft-launched a new open data platform, and are recruiting a manager for their open data work. While this is good, it is not yet as impressive as Dundee’s and Perth’s new platforms. The Scottish Cities Alliance are recruiting a new programme manager, and Ian has been invited to be part of a round table discussion on the way forward for Open Data hosted by the Scottish Government next February. It sounds like things will start to move in the right direction in 2019!

Research

Ian and Andrew have worked with ODI HQ to run two local workshops, contributing to two national pieces of research: the first on the effects of Peer to Peer markets on accommodation, and a second on what barriers there are to the better use of Ordnance Survey data and services.

Here’s to an equally successful 2019! Have a great festive break folks!

Ian, Andrew, Steve, Bruce


An open letter to Aberdeen City Council

It has been well documented that there is a problem with Aberdeen City Council and their approach to Smart City and Open Data in particular. See these posts, these requests and this GitHub page from a project at CTC11, where we tried to help fix things. Today, a Finnish researcher on Smart Cities posted this on Reddit! International reputation? What international reputation!

Now it appears that in last week’s relaunch of the Aberdeen City Council website, the council has ditched masses of content. This includes the city-wide What’s On, which was until recently the most heavily used part of the council website and provided an extremely useful community resource.

More digging – well, Googling some popular terms for council website content and functions – returns nothing but 404 errors. See the list below for some examples.

When the site last underwent a major update, in 2006, the small team took just six months on the transition from beginning to end. No content was lost or broken, and with URL rewriting and redirects they ensured that everything worked on day one.

The council have been working on the current relaunch – on and off, as managers were swapped around or dispensed with – for two years! And the mess of the site, with massive holes in content and functionality, far outweighs the much-improved look and feel.

So, what is the plan to restore content, much of which is a matter of public record?

We, as tax-payers, have paid for the creation of functionality and information which is of significant public use. So, where has it gone?

For example, where is:

Don’t the citizens of Aberdeen deserve better than this?

Maybe someone would care to make an FOI request to the city council – to ask what data the decision-making on the transfer of content and functionality was based on, and to get a copy of the website stats for the last three months? I think they are fed up with me.

Ian

Final presentations at CTC10 – Perth

We had four presentations at the final pitch session at CTC10.

We have uploaded these to Vimeo below (trimming them for time to just the core presentations, and eliminating intros and questions):

Team one: https://vimeo.com/236647324

Team two: https://vimeo.com/236648125

Team three: https://vimeo.com/236649327

Team four: https://vimeo.com/236650501

These brought an enjoyable and productive couple of days to a close.

Well done to all participants involved!

CTC9 – Near the finish line

Here’s a quick update before the big show-and-tell later on.

Team: ALISS API database

The team has developed a draft version of the website tucked away on a test server. They have established the first functional search using the category ‘social isolation’. It returns a list of service providers in the area that is drawn from the three source databases. This is a big step forward, as we now know how to program a search and are able to deliver visible results on a user interface.

The team is also working on searches based on location by postcode or radius.

One expected challenge is the extraction of information from differently formatted data sources. For example, one source database does not provide contact details in dedicated address fields but in a more general description box.
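
A first pass at that could be as simple as pulling anything postcode-shaped out of the free text. A rough sketch (the regex covers common UK postcode shapes, not every edge case, and the example text is invented):

```python
import re

# Matches common UK postcode shapes, e.g. "AB24 3FX"; deliberately not exhaustive.
POSTCODE_RE = re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE)

description = (
    "Drop-in cafe every Tuesday, St Mary's Hall, King Street, "
    "Aberdeen AB24 3FX. All welcome."
)

match = POSTCODE_RE.search(description)
if match:
    print(match.group().upper())  # -> AB24 3FX
```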

Team: Soul Cats

This group went back to focusing on the public end users. They came up with various names for this new website that would make it easy to find, playing with words from Scots dialect and proper King’s English. All suggestions were Googled to see whether they already exist or are buried amongst a ton of other results. Ideally, we want something unique!

The team suggested submitting a selection of words to a public forum in order to collect opinions or votes.

Team: The Professionals

The Professionals are a spin-off group from the Soul Cats. It’s a rollercoaster with those Cats! They went back to focusing on the value of this website for health care professionals. In a structured approach they answered four key questions:

  1. Who are key stakeholders?
  2. What are key relationships?
  3. What are key challenges?
  4. What are the gains right now if this project went live?

Team gathering

CTC9 – Sunday Morning

What a beautiful sunny morning for making my way over to CTC9 HQ. It’s a slow start today. Hey, it’s Sunday…

Since we didn’t have a close-out meeting last night, we caught up with everybody’s progress in a kick-off meeting this morning. Make sure to read the update from yesterday afternoon beforehand.

Team: ALISS API

The data is flowing! We now have access to all three data sources: ALISS, GCD and MILO. MILO too? Yes! As it turns out, computing student Mikko has been working on hooking up MILO to the project as part of Team ALISS API.

Linking up GCD hit a stumbling block after the initial success, because the WiFi network ended up blocking the website used for our API. By the sounds of it, this is in hand though.

Now that we are connected to all the databases, they are being combined by matching titles, identifying duplicates, etc. The result will provide access to searchable data from all sources via one URL. James has already launched a temporary live demo page that connects to the databases. The first rough draft is based on storyboards James designed with input from the user-focused teams last night. The website is currently at an early stage, so some buttons will work and some won’t. Feel free to rummage around.
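
The title matching can start very simply: a fuzzy similarity score from the standard library and a cut-off to tune. A sketch (the threshold is a guess):

```python
from difflib import SequenceMatcher

def is_probable_duplicate(title_a: str, title_b: str, threshold: float = 0.85) -> bool:
    """Flag two service titles as likely duplicates using fuzzy string similarity."""
    a, b = title_a.lower().strip(), title_b.lower().strip()
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(is_probable_duplicate("Aberdeen Befriending Service",
                            "Aberdeen befriending service"))  # -> True
```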

There is also a shared file repository on GitHub. It harbours the user interface code, the backend REST API, and photos from our brainstorming sessions.

The next big goal is to develop the visual interface further to make search results visible to the website user. At the moment results appear only in code. The team also suggested that functionalities for location-based search and prioritising search results will require more development.

Sunday team photo

Team: Soul Cats

Teams Stripy Tops and Access All Areas have merged under the new name ‘Soul Cats’ (inspired by a T-shirt). This move made sense because both have been targeting user groups – the professional user (Stripy Tops) and the public (Access All Areas) – and now felt that their paths were converging.

The teams have drawn up more specific suggestions on user requirements based on the needs of different target groups. It’s quite impressive how yesterday’s wide-roaming discussions are now funneling into concrete scenarios and solutions. The obvious conclusion is to make the web interface simple – clear language, natural keywords, self-evident icons, sensible menu structure etc.

There was some discussion around use cases:

  • options for geo-location of service providers relative to user addresses
  • including info on mobility/access issues e.g. stairs
  • including info on parking, public and community transport connections
  • including photos of the service location, exteriors and interiors, so that people easily recognise the place once there

The next steps will involve working closer with our coders and coming up with names for the page, categories etc.