We can use open data to make smart cities “searchable”

Lahore, Pakistan: the writer's hometown, and just one of the cities that could use some smart data to help with its traffic problems.

As we all rush to surf upon the sweeping tide of smarter cities, it pays to just pause and think – what exactly is a smart city? Or to put it another way, what is it that makes a city "not so smart" to start with?

As urbanisation has grown, so have the distances between people. Communication systems have found ways to link people who are geographically distant; but they have yet to solve the problem of how to recreate the relationship between people and their environment, at a time when both are constantly changing.

That relationship is fairly stable in rural or close-knit environments, mainly because information about one's surroundings is available at all times. Metropolitan areas lack that – and that, I believe, is what makes them "not so smart". So the problem to solve here is one of providing, or rather exchanging, information. That is the crux of the idea of "smart cities", especially in the digital domain.

Most of this information is already available, published or unpublished. But it's usually holed up in silos, unavailable to people when they need it most. The recent push by many public organisations to publish this information as Open Data has helped, a bit; the next step, if it's to be useful, is to make it available when it is actually required.

Unfortunately, a major chunk of this published open data is in archaic formats, or formats which are extremely difficult to use online: data published in PDF, for example, or in formats readable only by proprietary software. This data serves well for one-off research or study purposes, but is absolutely useless when a system needs a steady flow of information.

Open Data comes in all shapes and sizes but, in the context of smarter cities, probably the most relevant dimensions are space and time. That's why I founded Noustix, a system which collects and stores spatio-temporal data from multiple open sources, and turns it into a single stream for third parties. In this way, we can combine multiple data sets to give fresh insights. Road network data can be combined with mobile phone use data to show traffic congestion, for example. Or you can combine traffic accident data with demographic data to observe patterns in areas with varying population and resource densities.
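To make that concrete, here is a minimal sketch – not the Noustix implementation, just an illustration – of fusing two hypothetical open datasets, accident records and area demographics, on a shared district key so that accident counts can be normalised by population. Every column name and figure below is made up for the example.

```python
# Illustrative sketch only: combining two hypothetical spatio-temporal
# open datasets on a shared "district" key. All figures are invented.
import pandas as pd

accidents = pd.DataFrame({
    "district": ["Gulberg", "Gulberg", "Model Town", "Johar Town"],
    "hour":     [8, 18, 9, 17],   # hour of day each accident was logged
})

demographics = pd.DataFrame({
    "district":   ["Gulberg", "Model Town", "Johar Town"],
    "population": [450_000, 300_000, 600_000],   # made-up population figures
})

# Count accidents per district, then normalise by population.
counts = accidents.groupby("district").size().rename("accidents").reset_index()
merged = counts.merge(demographics, on="district", how="left")
merged["accidents_per_100k"] = 100_000 * merged["accidents"] / merged["population"]

print(merged.sort_values("accidents_per_100k", ascending=False))
```

The same join pattern works for any pair of datasets that share a spatial key and a timestamp; the hard part in practice is getting both sides into machine-readable form in the first place.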

But the most interesting applications come when this open data is further combined with crowd-sourced data. Geo-tagged tweets, for example, can be used to see the patterns of conversation at any place, or any time. There is an argument for using the tweets people send when, say, they are frustrated at traffic jams, to identify the congestion hot spots in a city.

This can be done by semantic and text analysis, but a much more accurate way is to ask people to use particular hashtags for specific purposes: when people are stuck in traffic, alongside the usual string of frustrated rants and obscenities, they can add an unimaginative tag like #stuckinTraffic.

Evaluated over a period of weeks or months, this particular tag will reveal the traffic congestion patterns in the city. Comparing and combining this with published open data on traffic and road networks will give you a very precise picture of troubled areas. Moreover, by using social data, apps can stream the data in live and show congestion hot spots in real time. You can even throw news feeds from the media into the mix.
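As a rough sketch of how that hashtag signal could be turned into a map – the post structure and tag below are stand-ins for whatever a social media API actually returns – the snippet filters geo-tagged posts for #stuckinTraffic and counts mentions per coarse grid cell:

```python
# Illustrative sketch: count #stuckinTraffic mentions per ~1 km grid cell
# to surface likely congestion hot spots. The posts are invented examples.
from collections import Counter
from datetime import datetime

posts = [
    {"lat": 31.5204, "lon": 74.3587, "text": "Going nowhere #stuckinTraffic",
     "time": datetime(2014, 6, 2, 8, 45)},
    {"lat": 31.5210, "lon": 74.3590, "text": "Again?! #stuckinTraffic",
     "time": datetime(2014, 6, 2, 9, 5)},
    {"lat": 31.4700, "lon": 74.2700, "text": "Lovely morning for a drive",
     "time": datetime(2014, 6, 2, 9, 10)},
]

def grid_cell(lat, lon, size=0.01):
    """Snap a coordinate to a coarse grid cell (roughly 1 km at this latitude)."""
    return (round(lat / size) * size, round(lon / size) * size)

hot_spots = Counter(
    grid_cell(p["lat"], p["lon"])
    for p in posts
    if "#stuckintraffic" in p["text"].lower()
)

for cell, count in hot_spots.most_common():
    print(f"grid cell {cell}: {count} congestion reports")
```

In a live system the same counts would be bucketed by time of day and joined against the open road-network data described above, rather than printed from a hard-coded list.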

Combining open data with crowd-sourcing is a powerful way to organise, sort, filter and analyse data, in a way that makes it more useful. The "crowd" part helps to validate the authenticity and relevance of the data in a way no individual or organisation could.

As an analogy, compare cities with the web. Until search engines came along, the web was all about making and keeping extensive lists and bookmarks of useful websites or forums. But search engines, especially those which use human input to improve results, changed all that. Suddenly, all you needed to find relevant information was a query.

That's our aim for smart cities as well. A city is currently browse-able; we need to make it searchable. And social and crowd sourced data combined with open data can do that.

Areeb Kamran is a British Council Global FUTR Lab winner and chief executive of the smart city platform, Noustix.

He is one of ten cultural innovators who took part in the first Global FUTR Lab, a partnership between the British Council – the UK's international organisation for cultural and educational opportunities – and FutureEverything, the award-winning innovation lab for digital culture and annual festival.

Podcast: Global Britain and local Liverpool


This week, two disparate segments linked by the idea of trading with the world. Well, vaguely. It’s there, but you have to squint.

First up: I make my regular visit to the Centre for Cities office for the Ask the Experts slot with head of policy Paul Swinney. This week, he teaches me why cities need businesses that export internationally to truly thrive.

After that, we’re off to Liverpool, with New Statesman politics correspondent Patrick Maguire. He tells me why the local Labour party tried to oust mayor Joe Anderson; how the city became the party’s heartlands; and how it ended up with quite so many mayors.

The episode itself is below. You can subscribe to the podcast on Acast, iTunes, or RSS. Enjoy.

Jonn Elledge is the editor of CityMetric. He is on Twitter as @jonnelledge and on Facebook as JonnElledgeWrites.

Skylines is produced by Nick Hilton.
