The Georgian vicar whose ideas could have saved Thameslink passengers from misery

London Blackfriars: not a Thameslink train in sight. Image: Getty.

The Reverend Thomas Bayes was born in Hemel Hempstead, Hertfordshire, in 1701. He grew up in London’s Southwark, and died in Tunbridge Wells, Kent in 1761. Had he lived 300 years later, a railway running from Hertfordshire to Kent via London Bridge would have been rather useful to him. And if the people who currently run that railway had paid more attention to him, everyone on the route would be a lot happier.

The Thameslink service links commuter towns to the north and south of London via the city centre. After a major timetable change this May, the network descended into chaos. Instead of the intended massive increase in services, the service through London collapsed.

Things got so bad that Govia Thameslink Railway (GTR) had to hire extra security staff to defend train crew from angry passengers. GTR’s CEO announced his resignation, although he’ll stay in place until the company finds someone willing to take on the poisoned chalice.

So what happened? First, some background. In the 1980s, British Rail (BR) reopened a disused freight line across London. This allowed BR to shift commuter services away from terminal stations, and free up peak hour space at St Pancras and Blackfriars.

This scheme worked so well that the railway went for a second round. This programme was called Thameslink 2000, after the year it was supposed to be finished. It’s nearly finished now (that’s another story). The timetable change was supposed to benefit from the new infrastructure.


Instead it collapsed. London Reconnections has outlined the underlying issues: in short, new trains were delivered late, so drivers didn’t know how to drive them; when GTR took over the franchise in 2014 the previous operator hadn’t been training new drivers, so it’s been playing catch-up; GTR’s training programme relies on drivers working overtime, which many of them don’t want to do; some new tunnels didn’t get handed over until far too late; and GTR didn’t transfer drivers to new depots in time. This meant that many drivers weren’t qualified to drive the new trains along the new routes in time for the change.

Some people might have decided to cancel at this point. But GTR had a cunning plan.

For a train to carry passengers, it needs to have a driver qualified to drive the route that it’s on, a driver qualified to drive the train, and a driver qualified to carry passengers. These don’t have to be the same person, so if you must, you can have three people in the cab, one of whom is qualified to do each. This isn’t ideal; but it’s safe, and it works.

GTR worked out that – between the drivers it had who were trained on the new trains, the drivers it had who were trained on the new routes, and the not-passenger-qualified drivers who had tested the new trains before they entered passenger service – it had enough drivers to run the new timetable by doubling or tripling up in the cab.
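The doubling-up plan amounts to a set-cover check: a cab crew works if the drivers' combined qualifications span the route, the train type, and passenger carriage. A minimal sketch, with invented driver names and a deliberately simple roster (the real qualification data is obviously far richer):

```python
from itertools import combinations

# Hypothetical drivers, each with a subset of the three qualifications
# a passenger service needs. Names and data are invented for illustration.
drivers = {
    "Alan":    {"route"},       # signed for the route only
    "Barbara": {"passengers"},  # passenger-qualified only
    "Chris":   {"train"},       # test driver: knows the train, not passenger-qualified
}

NEEDED = {"route", "train", "passengers"}

def crews_covering(drivers, needed=NEEDED, max_size=3):
    """Yield every crew of up to max_size drivers whose combined
    qualifications cover everything a passenger service needs."""
    for size in range(1, max_size + 1):
        for crew in combinations(drivers, size):
            if needed <= set().union(*(drivers[d] for d in crew)):
                yield crew

print(list(crews_covering(drivers)))  # [('Alan', 'Barbara', 'Chris')]
```

Here no driver, and no pair of drivers, covers all three requirements, so the only workable crew is all three in the cab at once.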

But it didn’t. Which is where the Reverend Bayes comes in.

The Reverend Thomas Bayes. Image: Wikimedia Commons.

If you’re working out the number of drivers you need based on traditional probabilities (statisticians call this ‘frequentism’), you look at five factors: the total number of trains needed, the number of drivers qualified for each part of the route, the numbers qualified for the right trains, the number qualified to carry passengers, and sickness/absenteeism rates.

Then you can work out the number of trains to run, based on the number of people likely to be around and qualified. On the evidence we’ve seen so far, GTR appear to have done this, and found that they were, narrowly, capable of running the service.
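A frequentist plan of this kind is just arithmetic on headcounts: multiply the qualified pool by the expected availability rate and compare it to the number of trains. A sketch with illustrative numbers (these are not GTR's real figures):

```python
# Illustrative numbers only -- not GTR's actual data.
trains_needed   = 100   # peak services that need a crew
route_qualified = 120   # drivers signed for the new routes
train_qualified = 110   # drivers signed for the new trains
availability    = 0.92  # 1 minus the assumed sickness/absence rate

# The frequentist calculation: scale the binding headcount by the
# availability rate and check it against demand.
expected_available = min(route_qualified, train_qualified) * availability
print(expected_available)                   # 101.2
print(expected_available >= trains_needed)  # True: narrowly enough, on paper
```

On these numbers the plan passes, by a margin of about one driver. What the averaging hides is that the 101.2 "expected drivers" are not interchangeable: which specific people are absent matters.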

But there’s a problem here: people don’t come in percentages. Either you have a whole train driver or no train driver at all. And if you don’t have a train driver qualified to drive the train to Finsbury Park when it arrives at London Bridge at 7:30am on a Monday, then your whole timetable is stuffed.

Agent-based modelling is a more complicated way of looking at things than simple probability. But it has a huge advantage over simple statistical models, which is that it can deal with lumpy problems like train drivers. It requires a lot of hard maths, of the sort pioneered by the Reverend Bayes.

You use this maths to set up simulations of what will happen if you try and run the trains you have on the routes you have, using the drivers who you have. So your computer becomes a gigantic nerdy train simulator game, running the entire train timetable thousands of times, and seeing what happens each time you try to run it.

The conditions are slightly different each time: on run 3, the driver who’s off sick is Alan from Luton who is qualified to drive to Brighton but not Maidstone; on run 15, it’s Barbara from Brighton, who is qualified to drive to London Bridge but not Cambridge. The closer you can match the simulated agents to your real roster, the more accurate the simulation is.


Using this model, GTR would have found that having the right number of qualified crew is no use in itself: one person in the wrong place at the wrong time can make the whole thing fall over, even if there’s another qualified person on shift, because that qualified person is an hour’s cab ride away.

Because they didn’t do this kind of modelling, they took false reassurance from their data showing that they had enough crew. The first time their assumptions were put to the test was the first day of the real timetable – when it all fell to pieces.

If GTR had used agent-based modelling to test the new timetable, they would have had to ditch it at the last minute, which would have been horribly embarrassing. Maybe that’s why they didn’t do it. But looking back, it would have been much less embarrassing than what actually happened.

Want more of this stuff? Follow CityMetric on Twitter or Facebook.

Self-driving cars may be safe – but they could still prevent walkable, liveable communities

A self-driving car, driving itself. Image: Grendelkhan/Flickr/creative commons.

Almost exactly a decade ago, I was cycling in a bike lane when a car hit me from behind. Luckily, I suffered only a couple of bruised ribs and some road rash. But ever since, I have felt my pulse rise when I hear a car coming up behind my bike.

As self-driving cars roll out, they’re already being billed as making me – and millions of American cyclists, pedestrians and vehicle passengers – safer.

As a driver and a cyclist, I initially welcomed the idea of self-driving cars that could detect nearby people and be programmed not to hit them, making the streets safer for everyone. Autonomous vehicles also seemed to provide attractive ways to use roads more efficiently and reduce the need for parking in our communities. People are certainly talking about how self-driving cars could help build more sustainable, livable, walkable and bikable communities.

But as an urban planner and transportation scholar who, like most people in my field, has paid close attention to the discussion around driverless cars, I have come to understand that autonomous vehicles will not complement modern urban planning goals of building people-centered communities. In fact, I think they’re mutually exclusive: we can have a world of safe, efficient, driverless cars, or we can have a world where people can walk, bike and take transit in high-quality, human-scaled communities.

Changing humans’ behavior

These days, with human-driven cars all over the place, I choose my riding routes and behavior carefully: I much prefer to ride on low-speed, low-traffic roads, buffered bike lanes or off-street bike paths whenever possible, even if it means going substantially out of my way. That’s because I’m scared of what a human driver – through error, ignorance, inattention or even malice – might do to me on tougher roads.

But in a hypothetical future in which all cars are autonomous, maybe I’ll make different choices? So long as I’m confident self-driving cars will at least try to avoid killing me on my bike, I’ll take the most direct route to my destination, on roads that I consider much too dangerous to ride on today. I won’t need to worry about drivers because the technology will protect me.

Driverless cars will level the playing field: I’ll finally be able to ride where I am comfortable in a lane, rather than in the gutter – and pedal at a comfortable speed for myself rather than racing to keep up with, or get out of the way of, other riders or vehicles. I can even see riding with my kids on roads, instead of driving somewhere safe to ride like a park. (Of course, this is all still assuming driverless cars will eventually figure out how to avoid killing cyclists.)

To bikers and people interested in vibrant communities, this sounds great. I’m sure I won’t be the only cyclist who makes these choices. But that actually becomes a problem.

The tragedy of the commons

In the midsize midwestern college town I call home, estimates suggest about 4,000 people commute by bike. That might not sound like many, but consider the traffic backups that would result if even just a few hundred cyclists went out at rush hour and rode at leisurely speeds on the half-dozen arterial roads in my city.

Technology optimists might suggest that driverless cars will be able to pass cyclists more safely and efficiently. They might also be directed to use other roads that are less clogged, though that carries its own risks.

But what happens if it’s a lovely spring afternoon and all those 4,000 bike commuters are riding, in addition to a few thousand kids and teenagers running, riding or skating down my local roads? Some might even try to disrupt the flow of traffic by walking back and forth in the road or even just standing and texting, confident the cars will not hit them. It’s easy to see how good driverless cars will enable people to enjoy those previously terrifying streets, but it also demonstrates that safety for people and efficiency for cars can’t happen at the same time.


People versus cars

It’s not hard to imagine a situation where driverless cars can’t get anywhere efficiently – except late at night or early in the morning. That’s the sort of problem policy scholars enjoy working on, trying to engineer ways for people and technology to get along better.


One proposed solution would put cars and bicycles on different areas of the streets, or transform certain streets into “autonomous only” thoroughfares. But I question the logic of undertaking massive road-building projects when many cities today struggle to afford basic maintenance of their existing streets.

An alternative could be to simply make new rules governing how people should behave around autonomous vehicles. Similar rules exist already: Bikes aren’t allowed on most freeways, and jaywalking is illegal across most of the U.S.

Regulating people instead of cars would be cheaper than designing and building new streets. It would also help work around some of the technical problems of teaching driverless cars to avoid every possible danger – or even just learning to recognize bicycles in the first place.

However, telling people what they can and can’t do in the streets raises a key problem. In vibrant communities, roads are public property, which everyone can use for transportation, of course – but also for commerce, civil discourse and even civil disobedience. Most of the U.S., however, appears to have implicitly decided that streets are primarily for moving cars quickly from one place to another.

There might be an argument for driverless cars in rural areas, or for intercity travel, but in cities, if driverless cars merely replace human-driven vehicles, then communities won’t change much, or they may become even more car-dependent. If people choose to prioritise road safety over all other factors, that will shift how people use roads, sidewalks and other public ways. But then autonomous vehicles will never be particularly efficient or convenient.

The Conversation

Daniel Piatkowski, Assistant Professor of Community and Regional Planning, University of Nebraska-Lincoln

This article is republished from The Conversation under a Creative Commons license. Read the original article.