Uncertainty is the new normal: the case for resilience in infrastructure

Members of the New York Urban Search and Rescue Task Force One help evacuate people from their homes in Fayetteville, North Carolina, in September 2018. Image: Getty.

The most recent international report on climate change paints a picture of disruption to society unless there are drastic and rapid cuts in greenhouse gas emissions. And although it’s early days, some cities and municipalities are starting to recognise that past conditions can no longer serve as reasonable proxies for the future.

This is particularly true for America’s infrastructure. Highways, water treatment facilities and the power grid are at increasing risk to extreme weather events and other effects of a changing climate.

The problem is that most infrastructure projects, including the Trump administration’s infrastructure revitalisation plan, typically ignore the risks of climate change.

In our work researching sustainability and infrastructure, we encourage and are starting to see a shift toward designing man-made infrastructure systems with adaptability in mind.

Designing for the past

Infrastructure systems are the front line of defense against flooding, heat, wildfires, hurricanes and other disasters. City planners and citizens often assume that what is built today will continue to function in the face of these hazards, allowing services to continue and protecting us as they have in the past. But these systems are designed based on histories of extreme events.

Pumps, for example, are sized based on historical precipitation events. Transmission lines are designed within limits of how much power they can move while maintaining safe operating conditions relative to air temperatures. Bridges are designed to be able to withstand certain flow rates in the rivers they cross. Infrastructure and the environment are intimately connected.
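The logic of designing to history can be made concrete. The sketch below, with hypothetical rainfall figures and a hypothetical safety margin, shows how a design envelope is set by the worst conditions in the historical record, which is precisely why it is vulnerable when the climate shifts that record.

```python
# Minimal sketch of designing to history: a drainage pump is sized to
# the worst precipitation in a (hypothetical) historical record plus a
# fixed safety margin. The record itself defines the design envelope.
annual_max_rainfall_mm = [79, 84, 91, 73, 88, 95, 81, 77, 90, 86]  # hypothetical data

def design_capacity(history, safety_margin=0.2):
    """Design value = historical maximum plus a safety margin."""
    return max(history) * (1 + safety_margin)

capacity = design_capacity(annual_max_rainfall_mm)
print(round(capacity, 1))  # 114.0 mm: adequate until the climate moves beyond the record
```

Nothing in this calculation anticipates conditions outside the observed record; a changing climate quietly invalidates its central assumption.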

Now, however, the country is more frequently exceeding these historical conditions and is expected to see more frequent and intense extreme weather events. Said another way, because of climate change, natural systems are now changing faster than infrastructure.

How can infrastructure systems adapt? First let’s consider the reasons infrastructure systems fail at extremes:

  • The hazard exceeds design tolerances. This was the case of Interstate 10 flooding in Phoenix in fall 2014, where the intensity of the rainfall exceeded design conditions.

  • During these times there is less extra capacity across the system: When something goes wrong there are fewer options for managing the stressor, such as rerouting flows, whether it’s water, electricity or even traffic.

  • We often demand the most from our infrastructure during extreme events, pushing systems at a time when there is little extra capacity.

Gradual change also presents serious problems, partly because there is no distinguishing event that spurs a call to action. This type of situation can be especially troublesome in the context of the maintenance backlogs and budget shortfalls which currently plague many infrastructure systems. Will cities and towns be lulled into complacency, only to find that their long-lifetime infrastructure is no longer operating as it should?

Currently the default seems to be securing funding to build more of what we’ve had for the past century. But infrastructure managers should take a step back and ask what our infrastructure systems need to do for us into the future.


Agile and flexible by design

Fundamentally new approaches are needed to meet the challenges not only of a changing climate, but also of disruptive technologies.

These include increasing integration of information and communication technologies, which raises the risk of cyberattacks. Other emerging technologies include autonomous vehicles and drones, as well as intermittent renewable energy and battery storage in place of conventional power systems. Also, digitally connected technologies fundamentally alter how individuals perceive the world around them: consider how our mobile devices can now reroute us, in ways we don't fully understand, based on our own travel behavior and traffic across a region.

Yet our current infrastructure design paradigms emphasise large centralized systems intended to last for decades and to withstand environmental hazards to a preselected level of risk. The problem is that this level of risk is now uncertain because the climate is changing, sometimes in ways that are not well understood. As such, forecasts of extreme events may be off by a little or by a lot.

Given this uncertainty, agility and flexibility should be central to our infrastructure design. In our research, we've seen how a number of cities have already adopted principles that advance these goals, and the benefits those principles provide.

A ‘smart’ tunnel in Kuala Lumpur is designed to supplement the city’s stormwater drainage system. Image: David Boey/creative commons.

In Kuala Lumpur, traffic tunnels can transition to stormwater management during intense precipitation events, an example of multifunctionality.

Across the U.S., citizen-based smartphone technologies are beginning to provide real-time insights. For instance, the CrowdHydrology project uses flooding data submitted by citizens, supplementing what a limited network of conventional sensors can collect.

Infrastructure designers and managers in a number of U.S. locations, including New York, Portland, Miami and Southeast Florida, and Chicago, are now required to plan for this uncertain future – a process called roadmapping. For example, Miami has developed a $500m plan to upgrade infrastructure, including installing new pumping capacity and raising roads to protect at-risk oceanfront property.

These competencies align with resilience-based thinking and move the country away from our default approaches of simply building bigger, stronger or more redundant.

Planning for uncertainty

Because there is now more uncertainty with regard to hazards, resilience instead of risk should be central to infrastructure design and operation in the future. Resilience means systems can withstand extreme weather events and come back into operation quickly.

Microgrid technology allows individual buildings to operate in the event of a broader power outage and is one way to make the electricity system more resilient. Image: Amy Vaughn/U.S. Department of Energy/creative commons.

This means infrastructure planners cannot simply change their design parameter – for example, building to withstand a 1,000-year event instead of a 100-year event. Even if we could accurately predict what these new risk levels should be for the coming century, is it technically, financially or politically feasible to build these more robust systems?
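The 100-year versus 1,000-year comparison is easier to weigh with the standard return-period arithmetic: a "T-year event" has an annual exceedance probability of 1/T, and the chance of at least one exceedance over an n-year design life is 1 − (1 − 1/T)^n. A small sketch (the 50-year design life is an illustrative assumption):

```python
# Return-period arithmetic: a "T-year event" has annual exceedance
# probability p = 1/T. Over an n-year design life, the probability of
# seeing at least one such event is 1 - (1 - p)**n.
def prob_at_least_one(return_period_years, design_life_years):
    p = 1.0 / return_period_years
    return 1.0 - (1.0 - p) ** design_life_years

# A structure designed for the 100-year event, over a 50-year life:
print(round(prob_at_least_one(100, 50), 2))   # 0.39 -- roughly a 39% chance

# Designing to the 1,000-year event instead:
print(round(prob_at_least_one(1000, 50), 2))  # 0.05
```

Even under stationary conditions a "100-year" design faces its design event with high odds over its lifetime; if the climate shifts the true return period downward, those odds only worsen, which is the crux of the uncertainty argument here.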

This is why resilience-based approaches are needed that emphasise the capacity to adapt. Conventional approaches emphasise robustness, such as building a levee that is able to withstand a certain amount of sea level rise. These approaches are necessary but given the uncertainty in risk we need other strategies in our arsenal.

For example, planners can provide infrastructure services through alternative means when primary infrastructure fails, such as deploying microgrids ahead of hurricanes. Or they can design infrastructure systems so that, when they do fail, the consequences to human life and the economy are minimised.

The Netherlands has changed its system of dykes and flood management in certain areas to better accommodate flooding.

This is a practice recently implemented in the Netherlands, where the Rhine delta rivers are allowed to flood but people are not allowed to live in the flood plain and farmers are compensated when their crops are lost.

Uncertainty is the new normal, and reliability hinges on positioning infrastructure to operate in and adapt to this uncertainty. If the country continues to commit to building last century’s infrastructure, we can continue to expect failures of these critical systems, and the losses that come along with them.

The Conversation

Mikhail Chester, Associate Professor of Civil, Environmental, and Sustainable Engineering, Arizona State University; Braden Allenby, President's Professor and Lincoln Professor of Engineering and Ethics, School of Sustainable Engineering and the Built Environment, Ira A. Fulton Schools of Engineering, Arizona State University, and Samuel Markolf, Postdoctoral Research Associate, Urban Resilience to Extremes Sustainability Research Network, Arizona State University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


In many ways, smart cities are really very dumb

Rio de Janeiro’s control centre. Image: Getty.

It’s not news that anything and everything is increasingly being prefaced with “smart”: phones, watches, homes, fridges, and even water (yes, smartwater exists). And it’s not unintentional either. 

Marketeers know that we, the public, are often stupid enough to believe that thanks to their technology, life is better now than it was way back in, say, the primitive Nineties. Imagine having to, like a Neanderthal, remember how to spell words without an autocorrecting algorithm, or open the fridge door to check if you’d run out of milk, or, worse still, interact with actual people.

So it’s hardly surprising that we’re now also witnessing the rise of so-called “smart cities”: a concept which presupposes that cities that are not technologically “smart” are dumb. As anyone interested in the millennia-old history of cities will know, from the crypto-currency grain storage algorithms of ancient Mesopotamia to the complex waste infrastructure of ancient Rome to London’s public transport infrastructure, this is not true.

Deployed in these smart cities are cameras and other networked information-gathering devices, load cells and other “sensing devices” detecting passing pedestrians and vehicles, audio surveillance devices listening for gunshots – and even vending machines equipped with biometric sensors to recognise your face. This is not to mention beacon technology — tiny anonymous looking black boxes hidden in trees and on lampposts — which transmits advertising, offers and other information directly to smart phones in the vicinity. 

If that doesn’t seem sinister enough, take, for example, Rio de Janeiro, where, in 2014, the International Business Machines Corporation designed a mammoth “control centre” that integrates data from 30 agencies for the city’s police. 

The Guardian described the centre as having “the functionality of a Bond villain’s techno lair”; the then mayor, Eduardo Paes, claimed it was making the city safer while using technology to deploy its “special” police unit to carry out the state’s “pacification programme”. Launched in 2008, the programme, which aims to push drug gangs out of Rio’s favelas, has been criticised by Amnesty International: “in January and February 2017 in Rio de Janeiro alone, at least 182 people were killed during police operations in marginalized neighbourhoods (favelas) – a 78 per cent increase in comparison to the same period in 2016”.

Sinister or not, as smart cities grow, they create new problems. For example, as urbanist Adam Greenfield writes in Radical Technologies: The Design of Everyday Life, neither the algorithms nor their designers are subject to the ordinary processes of democratic accountability – a problem that international academics are currently attempting to tackle.  


“We need to understand that the authorship of an algorithm intended to guide the distribution of civic resources is itself an inherently political act,” writes Greenfield. “The architects of the smart city have utterly failed to reckon with the reality of power.”

The Real Smart Cities project, founded by Dr Gerald Moore, Dr Noel Fitzpatrick and Professor Bernard Stiegler, is investigating the ways in which so-called “smart city” technologies present a threat to democracy and citizenship, and how digital tools might be used to create new forms of community participation.

Fitzpatrick is critical of current discourses around smart cities, which he says “tend to be technical fixes, where technology is presented as a means to solve the problems of the city.” The philosophy underpinning the project is “that technologies function as forms of pharmacology”, he adds, meaning that they can be both positive and negative. “The addictive negative effects are being felt at an individual and collective level.” 

An example of this lies in the way that many of these smart cities replace human workers with disembodied voices (“Alexa, we need more toilet roll”), like those used to control the Amazon Echo listening device, the high priestess of the smart home. These disembodied voices travel at the speed of light to cavernous, so-called “fulfilment centres”, where an invisible workforce is called into action by our buy-it-now, one-click impulse commands, moving robotically down seemingly endless aisles of algorithmically organised products arranged according to purchase preferences we never knew we had: someone who buys a crime novel might be more likely to go on and buy cat food, a wireless router, a teapot and a screwdriver.

Oh to be the archaeologists of the future who, while digging through mounds of silicon dust, happen upon these vast repositories of disembodied voices. They would discover that the digital is inherently material and that the binary of virtual/real does not hold: there is no cyberspace, just space. Space that is increasingly populated by technologies that want to watch you, listen to you, get to know you and sense your presence.

One project looking to solve some of the problems of smart cities is the development of a “clinic of contribution” within Plaine Commune in greater Paris (an area where one in three residents live in poverty). This attempts to deal with issues of communication between parents and children, where the widespread use of smartphones as parental devices from infancy is having effects on young children’s attention and on the communicative abilities between parents and children.

This in turn forms part of a wider project in the area that Stiegler describes as “installing a true urban intelligence”, which moves beyond what he sees as the bankrupt idea of smart cities. The aim is to create a “contributory income” in the area that responds to the loss of salaried jobs due to automation and the growth and spread of digitisation. 

The idea is that an income could be paid to residents on the condition that they perform a service to society. For those who are unemployed and living in poverty and urban deprivation, this sounds like quite a simple and smart idea for solving some of the dumb effects of the digital technology that’s implemented in cities under the ideology of being “smart”.