Peter M. Sandman. Worst Case Scenarios. www.psandman.com
Your doctor says you have a suspicious looking lump and she wants to run some tests. Your plumber says he’s not sure how much wall he’ll have to take down to find and fix that leak. Your boss says there may be more layoffs on the way.
Only time will tell how bad these pieces of bad news really are. While you endure the uncertainty, there is a key question you may or may not want to ask: “What’s the worst that can happen?”
What determines whether you want to know the worst case scenario? Part of the answer is utilitarian — whether there are decisions you need to make or actions you need to take to prepare for the worst. Do you need to spruce up your résumé in case you’re laid off, or look for a loan to pay the plumbing bill, or (the true worst case) put your affairs in order? But there’s also an emotional and characterological side to the answer. If the issue isn’t especially emotional for you, you’ll probably want to know the worst case. But if you’re pretty frightened already, you may not want to know; you may not want to have an all-too-vivid image of pending disaster to live with. Or you may be living with vivid images already; you may figure the truth can’t be worse than your imaginings; you may want to prepare yourself mentally for what might (or might not) be on the way.
And what if the worst case is extremely unlikely? It probably won’t happen anyway. Does that make you less afraid of knowing ... or less interested in knowing?
Now put yourself in the shoes of your doctor, plumber, or boss. Should s/he tell you the worst case scenario whether you want to know or not? Wait for you to ask? Raise the issue proactively and ask if you want to know? Decline to tell you even if you ask? Tell you, but in language so technical and unemotional you don’t realize how bad it could be?
Not complicated enough? Add some more factors:
- Maybe the source of the information is also the source of the problem — it’s not your doctor, plumber, or boss, but the management of a nearby factory telling you how bad the emissions might get.
- Maybe your information source is the government agency responsible for fixing the problem or preventing the worst case — and the agency hasn’t decided yet what to try.
- Maybe you’re already outraged that the problem exists in the first place.
- Maybe you don’t trust the information you’re getting anyway.
- Maybe the source already has a reputation for pooh-poohing serious risks — or, conversely, for exaggerating trivial ones.
- Maybe good answers simply aren’t available. Maybe how bad the worst case scenario might get and how likely it is are both hotly debated guesstimates.
Communicating about worst case scenarios, in short, is a can of worms. But wait! It gets worse.
Worst case scenarios are one of relatively few risk communication challenges that apply to all kinds of risk communication. I distinguish risk communication situations according to how serious the risk is technically (“hazard”) and how frightened, angry, or otherwise upset people are (“outrage”). If you conceptualize the worst case scenario problem as coping with unnecessarily fearful people fixated on vanishingly unlikely possibilities, it’s a problem of outrage management (low hazard, high outrage). If you conceptualize the problem as helping rightly fearful people keep their fears in perspective as they climb the ladder from normal conditions to what may turn into a real emergency, it’s a problem of crisis communication, or at least pre-crisis communication (high hazard, high outrage). And if you conceptualize the problem as alerting unduly apathetic people to possible catastrophes in time to take preventive action, it’s a problem of precaution advocacy (high hazard, low outrage).
When Monsanto tries to decide whether or not to mention the remote possibility that genetically modified corn could precipitate an eco-catastrophe; when the U.S. Department of Homeland Security notches its terrorism index from yellow to orange and tries to explain why; when Greenpeace warns that global warming might someday inundate the world’s coastal cities and island nations with seawater; when your local factory talks about what’s stored on site that could explode or your local health department talks about an infectious disease outbreak that could turn into an epidemic, they are all addressing the complex dilemma of how to communicate about worst case scenarios.
Magnitude versus Probability: The Tradeoff and the Seesaw
As traditionally defined, risk is the product of two factors: magnitude (also called consequence, how bad it is when it happens) multiplied by probability (or frequency, how likely it is or how often it happens).
In mathematics, of course, it doesn’t matter whether you multiply a big number by a small number or a small number by a big number. You get the same answer. Mathematically, high-magnitude low-probability risks and low-magnitude high-probability risks are equivalent. Consider a technology that has one chance in a million of killing ten million people. The “expected” number of deaths from this technology is 10. That is, it has the same risk as another technology with one chance in a thousand of killing ten thousand people, or a technology with one chance in ten of killing a hundred people, or one that’s absolutely certain to kill ten people. They all have an “expected mortality” of ten. From a technical perspective, they are equal risks.
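For readers who like to see that equivalence spelled out, here is a minimal sketch in Python. The four technologies and their numbers are the hypothetical ones from the paragraph above, not real data:

```python
# A back-of-the-envelope check of the "expected mortality" arithmetic above.
# The four hypothetical technologies come straight from the paragraph;
# none of these numbers are real data.
technologies = {
    "1-in-a-million chance of killing 10,000,000": (1e-6, 10_000_000),
    "1-in-a-thousand chance of killing 10,000": (1e-3, 10_000),
    "1-in-ten chance of killing 100": (0.1, 100),
    "certain to kill 10": (1.0, 10),
}

for label, (probability, deaths) in technologies.items():
    expected_mortality = probability * deaths  # risk = magnitude x probability
    print(f"{label}: expected mortality = {expected_mortality:.0f}")
# Every technology prints 10: mathematically identical risks,
# however differently they feel.
```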
If you think like a mathematician, in other words, you are equally concerned about a one-in-a-million chance of killing ten million people and the sure death of ten people. But no one, not even mathematicians, thinks like a mathematician. Normal people figure a technology with one chance in a million of killing ten million people will probably kill nobody but just might kill ten million people. The number ten has nothing to do with it!
Worst case scenarios are by definition high-magnitude risks; they’re the worst thing that can happen. They are almost always low-probability — the worst that can happen doesn’t happen very often. So they are mathematically equivalent but not “humanly” equivalent to alternative scenarios that are not so bad but a good deal more likely.
So which is “humanly worse,” the improbable worst case scenario or the likelier not-so-bad scenario? It depends.
In particular, it depends on which of the two characteristics of the worst case scenario people are thinking about, its (high) magnitude or its (low) probability. Conceptually, there is nothing incompatible about high magnitude keeping company with low probability; it’s the norm, not the exception. But they are psychological antagonists. High magnitude means you should take precautions. Low probability means you should shrug the problem off. So most of us have trouble focusing simultaneously and equally on the two. We pick one. If we choose to focus on magnitude, then even a vanishingly unlikely scenario is unacceptable, because it’s so awful. If we choose to focus on probability, then even the prospect of worldwide catastrophe is tolerable, because it’s such a long shot.
So what determines which one we choose to focus on?
Part of the answer is prior outrage. If I’m already fearful about the technology in question or angry at you for imposing it on me, I am primed to focus on its high magnitude. “So what if the research says genetically modified foods probably won’t cause widespread ecological disaster. It’s possible. There are still unanswered questions. How dare you take any risk at all with the future of the ecosphere!” But if I’m apathetic, in denial, or feeling hopeless, then focusing on the low probability gives me my rationale for not getting involved, not taking precautions. “The experts haven’t found any proof that genetically modified foods are dangerous to the environment. Most of them say the risk is probably low. There’s always some alarmist throwing around doomsday scenarios!”
The other main factor is the risk communication seesaw. People are often ambivalent about the high magnitude and low probability of a worst case scenario. And when people are ambivalent, they tend to focus on the side of their ambivalence that isn’t getting enough focus elsewhere in the communication environment. So if you’re out there warning me how awful that worst case scenario really is, I tend to respond that it’s too unlikely to bother with. If you’re trying to reassure me about how unlikely it is, on the other hand, I tend to respond that it’s too awful to bear.
Highly outraged people and determinedly apathetic people, of course, aren’t on a seesaw. They see the worst case scenario the way they see it, and it doesn’t matter much what anybody else says. But people in the middle are riding a seesaw. They tend to take the seat you leave vacant. They may not stay in that seat — they’re still ambivalent, after all — so this use of the seesaw is mostly a short-term strategy. But its short-term effects can be stunning.
A spectacular example of the seesaw principle at work has been the Risk Management Plan (RMP) regulation administered by the U.S. Environmental Protection Agency. Under RMP, manufacturing facilities are required to figure out the worst possible accident they could have — and then tell their neighbors about it. Managements that insist the risk is low-probability, “so unlikely it’s not worth worrying about,” usually find their neighbors insisting worriedly on its high magnitude. Often they end up in contentious negotiations over what steps they must take to reduce that one-in-a-million catastrophic risk down to zero. Managements that appreciate the possible uses of the seesaw, by contrast, keep their focus on the risk’s high magnitude: “If this happens and this happens and this happens, just look how many people we could kill!” After a stunned half-minute staring at the plume map, someone raises his or her hand and asks, “But isn’t that really unlikely?” “Well, yes,” the smart company replies. “But just look at how many people we could kill!” The typical community audience soon piles onto the calm seat of the seesaw, uniting behind the principle that the company should grow up and stop wasting everybody’s time with these vanishingly unlikely worst case scenarios.
One key to communicating about a worst case scenario, then, is to ride the seesaw. Provide information about both aspects of the risk, its low probability and its high magnitude. But put your focus, paradoxically, where you don’t want your stakeholders to put theirs. If you want to keep people calm, your core message should be: “Even though it’s really unlikely, look how awful it is.” Then stakeholders can use your low-probability information to argue back, “Yeah, but it’s really unlikely.” If they’re a little less alarmed than you want them to be, on the other hand, switch seats on the seesaw. “Even though it’s really awful, look how unlikely it is,” you should assert, leaving your stakeholders to respond, “Yeah, but it’s really awful.”
For a more stable outcome, you need to teach your stakeholders to bear their ambivalence, rather than just picking one half or the other. This is the hardest and most desirable way to use the seesaw. Your goal is to get stakeholders to pay about equal attention to both aspects of the worst case scenario, so they keep its high magnitude and its low probability in mind at the same time, balancing on the fulcrum of the seesaw. Your best shot at accomplishing this is to pick your seat first. Then after your stakeholders are well-ensconced in the other seat, slide toward the fulcrum from your end; with any luck they will make a parallel move from their end. The closer you both are to the fulcrum, the easier it is to switch sides periodically, each of you reminding the other of whichever half of the ambivalence is being neglected. Don’t do this if you think it’s too manipulative. But remember, you are “manipulating” people toward the complex, hard-to-hold-onto two-sided truth.
But that’s the advanced course. For starters, just try using the seesaw when you have a worst case scenario to communicate. Tell people both how unlikely it is and how awful it is — but put your focus where you don’t want their focus to be.
Sometimes there are seesaws within seesaws, complicating things considerably. In 2002–2003, for example, officials at the U.S. Centers for Disease Control and Prevention (CDC) were trying to figure out how to talk to health care workers about the option of getting a smallpox vaccination. The obvious seesaw (obvious to public health people, anyway) was the high magnitude versus the low probability of vaccine side effects. I think they handled that seesaw quite well, emphasizing that the vaccination worst case was awful even though it was unlikely. But there was another seesaw at work, a more complex one: the high-magnitude low-probability risk of getting vaccinated and suffering a serious “adverse event” versus the high-magnitude low-probability risk of not getting vaccinated and facing a smallpox attack unprotected. Health care workers pondering whether or not to roll up their sleeves generally weren’t told about that other worst case scenario, the one that might result (for them and the nation) if a smallpox attack were to materialize. I think this contributed substantially to the small number of volunteers for smallpox vaccination. (See the section on the seesaw in my column on “Smallpox Vaccination: Some Risk Communication Linchpins.”) But that’s the advanced course too.
The Temptation to Withhold Worst Case Scenarios
Why communicate worst case scenarios at all?
My clients are often trying to reassure their stakeholders, and are understandably reluctant to talk about potential disasters. (Activist groups and others trying to warn the public are obviously happy to talk about potential disasters. They have different problems. See the Postscript at the end of this column.) Their reasons boil down to three: People don’t need to know the worst case; people might panic if they knew it; and it’s purely speculative anyhow.
“People don’t need to know.” The first reason makes sense — but only very rarely. Here are the technical specifications for a situation where people really don’t need to know:
- There are no precautions you want them to take, nothing they ought to do to get ready in case the worst case happens.
- They don’t need any psychological preparation either — no emotional rehearsal; they’re as ready as you want them to be.
- You don’t need their advice, their cooperation, or their support (for a budget increase, for example) to get your own organization ready to cope.
- If you did tell them, there is nothing they would want to do or say or even feel (nothing at all, not even something you would consider futile or unwise).
- If the worst case materializes, they will agree in hindsight that there were no precautions they needed to take, no psychological preparations they needed to make, nothing they needed to say to you — in short, that you were right not to tell them.
- They’re not already aware of the worst case, waiting for the other shoe to drop and wondering why you’re hiding the truth.
- No one else will tell them, and they won’t find out on their own — or, if they do, they won’t mind that you didn’t tell them.
In my judgment, these tech specs are very rarely satisfied. Usually there is something people ought to do to help get ready for a possible crisis, or at least in hindsight they’re bound to think there was. And usually they have an inkling already of what might go wrong, and there are critics and whistleblowers all too delighted to spill the beans.
“People might panic.” The second reason, that people might panic, holds even less water than the first. What crisis management experts mean by panic is emotion so ungovernable it leads to behavior damaging to oneself and perhaps also to others. By that definition, panic is rare — and panic just because somebody was candid about a high-magnitude low-probability risk is virtually unheard of. Panicky feelings are common enough, though usually temporary. But people almost always manage to react sensibly to the prospect of a future crisis. (And time to get used to that prospect makes it easier, not harder, for them to react sensibly to the crisis itself, if it happens.)
Of course what people think is sensible may not be what a government official or corporate executive hoped they’d think. People may think a particular worst case scenario is less acceptable than you think it is. They may take precautions you didn’t recommend; they may demand that you take precautions you don’t want to take. That’s not panic. It is disobedience and disagreement. (For more on the mistaken supposition that people are panicking or about to panic — which I sometimes refer to as “panic panic” — see Sandman and Lanard, Fear of Fear: The Role of Fear in Preparedness ... and Why It Terrifies Officials.)
When my clients worry that people might panic about a worst case scenario, in other words, either they’re worrying about something that almost certainly won’t happen (genuine panic) or they’re worrying about something that isn’t panic — that people might disobey or disagree. Usually, this boils down to a worry that people might take the risk more seriously than you want them to. True enough, they might. But as a matter of principle, that’s not a reason to blindside them; it’s a reason to level with them — so they can base their decisions and actions on their own judgment rather than being stuck with yours, and so they can urge you to take their opinions into consideration. And as a matter of practicality, it’s not a reason to blindside them either. People are likeliest to reject your advice and challenge your plans when they feel you have been less than candid with them. So if they’re already aware of the worst case, talking about it is the best way to preserve your credibility. If they’re likely to find out later, talking about it is the best way to preserve your credibility. And of course if it actually materializes, whether you blindsided us all or warned us this might happen will be central to your ability to lead us through the crisis.
Paradoxically, talking candidly about worst case scenarios is likelier to reassure people than to frighten them (far less panic them). More often than not, they are already pondering what might go wrong, imagining the worst and wishing there were some way to get it out onto the table and get the facts. Of course some stakeholders may be shocked to find out what could happen, and some activists may grab hold of the issue as a juicy new bit of ammunition. But the most common reaction when you come out of the closet with your worst case is a sigh of relief. Often it’s not as bad as people imagined. Even if it is, the other shoe has dropped at last; now they have company in their worrying and the reassurance that you’re not ignoring the problem.
Almost a decade ago, the U.S. chemical industry was fretting about new government requirements to discuss worst case scenarios with factory neighbors. So the National Institute for Chemical Studies did some research on how such discussions impacted local communities. The study found that advising plant neighbors what to do in the event of an emergency that spread off-site, and telling them what plant and community emergency responders would do in such a case, tended to calm their fears, not exacerbate them. Talking about the worst case actually led to reduced public estimates of its likelihood.
People simply don’t tend to overreact to honest information about high-magnitude low-probability risks. There may be a brief period of “overreaction” as they get used to the idea. But a public that has adjusted to the possible worst case is more stable than a public that doesn’t know about it yet — and more calm than a public whose worst case worries are solitary, secret, and uninformed.
“It’s wrong to speculate.” As for speculation, the notion that you can talk about risk without speculating is self-contradictory. Risk is bad things that might happen; speculation is talking about what might happen. (See Lanard and Sandman, It Is Never Too Soon to Speculate.) So if you’re going to do risk communication at all, you’re going to speculate.
Interestingly, companies and agencies routinely speculate anyway. Sometimes they do so irresponsibly — usually by sounding overconfident or over-optimistic or both. Sometimes they do so responsibly, incorporating worst case scenarios and stressing that they’re far from sure. From hurricane forecasting to market forecasting, speculative communication is everywhere; a significant percentage of every newspaper and newscast is about what might happen. Then a new situation comes along where those in charge feel uncomfortable speculating. For whatever reason (their own fears, pressure from outside, nervousness about the public’s reactions), this time they don’t want to mention the worst case scenario. Suddenly they insist that “one should never speculate” — as if they hadn’t been speculating all along.
The temptation not to mention worst case scenarios isn’t confined to corporate “bad guys.” Government officials face the same temptation, and are about equally likely to succumb to it. And the underlying motives aren’t always self-serving. Whether they work for companies or for government agencies, most risk managers really don’t intend to keep the public in the dark about serious risks, not even potentially serious risks that are highly unlikely. They’re willing enough in principle to inform the public. What intimidates my clients is the fear that they may wake the sleeping giant and then have to deal with it. They feel they can deal with the risk okay; they don’t want to have to deal with an aroused, interested, opinionated public. If they could only inform people without having to listen to them....
Again and again, I find that my clients have an unarticulated mental model of the ideal public: uninterested and uninvolved unless told to do something, then blindly obedient. The typical factory management wants its employees to follow the prescribed precautions and pay attention at safety trainings and drills, but not to ask awkward questions about why the flare looked strange yesterday or what’s in the solvent that smells different. It wants its neighbors to be as apathetic as possible about possible plant hazards, but still poised to evacuate or shelter-in-place or do whatever they’re told to do if something bad actually happens. Similarly, the local health department wants everybody to use DEET and get rid of old tires and other potential breeding grounds for mosquitoes — all without getting unduly exercised about West Nile Virus and demanding a more active (or less active) spraying program. And the Department of Homeland Security wants Americans to pack their go kits and call an 800 number if they see anything suspicious — but not to criticize the precautions that have and haven’t been taken at airports, stadiums, power plants, and other potential targets.
In other words, risk managers want a public that is simultaneously paying no attention and ready to act. This weird combination of apathy and risk tolerance on the one hand, preparedness and obedience to precautions on the other, simply can’t be achieved. Getting people ready to take care of themselves means telling them the truth about what might happen, worst case scenarios and all. An unavoidable side effect is that they will have their own opinions about how best to prevent and prepare for the risk in question.
The Temptation to Downplay Worst Case Scenarios
Quite often my clients look for a compromise. They don’t withhold the worst case scenario entirely, but in one or another way they downplay it.
The downplaying can take several forms. One common approach is to mention the worst case scenario once or twice in a low-circulation technical report, or even in a news release, so it’s on the record — and then stop talking about it. With a little luck maybe nobody will pick up on it much. Repetition is a key signal to journalists and stakeholders that something is important. If you don’t repeat it, they may not notice it, or may not realize its importance.
Another strategy is to say it, even to say it often, but without much drama. If you phrase the worst case technically enough, maybe reporters and the public won’t know what you mean; if you phrase it boringly enough, maybe it won’t quite register. Surprisingly often, even in real crisis situations, journalists and the public do miss the point when dramatic information is cast in technical language. And sources tend to speak more complexly when they’re talking about anxiety-provoking worst case scenarios. Some of this is unconscious; your own anxiety makes you hide behind big words and fancy sentences. Some of it is intentional.
At the 1979 Three Mile Island nuclear plant accident, for example, Nuclear Regulatory Commission officials were worried (mistakenly, as it turned out) that a hydrogen bubble in the containment might explode and cause a meltdown. When they shared this possibility with journalists, they did it in such polysyllabic prose that reporters thought they were denying it, not acknowledging it. The story got out anyway, because a source back at NRC headquarters alerted an Associated Press reporter that this was a terrifying prospect, worthy of aggressive coverage. Until then reporters at the accident site missed it, even though sources at the site had said it ... sort of.
The level of technical jargon was actually higher at Three Mile Island when the experts were talking to the public and the news media than when they were talking to each other. The transcripts of urgent telephone conversations between nuclear engineers were usually simpler to understand than the transcripts of news conferences. They said things to each other like: “It looks like we’ve got a humongous amount of core damage” — then made the same point to the media in phrases so technical that not one reporter got the message.
Three Mile Island was a big story; it dominated newspapers and newscasts for weeks. If reporters and the public can miss the scariest aspect of a big story, then obviously when the story isn’t so big it’s quite easy to miss the significance of a mildly framed worst case scenario. This strategy — issuing dry, for-the-record warnings that are easily missed by technically ill-trained journalists or ignored by a busy public — is a compromise. As the situation unfolds, you get both the benefits and the drawbacks of an uninvolved public: less support and preparedness than you might like, but less interference and anxiety than you might fear. If the situation abates without a crisis, the gamble will have paid off, and you will have avoided looking like the boy who cried wolf. If a crisis materializes, on the other hand, people will be less prepared than they might have been, and inevitably they will feel, with some justice, that they weren’t properly warned. At that point you will be able to point to your prior statements in mitigation, as evidence you didn’t actually suppress the relevant information. If you’re wise you will also admit you didn’t push the information as hard as you could have.
If you’re really wise, I believe, you might consider a different strategy in the first place, and ramp up the intensity of your worst case communications.
I have been talking about downplaying the worst case scenario as a conscious strategic decision, and sometimes it is. News releases about minor risk controversies, for example, often go through three or four drafts (and occasionally through many more). Typically, the actual information doesn’t change all that much from draft to draft. What changes is the tone. When internal reviewers object to a particular passage as too alarming, the information isn’t usually removed — it’s just rephrased. The release that finally goes out is technically truthful about the worst case scenario, but so steadfastly unemotional that it bores readers instead of alerting them. The very phrase “worst case” is sometimes removed on the grounds that it might alarm people, leaving behind a hyper-technical description of the worst case without even an explicit acknowledgment that it’s bad.
As a risk communication consultant without much technical expertise, one of the main things I do is read such drafts carefully, figure out what they mean, and then propose rephrasing them so they sound like what they mean. I don’t know enough to catch my clients when they’re lying. I catch them when they’re telling the truth in misleading ways. Then they get to decide whether or not to change their language to match their meaning.
One of the things I have learned from this process is that quite often the “strategy” of downplaying the worst case scenario isn’t really a conscious strategy. Phrasing awful possibilities in unemotional language comes naturally to my clients. They don’t think they’re trying to mislead. They’re being “professional.” When I suggest different language that highlights the scary truth their language is obscuring, their first reaction is that my version is somehow inaccurate. It takes a while for them to accept that the two mean the same thing, that the only difference is whether the audience is likely to get it or miss it. Only then can they make the conscious decision whether they want the audience to get it or miss it.
Even if you want the audience to get it — even if your conscious goal is to alert people to the worst case scenario, not just to get it onto the record — it turns out that “blowing the whistle” on a worst case that hasn’t happened isn’t so easy. The best way to get the job done is by mobilizing the anger component of outrage, not just the fear component; it’s much easier to get people alarmed about a risk if you can get them enraged at the people responsible. Second best is to pull out all the emotional stops: quote people who are terrified, find some heart-wrenching video footage, paint word pictures of what the worst case will be like if it happens, etc. Activists can do all that. If you work for a public health agency, on the other hand, you probably can’t. So you may wind up downplaying the worst case when you really wanted not to.
Over the past several years, for example, the world has faced an increasingly serious threat from avian influenza (bird flu). Like many other non-human species, birds can get viruses like the flu. As long as humans are immune to a particular bird flu virus, the problem is mostly an economic one for the poultry industry. But if a certain strain of bird flu manages to transmit itself from birds to humans, then obviously there is a human health risk. (That has happened already with the H5N1 strain of bird flu currently prevalent in Asia.) If it passes from one human to another human, the risk is greatly increased. (The experts aren’t completely sure if that has happened for H5N1 or not.) If it passes efficiently from human to human, and has a high human mortality rate, now we’re talking about a possible flu pandemic — that is, a worldwide epidemic that could easily kill millions. (That hasn’t happened yet for H5N1, but it has happened before with other strains of flu, most memorably with the “Spanish Flu” of 1918. And it will happen again, sooner or later.) Humans would have no resistance built up against a flu strain they’d never had before, so until a new vaccine was developed, tested, and mass-produced for the new strain, the death toll could be huge, even in the West. In the developing world it would stay huge, even after a vaccine was available.
One worst case scenario is that somebody, probably a poultry worker, gets the bird flu from a chicken. He or she then passes the disease on to somebody else, probably a family member or a nearby patient in the respiratory disease ward of a hospital. The newly infected person already has a human flu virus (that’s especially likely in the respiratory disease ward, of course). The genes in the two different strains of flu “reassort” (mix and match), producing a brand new flu virus with some of the characteristics of the bird flu and some of the characteristics of the human flu. The worst possible combination: no human resistance, no known vaccine, efficient human-to-human transmission, and a high human mortality rate. That’s the brand new flu that starts riding airplanes and infecting hundreds of millions of people worldwide.
Alternatively, the “mixing vessel” for bird flu and human flu could be a pig; pigs live in close proximity to birds and people and have been the disease path from one to the other before. That’s why the revelation in mid-August 2004 that China had identified pigs infected with H5N1 a couple of years earlier was big news to infectious disease experts. Although H5N1 has been wreaking havoc with several Asian nations’ poultry flocks for the last eight months, it has accounted so far for only a handful of human deaths, and no confirmed human-to-human transmissions. But it’s easy to imagine a pig simultaneously infected with H5N1 and some common strain of human flu. One unlucky reassortment later, the world could face a new, virulent flu strain against which people have no natural resistance.
The magnitude of the bird flu worst case scenario is obviously enormous. Its probability is unknown and unknowable. There have been flu pandemics since 1918, but they lacked one or more of the characteristics needed for a perfect storm. So it has been 86 years since the last worldwide influenza disaster. On the other hand, infectious disease experts point out that there have been at least three near misses in the past few years — two avian flu outbreaks and SARS (not a strain of flu, but a different virus that passed first from animal to human, then from human to human).
I talked in June with a Canadian infectious disease expert who said that she and most of her colleagues around the world were increasingly nervous. So far, she said, neither SARS nor avian flu has turned into a massive human catastrophe. Both looked like they might, but each time the catastrophe was averted. She wasn’t sure whether to attribute this to the gargantuan effort put forth by health authorities around the world ... or to luck. She stopped short of claiming as a matter of science that the risk of a disastrous human flu pandemic is higher today than it has been before. She’d say only that her hunch was it’s higher, and that most of her peers had the same hunch.
Now, here’s the important question: How much of the information in the previous five paragraphs did you know already? How much of it do you think the typical North American or European newspaper reader knows? How much does the typical Asian chicken farmer know?
If the answers to these three questions are “not much,” then the follow-up question is obvious. Why isn’t this information widely known?
It’s certainly not because of some conspiracy to keep the news secret. The basic information is all out there. On August 18, 2004, I did a Google search using the search terms “avian flu” and “pandemic.” On Google News I got 73 hits — 73 news stories within the previous 30 days warning that avian flu could lead to a human pandemic. On the Web I got 7,510 hits! — including the websites of the World Health Organization, the U.S. Centers for Disease Control and Prevention, and Health Canada. All these hits mention avian flu and a human pandemic in the same news story or website, if not necessarily in the same sentence. Searching “avian flu” and “1918” did about half as well — Google News had 38 hits and the Web had 3,150 — all stories and sites that mention both the recent avian flu outbreaks and the 1918 human pandemic, though again without necessarily connecting the dots.
So it’s not a secret. But it’s not a big story either, though it got perceptibly bigger after the August 20 announcement that some Chinese pigs had been infected with bird flu.
I very much doubt that the world’s national and international health agencies are trying to downplay the risk of bird flu. They really want the world to pay attention. They want Asian farmers (and North American and European farmers, too) to move quickly to kill diseased flocks, and nearby healthy flocks as well. They want serious quarantine measures to try to keep the disease from spreading to flocks that are okay so far. They want bird handlers protected as much as possible from contact with the disease, and if the handlers get sick anyway they want everyone else — and especially people with ordinary human flu — protected as much as possible from contact with the handlers. To accomplish all this, they want the cooperation of all levels of government, and of affected people and industries; and they want money from the developed world to help the developing world do the job.
As always, there’s got to be some ambivalence — concern about scaring people, damaging the poultry industry, stigmatizing countries or parts of countries. But for the most part, the world’s health agencies want to alert us. If we’re not feeling especially alert to the bird flu worst case scenario, it’s not because they haven’t tried.
But most of the time their efforts have been a little bloodless. Look at two emblematic language choices. When a chicken flock is found to be infected with avian flu, that flock and all nearby flocks have to be killed — a process that is emotionally and economically devastating to all concerned, not just the chickens. The term most health agencies have used to describe this massacre is “cull.” The second example gets to the heart of the matter, the worst case scenario itself. The word health agencies use here is “pandemic.” The experts certainly know that in a human flu pandemic millions might die; in a serious human flu pandemic, millions and millions will die. Most of the reporters covering the story probably have figured that out by now too. But much of the public probably hasn’t. Health agency spokespeople may have forgotten, or may not realize, that millions and millions of people don’t know what the word “pandemic” really means.
But even for those who know, intellectually, that a “cull” may mean killing millions of chickens and a “pandemic” may mean killing millions of people, the words don’t necessarily conjure up the appropriate images of death and devastation.
We need to be told, graphically and explicitly, that millions of people might very well die, as they did in 1918. We need to be told that the disease would spread like wildfire, and that we would be resorting to quarantines and closed schools and laws against public meetings and other draconian measures — not to stop the pandemic but just to slow it down in hopes of buying time while the experts scrambled to come up with a vaccine. We need to be told that the developed world would be doing triage with its limited supply of antivirals, while the developing world simply wouldn’t be able to afford antivirals at all. We need to be told that figuring out how to dispose of the bodies quickly enough would be a serious problem, as it has been in every other major pandemic. Instead of all that, too often we are told that there is a possibility genetic material might reassort and lead to a human flu pandemic.
An August 18 Google search for “avian flu,” “millions” and “deaths” got me only seven hits on Google News — all of them referring to the deaths of millions of birds, not people. On the Web it’s not hard to find sites that explicitly link the ongoing bird flu outbreaks with the possible death of millions of people, but it’s mostly blogs, not official sites.
Since the current avian flu crisis arose, in short, the world’s health agencies have been candid but not very emotionally emphatic about the public health worst case scenario. In fairness, some agencies and some spokespeople have been consistently more emphatic than others. And in the last week or two, as the crisis appeared to worsen, the emphatic statements have become more common. There are still a few national governments that don’t want to talk about bird flu worst case scenarios at all — not to mention the governments that don’t even want to admit they have birds (or pigs) with bird flu. Most health authorities that talk about bird flu do address the worst case, though they do so in a more muted way than they might.
What’s important to notice is that panic isn’t the problem with bird flu worst case scenarios. Apathy is the problem. People aren’t frightened enough. They aren’t cautious enough or prepared enough.
It is also worth noticing that so far the media have not been sensationalizing the pandemic risk posed by avian flu. Even when they talk about the H5N1 “killer virus,” they don’t tend to dwell much on what life in a 21st century flu pandemic would — “will,” probably — be like. Journalists are likeliest to sensationalize when a risk is dramatic, photogenic, emotionally moving, and geographically convenient, but not all that serious. Real health and safety crises get lots of coverage, of course, but it is cautious coverage; reporters are as fearful as their sources of sowing the seeds of panic. Less serious risks — such as a worst case scenario that isn’t very likely to materialize — are more vulnerable to media sensationalism. Is avian flu too serious for the media to hype? Probably not — if it were, they’d be giving it much more intensive though steadfastly sober attention. No, the problem for Western journalists is that the avian flu story still lacks drama, good pictures, a heartthrob, and a news peg in the West. As of mid-August 2004, coverage in the West has been both sober and scanty.
Most public health authorities are probably ambivalent about their bird flu worst case scenarios; they don’t want to panic people or look excessively alarmist or provoke too much second-guessing and interference, but they do want to alert people and get their support for some serious precautions. A company considering what to say about the worst that might happen if its factory or product malfunctions badly is probably much less ambivalent. It would rather nobody knew; if that’s not possible, it hopes nobody cares. So the temptation to understate the worst case is compelling. But it is profoundly unwise.
For governments facing a possible crisis, underplaying the worst case may be a conscious compromise, or just habit or a professional aversion to evocative language. But for companies facing potentially outraged stakeholders, underplaying the worst case — whether it’s conscious or unconscious — is a desperation move. Because public skepticism tends to be stronger and critics tend to be harsher, understating the worst case is much more likely to backfire on a corporation doing outrage management than on a government agency doing crisis (or pre-crisis) communication.
If your stakeholders are already worried about the possible worst case and knowledgeable enough (or motivated enough) not to miss it, the option of mentioning it in language they’re likely to overlook won’t fly. They’re sure to pick up on any mention at all. Many companies have long and painful experience with what happens when you issue technical reports into which you have carefully inserted all-but-invisible warnings about worst case scenarios you don’t want to discuss but daren’t omit. At the next public meeting, sure enough, a neighbor with no technical training at all starts asking follow-up questions about that alarming paragraph in Appendix H. Even if the strategy works for a while — as long as outrage stays fairly low — a sotto voce acknowledgment of a worst case scenario is a time bomb waiting to explode. When outrage rises and stakeholders start looking for evidence of malfeasance, they will be all the more frightened about the worst case and all the more disinclined to believe your reassurances because you “revealed” the truth so quietly.
Even more often, you have well-informed critics who already know a good deal about what might go wrong at your plant. They are just waiting to see how you handle your communication dilemma. They may be poised to pounce if you ignore it or underplay it. Or they may plan to bide their time until the moment of maximum impact. In such situations, you have no real option but to put the worst case scenario out there in all its glory. Underplaying the worst case is almost as bad as ignoring it altogether.
And if no one knows about the worst case, but there are plenty of hostile opponents and worried publics around, then revealing it in an understated way may be the most damaging of the options. If you keep the worst case secret it might just stay secret, if you’re lucky — that’s a dangerous sort of brinkmanship, but not a sure loser. And if you announce it with drums and trumpets, you at least get credit for candor and the right seat on the seesaw. Revealing it in a hyper-technical appendix to a formal report is in some ways the worst of both worlds. Your secret is out, but your critics still get to claim credibly that you didn’t really tell people. Right now, many health officials around the world think the Chinese government did exactly that by publishing its data about bird flu in pigs in a Chinese veterinary journal, without informing the appropriate international agencies (or even most of its agriculture ministry).
Some ways of communicating about worst case scenarios are simultaneously dramatic and, somehow, calming. One of my industry clients some years ago faced the obligation to tell its neighbors what might happen off-site in a really serious plant accident. It decided to go whole hog. It built its worst case scenarios into a user-friendly computer model, almost a computer game. And then it installed the software in the computers at the local public library. With a few clicks of the mouse people could summon up plume maps complete with mortality and morbidity estimates, then check out which of the worst case scenarios they’d probably survive and which ones would probably kill them. Even I thought the company might be going too far. I was wrong. For the people who chose to play with it, the very interactivity of this computer simulation was strangely comforting. “Hey,” users could be heard whispering excitedly to each other. “This one gets your house but not mine!” The game-like quality of the experience drove home better than any lecture or PowerPoint presentation that the company’s worst cases were indeed horrific ... but not very probable.
Public Perception of Worst Case Scenarios
So far I have advanced two fundamental arguments: You’re usually better off revealing the worst case than suppressing it. And if you’re going to reveal it at all, you’re usually better off not downplaying it — stressing its high magnitude at least as much as its low (or unknown) probability, using appropriately alarming language rather than a tone that is reassuring or overly technical, making the point often rather than just once or twice, etc.
I want to end this column with a checklist of guidelines for communicating about worst case scenarios. But first I need to say some things about how the public perceives worst cases.
WARNING: I rarely say or write much about risk perception. My clients are all too tempted already to think their problems are attributable to the public’s stupidity, its foolish tendency to misperceive risks. And their natural response to this diagnosis is to want to ignore or mislead their stakeholders, rather than leveling with them: “If people are going to misperceive what you tell them, why talk to them in the first place?” In general, I think, it is much closer to the truth — and much more conducive to good risk communication — to attribute risk controversies to justified public outrage or to genuine differences in values and interests than to public “misperception.” But sometimes there is no escaping the scientific research on how people (including the people who imagine they’re exempt because they’re experts) perceive risk. This is, briefly, one of those times.
Let’s start with the complicated relationship between mathematical probability and perceived probability — a question explored in depth by Daniel Kahneman and Amos Tversky over decades of research that ultimately won psychologist Kahneman the Nobel Prize for Economics in 2002. Compare three gambles: (a) one chance in two of winning $100; (b) one chance in two hundred of winning $10,000; and (c) one chance in twenty thousand of winning $1,000,000. Mathematics and traditional economics insist the three gambles have equal value. The “expected outcome” in each case is $50, and if you can get any of these wagers for much less than $50 you should jump at the chance. But almost everybody finds (b) a more attractive gamble than (a), while (c) is either the most or the least attractive of the three. As Kahneman and Tversky explain, “low probabilities ... are overweighted,” while “very low probabilities are either overweighted quite grossly or neglected altogether.” Most people would rather keep their $50 than take the first gamble. But many will fork over the money for the second gamble. And there are enough of us greatly attracted to the third gamble to keep the lottery business booming.
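Here is a minimal sketch of that arithmetic, using only the three hypothetical gambles just described (illustrative numbers, not Kahneman and Tversky’s actual experimental stimuli):

```python
# Expected value of the three hypothetical gambles described above.
gambles = {
    "(a) 1 chance in 2 of winning $100": (1 / 2, 100),
    "(b) 1 chance in 200 of winning $10,000": (1 / 200, 10_000),
    "(c) 1 chance in 20,000 of winning $1,000,000": (1 / 20_000, 1_000_000),
}

for label, (p, payoff) in gambles.items():
    print(f"{label}: expected value = ${p * payoff:.2f}")
# All three print $50.00. The gap between how (a), (b), and (c) feel and their
# identical expected values is the over- and underweighting of probabilities
# that Kahneman and Tversky documented.
```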
The pattern for losses is the same (on this characteristic, though not others) as the pattern for gains. An unlikely worst case scenario seems likelier than it is; a very unlikely worst case scenario either seems much, much likelier than it is or seems impossible.
Not surprisingly, there is a relationship between outrage and how we respond to very unlikely worst case scenarios. If the risk is especially dreaded; if I have no control over what precautions are taken; if it’s your fault; if it’s unfair because I’m getting none of the benefits; if you have lied to me about it for years ... these and other “outrage factors” dispose me to see that very-low-probability worst case scenario as quite likely.
There is one important exception to this outrage-increases-perceived-probability rule: denial. Fear (and to a lesser extent anger, hurt, guilt, and the other emotions that often accompany outrage) can be hard to bear. And when people cannot bear their emotions, an emotional circuit breaker is tripped and they go into denial instead. So the women who are most terrified of breast cancer may deny their fear, underestimate their chances of getting breast cancer, and “not bother” to check for lumps. At the height of the Cold War, similarly, many people underestimated the probability of a nuclear exchange between the United States and the Soviet Union, not because they weren’t concerned but because they found the prospect too awful to contemplate. (For more on the complexities of denial, see the section on denial in my column on 9/11. See also the discussion in Sandman and Lanard’s Duct Tape Risk Communication and Sandman and Valenti’s Scared Stiff — or Scared into Action. Finally, check out Beyond Panic Prevention: Addressing Emotion in Emergency Communication.)
Denial aside, we usually over-estimate the probability of a low-probability worst case, especially if it upsets us. So we are willing to pay more than the math says we should to reduce the probability to zero, taking it off the table once and for all. But we’re unwilling to pay as much as the math says we should to reduce the probability to some number other than zero — an even lower non-zero chance of disaster doesn’t feel like much of an improvement over the already low non-zero chance of disaster we’re fretting about now.
Imagine two equally dangerous diseases. Vaccine A provides perfect protection against one of the two, but doesn’t touch the other. Vaccine B is 50% effective against both. The two vaccines prevent an equal number of deaths. But when Kahneman and Tversky studied situations like this, they found that people would pay substantially more for Vaccine A than for Vaccine B, because it eliminates one of their two worries altogether.
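To make the “prevent an equal number of deaths” point concrete, here is a sketch under made-up numbers; the 1,000-deaths-per-disease figure is assumed, chosen only to make the arithmetic visible:

```python
# Two equally dangerous diseases, each assumed (hypothetically) to cause
# 1,000 deaths per year in some population if nothing is done.
deaths_per_disease = 1_000

# Vaccine A: 100% effective against disease 1, useless against disease 2.
prevented_by_a = 1.0 * deaths_per_disease + 0.0 * deaths_per_disease

# Vaccine B: 50% effective against both diseases.
prevented_by_b = 0.5 * deaths_per_disease + 0.5 * deaths_per_disease

print(prevented_by_a, prevented_by_b)  # 1000.0 1000.0 -- identical benefit
# Yet people reliably paid more for the vaccine that wiped out one worry entirely.
```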
The implications of all this for risk communication are pretty obvious. A worst case scenario that your company or agency risk managers see as too unlikely to deserve much attention is likely to strike the public — especially the outraged public — as deserving a lot of attention. If you fail to address the issue, or address it too casually or hyper-technically or (worst of all) mockingly, that will increase people’s outrage, which in turn will increase their sense that the worst case isn’t all that unlikely — launching a cycle of increasing concern you can’t afford to ignore. Your offer to reduce the risk with additional precautions will alleviate people’s concern less than the math says it should. But the activists’ proposal to eliminate the risk altogether by shutting down the factory or banning the technology will alleviate people’s concern more than the math says it should.
So far we have focused only on the probability of the worst case scenario — and on people’s tendency to over-estimate its probability. Just as important is people’s tendency to pay more attention to the magnitude of a high-magnitude low-probability risk than to its probability.
Some of this is attributable to what Kahneman and Tversky call the “availability heuristic”: Memorable images get more attention than they deserve. Events that are recent, heart-rending, horrifying, visual — that stick in the mind for whatever reason — naturally come to preoccupy us, even if they are (and we know they are) statistically unlikely. For similar reasons they preoccupy the media, which amplify their impact on us through drama and repetition. Will my child get kidnapped the way her child did? Will I receive an anthrax-laden letter the way he did? Will the tornado strike my house too? Many worst case scenarios are psychologically vivid. Even if we manage not to overestimate their probability, we are bound to over-focus on their magnitude.
But this isn’t just a perceptual distortion. Most people feel that high-magnitude low-probability risks deserve more attention than the “risk = magnitude × probability” formula dictates. Consider two power generation technologies. The first is solar power. Let’s assume that solar power kills 50 people a year; they die falling off their roofs while installing or repairing their solar installations. (I am making up the data here.) The second technology is a nuclear power plant, which generates, let’s say, the same amount of electricity as all those solar units. We will assume that the plant has one chance in a hundred of wiping out a nearby community of 50,000 people sometime in the next decade. Now, one chance in a hundred over a ten-year period of killing 50,000 people is an expected annual mortality of 50. Mathematically, in hazard terms, the two technologies have the same risk. Nonetheless, our society (and any sane society) is far likelier to accept a technology that kills 50 people a year, spread out all over the country and all across the year, than it is to allow the Sword of Damocles to hang over a community of 50,000 with anything like one chance in a hundred of wiping them out in the next ten years.
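The same expected-mortality arithmetic, as a sketch using the invented solar and nuclear numbers above:

```python
# Hypothetical numbers from the paragraph above (the data are invented).
solar_deaths_per_year = 50  # falls during installation and repair

nuclear_p_per_decade = 1 / 100  # chance of the catastrophe within ten years
nuclear_deaths_if_it_happens = 50_000
nuclear_expected_deaths_per_year = (
    nuclear_p_per_decade * nuclear_deaths_if_it_happens / 10
)

print(solar_deaths_per_year, nuclear_expected_deaths_per_year)  # 50 50.0
# Equal expected mortality, yet no sane society treats the two as equal risks.
```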
That’s not because we’re stupid, not because we don’t understand the data, not because we can’t multiply, and not because we’re misperceiving the risk. It is because we share a societal value that catastrophe is more serious than chronic risk. The same number of deaths rip the fabric of the universe more when they come all together than when they come one by one in different times and places. Possible catastrophes gnaw at people’s lives; actual catastrophes are intrinsically unfair and hard to recover from. Worst case scenarios, in other words, really are more serious than the magnitude-times-probability calculation suggests.
Once again, outrage matters. In assessing our individual, voluntary risk, we usually pay more attention to probability than to magnitude. The low-magnitude high-probability risk of getting a speeding ticket deters drivers more than the high-magnitude low(er)-probability risk of crashing. But in assessing risks that others impose on us, we are interested chiefly in magnitude. The possibility that something you do might destroy me, my family, even my whole neighborhood is bound to generate a lot of attention — no matter how slim the odds. And of course if it has happened before someplace else, with lots of attendant media coverage, so much the worse.
One of the implications here for risk communicators and risk managers is that you must talk about your efforts to reduce the magnitude of the worst case scenario, not just its probability. As noted earlier, your stakeholders would be happier if you could eliminate catastrophic risks entirely. But assuming you can’t, your efforts to make the risk smaller if it happens may matter more to people than your efforts to make it less likely to happen. I think many of my clients focus excessively on their prevention activities (aimed at reducing probability), and pay too little attention to preparedness (which can reduce magnitude).
A more general implication, obviously, is that you should anticipate the public’s focus on worst case scenarios, and defer to it by putting a lot of your own focus there too. As the risk communication seesaw suggests, this can have the paradoxical effect of reminding people that the worst case is, after all, pretty unlikely.
The comparative salience of unlikely worst case scenarios and likelier but not-quite-so-bad scenarios does vary some from individual to individual, from culture to culture, and from decade to decade. In the U.S., the period from the mid-1980s to the mid-1990s was a time of unusual focus on chronic as opposed to catastrophic risks. Living forever was the unstated goal; dieting and jogging and health clubs were in. Cancer was the arch-enemy, and neighbors of industrial facilities were more worried about the carcinogens coming out of the stacks than about the possibility that the plant might explode. The U.S. government’s Toxic Release Inventory program, requiring factories to reveal their emissions, was inaugurated in 1986. It was in large measure a response to the Bhopal disaster. Talk about non-sequiturs! The worst industrial accident in history led to a law regulating not accidents but chronic emissions.
A decade later, in 1996, the more normal focus on disasters was reasserting itself, and the U.S. launched its Risk Management Plan program, requiring factories to reveal their worst case accident scenarios. Virtually everything that has happened since then has trended in the same direction — most notably the arrival of the new millennium and the terrorist attack of September 11, 2001. “Cancer” is still a scary word, of course, but “disaster” is once again a scarier one.
Back in the 1980s, a chemical plant manager I worked with in Texas used to meet routinely with contingents of neighbors worried about emissions. He would point out the window of the conference room at the sphere in which elemental chlorine was stored. “That’s what I’m worried about,” he’d say. “I’m certainly prepared to talk about the steps our company is taking to reduce chronic emissions, but what keeps me up at night is that chlorine sphere. If that sucker goes, so does half the town.” Back then, people heard him out but stayed focused mostly on the chronic emissions. Today I’ll bet the plant’s chlorine spheres are getting a lot of attention, not just in terms of accident scenarios but as a possible target for terrorism as well.
Worst case scenarios are back.
Guidelines for Communicating Worst Case Scenarios
Here’s a checklist of additional guidelines.
1. Put the worst case in context by discussing less devastating possibilities. I am focusing on worst case scenarios because that’s the information my clients are most reluctant to talk about. But obviously the less awful alternative scenarios also deserve discussion. The “best case” may or may not need to be addressed, but the likeliest case certainly does, along with a couple of middling possibilities less devastating than the worst case and less probable than the likeliest case. Obviously there are an infinite number of possible futures. But figuring out which ones to talk about isn’t that tough. Focus mostly on the ones you’re talking about and planning for internally. (And when stakeholders ask about a disaster scenario you’ve been discussing internally, don’t dismiss the question as “speculation”!)
2. Make it clear that you’re talking about a possible future. The worst case isn’t a fantasy; it’s a real possible future that may (or may not) justify taking precautions now. But the worst case isn’t a prediction either; it’s what might happen, not necessarily what you think will happen. And the worst case certainly isn’t the current reality. In an emerging situation — which may in hindsight turn out to have been a pre-crisis situation — the right message is typically something like this: “Even though the news is pretty good so far, there may be bad news coming. We hope not, but we should be prepared for the worst.”
3. Keep risk magnitude and risk probability in close proximity. As a rule, worst cases are unlikely. People who are focused on the risk’s low probability need to be reminded about its high magnitude. People who are focused on its high magnitude need to be reminded about its low probability. Never say “unlikely” without “awful.” Never say “awful” without “unlikely.”
4. Don’t understate the worst case. I’m not talking about language here; we’ve covered that already. I’m talking about which scenario you pick to address. A sure way to get into trouble is to offer people a “worst case” that is actually not your worst case, but something likelier and not so awful. It’s not necessary for reality to turn out more devastating than the direst of your warnings for you to be attacked for understatement. All that’s necessary is for somebody to come up with a hypothetical scenario that isn’t too far-fetched but is nonetheless worse than the one you offered. As noted earlier, you should certainly talk about scenarios that are lower-magnitude and higher-probability than your worst case. But don’t ever give anyone the impression you think they are your worst case.
5. Don’t take the words “worst case” too literally. The previous guideline notwithstanding, you probably can’t and certainly shouldn’t focus on the literal worst case. Whatever disaster scenario one conjures up, after all, it is almost always possible to imagine one still worse (and still less likely). The public doesn’t want to hear about invasions from Mars, certainly not about invasions from Mars that occur coincidentally at the same time as your plant accident. We will experience too implausible a worst case scenario as a diss, a mocking reductio ad absurdum. Still, the problem of too extreme a worst case scenario is far less common than one that’s too moderate. Certainly any scenario that your organization has worked on, any scenario you have contingency plans for, any scenario someone inside your organization has argued you should have contingency plans for, is a scenario you should be willing to discuss publicly. If it wasn’t too far-fetched to talk about internally, it’s not too far-fetched to share with your stakeholders. Even if a scenario has never been seriously considered internally, if critics are talking about it, you should be talking about it too. Just steer away from scenarios that are going to sound sarcastic rather than alarming.
6. Don’t use the absence of a solution as an excuse not to go public. My clients sometimes tell me they can’t discuss a particular worst case scenario because they don’t have anything to propose to reduce the risk. If you just need a few days to work out your plans, the delay may make sense (unless the prospect is imminent or the stakeholders are impatient). But quite often you simply can’t think of a feasible way to make a particular worst case scenario less likely or less awful. Or, even more often, you don’t think the available risk reduction strategies are worthwhile for such an unlikely risk. Then that’s what you ought to be telling people. Hiding a particular worst case scenario from your stakeholders is a good way to ensure that if they ever find out on their own they will demand that you take preventive action. As the seesaw predicts, if you discuss your worst case scenario openly, without lowballing the risk, you stand a far better chance of convincing your stakeholders that prevention money should be spent on likelier cases.
7. Don’t use the absence of answers to people’s questions as an excuse not to go public. Once you start discussing a worst case scenario, people may well have follow-up questions about its probability, its magnitude, ways to reduce the risk, etc. You may not have the answers. More specifically:
- The question may be unanswerable.
- It may be answerable, but you may not have collected the data yet.
- You may not want to collect the data ever — because you don’t think it’s worth the effort and expense, or possibly because you don’t want to know the answers.
- You may have the answers in hand but prefer not to share them.
8. Pay more attention to reducing the risk than to measuring it. What you’re doing to prevent the worst case scenario from happening is more important to stakeholders than your estimate of its probability. What you’re doing to be prepared to cope if it happens is more important than your estimate of its magnitude. Debates over risk estimation are a lot less productive than discussions of risk reduction: what you have done, what you could do, what you don’t want to do, what we want to make you do, what we can do ourselves.
9. Open your emergency planning files. As already noted, any risk that’s serious and plausible enough to justify internal planning is serious and plausible enough to justify public discussion. And much of that discussion should focus on the content of your planning. There may be occasional exceptions for reasons of security (especially when your worst case is a terrorist attack), but on the whole your stakeholders deserve to be told and consulted. If you have mortality estimates, so should the public. If you have plume maps, so should the public. If you are trying to decide how much vaccine to stockpile, so should the public.
10. Treat the allocation of resources as an open question on which your stakeholders are entitled to an opinion. Worst case scenarios raise two sorts of resource allocation questions: (a) How much to focus on the worst and unlikeliest scenarios, and how much on less extreme scenarios; and (b) for each scenario, how much to focus on prevention (reducing its probability of happening) and how much on preparedness (reducing its magnitude if it happens). There are strong arguments for not over-investing in very unlikely worst cases. Most cars carry spare tires but not spare engines; most homes have band-aids and aspirins but not heart-lung machines. And there are strong arguments for not over-investing in prevention. Too much focus on prevention can deter much more cost-effective strategies of preparedness, leaving us unprotected when prevention fails. These are values questions as much as technical questions. You’re entitled to your opinions, and entitled to express them (though you would be wise to bear the seesaw in mind). So are your stakeholders. Launch the discussion, rather than trying to preempt it.
11. Acknowledge that risk assessments of catastrophic risks are often extremely uncertain. Uncertainty isn’t a good excuse for hiding your worst case scenario. But it is a good reason for emphasizing how unsure you are about that scenario. You’re not sure it’ll happen, obviously. You’re not sure it won’t. You’re not sure you’re on target in your estimate of how likely it is, or how bad it will be if it happens. You’re not sure how best to prevent it, or how best to prepare for it in case prevention fails. You’re not completely in the dark on any of these subjects, either. You have some judgments and suggestions and plans to share. You know your audience will too. (For more on ways to talk about uncertainty, see my handouts on Crisis Communication: Avoiding Over-Confidence in Uncertain Situations and on Dealing with Uncertainty.)
12. Acknowledge that risk assessments of catastrophic risks aren’t especially conservative. Many risk assessment experts have spent most of their careers assessing chronic risks — for example, how many additional people will get cancer as a result of plant emissions. Methodologies for assessing chronic risks are designed to be conservative, to overestimate the risk as a guarantee against underestimating it. (Note that many activists disagree with this claim.) Methodologies for assessing catastrophic risk — that is, the magnitude and probability of worst case scenarios — are not similarly conservative. Event trees tend to be missing many of their branches; the precise accident that actually happens usually isn’t there. Hundred-year floods occur every decade or so. More generally, things get screwed up more routinely than we imagine they will, and in ways we never imagined at all. The late Aaron Wildavsky argued on these grounds that crisis planners should focus less on prediction and more on resiliency, on being ready to cope with disasters they never imagined. A good discussion of worst case scenarios admits all this. So leave your stock speech on conservativeness at home; it doesn’t apply to worst case scenarios.
13. Don’t neglect risks attributable to intentional behavior. You’d think that the recent experience of terrorism would have cured us all of focusing our worst case scenarios on accidents alone. But the U.S. regulations requiring companies to go public with their worst case scenarios explicitly exempt intentional acts, whether outside terrorism or sabotage at the hands of a disgruntled employee. There are some genuine security issues here, of course — but there are ways to acknowledge your non-accidental worst cases without giving terrorists a helping hand. And they are worth acknowledging. From Bhopal to 9/11, the most devastating worst cases are often intentional. If you’re doing a good job of accident prevention, moreover, the only disasters left that aren’t exceedingly unlikely may be the intentional ones. Nor can you get away with the claim that other people’s evil actions aren’t your fault. A company is as responsible for preventing sabotage as for preventing accidents; a government infectious disease lab is as culpable if someone steals a microbe as if someone spreads it accidentally. For more on this important topic, see my 1995 article, When Outrage Is a Hazard.
14. Don’t neglect risks that are someone else’s responsibility. Corporations and government agencies are far more attuned to the boundaries between organizations than the public is. I once worked with a factory whose actual worst case was a possible explosion in the rail yard next door, where the tank cars of explosive chemicals were stockpiled till needed. Management didn’t want to discuss this scenario, arguing that transportation accidents were the responsibility of the railway company, not the plant. Similarly, government agencies responsible for human health sometimes decline to talk about risk to animals, while the veterinary agencies refuse to discuss possible spread to humans. And city governments sometimes estimate risk as if they had no suburbs. All of this strikes the rest of us as bureaucratic at best, evasive at worst. Talk with your stakeholders about the whole risk, not just the parts in your job description.
15. Look for opportunities to involve your publics, stakeholders, and critics. The contemplation of worst case scenarios should not be a spectator sport. Of course, one-way communication is better than no communication at all; it’s worth the effort to tell people what you think even if you’re not going to ask what they think. But it’s obviously better to ask what they think — not only because they may have some good ideas but also because they’re less likely to overreact if they’re not being ignored. For best results, involve your key stakeholders, and especially your critics, early in the process, before you have reached even preliminary conclusions. Ask them to help you decide which scenarios and which risk reduction options are worth assessing, and then ask them to help you plan (and even implement) the assessment process. By the time you’re ready to go public, they should have some confidence in what you found. Then they can help you explain it to everybody else.
16. Look for opportunities to give away credit. If your stakeholders and critics have played an important role in helping you think through your worst case scenarios, by all means say so. Even if you did it all yourself, try to give them credit for making you go public. Remember, longtime critics usually can’t afford to say you did the right thing on your own. They have only two possible roles vis-à-vis your worst case scenario efforts (or anything good you do): Either they trash it as inadequate or they take credit for it. The latter is far better for you. The more credit they get for what you did, the less compelled they will feel to criticize what you did. It’s amazing how easily my clients “forget” to give away credit. A crucial precursor of the Risk Management Plan program in the U.S. was the chemical industry’s 1994 “Safety Street” initiative in the Kanawha Valley of West Virginia. The local activist most responsible for forcing the issue was Pam Nixon. Early plans for Safety Street ignored her altogether, virtually daring her to say it was too little too late. Only at the last minute were the plans amended to acknowledge her role and her much-deserved victory.
17. Tell people which risks worry you the most. The easiest way to persuade people to worry less about X is to urge them to pay more attention to Y instead. Telling us which worst case scenarios to worry about works a lot better than telling us not to worry about any. Don’t think this means you can get away with ignoring the scenarios that worry your stakeholders more than they worry you. If we’re worried about X, you have no choice but to address our concerns, and address them seriously. Only if you are doing this well can you get value out of adding that Y actually worries you more than X. You need to pay attention to our “worry agenda,” in short, but you can also share your own.
18. Pay the most attention to the most concerned people. Many stakeholders will probably ignore your discussion of worst case scenarios altogether; others will listen, feel reassured, and leave. And some will become fixated. As with any other risk controversy, it is important not to give in to the temptation to ignore the handful of “fanatics,” figuring that if most people are calm that’s good enough. There are three good reasons to keep interacting with your most involved stakeholders, no matter how few and unrepresentative they may be.
- They’re the most interested people, and therefore the likeliest to come up with useful suggestions — scenarios or solutions you never even considered.
- They’re the most worried people (assuming nobody has gone beyond fear into denial), most in need of guidance and reassurance.
- They have an invisible constituency; their less concerned followers and friends are watching to see how well you respond to their concerns.
19. Tell people about “warning signals” that have occurred. In the wake of every catastrophe, regulators, journalists, and activists look for warnings that were missed — and nearly always find them. This is sometimes very unfair; we never look for the precursors of disasters that didn’t happen, and what looks like a clear warning signal in hindsight may not have looked like one beforehand. But now is “beforehand.” Small accidents, near-misses, alarming internal memos, and the like are precisely the warnings you should be identifying and discussing. (See my column on Yellow Flags: The Acid Test of Transparency.) If stakeholders are already pointing to such a signal, then you have been doubly warned: warned about the warning. Ignoring it will generate substantial outrage now, and, of course, incredible outrage if disaster strikes.
20. Tell people about “warning signals” that may occur. One of the reasons worst case scenarios generate so much anxiety is that they seem to materialize out of thin air; everything’s fine and then suddenly disaster strikes. Sometimes that’s just the way things are. But often there are warning signals you can anticipate, and teach your stakeholders to anticipate. “If the hurricane veers around to the north, then our area will be at risk.” Of course you don’t want to lean too heavily on these signals; there are often false positives (“chatter” without a follow-up terrorist attack, for example) and false negatives (a terrorist attack without preceding chatter). Even so, knowing that if X or Y happens we should gear up for a possible crisis helps us stay loose so long as X and Y haven’t happened. For similar reasons, emergency warning equipment can be a significant source of reassurance. Neighbors who know a siren will sound if there’s an emergency at the plant will tend to stay relaxed until the siren sounds. (Of course if there’s a site emergency and no one sounds the siren, you’re in deep trouble.)
21. Keep telling people where they are on the “ladder” of warning signals. Some worst case scenarios come with a clear-cut “ladder” of warning signals — B is scarier than A, C is scarier than B, etc. In talking about avian flu, for example, health officials can identify the rungs on the ladder on the way to a human flu pandemic: bird-to-human transmission, human-to-human transmission, efficient human-to-human transmission, spread to humans in areas where birds are so far uninfected, etc. When another rung is reached, there are four key messages:
- The situation got more alarming yesterday because C happened.
- We were already on guard because A and B had happened earlier.
- Our level of concern will increase further if D and E happen.
- Eventually we will (or in some cases, “may”) encounter a “perfect storm” — we’ll climb the ladder all the way to the top and our worst case scenario will occur — but there is no way to know yet if this is that time. So far we’re only at C.
22. Tell people what they can do to protect themselves. Worst case scenarios are as chilling as they are in part because we usually feel so passive. Whether the scenario is likely or unlikely, we’re just the prospective victims, with no role but to wait and worry. So give us a role. Our role in prevention is probably confined to some sort of public involvement process, advising you on what prevention strategies we’d like to see you pursue. But you can probably give us more concrete things to do about emergency preparedness and emergency response. Whenever you talk about what you are doing to prepare for the worst, tell us what we can do; whenever you talk about what you would do if the worst were to occur, tell us what we should do. To the extent you can, offer us choices of things to do, so we can pick the ones that best match our level of concern and our need for action. “Shelter in place,” for example, is an awfully passive action recommendation; people find it incredibly frustrating to be told to stay put while the plume passes overhead. Even when it’s the right thing to do technically, can’t we at least pack a go kit or tape our windows while we wait?
23. Avoid euphemisms. We have already discussed the temptation to make your worst case scenario sound less alarming than it actually is by using hyper-technical language. Euphemisms are part of the same temptation — calling an explosion a “rapid oxidation,” for example. Or recall the now-famous first description of the Challenger disaster; NASA called it a “major malfunction.” Key words in most worst case scenarios are “kill” and “death.” If you’re modeling plant accidents, for example, you probably have plume maps complete with LD-50s (half the people in the shaded area would die). Hand them out. And then explain them vividly, not euphemistically: “Look how many people would die!”
24. Don’t blindside other authorities. Preventing, preparing for, and responding to disasters are usually collaborative activities. You’re part of a network of emergency responders, police, firefighters, hospital officials, regulators, politicians, etc. (Things are worse for the World Health Organization; it’s part of a network of countries, and can do only what its member states permit it to do.) When you go public with your worst case scenarios, it affects the rest of your network too. This isn’t an excuse for not going public — but your partners deserve some forewarning and a chance to prepare (and perhaps a chance to join you in the communication rollout). Three mistakes are especially worth avoiding (though sometimes they are unavoidable):
- Telling the public things you haven’t yet told your partners — or things you told them but they missed or forgot.
- Telling the public things your partners should have been saying but haven’t said.
- Telling the public about inadequacies in your partners’ emergency planning.
25. Be prepared to make concessions. It is theoretically possible to go public with your worst case scenarios and your plans for coping with them, hear out your stakeholders, and not encounter any suggestions worth taking. But it’s not likely, and it’s not credible. The growing obligation to talk about worst case scenarios is part of the growing democratization of emergency planning. People expect the communication to be two-way, and they expect the communication exchange to yield real changes in your plans. One important corollary: Don’t solve all the solvable problems before you communicate. Just as you don’t go into labor-management negotiations and immediately put your best offer on the table, you shouldn’t go into worst case scenario discussions with your best and final answer to catastrophic risk already in hand. Instead of struggling to have the best possible emergency planning before you communicate, start communicating now — and let us help you improve your emergency planning.
Postscript for Sources that Like Talking about Worst Cases
An underlying assumption in this column has been that sources prefer not to talk about worst case scenarios, and do so only if they feel they must. Most of the time that’s an accurate assumption for corporations and government agencies. But it certainly isn’t true of activist groups. And there are exceptions even among corporations and government agencies. Like activists, competitors have incentives to over-stress worst case scenarios, not to ignore them — A tells prospective customers what might go wrong if they do business with B. The same goes for those in the business of selling protection, from private guard services to insurance companies to physicians. And of course government agencies are also sometimes in the business of selling protection — for example when they’re trying to persuade people to get their vaccinations or wear their seatbelts. Or consider the issue of whether Saddam Hussein’s Iraq possessed weapons of mass destruction prior to the U.S. decision to march on Baghdad. Criticism of the U.S. intelligence agencies has focused on what looks to be excessive and over-confident emphasis on worst case scenarios that justified the war and ultimately turned out false.
Greenpeace, Zurich Insurance, the local health department, and the CIA, in other words, are all organizations that don’t need to be urged to pay enough attention to how bad things might get. How bad things might get is their stock-in-trade. What have I got to say to them?
First of all, overstating risk tends to lead to a lot less public outrage than understating risk. There are certainly exceptions. The Iraq-related criticism of U.S. intelligence is one, though even there the attacks have been at least as angry for underreacting to the threat of Al Qaeda as for overreacting to the risk of Iraqi WMDs. And whole industries, sometimes even whole countries, get angry when public health experts issue worst case scenario warnings that cause economic damage. But in general, people are more forgiving when warnings turn out to be false alarms than when bad things happen and they feel they weren’t adequately warned. And people are usually more tolerant of a warning they suspect is over-the-top than of a reassurance they suspect is downplaying the dangerous truth.
This fact really irritates my corporate clients. It seems profoundly unfair to them that activists exaggerate with impunity both the magnitude and the probability of worst case scenarios, while companies are in deep trouble if they’re caught minimizing those scenarios. It’s not that people don’t realize activists exaggerate, by the way. In surveys, people routinely acknowledge that activists overstate risks. It’s just that they don’t mind; they think activist exaggeration is a useful way to get problems attended to. But when companies play down those same worst case scenarios, that’s dishonest and dangerous.
This asymmetry is fundamental and universal. It’s a kind of conservativeness. When a smoke alarm goes off even though there’s no fire, that’s a fairly minor problem; when there’s a fire and the smoke alarm didn’t go off, that’s a big deal. Similarly, we “calibrate” activists to go off too much rather than not enough. It is possible to get into trouble for over-warning people — especially if the warnings seem intentionally misleading and the cost of having paid attention to them is especially high. But it’s a lot easier to get into trouble for over-reassuring people.
It follows that organizations whose raison d’être is protecting people and warning people are going to be under a lot of pressure to overstate the worst case — with comparatively little counter-pressure to avoid doing so. By contrast, organizations that are reluctant to discuss their worst case scenarios have to balance competing incentives. The managers of a factory, for example, would rather not admit what might go wrong, but they know they’re vulnerable to criticism if they don’t. And as previously discussed, the World Health Organization has reasons to want to emphasize bird flu worst cases and reasons to want not to. But organizations like Greenpeace and the U.S. Central Intelligence Agency face incentives that are all in the same direction. Their mission, their personal preferences, their ideology, and their aversion to criticism all lead them to want to make sure nothing bad happens that they didn’t warn someone about first. If they warn about bad things that don’t actually happen, that’s usually a much smaller problem.
I don’t see any practical way to change all this. Greenpeace and the CIA, the insurance industry and your doctor, are all likelier to overstate a worst case scenario than to neglect it. That’s probably for the best, all things considered.
But how they overstate the worst case scenario matters. In particular, I want to argue two closely related points: that warnings should not overstate the probability of the risk, and that warnings should not overstate the confidence of the people issuing the warning.
Consider three entirely different horrific scenarios: Saddam Hussein (when he was still in power) acquiring nuclear weapons, terrorists launching a massive smallpox attack on the United States, and global warming leading to huge temperature changes and worldwide death and dislocation. All three are (or were) high-magnitude, low-probability risks. The cases for regime change, smallpox vaccination, and the Kyoto treaty were all three grounded in precaution, in the judgment that it makes sense to take steps to prevent awful outcomes even if they are unproved — in fact, even if they are fairly unlikely. Insisting that those outcomes were proved or likely, when the data showed only that they were possible and exceedingly dangerous, was not good risk communication.
As with virtually all worst case scenarios, less extreme versions of these three were all likelier than the worst cases that tended to get most of the attention. Iraq’s government clearly had a history of using chemical weapons, for example; a localized smallpox outbreak (resulting from laboratory carelessness) isn’t as improbable as a massive attack; global warming will surely have real impacts on agriculture, health, and ecosystem succession. Warnings that focus on these more moderate predictions are entitled to claim higher levels of probability and higher levels of confidence.
But it’s often hard to motivate action, especially political action, by warning about small dangers. And it is arguable that the best reason for taking action in all three situations isn’t (or wasn’t) the higher-probability, higher-confidence, lower-magnitude scenario. It’s that low-probability, low-confidence, high-magnitude worst case. This is the very essence of the precautionary approach to risk: Disastrous possibilities don’t have to be highly likely to justify preventive action. And if hindsight shows that the disastrous possibility was never really possible after all, that doesn’t necessarily cast doubt on the wisdom of the precautions. You couldn’t be sure beforehand, so precautions that turned out to be unnecessary still made sense at the time.
So it is justifiable, I believe, for the advocates of precaution to stress the worst case — to dramatize it; to emphasize it; to argue that we need to take steps now to prevent it. Even if these communications look overwrought to opponents, and even if they turn out mistaken in hindsight, fervent warnings about high-magnitude low-probability risks are legitimate.
But it is not legitimate to pretend that the worst case is the likeliest outcome, or that you are confident it is going to happen. Be as vivid as you want about the high magnitude of the worst case, but be straight about its low probability and the intrinsic uncertainty of predicting the future.
I have to acknowledge a significant problem with this recommendation — a problem grounded once again in the research of Kahneman and Tversky. People tend to be risk-averse about gains; the vast majority would rather have a $100 gift than a one-in-ten chance at a $1,000 gift. But we are risk-tolerant, even risk-seeking, with respect to losses. The same vast majority will roll the dice on a one-in-ten chance of having to pay $1,000 rather than accept a certainty of paying $100. It is hard to get people to gamble on gains, and hard to get them to buy insurance against losses. The argument implicit in all worst case scenario warnings is that we should take a voluntary hit now — a war in Iraq, a smallpox vaccination program, laws against greenhouse gases — in order to prevent a worse outcome that may never happen anyway. That’s a tough sell, and Kahneman and Tversky tell us why.
But there are ways of overcoming our natural tendency to take the chance of a possible future loss rather than pay a smaller but unavoidable price now. Insurance companies, for example, often represent themselves as selling peace of mind; you’re not buying protection against disaster so much as a good night’s sleep. Activists often emphasize blame, morality, trust, and similar outrage factors; you’re not just protecting yourself, you’re getting even with the bad guys who dared to endanger you.
Furthermore, it is often possible to decide whether you want your stakeholders to think in terms of gains or losses. Let me give you the example that Kahneman and Tversky made famous. This is slightly heavy going, but it’s worth it. (If you don’t want to struggle, skip to the end of the indent.)
Assume a disease that is expected to kill 600 people. Now give the authorities a choice between two responses: One drug will save 200 for sure; the other has a one-in-three chance of saving all 600, and a two-in-three chance of saving nobody. These are statistically equivalent, but most people pick the first option (72% to 28% in one study). The reference state is 600 dead people. We have a choice of two gains, either saving 200 lives for sure or gambling on maybe saving them all. We’ll take the sure thing. We are risk-averse about gains.

Now reframe the problem. If one drug is adopted, 400 people will die. If the other is adopted, there is a one-in-three chance that nobody will die, and a two-in-three chance that 600 will die. Now the reference state is nobody dying. We have a good shot at preserving that desirable status quo. People greatly prefer that over a sure loss of 400 lives (78% to 22% in one study). We’re risk-seeking about losses.
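For readers who want to verify that the two framings really are equivalent, here is a minimal sketch in Python; it uses only the numbers already in the example and simply computes the expected outcomes under each option.

```python
# A quick check that the two framings of the disease problem are statistically
# equivalent; the numbers are the ones in the example above, nothing new.

total_at_risk = 600

# Framing 1 (gains): lives saved.
sure_thing_saved = 200
gamble_saved_expected = (1 / 3) * 600 + (2 / 3) * 0   # expected lives saved = 200

# Framing 2 (losses): lives lost.
sure_thing_lost = 400
gamble_lost_expected = (1 / 3) * 0 + (2 / 3) * 600    # expected lives lost = 400

# Both framings describe the same pair of options:
print(sure_thing_saved, total_at_risk - sure_thing_lost)            # 200 200
print(gamble_saved_expected, total_at_risk - gamble_lost_expected)  # 200.0 200.0
```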
Even doctors react this way. In fact, we react this way even when given both problems at the same time. We figure out that the two sets of choices are equivalent. We realize that there isn’t any rational reason for responding differently to “saving 200” than to “letting 400 die” — but we remain committed to our choices. Kahneman and Tversky write: “In their stubborn appeal, framing effects resemble perceptual illusions more than computational errors.”
The implications are manifold. “Discount for cash” (gain) goes over better than “surcharge for credit card users” (loss). A gamble to keep Saddam Hussein’s Iraq from acquiring WMDs (to prevent a loss) goes over better than a gamble to bring democracy to that country (to achieve a gain). And as Kahneman and Tversky wrote in 1984: “A physician, and perhaps a presidential advisor as well, could influence the decision made by the patient or by the President, without distorting or suppressing information, merely by the framing of outcomes and contingencies.”
Especially relevant to worst case scenarios is this research finding: Our tendency to be risk-averse about gains and risk-seeking about losses gets weaker for very extreme values. Lotteries are popular because very small costs are seen as tantamount to zero — so a very small probability of a huge payout is attractive. And insurance against disastrous outcomes is easier to sell than insurance against more moderate (but likelier) outcomes — hence the popularity of saving money with higher deductibles.
The bottom line here: A really, really bad worst case scenario can overcome people’s tendency to gamble with losses, and can thereby motivate precautionary action. A worst case scenario that’s not so bad is much tougher to sell. Whether motivating precautionary action is a desirable or undesirable outcome depends, of course, on your assessment of the risk ... and on your propensity to gamble or not gamble with disaster. But even exaggerated claims about risk magnitude don’t necessarily backfire on the exaggerator. If the disaster doesn’t happen, that doesn’t prove you were wrong. It doesn’t even suggest you were exaggerating. You never said disaster was likely; you said how terrible it could be.
By contrast, exaggerating the probability of the worst case, or your confidence that it will happen, has a distinct downside for the exaggerator. It works; it raises the alarm and makes people likelier to act. But when the worst case doesn’t materialize, we feel misled and mistreated.
Outrage at confident warnings that turned out to be false alarms isn’t as great as outrage at confident reassurances that left us naked when disaster struck. But it’s still real. Ask the Club of Rome (“Limits to Growth”) people what happened to their credibility when their doomsday predictions of the 1970s didn’t materialize. Or ask the U.S. intelligence agencies what happened to their credibility when no weapons of mass destruction were found in Iraq.
And it’s completely unnecessary. Warnings may have to be graphic and dramatic to be effective, but they don’t have to be cocksure. They’re more effective as warnings, not less, if they’re not proved wrong when the disaster is averted. So tell us how awful things might get — as graphically and dramatically as your ethics and your judgment permit. And keep telling us that the worst case is reason enough to take preventive steps now. But keep telling us also that we could get lucky, that you’re far from sure the worst case is going to happen, that in fact it’s less likely than some milder bad outcomes.
And so we have come full circle. My advice to those who wish to warn us is to acknowledge how unlikely the worst case scenario is, even as they insist that it is too awful to bear. My advice to those who wish to reassure us is to acknowledge how awful the worst case scenario is, even as they insist that it is too unlikely to justify precautions. If both sides do good risk communication, they’re going to come out sounding a great deal more alike than they usually do today.