Saturday, January 15, 2011

Risk, Primitive Reactions, and Human Health Behaviors

NPR’s “All Things Considered” recently reviewed several scary events from 2010 (“The year in fear: fright or fallacy?”). Reporter Jon Hamilton spoke with Dr. David Ropeik, Director of Risk Communication at the Harvard Center for Risk Analysis, about what made these events (Toyota’s acceleration problems, the Deepwater Horizon oil spill and the use of chemical dispersants, etc.) particularly frightening. Dr. Ropeik said that people tend to make decisions, and react positively or negatively, based on very simplistic (and usually unconscious) criteria rather than on careful critical analysis of the relative benefits of one course of action over another, with the most important criterion being “is there immediate danger?” The reason he gave was that our basic neurobiology has remained essentially unchanged over human history, while our culture and society have become remarkably more complex than they were when quick decisions were mostly about achieving immediate results (fight or flee). “We use a risk-perception system that evolved in simpler times, when the risks were bad guys with clubs, and the dark, and wolves. It's quick. But quick isn't necessarily the best for the complicated stuff we face in modern society.” Thus, for example, even though the evidence would show that the human and environmental danger of the oil spill lay in the oil far more than in any risk from chemical dispersants, “Just the word chemicals in your listeners' minds is currently setting off a little organ in their brain called the amygdala, which is the 24/7 radar in our brain that says - is there danger in that data?”; that is, our fears are triggered by a word (chemicals) which we have come to associate with danger.

Similarly, people can grasp the specific, and feel the pain, of an individual more easily than of a large, amorphous population. Thus, the outpouring of concern for “Baby Jessica” falling down a well in 1987, or for the child dying of leukemia, is much stronger than that for thousands of people, especially those in other countries, dying of war, disease, or, even more abstract, structural violence. It is not just the one versus the many; it is also the suddenness of it. We feel for the trapped Chilean miners, or the victims of a bombing; Ropeik says “… a chronic risk doesn’t ring our alarm bells the way a catastrophic, all-at-once one does. Because it concentrates the mind to see a bunch of the tribe all whacked at once.” So a particularly gory battle or atrocity is horrifying, but when there are chronic, repeated bombings and battles (as in Iraq or Afghanistan), even though they lead to much more death, we feel less.

We can see a murderer as a bad person, but it is harder to identify the members of the “grifter class” (a term coined by Matt Taibbi in “Griftopia”[1]) who are responsible for the financial system that has visited so much evil on all of us. When people hear about something they know little or nothing about, especially if it is very complex and hard to understand, they often deal with it by putting a “frame” around it, tying it to something that seems similar enough (at least in one dimension) that they feel they can hang their hat on the analogy and judge it. For example, “chemicals=bad” in the example above is such a frame; so is dealing with universal health insurance by framing it as “socialized medicine”=“socialism”=“bad”. Unfortunately, the world is far more complex than this, and, even more unfortunately, unscrupulous politicians and opinion-makers (my frame = “selfish evil people”) take advantage of this to obscure complexity and get people to buy into often nonsensical self-contradictions (taxes=bad, deficits=bad; let’s not have either!).

When it comes to health and medicine, the same issues come into play. People perceive immediate distress with acute problems (e.g., cough, fever, and most especially pain!) and know how much they would appreciate relief. Conditions that do not cause appreciable symptoms right now, but will cause really bad outcomes (death, morbidity, poor quality of life) in the future if untreated, are much harder to get people to make high priorities. The doctor sees untreated hypertension in terms of a future outcome (stroke, kidney failure), but this is more difficult for the patient. Even when s/he believes and understands it intellectually, it is much less likely that the treatment of a largely asymptomatic condition will rise above life’s many more urgent priorities (food, clothing, housing, childcare, work) the way that, say, pain would.

The problem is even greater for public health, as I discussed in Public Health and Changing People's Minds (Saturday, May 15, 2010), where populations are huge, timelines are long, and risk is relative. Public health addresses risks for populations, not for me or my family; translating population risk into an individual prior probability is fairly difficult. For most people, even the concept of risk – that a given event will not definitely have, or definitely not have, a particular result, but will fall somewhere on the continuum between them – is something they are not accustomed to thinking about, although they use it all the time (deciding whether to cross against a red light, for example). Consciously comparing the relative risks of different actions is very difficult, especially when the results have very different timelines. A definite immediate benefit (have that tasty fried or sweet food; throw a wrapper out the window; get a big gas-guzzler; have unprotected sex) carries a lot more weight than the possibility of a bad long-term outcome (besides, next time, in the future, I’m going to go on a diet, give up smoking, use condoms). Dr. Ropeik notes that because events that cause “a bunch of the tribe to be all whacked at once” happen relatively rarely, “…we tend to downplay chronic risks like car accidents, diabetes, heart disease and the flu.” Sometimes public health officials can create that fear and mobilize the attention of the populace, as with concern about the swine flu of 1976, but that is also an example of how, when a predicted bad outcome does not materialize, the tendency to downplay chronic risks is reinforced.
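To make that population-to-individual translation a little more concrete, here is a minimal sketch using entirely hypothetical numbers (the baseline incidence and treatment effect below are assumptions for illustration, not figures from the interview or from any study). It shows how a relative benefit that sounds large at the population level can correspond to a very small absolute change for any one person in any one year – part of why chronic risks are so easy to downplay.

```python
# Hypothetical illustration: translating a population-level risk reduction
# into individual terms (absolute risk reduction and number needed to treat).
# All numbers are assumptions chosen only to illustrate the arithmetic.

baseline_risk = 8 / 1000          # assumed: 8 bad outcomes per 1,000 untreated people per year
relative_risk_reduction = 0.35    # assumed: treatment reduces that risk by 35%

treated_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Risk without treatment:  {baseline_risk:.2%} per person per year")
print(f"Risk with treatment:     {treated_risk:.2%} per person per year")
print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")
print(f"People treated for a year to prevent one bad outcome: ~{number_needed_to_treat:.0f}")
```

Seen this way, a “35% reduction” becomes “roughly one bad outcome prevented for every few hundred people treated for a year” – a benefit that is very real for the population but nearly invisible to any individual, which is exactly the perceptual gap described above.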

In making decisions about medical care, this sort of perception can cut either way, depending on how a person looks at it based on personal and familial experience, cultural beliefs, and the way they “frame” medical interventions, as well as on how urgent or important a solution is. Some people do not trust doctors or medicines, based on these criteria, and prefer not to take medicines or advice, even when an analysis of the relative risk shows the treatment to be definitely beneficial. Others have unrealistic expectations of what medicine can do (fueled, of course, by both doctors and direct-to-consumer drug advertising), and are angry when the doctor cannot cure their viral illness, make their back pain disappear, or make them happy by compensating for all of the other parts of life that are bad. At times of serious illness, where both treatment and non-treatment have real risks, or at end of life when people are not ready to accept that it is the end of life, even a professional evaluation of relative risk/benefit is difficult, so it is hardly surprising that people return to simpler methods of decision making (will I be able to live another day? Will it end my/his/her pain?).

Hamilton ends the interview segment with: “So Ropeik says we need to acquire a new fear - the fear of getting risk wrong.” I wish us luck on that.


[1] Taibbi M. Griftopia: Bubble Machines, Vampire Squids, and the Long Con That Is Breaking America. New York: Random House; 2010.

1 comment:

Unknown said...

Well, I love some of this, especially "where both treatment and non-treatment have real risks, or at end of life when people are not ready to accept that it is the end of life, even a professional evaluation of relative risk/benefit is difficult, so it is hardly surprising that people return to simpler methods of decision making (will I be able to live another day? Will it end my/his/her pain?)". I hadn't thought of the turning to these last questions (in the parentheses) as, similarly, simply a hard-wired plunge to "simpler methods of decision"--and in honesty am not sure I'd agree with that conclusion. Actually, I'd like to see you discuss all of these points--re the medically hard, and near-end-of-life, decisions--more in your blog.
As for generally making decisions, here's one for you. Suppose a wo/man needs a biopsy of a probably benign cyst, but, this being in the mouth and so (hey, where dentists lurk) not covered by Medicare/Medicaid, must travel 60 or 70 snowy/stormy miles for treatment, or else risk an oral surgeon's office that (I think this is a true case) uses, for hand cleaning, not soap-and-hot-water but squirt bottles of (at opening) 62.2%-alcohol; how does s/he determine the risks?
Finally, I wonder how variations in imagination and empathy affect people's responses to, perceptions of, and balancing of immediate versus long-term risks.

