Sunday, January 25, 2015
This leads to a lengthy discussion of why both states have dropped, mainly attributed to a lack of investment in public health, and of a geographic disparity, with states on the coasts doing better overall than those in the Midwest: “What explains this dramatic difference between the coasts and the Midwest is broad investments on the coasts in things that make communities healthy,” Bavley quotes Patrick Remington of the University of Wisconsin. What this misses, however, is the even worse news hidden by “rankings” data. In a ranking of states there will always be a #1 (in this case, Hawaii) and a #50 (you guessed it, Mississippi), but this hides the fact that, overall, states have gotten worse over this 25-year period. The graphs in the print edition of the Star (not included in the on-line edition) show the decrease in rankings noted above for the two states over time. However, on the “America’s Health Rankings” website one can not only look at the map showing relative state rankings but also click on each state and see how its absolute health rating has changed over time.
Hawaii, ranked #1 in 2014 (Vermont is ranked #1 for the whole 25-year period), has nonetheless had its health status drop quite dramatically since 1990, while Mississippi, #50, has actually slightly improved. Locally, Kansas’ health status has dropped significantly consistent with its slippage in the rankings, but Missouri’s, after a big dip in the intervening years, is about the same as it was in the mid-1990s, despite its lower ranking. How can this happen? How can Missouri drop 12 places in the rankings despite having about the same health status if the top-ranked states are getting worse? The only explanation is that the gap was even greater in the past, and that some states in the middle, such as Illinois (#30) and Pennsylvania (#28) have gotten better while Missouri has stayed the same. Hawaii has dropped from a rating of +0.7 to +0.3, while Mississippi has gone from -0.4 to -0.3. Dr. Remington’s comments may be accurate, but they were more accurate in 1990, and since then states have seen a race to the middle, if not the bottom, in terms of public health.
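The arithmetic behind this rankings-versus-ratings distinction can be sketched in a few lines of code. The scores below are invented for illustration (they are not the actual United Health Foundation ratings): state “B” keeps exactly the same absolute score, yet falls in rank because a mid-pack neighbor improves.

```python
# Hypothetical illustration: a state's rank can fall even when its
# absolute health score is unchanged, if states near it improve.
scores_1990 = {"A": 0.7, "B": 0.1, "C": -0.1, "D": -0.4}
scores_2014 = {"A": 0.3, "B": 0.1, "C": 0.2, "D": -0.3}

def rank(scores):
    """Rank states from best (1) to worst by absolute score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {state: i + 1 for i, state in enumerate(ordered)}

print(rank(scores_1990))  # {'A': 1, 'B': 2, 'C': 3, 'D': 4}
print(rank(scores_2014))  # {'A': 1, 'C': 2, 'B': 3, 'D': 4}
```

State “B” has an identical score in both years but drops a place because “C” passed it; this mirrors Missouri’s situation, in which a flat health status can coexist with a falling rank when other states move past it.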
The rankings above are the “all outcomes” rankings from the United Health Foundation studies. They are composed of several subcategories. One component lowering these overall outcomes is obesity rates, which have risen nationally from 11.6% in 1990 to 29.4% in 2014 (!), as well as in every individual state. Diabetes has risen nationally from 4.4% to 9.6%. Physical inactivity has stayed relatively constant, but distressingly high, at nearly 75%. On the other hand, the last measure, smoking, has gone down nationally from 29.5% to 17.6%, but has tended to stay the same over many years, more so in lower-ranked states, such as Mississippi, Missouri, and even Kansas. The study ranks senior health separately, but this tracks pretty well with overall health; Hawaii is the best, Kansas is #25, Missouri is #42, and Kentucky replaces Mississippi (#47) as the worst. The study also examines rankings for a variety of other characteristics, some of which are different for the overall population and for seniors. They include chronic drinking (seniors), binge drinking (all adults), depression (seniors), etc., as well as societal measures which might impact or “confound” health status, including education level, percent of “able-bodied” (no disability) adults, and percent of children in poverty.
The study also provides us with information on health disparities: obesity levels in different sub-populations, based on education, race/ethnicity, age, gender, urbanicity, and income. Two non-surprises: the South and South Central regions do the worst, and the problem is greater for those with lower education, non-white race/ethnicity, and lower income; urban status and age have less impact. In terms of educational impact on health disparity (the difference between the highest- and lowest-educated in terms of health status), things change: Hawaii is still #1, but Mississippi is #2, while California is #50! Unfortunately, for many of the states with both low overall health status and low disparity, this means that even the better-educated have poor health status.
So what do we learn? Yes, as Dr. Remington points out, some parts of the country generally do better than others (although identifying these as the Northeast, West, and North Central regions is more accurate than saying “the coasts”), and the South and South Central regions tend to be worse. Yes, as Mr. Bavley highlights, both Kansas and Missouri have significantly slipped in the relative rankings. But we also see the whole country getting worse, particularly with regard to conditions such as obesity and diabetes. And we see the most dramatic drops in certain states, not only Kansas but Wisconsin (down from +0.38 to barely positive at all, +0.07). The people interviewed for the Bavley article in Kansas and Missouri, as noted above, cite inadequate, and decreasing, spending on public health as the reason.
It is certainly one of the big reasons, along with a consumer society that encourages consumption of high-calorie, low-nutrition foods. And a car-based society that makes exercise a specialty activity, more available to some than others, rather than part of life. And a terrible economy where a shocking number of people don’t have jobs and others have to hold down two or more to make ends meet, so have little time for exercise. The other huge reason is those “social determinants of health”: the impact of poverty, racism, poor education, and inadequate housing and food. The social structure and social support for the most needy in the US has never been adequate, and is eroding, more in some states than in others, sometimes on purpose (because of political beliefs) and sometimes by a (possibly) more benign neglect.
Some of it is the chronic problem of public health: its successes are the absence of disease, and thus less obvious. It is easier to feel grateful for treatment of a disease we have contracted than, say (as I have often said before), to be grateful each morning that we don’t have cholera because we have clean water. It is, perhaps for some, easier to think we don’t need to vaccinate our children when the diseases the vaccines prevent are no longer in evidence. But this is a fatally flawed analysis. When good has resulted from effective preventive efforts, the solution is to keep up those efforts, whether vaccination or public health.
And cutting back on our social safety net is a good prescription for worse health.
Sunday, January 18, 2015
The massacre at the French magazine Charlie Hebdo was shocking and horrible, as are the massacres and atrocities that occur regularly with less immediacy to those in the West, such as those committed by Boko Haram in Nigeria. The most positive result was the massive outpouring of support for free speech, for being able to say and print what you want even if it offends people. And, I would add, particularly if it offends the powerful, which Charlie Hebdo also did. More than a million in the streets of Paris saying “Je suis Charlie” (“I am Charlie”), with more than 40 heads of state in attendance, even if they didn’t actually lead the march, but were photographed together on a protected side street. And even if many of them sponsor severe repression of free speech in their home countries.
The inclusion of Israeli Prime Minister Benjamin Netanyahu was particularly problematic given the violently repressive policies of his government, but given that the companion attack was on a kosher supermarket where four Jews were killed, the symbolism was important even if a lightning rod for (largely just) criticism of Israeli government policy. Less appreciated was the message from Netanyahu that French Jews should all come to Israel, and more appreciated were the sentiments of French Prime Minister Manuel Valls that ‘France Without Jews Is Not France’, and the demonstrators, most of whom were not Jewish, who carried signs that said “Je suis juif” (“I am Jewish”).
But the necessary condemnation of terror, and moves to avert it, along with the necessary condemnation of anti-Semitism and the conflation of Jews with the actions of the government of Israel (or the conflation of Islam with the actions of Islamic terrorists), does not solve the problem of communication, that people see “truth” so differently. I don’t know that I can offer much more insight into the conflict of seeing truth through the lens of religious doctrine (and of course some people and groups’ interpretation of religious doctrine) and a “liberal” concept of the value of free speech. I was interested in the perspective, expressed on NPR’s Fresh Air, of Maajid Nawaz, a British Muslim who became a radical Islamist at 16, served 4 years in an Egyptian jail where his readings changed his perspective, and later founded Quilliam, an anti-jihadist think tank in London. Asked by host Terry Gross how he saw himself as the same person, given his loss of relationships including family and friends since his “conversion”, Nawaz spoke about commitment to justice. He said it was the blatantly unjust treatment of Muslims that motivated him to fight as an Islamist, and the same commitment to justice that makes him oppose terrorism. Ideologically, I think that this is a good start.
Most countries, including France and the US, have a mixed relationship with free speech. In the US (which I know much better), many people support free speech only for positions that they agree with, or at best for positions that they can tolerate listening to. Of course, however, true support for free speech means support for speech you abhor, hate, despise, think dangerous. Not, of course, the same as action (“your free speech stops just short of my nose”), but it certainly includes free assembly and demonstrations to express views. If one’s religious views include opposing anyone’s right to criticize your religion (or, even more, as illustrated by the Inquisition or ISIL’s massacres of Yazidis, to not adopt your religion), you are clearly endorsing a society antithetical to free speech. And, of course, with the grossly immoral series of US Supreme Court decisions that money is speech and that corporations are people who can exercise that “speech”, the entire concept of free speech in our country is perverted.
Closer to home, and closer to the usual themes of this blog, health and social justice, we see again how beliefs not only threaten free speech but threaten our ability to act as an honorable and just society, because groups of people see things so differently. The reasons given are many: our social isolation from groups of people unlike us (residential segregation by race and class and age and educational level), and our ability to receive “customized” news, where what we watch on TV or find on the Internet is that which agrees with what we already believe. When people hold views based on their faith, it may be difficult or even unreasonable to expect to change them; this is what “faith” is. However, when people hold views that are not religious and are demonstrably wrong in the face of the facts, and those beliefs are held as firmly as those that are religious, and those beliefs threaten the core well-being of other parts of our society, we would hope that they could change.
I have often written about the Social Determinants of Health. These are the conditions of people’s lives that make them more vulnerable to illness and less likely to be able to prevent it, through both health screening and living in places and circumstances in which prevention is possible. For example, not near areas of high pollution, not in poor-quality cold housing, not in no housing. To have shelter, and decent food, and the opportunity for education for themselves and their children. All the things that characterize their lives and come into play before their access, or lack of access, to the health system does. If we are to improve the health of the American people, we must not only provide equitable access to health care geographically, financially, and socially (with language access and caring and actual interest in people’s health) but also address those social determinants that disadvantage so many in the pursuit of their health.
And then I read the results of a survey by the Pew Research Center that says a majority of well-to-do Americans think that poor people “have it easy”. It was widely reported, including by the Washington Post, which leads with “There is little empathy at the top”, and CNN, which reports “54% of those with the greatest financial security believe that ‘poor people today have it easy because they can get government benefits without doing anything in return’…Only 36% of the wealthiest say ‘poor people have hard lives because government benefits don't go far enough to help them live decently.’" I want to say this is unbelievable, but I have to believe it is true that they think this. I am, nonetheless, aghast that they could think this. What world do they live in? Is it really true that their only contact with poor people is on TV news, Fox News at that? Have it easy?
Would they want to test that? Live like poor people for a while? Even knowing that – unlike real poor people – they could return to their comfort in a month or a week, would they be able to tolerate it? Not being able to pay their bills, not having heat, not having decent or sufficient food, not being able to afford the doctor, not being able to take time off work without losing pay to go to one even if they had health insurance? I think – I know – that if they did they would feel differently about it being easy to be poor. But while there is great value to “walking a mile in someone else’s shoes”, there is a way to know what is going on without even doing that. It is called opening your eyes, looking at the facts.
Sunday, January 11, 2015
In a fascinating article in the “Medicine and Society” section of the New England Journal of Medicine, “Beyond belief—how people feel about taking medication for heart disease”, Lisa Rosenbaum discusses some of the reasons that people do not take medicines prescribed for them by doctors, really for any condition, not just heart disease. These reasons go beyond the obvious ones of personally experiencing side effects and not being able to afford them; indeed, she starts out discussing the fact that folks don’t use aspirin, a very cheap drug, even after having been diagnosed with coronary heart disease, for which the evidence of benefit is very strong.
Rosenbaum addresses a number of reasons, beginning with simple belief. A friend tells her that “My parents [whom Rosenbaum describes as “brilliant and worldly”] are totally against taking any medication”. Another person she meets, prescribed a “statin” (an anti-cholesterol drug), has no intention of taking it and indeed expresses disdain that is “raw and bitter” (the disdain, not the pill). For him, it is tied to the suffering he saw his sister endure when taking toxic anti-cancer drugs. Her hairdresser suggests another reason: taking medication means acknowledging that you are sick, and people don’t want to acknowledge that. He says that he gives his grandmother her nightly medication by telling her they are vitamins—after all, vitamins are to make you healthier, not treat your sickness.
Rosenbaum tells more stories, relating more reasons, but most come down to a belief, almost to an unchangeable worldview. Some of the issues seem to be semantic. People do not want to take “chemicals”, but will take vitamins. Connotation, and the “frame” that people put around words and concepts (sickness, drugs, natural, chemical, etc.) are very important. Of course, they’re all chemicals, and of course anything (“natural” or produced in a laboratory) that can have a biologic effect (good or bad) can have other effects (good or bad). People sometimes cite the side effects of drugs even when they haven’t experienced them but have read or heard about them, and credit them with more importance than the beneficial effects. While some people have always made decisions based on creating a parallel to what happened to someone they know, the Internet has probably magnified the universe of people they “know” and stories that they “hear”.
Perhaps the scariest reason Rosenbaum points out is that the success of medical treatment has led people to minimize, in some cases, the seriousness of the disease. As a cardiologist, she points to acute myocardial infarction (heart attack), which used to require 4-6 weeks of hospitalization, and now often has people out of the hospital in 24 hours. She talks to a person who contrasts it to the flu, which “can knock you down for days or a week or two, [while] the heart attack, once they do the thing, you’re in good shape.” And yet, “once they do the thing”, whatever it is, stents or clot lysing (presumably not yet bypass, which does require a longer hospitalization), and you feel better, you still have the disease; only the use of certain drugs along with diet and lifestyle changes can modify the trajectory of the disease. But the latter are hard, and maybe we don’t want to take drugs. Because, you know, we are feeling better.
I admit to initially feeling anger, hostility, as I read the “reasons” that these people would not take medicine, feeling that they were stupid. I don’t mean that I was angry that they don’t take medicine; this is their decision. In addition, there are lots of important reasons to be wary of taking medicines that go beyond personal experience with side effects. Not the least of these is the fact that they are heavily marketed by drug manufacturers, who are in business solely to make a profit, and regularly invent new “diseases” that “need” treatment in order to market their drugs and make money. In addition, there is “indication creep” (which I have discussed before, in The cost of health care: Prevention and Indication “creep”, drugs, and the Sanders plan, June 25, 2011, particularly citing a piece by Djulbegovic and Paul, “From efficacy to effectiveness in the face of uncertainty: indication creep and prevention creep”). This means that a drug, which is found to be effective and relatively safe for a certain condition, at a certain severity level, in certain people, starts to be used by physicians (often encouraged by the manufacturers) for other people with less severe levels of conditions, and sometimes for other indications for which efficacy has not been proven. For example, starting drugs for cholesterol at levels below which treatment has been shown to reduce mortality, or putting younger (or older) people on treatments only shown to benefit older (or younger) people, or men or women.
Indeed, this appeals to another system of beliefs common in people (including doctors), that if a little is good, more is better; if reducing cholesterol in people whose level is above “X” is good, why not in people whose cholesterol is a little below “X”; if getting your average blood sugar below “Y” is good, why not a little lower still; if aspirin is good prevention and reduces death in men who have coronary heart disease, why not use it in men who don’t but otherwise look a lot like men who do? This sort of belief may lead to behavior opposite of that described by Rosenbaum (that is, taking medication when it is not of value rather than not taking medication that is likely to be of value), but it stems from the same root—making decisions based on beliefs rather than evidence. And it is not uncommon to see both behaviors manifested in the same people: someone who would “never” take “artificial chemicals” (regulated drugs) into their body who ingests large amounts of unregulated chemicals (labeled as “natural”). The apparent contradiction is non-rational to me but makes sense to them.
I often—maybe usually—agree with those who say “less is better”, such as Ezekiel Emanuel in his New York Times op-ed “Skip your annual physical”. But I hope that I do this when, as in the case of the annual physical, the evidence does not demonstrate benefit, and the cost is high, as it is for many heavily-marketed drugs. And, of course, my anger subsides as I realize that I often feel the same things, and maybe even sometimes act on them. I don’t want to be a sick person, certainly not one with a chronic disease (it’s bad enough to have the flu!) and taking a medicine for a condition labels me as such. I don’t want to take medicines just because they “might” help (prescription or over-the-counter, made by traditional pharmaceutical manufacturers or “natural” companies) if there is not good evidence, and I don’t want to experience unpleasant side effects. But I do take the medicines that have been shown to benefit people like me, with the same or similar risk factors, and even put up with some side effects (e.g., mild myopathy from the statin).
I am not going to change anyone’s worldview, no more than Dr. Rosenbaum is likely to change that of the “brilliant and worldly” friends of her parents. And I am certainly not going to become an advocate for treating for the sake of treatment, or being a flak for drug companies. But if there is strong evidence that taking a drug (in the lowest effective dose) for a condition that I in fact have (denial or not) is likely to have a “patient-important” (meaning lower risk of premature death or better quality of life) outcome, and I personally do not experience serious side effects, I will take the drug.
The key issue here is not making decisions to do, or not do something (have a physical or take a drug) because of a general belief that such things are good or bad for you, but rather to evaluate the evidence of how it might benefit or harm you, and to make a decision that balances these filtered through your own value system, how much you value the potential benefit or harm that might come.
To me, this is a rational approach.
Rosenbaum L, “Beyond belief—how people feel about taking medications for heart disease”, NEJM, 8 Jan 2015;372(2):183-87.
Djulbegovic B, Paul A, “From efficacy to effectiveness in the face of uncertainty: indication creep and prevention creep”, JAMA. 2011 May 18;305(19):2005-6.
Emanuel E, “Skip your annual physical”, New York Times, January 9, 2015.
Monday, January 5, 2015
Thursday, January 1, 2015
One of the relatively new and growing movements in family medicine is “direct primary care”, or DPC. The term seems to have a lot of different meanings depending upon who is talking about it (or, often, it is talked about in very vague terms, as are many things we want to have thought about only in positive ways; if we get too specific, people can criticize!). In general, however, it is about primary care doctors taking direct payment from patients for their services rather than getting reimbursed by insurers (including Medicare and Medicaid). This is touted to be a panacea for doctors tired of “bureaucracy” (often referring to the “government”, but certainly at least as painfully the insurance companies); of too many forms to fill out and rules to follow and loss of autonomy. The primary care doctor provides the service that s/he is capable of and the patient pays, just like in the old days (maybe barter is included, but I don’t know about paying in chickens – on a visit to the vet the other day I saw an old sign on the wall advertising a vet’s services, indicating both cash and barter—but no poultry).
There is a certain attraction to the simplicity of this arrangement. The doctor provides the services that s/he can provide (presumably not including most laboratory tests or medicines or immunizations) for a fee that is collected in cash. The patient can even apply to their insurance company for reimbursement. Voilà! Everyone is happy! The patient gets the service, the doctor does what s/he likes to do, and is freed from bureaucratic regulations and thus can operate his/her business more efficiently and with lower overhead, presumably (this is not always explicit) passing the savings on to the patient. But there are a few concerns.
The first, obviously, involves people who are too poor to pay. This may not concern some of the DPC doctors, but it does concern others, and should concern our society as a whole. We know these people; we see them regularly in our student-run free clinic (except there they do not pay anything). I have pointed out that this need not be a problem; one of the advantages of not taking insurance is that the doctor is free to charge different people different amounts. The Centers for Medicare and Medicaid Services (CMS) requires physicians accepting it to not charge anyone less than the amount they charge Medicare (not the amount Medicare actually pays). Not accepting Medicare means a doctor could charge a well-heeled person $100, and another, poorer one $25 for the same service. Or $5. Or a chicken. Or nothing. And those people with Medicare (or another insurer) could still submit a request for reimbursement for what they actually paid. I don’t know if they would be reimbursed or not. And it might be tough for the senior who can barely accomplish their basic functions to submit directly to Medicare. It all depends, as I pointed out to a colleague considering such a practice, on how much you want to make. If you are willing to make less, you can charge people less. I have no idea how many of those physicians currently practicing or planning to practice DPC are charging such a sliding scale, or taking all comers, or are willing to earn less. But it is at least theoretically possible to do this.
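To make the pricing point concrete, here is a minimal sketch of the kind of sliding scale a non-Medicare DPC doctor could offer. The income cutoffs and fees are hypothetical, invented for illustration, not drawn from any actual practice.

```python
# Hypothetical sliding-scale fee schedule for a DPC office visit.
# The income brackets and dollar amounts are invented for illustration.
def visit_fee(annual_income):
    """Return the charge for an office visit based on patient income."""
    if annual_income >= 100_000:
        return 100
    elif annual_income >= 30_000:
        return 25
    else:
        return 0  # or $5, or a chicken

print(visit_fee(150_000))  # 100
print(visit_fee(40_000))   # 25
print(visit_fee(12_000))   # 0
```

The point is simply that, freed from insurer fee schedules, the charge becomes a function the doctor chooses; how generous that function is depends on how much the doctor wants to make.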
A second concern is “what is the scope of care provided by the DPC provider?” Sometimes discussions of DPC seem to focus on treating colds, high blood pressure, sprains, etc., all the things that are currently taken care of by the increasingly common Urgent Care Centers in drug stores and big box stores. Many of these things are problems that do not need to see a provider (your mother can tell you to drink plenty of fluids, rest, and eat chicken soup – perhaps a better use for that chicken than paying the doctor!). Otherwise, it is not clear what advantages DPC offers over Urgent Care Centers, except that the latter are often staffed by Nurse Practitioners, not physicians. If you care. If the services being offered are within the scope of practice of the provider, what difference does it make? And the Urgent Care Center will take your insurance, not a small matter when it comes to the cost of immunizations, for example.
Clearly, this DPC model cannot work for problems that need to be cared for in the hospital, or that require facilities. The doctor cannot choose to be DPC only for their outpatient practice while remaining on insurance for inpatient care, so won’t provide hospital care. Or probably deliver babies. Or provide anything beyond the simplest of office-based procedures. Including the critical ones of providing long-acting reversible contraception (LARC), IUDs and implants, which have very high up-front costs, except for quite well-to-do patients. Again, it is getting hard to see the benefit of DPC over Urgent Care, except, possibly, the provision of continuity of care with the same provider. Unless, of course, you need something that cannot be done in the office. Metaphors abound; one DPC provider is quoted as saying “you don’t use auto insurance to buy your gas; why should you use health insurance to buy primary care?” I leave this question up to you, including whether the metaphor is apt. However, it clearly minimizes the scope of what primary care doctors can do.
This is a potential challenge for family medicine and other primary care providers, especially as family medicine moves into its “Health is Primary: Family Medicine for America’s Health” campaign. For a long time, other specialists have derided primary care for only taking care of simple problems. Many, including me, have argued the contrary, that primary care is difficult and complex (see, for example, my 2009 blog post “Uncomplicated Primary Care” and my recent Graham Center One-Pager “Accounting for Complexity: Aligning Current Payment Models with the Breadth of Care by Different Specialties”), but quotes like the one above seem to indicate a retrenchment, away from “full-scope” practice. Obviously, like DPC, “full-scope” can be defined in various ways, but it usually means things like caring for people in the hospital (another thing I have argued is a strength of US family medicine), delivering babies, caring for children, doing a variety of procedures, and even caring for people in intensive care. At the recent North American Primary Care Research Group (NAPCRG) meeting, several papers from the American Board of Family Medicine (ABFM) and the Graham Center indicated that in most cases greater scope of practice by family physicians led to lower cost. The ABFM developed a 0-30 scale for scope of practice, and found significantly lower costs for patients cared for by FPs with scores of 15-16 than for those with scores of 12-13 (a relatively small difference in scores). Presumably this is because those with lower scope of practice are referring more to higher-cost specialists. The interesting exception was integrated practices (like Kaiser), where the scores for FPs were low (~11.5) but costs were also low, as a result of the other surrounding services available to patients from those integrated systems. These would not be characteristic of small DPC practices.
Finally, there is the concern about “who is health care for?” Much of the interest in DPC among residents, it seems, is to make their own lives less stressed, less busy, less frustrating. Not bad things. But the ultimate and only real measure of whether our society should embrace such a trend is whether it enhances the health of our people. All our people. Rich and poor. Rural and urban. White, Black, Asian, Hispanic. Over 150 years ago, Rudolf Virchow (the Father of Social Medicine) wrote “Medical education does not exist to provide students with a way of making a living, but to ensure the health of the community.… If medicine is really to accomplish its great task, it must intervene in political and social life.”
I hope that we still believe this to be true.
Happy New Year!
Phillips RL, et al., “Health is Primary: Family Medicine for America’s Health”, Ann Fam Med, October 2014;12(Suppl 1):S1-S12.
Freeman J, Petterson S, Bazemore A, “Accounting for Complexity: Aligning current payment models with the breadth of care by different specialties”, Am Fam Physician. 2014 Dec 1;90(11):790.