Saturday, November 23, 2013

Outliers, Hotspotting, and the Social Determinants of Health

We all have, or will have, our personal health problems, and the health problems that confront those close to us in our family and among our friends. Some are relatively minor, like colds, or are temporary, like injuries from which we will heal. Some are big but acute and will eventually get all better, like emergency surgery for appendicitis, and others are big and may kill us or leave us debilitated and suffering from chronic disease. Some of us have more resources to help us deal with these problems and others fewer. Those resources obviously include things like how much money we have and how good our health insurance is, but also a variety of other things that have a great impact on our ability to cope with illness, survive when survival is possible, and make the most of our lives even when afflicted with chronic disease.

These other things are often grouped under the heading of “social determinants of health”. They include factors clearly related to money, such as having safe, stable, and warm housing, having enough to eat, and otherwise having our basic needs met. They also include support systems – having family and friends who are supportive and helpful, or, alternatively, not having them or having family and friends whose influence is destructive. And they include having a community that is safe and livable, that nurtures and protects us and insulates us from some potential harm. This concept, “social capital”, is best described in Robert Putnam’s “Bowling Alone”[1], and its health consequences in Eric Klinenberg’s “Heat Wave”[2], discussed in my post “Capability: understanding why people may not adopt healthful behaviors”, September 14, 2010.

How this affects communities is a focus of the work of Dr. Jeffrey Brenner, a family physician who practiced in one of the nation’s poorest, sickest, and most dangerous cities, Camden, NJ, and is a founder of the “Camden Coalition”. I have written about him and his work before (“Camden and you: the cost of health care to communities”, February 18, 2012); his work drew national attention in the New Yorker article by Dr. Atul Gawande in January, 2011, “The Hot Spotters”. Brenner and his colleagues have taken on that name to describe the work that they do, and have collaborated with the Association of American Medical Colleges (AAMC) to focus on “hotspotting” (www.aamc.org/hotspotter) and produce a downloadable guide to help health professionals become “hot spotters” in their own communities in ten not-easy steps. The focus of this work is on identifying outliers, people who stand out by their exceptionally high use of health care services, and on developing systems for intervening by identifying the causes of their high use and addressing them to the extent possible – activities for which traditional medical providers are often ill-suited and health care systems are ill-designed.

The essential starting point in this process, emphasized by Brenner in two talks that he gave at the recent Annual Meeting of the AAMC in Philadelphia (his home town) in early November, 2013, is identifying “outliers”. The concept of recognizing outliers was the topic of a major best seller by Malcolm Gladwell a few years ago (called “Outliers”[3]), and Brenner notes that they are the “gems” that help us figure out where the flaws, and the costs, in our system are. As described in Gawande’s article, Brenner was stimulated by work done by the NYC Police Department to identify which communities, which street corners, and which individuals were centers of crime; rather than developing a police presence (and, hopefully, pro-active community intervention) for the “average” community, they were able to concentrate their work on “hot spots”. Moving out of a crime-prevention and policing model, Brenner and his colleagues were able to obtain hospital admissions data tied to individual people and perform a “utilization EKG” of their community, looking at who had the highest rates of admissions, ER visits, and 911 calls, and seeking to determine what the reasons were.
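To make the idea of a “utilization EKG” more concrete, here is a minimal sketch of how such an outlier search might look in code. It is only an illustration of the general approach, not the Camden Coalition’s actual system (which links records across multiple hospitals and payers), and the file and column names (encounters.csv, patient_id, encounter_date) are hypothetical.

```python
# Minimal sketch of "hotspotting": flag outlier patients by utilization.
# Assumes a hypothetical CSV of encounters with columns patient_id and
# encounter_date; real linked claims data are messier and richer.
import pandas as pd


def flag_high_utilizers(encounters: pd.DataFrame, quantile: float = 0.99) -> pd.DataFrame:
    """Return patients whose encounter count is at or above the given quantile."""
    counts = (encounters
              .groupby("patient_id")
              .size()
              .rename("encounter_count")
              .reset_index())
    threshold = counts["encounter_count"].quantile(quantile)
    outliers = counts[counts["encounter_count"] >= threshold]
    return outliers.sort_values("encounter_count", ascending=False)


if __name__ == "__main__":
    df = pd.read_csv("encounters.csv", parse_dates=["encounter_date"])
    # Count only the most recent 12 months of encounters before flagging.
    recent = df[df["encounter_date"] >= df["encounter_date"].max() - pd.DateOffset(months=12)]
    print(flag_high_utilizers(recent).head(20))
```

The point is less the code than the principle: rather than designing services around the “average” patient, you let the extreme right tail of the utilization distribution tell you where to look, and then go find out why those people are there.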

Unsurprisingly, the individuals identified most often had the combination of multiple chronic diseases, poverty, and a lack of social supports – pictures of the impact of poor social determinants of health. Sometimes there were individual, specific issues – like the person who called 911 multiple times a day and was found both to live alone and to have early Alzheimer’s, so that he couldn’t remember that he had already called. Often, there were predictable community- and poverty-related issues: inadequate housing, food, and transportation, and poor understanding of the instructions given them by the health care providers that they had seen.

One example of such an effort is “medication reconciliation”, in which (usually) pharmacists compare the medications that a patient entering the hospital, clinic, or ER is supposed to be on (per their records) with what the patient says they are taking. It sounds like a good idea, and it has received a great deal of emphasis in the last several years, but it is one that Brenner calls a “fantasy” because it doesn’t involve going into people’s homes and (with them) searching through their medicine cabinets and drawers to find the piles of medications they have and often have no idea how to take – which ones are expired, which ones have been replaced by others, which ones are duplicated (maybe brand vs. generic names, or from samples). He showed a slide of a kitchen table piled high with medicines found in one house, and said that his group has collected $50,000 worth of medicines found in people’s houses that their current providers either did not know they were taking or did not want them to take.

Brenner notes that continuous, ongoing stress weakens the body and the immune system, increasing production of cortisol (a stress hormone), which has effects like those of taking long-term steroids and increases the probability of developing “metabolic syndrome” and a variety of other physical conditions. He also cites the work of Vincent Felitti[4] and his colleagues, who have shown that Adverse Childhood Experiences (ACEs), such as abuse and neglect, are associated with becoming a sick, high-utilizing person in middle age (and, if they reach it, old age). This, he indicates, is exactly what they have found doing life histories of these “outliers”. It suggests that interventions at the time someone is identified as a high utilizer can be helpful for the individual patient, for the cost to the health system, and even for the community; but it also reinforces what we should already know – that the interventions need to occur much earlier and be community-wide, ensuring safe housing and streets, effective education, and adequate nurturance for our children and their families.

We need, Brenner says, half as many doctors, twice as many nurses, and three times as many health coaches – the intensively trained community-based workers who do go out and visit and work with people at home. I do not know if those exact numbers are right, but it is clear that we need comprehensive interventions, both to meet the needs of those who are sickest now and to prevent others from becoming that sick in the future. We are not doing it now; Brenner says, “Like any market system, if you pay too much for something you’ll get too much of it, and if you pay too little you’ll get too little.”

We need to have a system that pays the right amount for what it is that we need.





[1] Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster, New York, NY. 2000.
[2] Klinenberg, Eric. Heat Wave: A Social Autopsy of a Disaster in Chicago. University of Chicago Press, Chicago. 2002.
[3] Gladwell, Malcolm. Outliers: The Story of Success. Little, Brown. New York. 2008.
[4] Felitti V, et al., “Relationship of Childhood Abuse and Household Dysfunction to Many of the Leading Causes of Death in Adults: The Adverse Childhood Experiences (ACE) Study”, Am J Prev Med 1998;14(4) (and many subsequent publications).

Sunday, November 17, 2013

Dead Man Walking: People still die from lack of health insurance

At the recent meeting of the Association of American Medical Colleges (AAMC) in Philadelphia, Clese Erikson, Senior Director of the organization’s Center for Workforce Studies, gave the annual State of the Workforce address. It had a great deal of information, and information is helpful, even if all of it is not good. She reported on a study that asked people whether they had always, sometimes, or never seen a doctor when they felt they needed to within the last year. On a positive note, 85% said “always”. Of course, that means 15% – a lot of people! – said “sometimes” (12%) or “never” (3%). Of those 15%, over half (56%) indicated the obstacle was financial: not having the money (or insurance). There are limitations to such a survey (it is self-report, so maybe people could have gone somewhere, like the ER; or maybe they surveyed your Uncle George, who would have said “always” because he never wants to see a doctor, even though you think he should for his high blood pressure, diabetes, and arthritis!), but it is not good news.
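For a sense of scale, a quick back-of-the-envelope calculation, using only the self-reported percentages quoted above, suggests that roughly one in twelve respondents went without needed care for financial reasons:

```python
# Back-of-the-envelope: share of all respondents reporting a financial
# barrier to care, from the survey figures quoted above (self-reported).
sometimes_or_never = 0.12 + 0.03   # 15% could not always see a doctor when needed
financial_share = 0.56             # of that 15%, 56% cited money or insurance
print(f"{sometimes_or_never * financial_share:.1%} of all respondents")  # ~8.4%
```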

Of course, as former President George W. Bush famously said in July, 2007, “I mean, people have access to health care in America. After all, you just go to an emergency room.” Many of us do not think that the ER is a very good substitute for a regular source of care in terms of quality. Also, if you have had to use the ER regularly for your care and already have a huge stack of unpaid bills from them, it can make you reluctant to return. This likely contributes to the “sometimes” responses, probably often meaning “sometimes I can ride it out, but sometimes I am so sick that I have to go even though I dread the financial result.” Following this ER theme, another leading Republican, Mitt Romney, declared repeatedly during the 2012 Presidential campaign that “No one dies for lack of health insurance,” despite many studies to the contrary – and despite the fact that, as Governor of Massachusetts, he presumably thought it was a big enough issue that he championed the passage of a model for the federal Affordable Care Act in his state.

People do, in fact, die for lack of health insurance. They may be able to go to the ER when they have symptoms, but the ER is for acute problems. Sometimes a person’s health problem is so far advanced by the time they have symptoms severe enough to drive them to the ER that they will die, even though the problem might have been successfully treated if they had presented earlier. Or the ER makes a diagnosis of a life-threatening problem, but the person’s lack of insurance means that they will not be able to find follow-up care, particularly if that care is going to cost a lot of money (say, the diagnosis and treatment of cancer). If you doubt this still, read “Dead Man Walking”[1], a Perspective by Michael Stillman and Monalisa Tailor in the October 23, 2013 New England Journal of Medicine (grab a tissue first).

We met Tommy Davis in our hospital's clinic for indigent persons in March 2013 (the name and date have been changed to protect the patient's privacy). He and his wife had been chronically uninsured despite working full-time jobs and were now facing disastrous consequences.

The week before this appointment, Mr. Davis had come to our emergency department with abdominal pain and obstipation. His examination, laboratory tests, and CT scan had cost him $10,000 (his entire life savings), and at evening's end he'd been sent home with a diagnosis of metastatic colon cancer.

Mr. Davis had had an inkling that something was awry, but he'd been unable to pay for an evaluation...“If we'd found it sooner,” he contended, “it would have made a difference. But now I'm just a dead man walking.”

The story gets worse. And it is only one story. And there are many, many others, just in the experience of these two physicians. “Seventy percent of our clinic patients have no health insurance, and they are all frighteningly vulnerable; their care is erratic.” And the authors are just two doctors, in one state – a state which (like mine) starts with a “K”, which (like mine) is taking advantage of the Supreme Court decision on the ACA to not expand Medicaid, and which (like mine) has two senators who are strong opponents of the ACA, which means, de facto, that they oppose reducing the number of people who are uninsured. I cannot follow their thinking, but it really doesn’t matter, because it is ideology and they have no plan to improve health care coverage or access. So people like Mr. Davis will continue to die. This same theme is reflected in a front-page piece in the New York Times on November 9, 2013, “Cuts in hospital subsidies threaten safety-net care” by Sabrina Tavernise:

Late last month, Donna Atkins, a waitress at a barbecue restaurant, learned from Dr. Guy Petruzzelli, a surgeon here, that she has throat cancer. She does not have insurance and had a sore throat for a year before going to a doctor. She was advised to get a specialized image of her neck, but it would have cost $2,300, more than she makes in a month. ‘I didn’t have the money even to walk in the door of that office,’ said Ms. Atkins.

In a recent blog post about the duration of medical education, I included a graphic from the Robert Graham Center which shows the increased number of physicians that the US will need going forward, mostly as a result of population growth but also from the aging of that population, along with a one-time jump because of the increased number of people who will be insured as a result of the ACA (this will, I guess, have to be adjusted down because of the states that start with “K” and others that are not expanding Medicaid). Ms. Erikson included this graphic in her talk at the AAMC, with numbers attached. Just from population growth and aging, we will require about 64,000 more physicians by 2025 (out of 250,000-270,000 total physicians). The one-time jump because of the ACA is about 27,000, bringing the number to 91,000.
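To keep the arithmetic straight (all of these are the approximate figures presented in the talk, not my own estimates), the two components and what they amount to relative to the baseline quoted above:

```python
# Quick check of the workforce numbers quoted above (figures approximate,
# as presented; the baseline range is the one given in the text).
growth_and_aging = 64_000      # additional physicians needed by 2025
aca_one_time_bump = 27_000     # one-time jump from newly insured under the ACA
total_needed = growth_and_aging + aca_one_time_bump
baseline_low, baseline_high = 250_000, 270_000
print(total_needed)            # 91,000
print(f"{total_needed / baseline_high:.0%} to {total_needed / baseline_low:.0%} "
      "increase over the quoted baseline")   # roughly 34% to 36%
```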

But, of course, there is a big problem here. The projection that we will need more doctors because we have more people, or because our population is aging and older people need more medical care, is one thing. But the need for more doctors because more people will be insured? What is that about? Those people are here now, and they get sick, and they need care now, no less than they will when they are covered in the future. I do not mean to be critical of the Graham Center or Ms. Erikson for presenting those data. I do, however, think that we should emphasize how offensive the idea is that we will need more doctors just because more people will have coverage. They didn’t need doctors before, when they didn't have insurance?

If there are people who cannot access care, we need to be able to provide that care. We will need more health care providers, including more doctors, especially more primary care doctors. We need health care teams, because there will not be enough doctors, especially primary care doctors. We need the skills of health workers who can go to people’s homes and identify their real needs (see the work of Jeffrey Brenner and others, discussed in “Camden and you: the cost of health care to communities”, February 18, 2012). We need to ensure that people have housing, and food, and heat, and education – to address the social determinants of health.

Decades ago, I heard from someone who visited Cuba a few years after the revolution. He said he mentioned to a cab driver the dearth of consumer goods, such as shoes, in the stores. The cab driver said, “We used to have more shoes in the stores, but now we first make sure that they are on children’s feet before we put them in store windows.” There was enough before the revolution, enough shoes and enough milk, as long as a lot of people were not getting any. The parallel is that now, in the US, if we seem to have enough health clinicians, it is because there are lots of people not getting health care.

This is not OK. It isn’t OK with the ACA, and it isn’t OK without it.






[1] Stillman M, Tailor M, “Dead Man Walking”, N Engl J Med, October 23, 2013. DOI: 10.1056/NEJMp1312793.

Sunday, November 10, 2013

Does quality of care vary by insurance status? Even Medicare? Is that OK?

While the Affordable Care Act will not lead to health insurance coverage for everyone in the US (notably poor people in the states that do not expand Medicaid, as well as those who are undocumented), it will significantly improve the situation for many of those who are uninsured (see What can we really expect from ObamaCare? A lot, actually, September 29, 2013). The hope, of course, is that health insurance will lead to increased access to medical care and that this access will improve people’s health, both through prevention and early detection of disease, and through increased access to treatment when it is needed, including treatment that requires hospitalization. Implicit in this expectation is the assumption that the quality of care received by people will be adequate, and that the source of their insurance will not affect that care.

This may not be true. I spent a large portion of my career working in public hospitals. I absolutely do not think that the care provided by physicians and other staff in those hospitals was different for people with different types of insurance coverage (many or most patients were uninsured), and indeed for many conditions the care was better. But the facilities were often substandard, since they depended upon the vagaries of public funding rather than the profit generated from caring for insured patients. The physical plants were older and not as well maintained, staffing levels were lower, and the availability of high-tech procedures was often less. There have been changes: the Cook County Hospital I worked in through the late 1990s, with antiquated facilities including open wards and no air-conditioning, has been replaced by the very nice (if overcrowded) John P. Stroger, Jr. Hospital of Cook County. University Hospital in San Antonio, where I worked in the late 1990s, may have been seen by the more well-to-do as a poor people’s hospital, but in many areas, including nurse turnover and state-of-the-art imaging facilities, it outdid other hospitals in town. Still, the existence of public hospitals suggests two classes of care, and as we know, separate is usually unequal.

But what about the quality of care given to people with different insurance status in the same hospital? Surely, we would expect there not to be differences: differences based on age, yes; on illness, yes; on patient preference, yes. But on who their insurer is? Sadly, Spencer and colleagues, in the October 2013 issue of Health Affairs, call this assumption into question. In “The quality of care delivered to patients within the same hospital varies by insurance type”[1], they demonstrate that quality-of-care measures for a variety of medical and surgical conditions are lower for patients covered by Medicare than for those with private insurance. Because Medicare patients are obviously older, and thus probably at higher risk, the authors controlled for a variety of factors including disease severity. The most blatant finding was that risk-adjusted mortality was significantly higher in Medicare patients than in privately insured patients.
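For readers who want a sense of what “controlling for” risk means operationally, here is a rough illustration of the general approach: a logistic regression of in-hospital death on insurance type plus risk factors. This is not the authors’ actual specification, which used much richer clinical data, and the file and column names (discharges.csv, died, insurance, severity_score) are hypothetical.

```python
# Illustrative only (not the Health Affairs authors' model): compare
# in-hospital mortality by insurance type while adjusting for patient risk.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical discharge-level data: died (0/1), insurance, age, sex, severity_score.
patients = pd.read_csv("discharges.csv")

model = smf.logit(
    "died ~ C(insurance, Treatment(reference='Private')) + age + C(sex) + severity_score",
    data=patients,
).fit()
print(model.summary())
# A positive, significant coefficient on the Medicare level would indicate
# higher adjusted odds of death than for privately insured patients --
# the kind of gap the study reports.
```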

This is Medicare. Not Medicaid, the insurance for poor people, famous for low reimbursement rates. It is Medicare, the insurance for older people, for our parents, for us as we age. For everyone. Medicare, the single-payer system that works so well at covering everyone (at least those over 65). (One of the reasons the authors did this study was the existing perception -- and some evidence -- that Medicaid and uninsured patients, as a whole, received lower quality care, but that was related to their care often being delivered at different hospitals.) The increase in mortality rates for Medicare patients compared to others with the same diagnosis was often substantial. But why?

Our hospital has clearly demonstrated that, essentially, Medicare is its poorest payer, and that, on the whole, it loses money on Medicare patients. This may well be true at other hospitals, but in itself it should not account for lower quality of care, just lower profit. I strongly doubt that either our hospital or the physicians caring for these patients believe that they deliver lower quality care to Medicare patients, or that they are more reluctant to do expensive tests or provide expensive treatments when they are indicated. And yet, at the group of hospitals studied (if not mine, perhaps), it is true. The authors speculate as to what the reasons might be. One thought is that Medicare (and other less-well-insured) patients might have worse physicians (“slower, less competent surgeons”); in some teaching hospitals, perhaps they are more likely to be cared for by residents than attending physicians. However, I do not believe, and have not seen good evidence, that this is the case. Another possibility is that newer, more expensive technologies are provided for those with better insurance. There is not good evidence for this, either, nor for another theory, that more diagnoses (“co-morbidities”) are listed on patient bills to justify higher reimbursements. I think that there is an increasing trend to do this (not necessarily inappropriately), and that, as the authors indicate, the trend is greater among for-profit than teaching hospitals, but in itself this does not suggest a significant difference for privately insured patients compared to those covered by Medicare.

What, then, is the reason? Frankly, I don’t know. It could be simply a coding issue: that is, hospitals list more intercurrent (co-morbid) conditions for private patients in hopes of greater reimbursement, which makes those patients appear sicker than Medicare patients when the latter are actually sicker. Or it may be that less experienced physicians and surgeons care for Medicare patients. Or it may be that, despite the willingness of physicians, hospitals are less likely to provide expensive care for patients who, like those covered by Medicare, are reimbursed by diagnosis rather than by the cost of treatment. Indeed, there may be other patient characteristics that lead to inequities in care and confound this study, but the idea that it may be because they are insured by Medicare is pretty disturbing.

Actually, in any case it is disturbing. It is already disturbing enough that a large portion of the US population is uninsured or underinsured, and that even with full implementation of the ACA there will still be many, if fewer, of us in that boat. It is disturbing to think that those who are poor and uninsured or poorly insured receive lower quality of care, possibly from less-skilled or less-experienced physicians, than those with private insurance. It is understandable (if not acceptable) that hospitals, physicians, and rehabilitation facilities might prefer to care for relatively young, straightforward patients with a single diagnosis, low likelihood of complications, and clean reimbursement. But if people are receiving poorer-quality care because they are our seniors, that is neither understandable nor acceptable.

It is another strong argument for everyone being covered by the same insurance, by a single-payer plan. Then, whatever differences in quality might be discovered, it would not be by insurance status.



[1] Spencer CS, Gaskin DJ, Roberts ET, “The quality of care delivered to patients within the same hospital varies by insurance type”, Health Affairs Oct 2013;32(10):1731-39.

Saturday, November 2, 2013

Should Medical School last 3 years? If so, which 3?


[Figure: Robert Graham Center projection of US primary care physician workforce needs, 2010-2025]

As we look at how to increase the number, and percent, of students entering primary care residency programs, it is interesting to see how some schools have creatively tried to address the problem. Texas Tech University Medical School and Mercer University Medical School’s Savannah campus have begun to offer MD degrees in 3 years to a select group of students who are both high performers and planning on Family Medicine careers, thus decreasing their indebtedness (one less year of school to pay for) and getting them into family medicine residencies sooner; several other schools are considering the same. They do this by essentially eliminating the fourth year of medical school. This is the subject of a piece by surgeon Pauline Chen, “Should medical school last just 3 years?”, in the New York Times. She discusses different perspectives on the fourth year, previous experiences with reducing the length of medical school training, and two ‘point-counterpoint’ essays on the topic in the New England Journal of Medicine.

Chen addresses prior efforts to shorten medical school, including the most recent precursor of the current one. Specifically aimed at increasing the number of highly-qualified students entering Family Medicine residencies, it was implemented at several schools in the 1990s and allowed students to effectively combine their 4th year of medical school with their first year of family medicine residency, thus completing both in 6 years. The programs were successful by all criteria: students did well on exams and were able to save a year of tuition money, and medical schools were able to retain some of their best students in family medicine. Of course, therefore, the programs were stopped. In this case the villain was the Accreditation Council for Graduate Medical Education, which decreed that, because students did not have their MD when they started residency training (the degree was granted after the first year, a combined 4th year of medical school and internship), they were ineligible for residency training. Thus this newest iteration offers the MD degree after three years.

An older effort to shorten medical school is also mentioned, one with which I have personal experience. In the 1970s, “as many as 33 medical schools began offering a three-year M.D. option to address the impending physician shortages of the time.” One of those was the Loyola-Stritch School of Medicine, where the only curriculum was the 3-year one. In 1973, I was in the second class entering that program. We spent 12 months in ‘basic science’, pretty much just in classes in the mornings, and then two full years in clinical training. Chen writes that “While the three-year students did as well or better on tests as their four-year counterparts, the vast majority, if offered a choice, would have chosen the traditional four-year route instead.” I have no idea where she gets this impression; it is certainly not at all my memory. Our friends across town at the University of Illinois went to school for two years of basic science, 8 hours a day to our 4. We did not envy that. As Chen notes, we did just as well on our exams, and saved a year’s tuition, and I daresay no one could tell the difference in the quality of the physicians graduating from the two schools, whether when they entered residency in 1976 or today, after 37 years of practice. Again, it was all good.

And, again, it was stopped. Why? Of course, the experiment only led to one additional class of physicians being produced (after that, it was still one class per year), so that benefit expired, but what about the other benefits that I have cited? Why wasn’t the program continued? Chen hits the nail on the head in her next paragraph: “The most vocal critics were the faculty who, under enormous constraints themselves to compress their lessons, found their students under too much pressure to understand fully all the requisite materials or to make thoughtful career decisions.” In particular, the basic science faculty, who taught the first two-years-now-compressed-into-one of school. The fact that students did just fine on USMLE Step 1 and became good doctors was apparently insufficient to convince them. They made arguments like the one above, shifting the problem onto the students (“they” were under too much pressure) rather than acknowledging that it was the faculty who felt the pressure. I can’t remember anyone wishing they had another year to spend in basic science lectures.

The truth is that there is no magic amount of basic science educational time needed to become a doctor. The amount of time needed is the amount necessary either to (1) learn enough to pass USMLE Step 1, a fine utilitarian standard, or (2) learn the key pieces of basic science information that every physician needs to know in order to be able to practice quality medicine. While some basic science faculty might bridle at the idea of #1 (“Teach to the test? Moi?”), trying to identify what comprises #2 is a lot of work. It is easier to teach what we have always taught, what the instructors know about. If the reason for more time were the amount of basic science knowledge, then what required two years 35 years ago would require 10 or more years to teach now, because so much more is known. That is not feasible. The right answer is #2, but getting folks to do it is hard.

Chen quotes Dr. Stanley Goldfarb, lead author of the perspective piece against three-year programs, as saying, “You can’t pretend to have a great educational experience without spending time on the educational experience,” which is of course true but raises the question of what those experiences should be. If we are going to decrease the length of time students are in medical school, it makes much more sense to reduce the amount of time spent learning basic science factoids that most will forget after USMLE Step 1 (reasonably enough, since they will never need most of that information again) and to focus on adult learning by teaching the information that all physicians do need to know. This effort requires clinicians having major involvement in the decision about what that is. It makes much less sense to remove one of the years of clinical training; instead, that training should be augmented, becoming less about vacations and “audition clerkships” and more about learning. Why this is unlikely to happen, of course, has nothing to do with educational theory or the quality of physicians produced and everything to do with medical school politics. There is no constituency on the faculty for the fourth year, and there is a strong basic science faculty constituency for the first two.

Yes, we need more primary care doctors, lots of them, and we may need more doctors altogether, to help meet the health needs of the American people, and we need them soon. Data from the Robert Graham Center of the American Academy of Family Physicians (AAFP)[1] (see the figure above) show the projected increase in need, including the one-time bump from the ACA, which will bring a large number of people who have not had access into care, and the longer-term need from population growth and aging. Programs that increase the number of primary care doctors (like the 6-year family medicine programs of the 1990s) are good. Programs that decrease the number of years by reducing basic science courses rather than clinical time obviously make more sense from the point of view of having well-trained doctors. (Programs like the 3-year option at NYU, which is not even geared to training more primary care doctors, are, from this point of view, irrelevant.) We need these not to be pilots, but to be scaled up to produce more clinically well-trained primary care doctors.

And we need to do it soon. Medical school turf battles should not be the determinant of America’s health.







[1] Petterson SM, et al., “Projecting US Primary Care Physician Workforce Needs: 2010-2025”, Ann Fam Med November/December 2012;10(6):503-509. doi: 10.1370/afm.1431.