Thursday, July 26, 2012
The cost of medical care gets a lot of attention from politicians and policy pundits (including both the influential and people like me); we are often told that Medicare is going to bankrupt the nation, that people are getting unnecessary, expensive, and potentially harmful services (except, of course, when those services are being received by the speaker or writer or those they care about). We are also told that quality and cost control can go hand-in-hand. While sometimes they can, they do not always. As I have noted in the past, prevention does not always save money in the long term. (I guess if we wanted to save money on health care, we’d encourage people to smoke, eat fatty food, and not exercise, so they could have their heart attacks young – and of course not treat them – so they’d never get old enough to be the multi-morbidity high cost patients!)
Two recent “Perspectives” in the New England Journal of Medicine address this from different angles. “Cents and sensitivity: teaching physicians to think about costs” (July 12, 2012), by Rosenbaum and Lamas, looks at how physicians in training (students and residents) are taught what medical tests cost, and concludes that the answer is: very little. They open with a typical rendition of a student presenting a new patient to the residents and attending (faculty physician). As the student painfully proceeds through less and less probable diagnoses for a person who almost certainly has pneumonia, the list of expensive tests to be done to “rule out” the improbable grows. “Our profession has traditionally rewarded the broadest differential diagnosis and a patient care approach that uses resources as though they were unlimited.” The issue is not that we should consider only one diagnosis; it is that expensive tests looking for the most unlikely diagnoses need not be done immediately, but only when a patient is not responding to therapy for the most likely one (including after testing to “rule in” or “rule out” common, not rare, competing diagnoses). We certainly do not need to do every possible test that can be done to make a diagnosis even after the first, best test confirms the clinical suspicion; this is the basis of an educational model, cited by the authors, developed by Chris Moriates.
Radley and Schoen, from the Commonwealth Fund, write in the July 5, 2012 issue about “Geographic Variation in Access to Care — The Relationship with Quality”. This draws on data from the most recent Commonwealth Fund Scorecard, “Rising to the Challenge”, published in March 2012, and examines how health care quality varies dramatically depending on the area in which you live. This is largely regional, but there are also “sub-regional” differences. They discuss a number of the common areas in which quality can vary, including the share of adults with a usual source of care (93% in the best areas vs. 59% in the worst), high-risk adults who visited a doctor for a checkup in the past 2 years (95% vs. 67%), adults over age 50 who received recommended preventive and screening care (59% vs. 26%), and adult patients with diabetes who received recommended diabetes care (69% vs. 27%). They note that “…when we look beyond state averages, there are staggeringly wide gaps in people's ability to gain access to care in different communities around the country. We also find a strong and persistent association between access and health care quality, including the receipt of preventive care. Simply put, where a person lives matters — it influences the ability to obtain health care, as well as the probable quality of care that will be received — though it should not matter in an equitable health care system.”
But the most important contribution that they make is to state, matter-of-factly, that not having insurance is itself a negative quality indicator: there is “even [my bold] variation on such fundamental measures as having health insurance or a connection to a regular source of care.” The attached map shows the regional and sub-regional variations in health insurance; white areas have the lowest level of uninsurance (5-14%, with Massachusetts lowest), and black areas (on the Texas-Mexico border) the highest, over 50%.
The article by Rosenbaum and Lamas cites the views of a number of medical ethicists, including several who believe that it is an abrogation of the Hippocratic oath to limit the care provided to the individual patient in front of you based on cost. I do not agree; while the primary criterion should be a consideration of the cost-benefit ratio (how much will this help the patient per dollar of cost), it is also true that there are certain interventions for certain conditions that are too costly to provide equally to everyone who needs them. And that is the crux of the issue. While one can (if a bit disingenuously) say “I cannot worry about ‘society’, I have to care for the patient in front of me,” the fact is that the patient in front of you for whom you may be considering an expensive intervention is not randomly selected. At least in the US, it is probably someone with health insurance that will pay much of the cost. It is certainly someone who has made it through the medical maze to get your attention. If the person in front of you can afford to pay for any service, whether they need it or not, but there are others who cannot pay for even the services they most definitely require, this is neither coincidence nor irrelevant.
Perhaps the primary responsibility for cost control should not be at the individual doctor-patient level, but at the societal level, as is done in Great Britain through the National Institute for Health and Clinical Excellence (NICE), which evaluates interventions and decides, based on cost-benefit ratios, whether the National Health Service will pay for them. However, as individuals’ out-of-pocket expenses (their contributions to insurance premiums, deductibles, and co-pays) continue to increase, more and more people are finding that, insured or not, cost is an issue. Remember that “low cost” is relative; most “low cost” interventions are still a lot of money, easily moving into 4, 5, or 6 digits, for folks to pay out of their pockets. Rosenbaum and Lamas end their article with “Protecting our patients from financial ruin is fundamental to doing no harm.”
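The kind of calculation a body like NICE performs can be sketched in a few lines. This is only an illustration of the general incremental cost-effectiveness logic; the costs, QALY (quality-adjusted life year) values, and threshold below are hypothetical numbers I have chosen, not actual NICE figures.

```python
# Sketch of incremental cost-effectiveness arithmetic (illustrative only;
# all numbers here are assumed, not taken from NICE or the articles above).
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

ratio = icer(cost_new=45_000, cost_old=15_000,
             qaly_new=6.0, qaly_old=4.5)   # 30,000 extra cost / 1.5 extra QALYs
threshold = 30_000  # a hypothetical willingness-to-pay cutoff per QALY
print(ratio, ratio <= threshold)
```

The point of the exercise is that the decision attaches to the ratio, not to the raw price: an expensive treatment can still be funded if it buys enough additional healthy life, and a cheap one can be rejected if it buys almost none.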
We may have different perspectives on where the limits are in providing costly care to an individual, but making sure that everyone, wherever they live, has access to quality care is critical. And ensuring that it is not financial or insurance status that limits access is the first step.
Thursday, July 19, 2012
On June 22, 2012, the New York Times published an article on the results of the Oregon lottery. No, this was not your “pick 3” or “powerball”; this was a lottery to get publicly funded health insurance. In “In Oregon, Test Case for Health Overhaul, Better Care at a Cost”, Annie Lowrey describes the outcome of Oregon conducting, in 2008, an actual lottery for working-age adults living in poverty to get on to Medicaid. It was not, presumably, intended as an experiment (although certainly people knew that it would end up being one), but rather the result of the state not having enough money to enroll everyone in that category.
The results, after 4 years, should surprise no one. The study “has found that gaining insurance makes people feel healthier, happier and more financially stable,” and that “The insured were 25 percent less likely to have an unpaid medical bill sent to a collection agency and 40 percent less likely to borrow money or skip paying other bills in order to cover their medical costs.” First of all, it is obvious: having coverage makes it possible to go to the doctor to care for chronic disease and actually get better, or keep it from getting worse; it means you don’t have to forgo paying the rent or electric bill or buying food to get care; and it saves you from bankruptcy when you do have to go to the hospital. Second of all, a similar study was done before: the RAND Health Insurance Experiment of the 1970s and 1980s, which followed the result of giving free care, or care with a co-pay, to previously uninsured adults. A large number of publications resulted from this study, which was led by Joseph Newhouse. A key finding was that people with free care used more care than those who had to pay a co-payment (and much more than those with no insurance). This included care such as going to the doctor for minor conditions (something many health care pundits consider “inappropriate” use of care, except, of course, when they are doing it). It also, however, included care that everyone agrees was “appropriate” – that cured acute conditions, controlled chronic disease, and prevented death.
Newhouse, along with Amy Finkelstein (“… the most recent winner of the John Bates Clark Medal, an economic prize considered second only to the Nobel”), was the evaluator of the Oregon study; their high credentials lend credibility to results which would only have been incredible if they had gone the other way. Another obvious finding is that the insured spent more on health care than those who were uninsured. This finding, Ms. Lowrey says, was “dashing [to] some hopes of preventive-medicine advocates who have argued that coverage can save money — by keeping people out of emergency rooms, for instance.” Well, I’m sorry that getting care didn’t cost less than not getting care, but it is very hard to argue that this is a credible argument against helping people get health care. Besides, neither the total amount spent by the newly insured nor the difference between the groups was very much: “…the newly insured spent an average of $778 a year, or 25 percent, more on health care than those who did not win insurance.”
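The two figures in that quote imply how little was being spent in the first place. A quick back-of-the-envelope check, assuming (as the phrasing suggests) that the 25% is measured relative to the uninsured group's annual spending:

```python
# Back-of-the-envelope check of the Times figures quoted above, on the
# assumption that the 25% increase is relative to the uninsured baseline.
extra_spending = 778        # dollars/year more spent by the newly insured
relative_increase = 0.25    # the same gap expressed as a fraction

uninsured_baseline = extra_spending / relative_increase   # implied baseline
insured_total = uninsured_baseline + extra_spending       # implied insured total
print(round(uninsured_baseline), round(insured_total))
```

On these assumptions, the uninsured were spending on the order of $3,100 a year and the newly insured under $3,900, which puts the "extra cost" of coverage in perspective.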
Note the phrase “win insurance.” Not “had insurance.” It was a lottery, remember. The winners did a lot better than those folks who didn’t win. It’s kind of like being the third-world kid who is lucky enough to “win” by living in the “right” village where a “mission” trip comes to do surgery for your congenital anomaly. The Oregon lottery, even if it wasn’t intended as research, does illustrate why some people fear participating in research. They think that they will be “experimented on” and that they may not get treatment that will work and save them. It is often hard to explain to people that we, the researchers, don’t know what works, what will save them, until after we have done the study. The legacy of the Tuskegee syphilis study continues to poison the well in terms of recruiting study participants, especially among minority groups like African-Americans. In Tuskegee, poor black men in the South were followed for four decades to determine the “natural history” of syphilis. Most outrageous, of course, was that the study continued for decades after effective treatment for syphilis, penicillin, was available, and they were not treated.
The Oregon health lottery is, in many ways, not like Tuskegee. It selected people randomly, through a lottery, not targeting any particular racial group. Of course, by its nature, it targeted poor people – working age adults who did not have health insurance. One can imagine Tuskegee researchers saying that they weren’t really racist, that if they wanted to study the natural history of syphilis they had to study the population that had it – poor black men in the South. Of course, it was racist.
The most important similarity between Tuskegee and Oregon is that we withheld treatment that we knew would work in both. The previous work done by the RAND Health Insurance Experiment (HIE) proved what was obvious even before – that having health coverage would improve people’s health. The Times notes that many of the Oregon winners “…said that Medicaid had made a significant — even transformative — difference in their lives.” It would have made the same difference in the lives of the lottery losers. Of course, this is the nature of a lottery; the winners do better than the losers. But this is a lottery about people’s lives and health.
I realize that what I have written might be seen as an attack on Oregon, saying that it did something bad. Quite the contrary; at least Oregon, for the second time in the last 30 years, has made an effort to do what it could to help as many uninsured poor adults as it could, and did it in a reasonably fair way, by a lottery. Compared to most states, certainly including my own, Kansas, it is an admirable effort that has transformed the lives of many people that the rest of our states seem to not care about. But it is beyond the time for such experiments; the results are in. It is time to cover everyone. It is time to go beyond what an Affordable Care Act rescued by the Supreme Court will provide. It is time to expand Medicare to everyone.
Because that’s the least we can do.
Thursday, July 12, 2012
Multimorbidity, primary care, social determinants, and universal insurance: where they all come together
Tinetti, Fried, and Boyd, writing in JAMA June 20, 2012, discuss “Designing health care for the most common chronic condition – multimorbidity.” They note that adult patients with only a single chronic disease are the exception (e.g., only 17% of people with coronary disease have that as their only chronic condition) and the rate of multi-morbidity increases with aging. However, the medical system is organized around individual diseases, both in terms of reimbursement (based upon International Classification of Diseases, 9th Edition, or ICD-9, codes) and in terms of specialty structure. Thus, cardiologists care for heart disease (only), oncologists for cancer (only), endocrinologists for diabetes and thyroid disease, etc.
Moreover, they observe that even more recent efforts to reward quality have been single-disease focused, with metrics related to acute myocardial infarction (heart attack), pneumonia, and particular surgical procedures. This is added to the fact that these criteria focus on hospitalized patients, rather than on efforts to keep them well. They state that “To align with the clinical reality of multimorbidity, care should evolve from a disease orientation to a patient goal orientation, focused on maximizing the health goals of individual patients with unique sets of risks, conditions, and priorities.” This is a long way of saying care should be patient-centered. They also say that “The process for assigning responsibility for providing clinical care also needs redesign, perhaps beginning with a systematic process for determining which clinician should have primary responsibility for helping patients make decisions,” which is a long way of saying people need generalists, or primary care physicians.
This group does not want to call them generalists, though. Perhaps this is because they are from Yale, a school well-known for its research and for its high-tech tertiary and quaternary care capability, but woefully weak in training physicians to provide general, or primary, care. It doesn’t even have a Family Medicine department, despite the fact that this is the specialty that provides the largest number of primary care physicians; these authors, then, are from Internal Medicine. They have suggested that such physicians be called comprehensivists rather than generalists because the latter term “fails to capture the breadth of skills and expertise required” to care for patients with multiple comorbidities.
While this article suggests nothing new (for example, I have addressed these issues several times, including Primary Care: What takes so much time? And how are we paying for it?, May 21, 2010, and Primary Care’s Image: A Problem?, November 17, 2009), it is good that it keeps these issues on the table. Providers, particularly hospitals, want to be paid for metrics that are easily identifiable and relatively easy to achieve. Students choosing careers often want to pick a field in which they can feel that they are masters of a limited field of knowledge. Patients sometimes want to get help for a specific problem from a particular specialist. But everyone is better served if there is coordination of care and decisions regarding the care of one condition take the others into consideration. This means that medications which have negative interactions or countervailing effects are less likely to be prescribed. It means that the difficult decision about whether or not to have a particular surgical procedure is taken in the context of all of the health issues confronting the person. It means that decisions about interventions in desperate situations or at end of life are made wisely, and in full possession of the available information, without bias toward treatment of a particular disease regardless of its impact on others.
In a recent “Doctor’s Blog” on the British Medical Journal’s (BMJ) “doc2doc” site [disclaimer: I also blog at this site], one doctor, Dr. Lush, presented their thoughts on “Is prevention ALWAYS better than cure?”. I do not agree with all of Dr. Lush’s points, and am not sure I even understand them all, but that “…as we get older the risks of many diseases increases so that many patients end up on a cocktail of preventative drugs, probably 2 antihypertensives, aspirin, beta blocker, statins, anti-inflammatory medications, diuretic, asthma treatment, type 2 diabetic treatments, analgesics, etc etc.” is a fact. Many of these medications can be for either prevention or treatment or both (remember the concepts of secondary and tertiary prevention, so that treatment of one condition – say high blood pressure – can be prevention of another – say heart attack), but they often lead to patients saying “too much!” Worse than that, some may have opposing effects – the anti-inflammatory medication you take for your arthritis can lead to GI bleeding and kidney failure. The narcotics you take for your pain, in addition to the better-known negatives of addiction, cause constipation so serious it may well be the source of even worse symptoms.
Even the presence of a National Health Service is not sufficient. A study from Scotland published recently in the Lancet, “Epidemiology of multimorbidity and implications for health care, research, and medical education: a cross-sectional study”, demonstrates an extremely high rate of multi-morbidity in that country, with much higher rates in poorer communities. The “[O]nset of multimorbidity occurred 10–15 years earlier in people living in the most deprived areas compared with the most affluent”. The authors conclude that their findings “…challenge the single-disease framework by which most health care, medical research, and medical education is configured. A complementary strategy is needed, supporting generalist clinicians to provide personalised, comprehensive continuity of care, especially in socioeconomically deprived areas.”
Of course, unlike Britain, which has a National Health Service, the US does not cover everyone. A Kaiser Family Foundation (KFF) “health reform subsidy calculator”, cited by Don McCanne in his Quote of the Day, demonstrates the amazing out-of-pocket costs for health insurance that come with slight incremental increases in family income, costs that would be mitigated, although not eliminated, by the Affordable Care Act. This creates a real difference in our two health cultures: many people in the US do not seek care because of the financial barriers, and when they do, it is often only for acute episodes.
In terms of having a supply of physicians who can fill the role of caring for multiple morbidities, Britain has a much more extensive primary care base than the US. It is possible that their system has as great a risk as ours of generalists not being sufficiently “comprehensivist”; our system, with more hospitalists, is moving in the British direction of having primary care doctors who do not follow their patients into the hospital. But in the US, we lack a sufficient number or percentage of primary care doctors altogether.
The reality, as I have often observed before, is that a comprehensive national health insurance system is a necessary, if not sufficient, component of a plan to actually ensure health. Two other major components are also necessary. The first is addressing the social determinants of health, which are largely associated with class/socioeconomic status, and the second is having an adequate primary care base. And while, as the Scottish study indicates, the national health service in Britain does not guarantee either, it does provide a vehicle for addressing the second and mitigates the impact of the first.
The absence of such a system in the US makes the problems of an inadequate primary care workforce and the impact of socioeconomic disparities much worse.
Tinetti ME, Fried TR, Boyd CM. “Designing health care for the most common chronic condition – multimorbidity”. JAMA 2012 Jun 20;307(23):2493-4.
Wednesday, July 4, 2012
Three articles in the NY Times over a two-day period addressed the circumstances of a person’s (or, in medical parlance, “the patient’s”) visit to the doctor and their expectations. On Sunday, June 3, “Let’s (not) get physicals” by Elizabeth Rosenthal called into question the American habit (?) belief (?) that there is something called an “annual physical” that everyone should get to maintain their health, even if they are not having any symptoms. Rosenthal says that they are not necessary, and can even be harmful, and that the US is virtually alone in the world in perpetuating this idea.
She supports her argument by going through a list of tests frequently done at these visits that are not recommended by the US Preventive Services Task Force (USPSTF) and many other expert bodies. These include screening for prostate cancer with prostate-specific antigen (PSA) tests, routine electrocardiograms (EKGs, or sometimes more correctly, ECGs), and Pap smears (which should be done for most women only every 3 years, not at all for women under 21, and not for those over 65 who have had 3 previous normal results). She doesn’t specifically address the actual physical examination part of the “physical”, but there is little to no evidence to support this either. (And that is pretty much true of pre-participation physicals for school and sports also.) She indicates that the Canadian government recommends against these exams, noting that they are “potentially harmful,” and discusses the “Choosing Wisely” campaign of the American Board of Internal Medicine Foundation, which I recently discussed (“‘Eggs Benedict’ and ‘Choosing Wisely’: often the best thing to do is nothing,” April 14, 2012).
“Potentially harmful”? Yes, of course. When a screening test is positive, it is then necessary to do a confirmatory test (usually more difficult, expensive, uncomfortable, risky, or all of the above than the screening test, which is why it wasn’t done in the first place), and this may lead to other procedures – biopsies, surgery, etc. We tend to think of this as good if we have the disease, but if we don’t, we incur cost, risk, and sometimes actual harm in looking for it. Indeed, sometimes even if we do have the disease, the complications of the investigation can lead to worse outcomes than the disease we are looking for. Which is why no test should be “routine”.
The right term is “screening,” which means testing for something for which you have no symptoms, and it should be reserved for conditions that are potentially serious, can be identified by testing before symptoms appear, and for which there is an intervention that is not only effective but is more effective when done before the symptoms appear. None of this relates to tests done when you have symptoms, or have a diagnosis, and are being tested to follow up on treatment. For example: a screening blood count (CBC) to look for anemia in asymptomatic people is not indicated, but it might be if you are tired and pale. And if you are anemic and are treated (say, with iron), further testing to see if it worked – if you are no longer anemic – is appropriate.
The next day (June 4) two pieces appeared in the paper. In “The trouble with ‘Doctor knows best’”, Peter Bach also discusses screening tests that are not indicated and the puzzling fact that many doctors do them anyway. He attributes this to a combination of 1) this is what they learned from their teachers, 2) their concern because of “bad things” they have seen before in their practices, and 3) our instincts that make us “apply these [cancer screening] tests as if they were treatments, as if getting a mammogram were somehow like prescribing an antibiotic.” He shows how all of these are, or can be, wrong. The first should be obvious to all of us: the state of the art and of medical knowledge has often, indeed likely, changed from when we learned from our mentors. We need to keep up with current information, based on the most recent data available.
The second and the third are maybe a little harder to understand. With regard to #2, we, even doctors, remember what is unusual, not what is usual, and we tend to think that “had we only done that test, the bad outcome might have been prevented,” when it usually would not have. #3 has to do with the difference between treating a condition that we have diagnosed and screening asymptomatic people. For almost all conditions, the percentage of people who actually have them is so low that a majority of the people who have positive screening tests will actually be false positives. The physician’s anecdotal experience, never a substitute for the actual population data, may have value in the treatment of a condition she sees frequently, but virtually none with regard to screening.
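The claim that most positive screens are false positives follows directly from Bayes' rule, and a tiny calculation makes it concrete. The prevalence, sensitivity, and specificity below are assumed round numbers for illustration, not figures from any test discussed in the articles above.

```python
# Why screening asymptomatic people yields mostly false positives
# (illustrative numbers only; not from any specific test or article).
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive screening test reflects true disease."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# A quite good test (90% sensitive, 90% specific) applied to a condition
# only 1% of the screened population actually has:
ppv = positive_predictive_value(0.01, 0.90, 0.90)
print(f"PPV: {ppv:.1%}")  # roughly 8%: over 90% of positives are false
```

The same test applied to a symptomatic patient, where the pre-test probability might be 30% or 50% rather than 1%, gives a far higher predictive value, which is exactly the difference between diagnosis and screening that the author is describing.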
The third article, “Afraid to speak up at the doctor’s office” by Pauline Chen, also published on June 4 but originally appearing earlier on Dr. Chen’s blog, talks about the reluctance of people (even, as she describes, intelligent, successful, and generally empowered people) not only to question their doctor’s recommendations, but even to ask questions at all. This is something I have seen over and over again with friends and relatives, who don’t want to bother the doctor, or, worse, have gotten the message that Dr. Chen’s friend did: “’I don’t really feel comfortable bringing it [her concern about her symptoms] up,’…While her doctor was generally warm and caring, ‘he seems too busy and uninterested in what I feel or want to say.’” Dr. Chen cites an article from a recent Health Affairs, “Authoritarian Physicians And Patients’ Fear Of Being Labeled ‘Difficult’ Among Key Obstacles To Shared Decision Making”, which shows this is a really common problem.
How much this is due to doctors being “authoritarian” rather than simply “authoritative,” or due to the physician being very busy (despite being “caring”) and wanting to cut short potentially time-consuming conversations, I do not know, but it is not a good thing. Nor, of course, is it good for patients to be hostile or to treat the physician as if she were a retail store where you just put in an order for what you want. Shared decision making requires collaboration, but, as in all situations with unequal power (student-teacher, employee-employer, etc.), it is primarily the responsibility of the party with greater power, in this case the physician, to ensure that they are open to and welcoming of sharing. This is not the same as becoming a rug for a demanding patient to walk on, just as a patient being aggressive is not the same thing as being assertive. But as this study shows, the absence of shared decision making is much more often a failure on the part of the physician to encourage it.
Annual exams are more complicated. Dr. Rosenthal is absolutely right in pointing out the lack of indications for many of the screening tests that we often do, and in the incorrectness of the myth of the “annual physical”. On the other hand, such visits, whether annual or less often, serve another purpose. They offer the physician a chance to talk to the patient, to ask questions about real or potential health risks that the patient may not have bothered to bring up because it didn’t seem “worth bothering the doctor about” or because they weren’t sure that they could talk to the doctor about it. The latter includes “sensitive” topics such as domestic violence, abortion, sexual health, drugs, etc. It also is a time that doctor and patient can discuss health risks and what the patient can do for themselves to minimize their risks, from smoking and alcohol and drugs to safe sex and bicycle helmets and healthful foods.
Indeed, this is the main use of a “school physical” for sports – not to really identify physical problems that put a student at risk, but as an opportunity to talk to adolescents, a group that doesn’t often come to the doctor, about their health behaviors. Dr. Rosenthal says “I respect my doctors, but I see them only when I’m sick.” But she adds “I religiously follow schedules for the limited number of screening tests recommended for women my age — like mammograms every two years and blood pressure checks — but most of those do not require a special office visit.” However, she is a doctor; a lot of people don’t know what is indicated without guidance, and may not be so “religious” about doing those things without encouragement.
The biggest problem, as I have said many times, is that we do these unnecessary-and-potentially-harmful tests for patients with good insurance, going to extremes with ever more expensive and more un-indicated tests for “executive physicals” when a company is paying, but do not do even the most strongly recommended tests for poor and uninsured people. These people may never get to the doctor until they are very sick.
This absurd inequity, too much testing for some and too little for others, based not on patient preference but class, income and insurance status, is the true scandal. While there is clearly much else to do, a universal health insurance program is the obvious first step.
Frosch DL, et al. “Authoritarian Physicians And Patients’ Fear Of Being Labeled ‘Difficult’ Among Key Obstacles To Shared Decision Making”. Health Affairs May 2012;31(5):1030-8.