Friday, April 26, 2013

Matthew Freeman Lecture and Awards, 2013

On April 1, 2013, Roosevelt University in Chicago presented the 2013 Matthew Freeman Lecture "What Works in Reversing the Cradle-to-prison Pipeline: System Change through Conflict and Collaboration", by Joseph Tulman, Professor of Law and Director of the Took Crowell Institute for At-Risk Youth at the University of the District of Columbia David A. Clarke School of Law. Prof. Tulman focused largely on what he calls the "school to prison pipeline", discussing the depressing statistics on the probability of young people in certain communities ending up in prison, and the association of poor school experiences with this result. Most prisoners, he notes, have learning disabilities that were not addressed in school, and he spoke about the ways in which he and others have tried to help. His work is an important addition to a growing movement to address the mass incarceration of largely minority people in the US.

I just returned from hearing Michelle Alexander, Associate Professor of Law at Ohio State, deliver the Cleaver Lecture in Kansas City on the topic of her book "The New Jim Crow". She points to the massive increase in incarceration over the last 30 years, with the quintupling of the prison population, 80% of whom are in for drug possession. She notes that this is a purposeful strategy pursued after the civil rights movement to create and maintain an "under-caste", where black men are more likely to be in the corrections system than in college, and where in some cities and communities 80% of black men have a record. And felony records, often acquired before voting age, may prevent them from ever voting or even getting a job.

During the Jim Crow era, laws in the South institutionalized legal racism. Poor and working-class white people, living in states that were often low-wage and non-union, could at least feel that they were superior to blacks. After the successes of the civil rights movement eliminated legal segregation, politicians seeking to mobilize the resentment of those same whites developed a "law and order" program, focused on the "War on Drugs" and targeted at people of color. Crime rates have gone up and down, but incarceration rates have only gone up. Drug use and sales are not significantly higher in minority communities, but arrests are. Felony convictions have decimated communities, and "kept them under control". One of my friends and colleagues notes that "If you're Paris Hilton or Lindsay Lohan, you can get away with virtually anything. If you're black or Latino, you get 5 to 10 years!" Indeed, you don't have to be Paris Hilton or Lindsay Lohan; if you are the right color and live on the right side of town, you can still go to college; if not, you probably will never get a job.

I would also like to congratulate the winners of the 2013 Matthew Freeman Social Justice Award, Nathan Lustig and Bailey Swinney. Lustig, 22, a psychology major, Chicago resident and native of White Plains, N.Y., was selected for the award based on his commitment and engagement as an activist and organizer both on campus and in the community.

Swinney, 24, a sociology major, resident of Chicago’s Hyde Park neighborhood and native of Euless, Texas, received the award for her work in creating a conducive environment for Roosevelt students to teach reading skills to Cook County juveniles on probation.

It is clear that they are part of the solution.
Thank you, Professor Tulman, Professor Alexander, Nathan, and Bailey.

Nathan Lustig and Bailey Swinney

Sunday, April 21, 2013

Payments for surgical complications: With a scalpel or a meat ax?

When you bring your car to a mechanic and something goes wrong with the procedure they said they would do, you don’t expect to pay for it, at least if it is their fault and you know it. Say that in repairing one part of the engine, they cut a hose in another part; you wouldn’t expect to be financially responsible for fixing it. You’d think that they should absorb the cost, though of course it is unlikely that you would know whether the complication was their fault (sloppy work) or unpredictable, perhaps a pre-existing problem that they hadn’t anticipated. On the other hand, if they fix your brakes or transmission and a few days later they fail, you do expect them to repair it at no cost to you.

The relationship between payment and surgical procedures done on your body in the hospital is more complex. First of all, just as with your car, complications happen. Sometimes they are the result of “operator error”, whether the mechanic's or the surgeon's, but most of the time they occur with a predictable (though hopefully low) frequency. And, as with your car, some people have a higher risk of complications because they are in worse shape. And one of the predictors (though certainly not the only one) of being in worse shape is, just as with cars, the age of the patient. Therefore, it is important to consider the risk (and potential seriousness) of complications and weigh it against the potential benefit from the surgery.

Secondly, payment for surgery (as for all hospital activities, and to some degree all medical activities) is bewildering and incomprehensible to health economists, not to mention doctors and regular people. With your car, you get a bill for “parts” and for “labor”; if you think you’re overcharged, you go somewhere else next time. In medicine, and particularly in hospitals, “charges” for procedures are a largely mythical but definitely inflated number that bears only a little relationship to the costs the hospital incurs, and is almost never the amount that is paid. Different insurers (private, Medicare, Medicaid, “self-pay”) pay different amounts; big insurance companies can bargain down the rates that they pay, government programs such as Medicare and Medicaid set their rates, and the uninsured are the only ones who get a bill for the whole set of charges (of course, they can rarely pay them, but are often bankrupted or have their credit ruined in the process). (See Bargaining down the medical bills, March 15, 2009, or the experience of health journalist Frank Lalli trying to find out what his medicine would cost, “A health insurance detective story”, NY Times December 2, 2012, and covered in my blog post “Medicare: Consumer choice or choosing your poison? How about coverage for everyone?”, December 15, 2012.)

So, do hospitals make or lose money when there are complications to surgery? The answer is “it depends on who’s paying”. In a recent JAMA, Relationship Between Occurrence of Surgical Complications and Hospital Finances[1], Sunil Eappen and colleagues from the Harvard School of Public Health (including, as last and corresponding author, Atul A. Gawande, the surgeon whose New Yorker essays I have discussed several times) examined this question in a large hospital system in the southeast US whose “inpatient surgical payer mix (Medicare, 45%; private, 40%; Medicaid, 4%; and self-pay, 6%) was comparable to that of an average US hospital in 2010 (Medicare, 40%; private, 41%; Medicaid, 9%; and self-pay, 5%)”. Their study found that “…The financial effects of surgical complications varied considerably by payer type. Complications were associated with more than $30,000 greater contribution margin per privately insured patient ($16,936 vs $55,953) compared with less than $2000 per Medicare patient ($1880 vs $3629). In contrast, for Medicaid and self-pay procedures, those with complications were associated with significantly lower contribution margins than those without complications. As a result, the payer mix will determine the overall economics of surgical complications for a given hospital."

Definitions  of Costs and Margins  (from Eappen, et al.)
Variable costs: Costs that vary with patient  volume (ie, supplies and nurse staffing).
Fixed costs: Costs that do not vary with patient volume (ie, costs for the hospital building, utilities, and maintenance).
Total margin: Revenue minus variable costs and fixed costs.
Contribution margin: Revenue minus variable costs. These are revenues available to offset fixed costs.
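The arithmetic behind these definitions can be sketched in a few lines of Python. The revenue and cost inputs in the first half are hypothetical illustrations (the article excerpt does not break them out); only the per-patient contribution-margin figures in the second half come from the study itself:

```python
def contribution_margin(revenue, variable_costs):
    # Revenue minus variable costs: money available to offset fixed costs
    return revenue - variable_costs

def total_margin(revenue, variable_costs, fixed_costs):
    # Revenue minus both variable and fixed costs
    return revenue - variable_costs - fixed_costs

# Hypothetical example: $20,000 revenue, $12,000 variable, $6,000 fixed
cm = contribution_margin(20_000, 12_000)   # 8000
tm = total_margin(20_000, 12_000, 6_000)   # 2000

# Per-patient contribution margins reported by Eappen et al.
# (procedures without vs. with complications):
private_gap = 55_953 - 16_936    # privately insured: $39,017 more per patient with a complication
medicare_gap = 3_629 - 1_880     # Medicare: $1,749 more per patient with a complication

print(cm, tm, private_gap, medicare_gap)
```

The gap between those two differences is the whole story of the paper: the same complication that adds nearly $40,000 of contribution margin for a privately insured patient adds under $2,000 for a Medicare patient, and turns negative for Medicaid and self-pay.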

This absolutely does not mean that in these hospitals, or any hospital, surgical complications are seen as desirable. It does mean that, when the complications happen, the hospitals make money (their “contribution margin” toward fixed costs goes up – see the box which I have reproduced from the article for explanations of terms) if the patients are privately insured or covered by Medicare and lose money if the patients are covered by Medicaid or self-pay. It provides another example of why hospitals see some patients as “more desirable” based upon their insurance coverage, and illustrates how flawed this system is.

The study by Eappen was done on data from 2010, and there have been some changes since then. Medicare no longer pays for the treatment of complications (surgical or medical) that it has identified as preventable and that did not exist on admission (such as new bed sores or blood clots). It will soon go further and not pay for readmissions to the hospital within a certain period of time, whether or not the readmission is for the same problem. So, to carry on the car analogy, not only will they not pay again if your brakes fail after they’ve been “fixed”, they also won’t pay if you have to bring your car back because it needs transmission work. The latter may be as inappropriate for people as for cars; with time, multiple things break down, not always related. With a car, we may sell or junk it; with a person, we usually try to treat them. Our high-tech medical system can often bring a person from the brink of death to “well enough” to go home or to a skilled-care facility, but the same problem, or another, recurs and requires readmission. And, of course, since this is Medicare, all of our patients over 65 covered by this program now become “less desirable”; it means that, even more than before, hospitals will compete for patients with private insurance coverage.

This is no way to fix the problem. The first step has to be to put everyone in the same boat, to have a universal health insurance system, so that no patient is “more” or “less” desirable from a financial standpoint based upon their insurance coverage. Second, hospitals should not be paid on a “per case” basis or have a charge structure that no one understands. They should not have to seek out “well-insured” patients to cover their actual costs (fixed and variable) or put aside money for purchase of new capital equipment. In Canada, hospitals receive a global yearly payment for operating costs (and capital expenses are considered separately), and can thus make treatment decisions based on best meeting the needs of the patient, rather than “readmissions good” (we make money) or “readmissions bad” (we lose money). It is rather parallel to capitated payments for physicians, which I have discussed (recently, for example, Gaming the system: Integration of healthcare services can just raise costs, not quality, December 1, 2012), allowing physicians to treat patients in person, by phone or email, with long visits or short, depending upon what is most appropriate rather than which has the greatest reimbursement.

Of course, like capitated payments to physicians, hospital global budgets need to be adequate to cover costs and to encourage efficient, effective performance. A well-designed payment structure that minimizes “gaming” the system no longer works when it is grossly underfunded. An open and transparent system of funding is most likely to permit cost savings where appropriate rather than “across the board” (which is almost always wrong); it encourages the use of a scalpel rather than a meat ax.

Sunday, April 14, 2013

Premature babies and informed consent: we need to do it right

The New York Times reported April 10, 2013 that an investigation by the Department of Health and Human Services “has found that a number of prestigious universities failed to tell more than a thousand families in a government-financed study of oxygen levels for extremely premature babies that the risks could include increased chances of blindness or death.” “Study of Babies Did Not Disclose Risks, U.S. Finds”, by Sabrina Tavernise, reports that the study of oxygen use in 1,300 infants born at 24-27 weeks of gestation during 2004-2009, published in the New England Journal of Medicine in 2010, “did have an effect on which infants died and which developed blindness, and that those risks were not properly communicated to the parents, depriving them of information needed to decide whether to participate,” and that these “…conclusions were listed in great detail in a letter last month to the University of Alabama at Birmingham [UAB], the lead site in the study,” which included 23 academic institutions, among them Stanford, Duke and Yale.

The science is complex. Very premature infants such as those studied are at high risk of death and other complications, including blindness. It has long been known that the use of very high concentrations of oxygen, near 100% (room air is 21% oxygen), causes them to have an increased risk of blindness. The Times notes that “Clinical treatment of premature infants has a troubled history. Attempts to treat them with higher oxygen levels that were thought to improve their odds of survival led to many cases of blindness. Premature babies need oxygen because their lungs are underdeveloped and they often need help breathing.” The study under investigation targeted lower oxygen saturation levels, in the 85%-95% range, in an effort to find a level that was most beneficial for survival without an increased risk of serious outcomes, such as blindness. The response letter from the lead investigator at UAB argues that a similar (but not a formal control) group of infants had a higher mortality rate than those in the study group; this is criticized by others who note that it is not valid science to retrospectively attempt to create a comparison group, because the groups may differ in important ways. Usually, participants are randomly assigned to either receive the intervention or not, because this means that any differences that may exist, known or not, are likely to be equally present in both groups. In fact, one critic, Michael Carome of Public Citizen’s Health Research Group, points to a study published in the journal Pediatrics in 2012 that noted the babies in the other group were sicker and thus more prone to die.

But what is of concern to the HHS investigators, and to me, is that the parents of the babies in the study were apparently not informed of the risk of blindness that is known to accompany high-level oxygen therapy, despite the study (and its consent form) having been approved by the Institutional Review Boards (IRBs) of all 23 participant institutions. This is shocking on the face of it, but is even more surprising given that consent forms have largely become, as the Times article quotes Arthur Caplan, head of the division of medical ethics at New York University Langone Medical Center, “...captured by worries about legal liability, so risks tend to come billowing forward like a huge fog… It’s a truth dump, so they are covered should something go wrong.” He is absolutely correct; consent forms commonly include a list of potential negative outcomes so long as to obfuscate what is important, e.g., “rash, fever, nausea, vomiting, headaches, bleeding, seizures and death”. (NOTE: This is MY rendition of what I think is typical; it is NOT the list contained on the premature infant oxygen study.) What is a patient or parent of a patient to think? How bad are these outcomes? How likely are they to occur? How can I make a decision? And, perhaps most important, how can I give a well-thought-out response, giving or withholding consent, when I am – or my baby is – terribly sick and at high risk? What usually happens, as Dr. Caplan notes, is that “…often in such emergency medical situations, parents often rely more on what doctors say in deciding whether to participate than on the fine print of a consent form.”

Much, if not most, research, even medical research, is done with people who are not at high or immediate risk; much early clinical research, as well as that in the social sciences, specifically looks at “normal volunteers” or at regular people in the community. All require informed consent by participants. There are (at least) three different sets of interests in any research. The researcher is interested in finding out the answer to a question, which in clinical research is usually whether one type of treatment is better than another. The patient (or guardian) may be motivated, depending on the urgency of the situation, by a desire to contribute to expanding medical knowledge, to earn some money (if there is an honorarium involved), to help others who may confront the same problem in the future, or, especially when the situation is very urgent and the patient very sick (as in this study), by the hope of immediate benefit for themselves or their child. The institution in which the study is conducted (represented by its legal department) is interested in guarding, to the extent possible, against the risk of lawsuits.

Interestingly, the allegation in the current case is the failure to mention the potential outcome of blindness, more a typical failing of the “bad old days” when patients were not informed at all of risks (e.g., the Tuskegee syphilis studies) than of the “bad new days” in which the risk aversion of universities’ legal departments has created the type of laundry list of bad outcomes noted above. But both are bad. Consent cannot be “informed” if typical, regular, non-lawyer people cannot understand the consent form because the language is so abstruse, or if the list of potential outcomes is a jumble of serious and minor ones. This is particularly true when a patient is critically ill, and they or their guardian is under incredible stress. This is when, as Dr. Caplan notes, they rely on what the doctors say; unfortunately this puts the doctor in the position of trying to dispassionately provide a recommendation as to whether or not to participate in a study that s/he usually feels passionately about. This is not a setting conducive to true informed consent.

The literature on informed consent is enormous, and it is a major focus of the field of medical ethics. But the key principle must be that participants be adequately informed of the risks that may exist, in language that they can easily understand. The risks must be presented with the most serious ones highlighted, along with the probability that they will occur. They must be presented openly and dispassionately, and study leaders must be available to honestly and completely answer questions. There is, unfortunately, an inverse relationship between the comprehensibility of a consent document and the degree to which it is written in legal language (as anyone who has ever tried to read anything written in legal language can believe).

Consents must be appropriate to the situation. Researchers who wish to, say, interview people outside a local supermarket to find out what they think are health problems in their community should not have to have those people sign a 3-page consent to participate, or no one would do it. Researchers who wish to study clinical interventions in high-risk patients in urgent circumstances need to provide adequate information about risks and benefits in a way that patients can understand, and not use their role as the “doctor” to “sell” a study because they believe the intervention will be beneficial even though the results are not yet in. IRBs need to take seriously their responsibility to protect participants’ interests and ensure that participants are truly informed, while still facilitating the conduct of research. It’s not easy. Lawyers need to learn how to write in a way that folks can understand.

Clearly, leaving out a major potential complication like blindness from a consent form is wrong, but getting true informed consent requires much more than a simple fix. It has to happen, though, if we are to continue to be able to conduct research and gain new knowledge.

Sunday, April 7, 2013

Research on disparities/inequities, in practices and communities needs much greater funding

This is my first attempt at a blog in several weeks; indeed only one in the last month. I took (and time will tell if I passed) the Family Medicine recertification exam, so I am now able to raise my head above water.

Research is the way we gain new knowledge. It is how we discover whether the things that we are doing are the right things to do, or whether they are of little or no value, or perhaps even of harm. In the decades after World War II, when the country was optimistic and growing and seeking new frontiers, science was a major area for investment by our government. Things were getting better: returned GIs found a plethora of well-paying jobs, were able to buy houses and cars, and could plan to send their children to college. American industry did extremely well, if not solely because of great planning and management here, then because there was no competition from the rest of the world, which had been devastated by the war.

Things were not all good, especially on the political front; there was the cold war, and the associated fanatic fear of Communists epitomized by Senator McCarthy, and there was a legitimate fear of nuclear war. But, on the economic front, things were going well for the US. The growth benefited many more people, and the gap between the income of the average worker and that of those at the top was large but not unconscionable. Not like today, as demonstrated by much research, by the title of this HuffPost article, “CEO Pay Grew 127 Times Faster Than Worker Pay Over Last 30 Years” (“It’s good to be a CEO!”), and by this graphic from Prof. G. William Domhoff of UC Santa Cruz.

The most dramatic expenditures on science were on space travel; after the Soviet Union launched Sputnik, the first artificial satellite, in 1957, the space race was on. With the election of John Kennedy in 1960, space exploration moved front and center. All of us who were schoolchildren, in addition to hiding under our desks to protect us from nuclear weapons, engaged much more productively in a new-found, broad-based physical fitness program encouraged by the President. While Harry Truman was unsuccessful in passing a national health insurance plan, thanks both to the reactionary opposition of the AMA and to the fact that labor unions chose to demonstrate their effectiveness by negotiating health coverage rather than seeking political change (as the Labour Party successfully did in Britain), health moved to the forefront in other areas of science.

The National Institutes of Health (NIH) became the major government institution funding medical research and saw enormous growth in the ensuing decades, including a doubling of its budget from about $15B to about $30B in the decade surrounding the turn of the millennium. This fueled an enormous expansion of medical research in laboratories, primarily in universities and medical schools. In addition, corporate support, mainly from pharmaceutical research companies, further enhanced the growth of these laboratories. There were many successes, of which the most famous is the sequencing of the human genome, and our understanding of human biology and how it might contribute to human health and disease has been remarkably enhanced. Some of this research has led to true medical breakthroughs, with the creation of new drugs and treatment modalities that have sometimes been of great help to large numbers of people with common diseases, and sometimes of enormous help to a few with uncommon ones.

However (and you knew that there was going to be a “however”), the focus on laboratory research and new discoveries at the molecular, protein and genetic levels has left behind areas of research at least as critical, but not seen as “hard science” and thus not generally funded by NIH or drug companies. This is a problem. Yes, there are “clinical” research studies, but these are mostly trials of drugs and interventions in populations. The number of studies based in communities, looking at health disparities, and trying to discover how most effectively to have a positive influence on the health of whole populations, rather than occasional individuals, remains small.

Certainly, it has grown. As demonstrated in the graph, after the NIH budget doubled, it leveled off, “stagnating” given inflation, until the one-time infusion of American Recovery and Reinvestment Act (ARRA) funds in 2009. Funding for health disparities research has increased, both from NIH and from other federal agencies such as the Centers for Disease Control and Prevention (CDC) and the Agency for Healthcare Research and Quality (AHRQ), which has but a tiny fraction of the funding that NIH does. NIH created Clinical and Translational Science Awards (CTSAs), which funded centers at many medical schools to look at moving research into the community, but much more at moving it from the basic science laboratory to first-in-human trials (or even from one basic science laboratory to another). A major new initiative of the Affordable Care Act (ACA) is the creation of the Patient-Centered Outcomes Research Institute (PCORI), designed to evaluate not just new treatments but how they affect people. However, even the community-based research has focused largely on the recruitment of research subjects to studies designed by academic researchers, rather than on directly studying issues that would improve the health of the people in those communities.

Part of the problem is that it is difficult to get community members to think about what would be in the best interests of their health and that of their communities. They are, after all, not trained in such assessment. In addition, particularly in the communities that are the most vulnerable, that have the greatest health inequities, people are just focused on getting by, paying the rent, buying food, working multiple low-wage jobs. However, another part of the problem is that research at this level is seen as less important and significant, particularly by those who have always focused on new discoveries in the lab and who control most of the agencies such as NIH.

But this is not true. No matter how wonderful the discoveries in the lab, no matter how much they might lead to new understanding, new drugs, new treatments, these are only of value if people benefit from them. So this requires clinical research in the real world, with actual people. But beyond this, if they are to benefit not just a chosen few, the interventions have to be studied among diverse populations, including people facing economic, social, psychological and environmental challenges. In addition, the delivery of these treatments is sporadic. It is clearly demonstrated that administration of aspirin is of benefit to people who have had heart attacks. So it should be used. Why, then, are half the Americans who should be on aspirin not taking it? I don’t know. It probably isn’t cost. It requires research to find out why, and to change it. Saying (as is often done) that “new medical knowledge takes 10-20 years to penetrate into practice” is not adequate. Finding out how to get this effective treatment to the people who need it is as important as discovering the treatment. This is known as “fidelity” research.

Finally, effective research on improving people’s health needs to involve medical practices, where the people are being seen. There are many practice-based research networks (PBRNs) around the nation, but they are all challenged by how busy the providers are seeing patients; this is at least as true in practices such as Federally Qualified Health Centers (FQHCs) that care for poorer populations. And yet, without involving them in research, how can we know what is effective in delivering the “best quality” care, and how can practices at the point of care be changed?

This is not to say that we should not fund basic biomedical research or early clinical trials. Nor is it to say that the current programs from NIH and PCORI and others to fund work in health disparities and inequities, and in population and community health, are not good. But they are too little. People working in basic laboratory research, early clinical research, practice-based research, and community health should not be competing with each other. There should be more money for all of it, but especially a lot more for fidelity research, community-based participatory research, and practice-based research.

Where will the money come from? From policies that are used in every other successful country, and every time the US has been successful, progressive tax policies that take some of our wealth out of the control of private corporations, who use it only to sock away more money, and into the public sector where it can be used to benefit us all.
