Showing posts with label basic science. Show all posts

Sunday, January 5, 2014

Medical schools are no place to train physicians

Doctors have to go to medical school. That makes sense. They have to learn their craft, master skills, and gain an enormous amount of knowledge. They also, and this is at least as important, need to learn how to think and how to solve problems. And they need to learn how to be life-long learners because new knowledge is constantly being discovered, and old truths are being debunked. Therefore, they must learn to un-learn, and not to stay attached to what they once knew to be true but no longer is. They also need, in the face of drinking from this fire-hose of new information and new skills, to retain their core humanity and their caring, the reasons that (hopefully) most of them went into medicine.

Medical students struggle to acculturate to the profession, to learn the new language replete with eponyms, abbreviations, and long abstruse names for diseases (many are from Latin, and while they are impressive and complicated, they are also sometimes trite in translation, e.g., “itchy red rash”). They have to learn to speak “medical” as a way to be accepted into the guild by their seniors, but must be careful that it does not block their ability to communicate with their patients; they also need to continue to speak English (or whatever the language is that their patients speak). “Medical” may also offer a convenient way of obscuring and temporizing and avoiding difficult conversations (“the biopsy indicates a malignant neoplasm” instead of “you have cancer”).  But there needs to be a place for them to learn.

So what is wrong with the places where we are teaching them now? Most often, allopathic (i.e., “MD”) medical schools are part of an “academic health center” (AHC), combined with a teaching hospital. They have large biomedical research enterprises, with many PhD faculty who are, if they are good and lucky, externally funded by the National Institutes of Health (NIH). Some or many of them spend some of their time teaching the “basic science” material (biochemistry, anatomy, physiology, microbiology, pharmacology, pathology) that medical students need to learn. By “need to learn” we usually mean “what we have always taught them” or “what they need to pass the national examination (USMLE Step 1) that covers that material”. This history goes back 100 years, to the Flexner Report of 1910. Commissioned by the Carnegie Foundation at the urging of the AMA, educator Abraham Flexner evaluated the multitude of medical schools, recommended closing the many that were little more than apprenticeship programs without a scientific basis, and recommended that medical schools be based upon the model of Johns Hopkins: part of a university (from the German tradition), grounded in science, and built on a core curriculum of the sciences. This has been the model ever since.

However, 100 years later, these medical schools and the AHCs of which they are a part have grown to enormous size, concentrating huge basic research facilities (Johns Hopkins alone receives over $300 million a year in NIH grants) and tertiary and quaternary medical services – high-tech, high-complexity treatment for rare diseases or complex manifestations of more common ones. They have often lost their focus on the health of the actual community of which they are a part. This was a reason for two rounds of creating “community-based” medical schools, which use non-university, or “community”, hospitals: the first in the 1970s and the second in the 2000s. Some of these schools have maintained a focus on community health, to a greater or lesser degree, but many have largely abandoned those missions as they have sought to replicate the Hopkins model and become major research centers. The move of many schools away from community was the impetus for the “Beyond Flexner” conference held in Tulsa in 2012 (see Beyond Flexner: Taking the Social Mission of Medical Schools to the next level, June 16, 2012) and for a number of research studies focused on the “social mission” of medical schools.

The fact is that most doctors who graduate from medical school will not practice in a tertiary AHC, but rather in the community, although the other fact is that a disproportionate number of them will choose specialties that are of little or no use in many communities that need doctors. They will, if they can (i.e., if their grades are high enough), often choose subspecialties that can only be practiced in the high-tech setting of the AHC or the other relatively small number of very large metropolitan hospitals, often with large residency training programs. As they look around at the institution in which they are being educated, they see an enormously skewed mix of specialties. For example, 10% of doctors may be anesthesiologists, and there may well be more cardiologists than primary care physicians. While this is not the mix in the world of practice, and still less the mix that we need for an effectively functioning health system, it is the world in which they are being trained.

The extremely atypical mix of medical specialties in the AHC is not “wrong”; it reflects the atypical mix of patients who are hospitalized there. It is time for another look at the studies that have been done on the “ecology of medical care”, first by Kerr White in 1961 and replicated by the Robert Graham Center of the American Academy of Family Physicians in 2003 (see The role of Primary Care in improving health: In the US and around the world, October 13, 2013), and represented by the graphic reproduced here. The biggest box (1000) is a community of adults at risk, the second biggest (800) is those who have symptoms in a given month, and the tiny one, representing less than 0.1%, is those hospitalized at an academic teaching hospital. Thus, the population that students mostly learn on is atypical, heavily skewed to the uncommon; it is not representative of even all hospitalized people, not to mention the non-hospitalized ill (and still less the healthy-but-needing-preventive-care) in the community.

Another aspect of educating students in the AHC is that much of the medical curriculum is determined by those non-physician scientists who are primarily researchers. They not only teach medical students, they (or their colleagues at other institutions) write the questions for USMLE Step 1. They are often working at the cutting edge of scientific discovery, but the knowledge that medical students need in their education is much more basic, much more about understanding the scientific method and what constitutes valid evidence. There is relatively little need, at this stage, for students to learn about the current research that these scientists are doing. Even the traditional memorization of lots of details about basic cell structure and function is probably unnecessary; after 5 years of non-use students likely retain only 10% of what they learn; even if they need 10% -- or more -- in their future careers, there is no likelihood that it will be the same 10%. We have to do a better job of determining what portion of the information currently taught in the “basic sciences” is crucial for all future doctors to know and memorize, and we also need to broaden the definition of “basic science” to include the key social sciences of anthropology, sociology, psychology, communication, and even many areas of the humanities such as ethics. This is not likely to happen in a curriculum controlled by molecular biologists.

Medical students need a clinical education in which the most common clinical conditions are the most common ones they see, the most common presentations of those conditions are the most common ones they see, and the most common treatments are the ones they see implemented. They need to work with doctors who are representative, in skills and focus, of the doctors they will be (and need to be) in practice. Clinical medical education seems to work on the implicit belief that the ability to take care of patients in an intensive care unit necessarily means one is competent to take care of those in the hospital, or that the ability to care for people in the hospital means one can care for ambulatory patients, when in fact these are dramatically different skill sets.

This is not to say that we do not need hospitals and health centers that can care for people with rare, complicated, end-stage, tertiary and quaternary disease. We do, and they should have the mix of specialists appropriate to them, more or less the mix we currently have in AHCs. And it is certainly not to say that we do not need basic research that may someday come up with better treatments for disease. We do, and those research centers should be generously supported. But their existence need not be tied to the teaching of medical students. The basic science, and social science, and humanities that every future doctor needs to learn can be taught by a small number of faculty members focused on teaching, and does not need to be tied to a major biomedical research enterprise. Our current system is not working; we produce too many doctors who do narrow rescue care, and not enough who provide general care. We spend too much money on high-tech care and not enough on addressing the core causes of disease.

If we trained doctors in the right way in the right place we might have a better shot at getting the health system, and even the health, our country needs.

Saturday, November 2, 2013

Should Medical School last 3 years? If so, which 3?


As we look at how to increase the number, and percent, of students entering primary care residency programs, it is interesting to see how some schools have creatively tried to address the problem. Texas Tech University Medical School and Mercer University Medical School’s Savannah campus have begun to offer MD degrees in 3 years to a select group of students who are both high performers and planning on Family Medicine careers, thus decreasing their indebtedness (one less year of school to pay for) and getting them into family medicine residencies sooner; several other schools are considering the same. They do this by essentially eliminating the fourth year of medical school. This is the subject of a piece by surgeon Pauline Chen, “Should medical school last just 3 years?” in the New York Times. She discusses different perspectives on the fourth year, previous experiences with reducing the length of medical school training, and two ‘point-counterpoint’ essays on the topic in the New England Journal of Medicine.

Chen addresses prior efforts to shorten medical school, including the most recent precursor of this current one. Specifically aimed at increasing the number of highly-qualified students entering Family Medicine residencies, it was implemented at several schools in the 1990s, and allowed students to effectively combine their 4th year of medical school with their first year of family medicine residency, thus completing both in 6 years. The programs were successful by all criteria. Students did well on exams and were able to save a year of tuition money, and medical schools were able to retain some of their best students in family medicine. Of course, therefore, the programs were stopped. In this case the villain was the Accreditation Council for Graduate Medical Education, which decreed that because students did not have their MD when they started residency training (it was granted after the first year, a combined 4th year of medical school and internship), they were ineligible for residency training. Thus this newest iteration offers the MD degree after three years.

An older effort to shorten medical school is also mentioned, one with which I have personal experience. In the 1970s “as many as 33 medical schools began offering a three-year M.D. option to address the impending physicians shortages of the time.” One of those was Loyola-Stritch School of Medicine, in which the only curriculum was 3 years. In 1973, I was in the second class entering that program. We spent 12 months in ‘basic science’, pretty much just in classes in the mornings, and then two full years in clinical training. Chen writes that “While the three-year students did as well or better on tests as their four-year counterparts, the vast majority, if offered a choice, would have chosen the traditional four-year route instead.” I have no idea where she gets this impression; it is certainly not at all my memory. Our friends across town at the University of Illinois went to school for two years of basic science, 8 hours a day to our 4. We did not envy that. As Chen notes, we did just as well on our exams, and saved a year’s tuition, and I daresay no one could tell the difference in the quality of the physicians graduating from the two schools, when they entered residency in 1976 or today after 37 years of practice. Again, it was all good.

And, again, it was stopped. Why? Of course, the experiment only led to one additional class of physicians being produced (after that, it was still one class per year) so that benefit expired, but what about the other benefits that I have cited? Why wasn’t the program continued? Chen hits the nail on the head in her next paragraph: “The most vocal critics were the faculty who, under enormous constraints themselves to compress their lessons, found their students under too much pressure to understand fully all the requisite materials or to make thoughtful career decisions.” In particular, the basic science faculty who taught the first two-years-now-compressed-into-one of school. The fact that students did just fine on USMLE Step 1 and became good doctors was apparently insufficient to convince them. They made arguments like the one above, shifting the problem to the students (“they” were under too much pressure) rather than admitting that it was the faculty who felt the pressure. I can’t remember anyone wishing they had another year to spend in basic science lectures.

The truth is that there is no magic amount of basic science educational time needed to become a doctor. The amount of time needed is the amount necessary to either: (1) learn enough to pass USMLE Step 1, a fine utilitarian standard, or (2) learn the key pieces of basic science information that every physician needs to know in order to be able to practice quality medicine. While some basic science faculty might bridle at the idea of #1 (“Teach to the test? Moi?”), trying to identify what comprises #2 is a lot of work. It is easier to teach what we have always taught, what the instructors know about. If the reason for more time were the amount of basic science knowledge, then what required two years 35 years ago would require 10 or more years to teach now, because so much more is known. That is not feasible. The right answer is #2, but getting folks to do it is hard.

Chen quotes Dr. Stanley Goldfarb, lead author of the perspective piece against three-year programs, as saying “You can’t pretend to have a great educational experience without spending time on the educational experience,” which is of course true but leaves open the question of what those experiences should be. If we are going to decrease the length of time students are in medical school, it makes much more sense to reduce the amount of time spent learning basic science factoids that most will forget after USMLE Step 1 (reasonable enough, since they will never need most of that information again) and focus on adult learning by teaching the information that all physicians do need to know. This effort requires that clinicians have major involvement in deciding what that is. It makes much less sense to remove one of the years of clinical training; that training should instead be augmented, becoming less about vacations and “audition clerkships” and more about learning. Why this is unlikely to happen, of course, has nothing to do with educational theory or the quality of physicians produced and everything to do with medical school politics. There is no constituency on the faculty for the fourth year, and a strong basic science faculty constituency for the first two.

Yes, we need more primary care doctors, lots of them, and we may need more doctors altogether, to help meet the health needs of the American people, and we need them soon. Data from the Robert Graham Center of the American Academy of Family Physicians (AAFP)[1] (attached figure) show the projected increase in need, including the one-time bump from the ACA, which will bring a large number of people who have not had access into care, and the longer-term need from population growth and aging. Programs that increase the number of primary care doctors (like the 6-year family medicine programs of the 1990s) are good. Programs that decrease the number of years by reducing basic science courses rather than clinical time obviously make more sense from the point of view of having well-trained doctors. (Programs like the 3-year option at NYU, which is not even geared to training more primary care doctors, are, from this point of view, irrelevant.) These need to be not pilots, but programs scaled up to produce more clinically well-trained primary care doctors.

And we need to do it soon. Medical school turf battles should not be the determinant of America’s health.







[1] Petterson SM, et al. “Projecting US Primary Care Physician Workforce Needs: 2010-2025.” Ann Fam Med. 2012;10(6):503-509. doi:10.1370/afm.1431

Friday, February 8, 2013

Creating more family doctors: should we shorten medical school? How?

At the recently-completed Society of Teachers of Family Medicine (STFM) Conference on Medical Student Education, held in San Antonio, one of the big areas of discussion was the shortening of the medical school experience to 3 years for students planning to enter family medicine. Steven Berk, Dean of the Texas Tech University School of Medicine, and Betsy Goebel Jones from the Department of Family Medicine described the Lubbock medical school’s recently-instituted program in a plenary presentation, and a later seminar featured presenters from several other schools that have instituted or are planning such tracks, including the Savannah campus of Mercer University School of Medicine and the Medical College of Wisconsin, as well as Texas Tech. The goal of such tracks is to increase the number of students choosing to enter family medicine by eliminating one year of school, and thus tuition; these schools believe that this financial incentive at least helps a little to offset the lower income that accrues to family physicians compared to other specialists. To the extent that these students then enter family medicine residencies at those same schools, it also decreases uncertainty for both the student and the program.

The most direct forebears of these programs were in the 1990s, at some of the same schools. They offered an “accelerated track” for family medicine, in which students began their first year of FM residency while completing their final year of medical school, getting the MD degree after that year. While initially approved by the American Board of Family Medicine as a pilot, these programs were closed when the decision was made by the body that accredits residencies that one could not get credit for residency training until after receiving the MD degree. This latest effort gets around this by granting the MD degree after 3 years, mainly by compressing the final year of medical school; in most schools the fourth year is already largely used for electives.

Not all accelerated MD programs are about increasing the number of primary care, or certainly family medicine, physicians. A program at the NYU School of Medicine, which remains one of the few US medical schools to not even have a Family Medicine department, was featured in the New York Times "N.Y.U. and Other Medical Schools Offer Shorter Course in Training, for Less Tuition" by Anemona Hartocollis, December 24, 2012. While the Texas Tech and Mercer-Savannah programs are also mentioned, NYU’s program is clearly not about producing more of the primary care physicians that the US needs, as this is not something NYU seems to care about at all. As of now all of these programs are “tracks”, rather than for all students; they recruit “high-performing” students who can finish the traditional curriculum in a shorter time.

Interestingly, these current programs do not focus on shortening the amount of time or changing the content of the first two years of medical school, the “basic science” years. This struck me as odd, because when I went to medical school (Loyola-Stritch) in the mid-1970s, it was precisely this component that was shortened (to 12 months, with 2 full years of clinical training). Loyola was far from the only school to do so during that period; my current school, the University of Kansas, and many others did so; according to an article by Walling and Merando in Academic Medicine[1], “…By 1973, 27% of U.S. schools offered compressed three year curricula.” For most, this was not a “track” but was the curriculum for all students. The primary method of shortening the curriculum was abbreviating the time spent in basic science, although the amount varied (at KU it was 15 months). It is thus, to me, surprising that in the current efforts to decrease the length of training very little attention has been paid to shortening the basic sciences. Walling and Merando note that “Although educational outcomes were very similar for three-year and four-year curricula, most schools subsequently reinstated the fourth year to provide students with a broader clinical experience.” I don’t completely buy that; at least at Loyola, the clinical experience was not shortened during its 3-year curriculum. It surprised me in talking to people at the conference that so few even knew about these “experiments” from the 1970s.

My guess is that the current efforts focus on reducing the 4th year rather than the first two years because of politics. No one “owns” the 4th year, but the first two years are “owned” by the basic sciences in most medical schools, and by a strong advocacy constituency in the Association of American Medical Colleges (AAMC), the National Board of Medical Examiners (NBME) which offers the US Medical Licensing Examinations (USMLE) and other groups. They have strongly resisted efforts to decrease the time spent on basic science teaching in medical schools individually, as well as nationally. An effort by the NBME to combine the 3 “steps” of the USMLE into two was seen as “elimination of Step 1” and generated huge opposition from the basic science community; the change has been put on hold for several years.

While the need for students to pass “Step 1” is often used as the ultimate reason not to cut back biologic science curricular time, the fact is that students can pass this test with significantly pared-down content. Hopefully, however, there is a better reason to teach basic sciences: teaching the concepts that are important for everyone training to be a doctor to know, rather than forcing the memorization of details that are irrelevant, can be looked up, or are likely to change regularly. It means both subjecting the content of the curriculum to this test of relevance, and increasing the breadth of disciplines included as “basic” to include social sciences such as psychology, anthropology, sociology, and epidemiology. The teaching -- and testing -- of all this material should focus on understanding concepts, solving problems, and knowing where to look up detailed facts, rather than memorization.

We do need more primary care doctors, and more family physicians, to meet the health needs of the American people. We need to do everything possible to make this happen, and addressing financial incentives is a big part of it. Another plenary presentation at the meeting, from STFM President Jerry Kruse, addressed the successful efforts in Canada to increase the number of primary care doctors (in that country, all family physicians); the key element is decreasing the gap between primary care and specialist income, with the effective ratio being 80-85%. There are also good arguments for decreasing the cost of medical education, and perhaps shortening medical school is one method of doing so, especially if it can be done without sacrificing important training; it certainly needs to be relevant training.

But these efforts – to increase the primary care workforce and to consider the appropriate length of medical education – are different. They may complement each other, or may not. The strategies that we employ should be based on their effectiveness at achieving our goals, and for that to happen we need to be clear on what those goals are.  Piecemeal approaches may ultimately work, but they are not the most efficient ways of approaching the problem.

Of course, in terms of health insurance reform, piecemeal is the way we have chosen to go rather than a comprehensive national health program such as Medicare for All; why would we expect a more rational approach to improving medical education?



[1] Walling A, Merando A. “The Fourth Year of Medical Education: A Literature Review.” Acad Med. 2010;85(11):1698-1704.

Sunday, September 9, 2012

Research basic and applied: we need them both


 “Not every mystery has to be solved, and not every problem has to be addressed. That’s hard to get your brain around.”

This statement was the coda of a very good article, “Overtreatment is taking a harmful toll”, by Tara Parker-Pope, in the NY Times, August 28, 2012. The topic of the article, and the implication by the speaker, who was talking about her own family’s health care and unnecessary testing, is one that I have written about several times recently, in terms of both screening tests (“The "Annual Physical": Screening, equity, and evidence”, July 4, 2012) and investigation and treatment of disease (“Rationing, Waste, and Useless Interventions”, June 21, 2012). Thus, I certainly agree that there is too much testing and too much intervention, and that it has a high cost in both dollars and in potential risk to people (the English word for what the health system calls “patients”). So why do I feel a little uncomfortable with the quotation above?

I think it is because I very strongly believe that the decision on what tests to do and what interventions to take should be informed, as much as possible, by the evidence. That evidence, I have also argued, should come from research, from well-designed studies, from science. This is also costly, but it is necessary. Your treatment should be based on evidence and probability gathered from studies of large populations. Without it, doctors and other health professionals are flying blind, with treatments based on their own experience, or worse yet “what makes sense”. Sometimes the doctor’s own experience is a good guide, if they see a lot of patients with the same problem, and have reason to know what works. It is even better when they can bring in knowledge of the local community (e.g., what antibiotics are common bugs resistant to here? What are the common belief systems of the people that I care for?) and better yet if they actually know you, and what you value, and what your medical history is, and what your belief system is, and what is most likely to engage your effort in the interest of your health.

But it is better if the set of options from which they choose are all based in evidence. That something makes sense, I have often pointed out to medical students and residents, makes it a research question, not an answer. If something makes sense, based on what we already know, it is likely to be a more valuable thing to study than something that does not make sense. However, until the study, or more likely several studies, are done we won’t know if it is, in fact, true. Human beings, both in terms of their biology and behavior, are too complex, and have too many different systems interacting with each other, to predict accurately how something that “makes sense” based on one of those dimensions is likely to turn out.

The thing is that not all research is immediately clinically relevant. Sometimes it is; the “Ottawa rules”, developed by research done in Canada, provide physicians with evidence based guidelines about when it is appropriate to do x-rays for injured ankles, knees, and feet – common problems. Other studies investigate whether particular drugs may provide real benefit to people with more uncommon problems. This is particularly satisfying when the drug is not some new, expensive blockbuster but something cheap and common like aspirin or folic acid. Or when an old drug, all but abandoned for its original purpose, turns out to be very effective for another condition entirely. (One of my colleagues just demonstrated this for an old heart drug that works for a rare neuromuscular condition – coming soon to your local JAMA!) But much research is at a very basic level. Before those drugs can be tested on particular conditions, they have to be developed. Before they can be developed, the biological and biochemical mechanisms upon which they have an effect have to be identified. Just as, before we can send rockets to the moon, we need to understand physics. Science, what in medicine we call “basic science”, has to continually move forward, and this requires not solely focusing on what might be of practical use tomorrow, but what is still a mystery that has to be solved.

I find it almost ironic that I am writing this defense of basic science research. Just recently, I was in NYC and went to brunch at the Riverpark restaurant. On the block leading to it is a big vegetable garden where they grow many of their own ingredients, much of it surrounded by a big wooden fence. And, since it is right there at Bellevue Hospital and NYU Medical Center and Rockefeller University, that fence is decorated with pictures and biographies of Nobel Prize winners in Medicine who had ties to NYC. My reaction was that all of these people (even if they had MD degrees) were doing laboratory, basic science research, not clinical research, even though the prize is for “Medicine”. Of course, having won Nobel Prizes, their research led to important practical breakthroughs, but for every Nobel Prize winner who discovers something that will make a major difference in health, there are thousands and thousands of others, working in laboratories everywhere, and this work is necessary.

Personally, I don’t think it necessarily needs to be done in medical schools, whether NYU or the University of Kansas, rather than in research institutes like Rockefeller or Kansas City’s Stowers Institute (or Karolinska in Sweden or the Pasteur Institute in France). I find, as a family doctor, that the fact that much basic research in human biology is done at medical schools leads to what I think are negative “side effects”. I believe that there is an over-emphasis on teaching medical students biological sciences in great detail (often at the level of minutiae) and an under-emphasis on the social sciences. I think that these areas are just as important – maybe more important for the practicing physician – but are usually not considered as “core” to medical student teaching.

In part this is because those working in the social sciences are most often “there”, at the main campus, not “here”, at the medical school. I am proud that the research conducted by faculty in my department is mostly community-based, looking at determinants of health and health disparities. But, whether biomedical research should be as important a part of medical schools as it usually is, or not, it is absolutely clear that it needs to occur, and that scientists need to solve mysteries.

Every mystery? Well, of course, that will never happen. And even for the ones they solve, the results are not always beneficial for folks. We can map the human genome! We can tell you if you and your family members are at increased risk for a terrible disease! Of course, often we cannot do anything about it, but it can make you depressed and pessimistic, and maybe you’ll lose your health insurance. So maybe we don’t need to tell your insurer, or even tell you, but getting to be able to do something about it first requires doing the science.

And of course there is a big difference between uncovering the mysteries of the universe, and even of finding evidence for what is appropriate diagnosis and treatment in populations, and in having to investigate everything in you. The father of another person quoted in the article developed delirium from overtreatment with drugs that was mistaken for dementia. “I don’t know if we have too many specialists and every one is trying to practice their specialty, but it should not have happened.” I agree; too many mistakes, too many errors (see Medical errors: to err may be human, but we need systems to decrease them, August 10, 2012) can come from there being too many specialists combined with too little communication.

The quote at the top of this piece notes that not everything has to be addressed and that this is hard to wrap your brain around, but it shouldn’t be.  All that research in the basic and clinical sciences should help us to understand when we need to investigate (do a CT scan for a black eye, in another example from the article, say) and when we don’t.

Often we should leave well enough alone. 
