Sunday, January 5, 2014

Medical schools are no place to train physicians

Doctors have to go to medical school. That makes sense. They have to learn their craft, master skills, and gain an enormous amount of knowledge. They also, and this is at least as important, need to learn how to think and how to solve problems. And they need to learn how to be life-long learners, because new knowledge is constantly being discovered and old truths are being debunked. Therefore, they must learn to un-learn, and not stay attached to what they once knew to be true but no longer is. They also need, while drinking from this fire-hose of new information and new skills, to retain their core humanity and their caring, the reasons that (hopefully) most of them went into medicine.

Medical students struggle to acculturate to the profession, to learn the new language replete with eponyms, abbreviations, and long abstruse names for diseases (many are from Latin, and while they are impressive and complicated, they are also sometimes trite in translation, e.g., “itchy red rash”). They have to learn to speak “medical” as a way to be accepted into the guild by their seniors, but must be careful that it does not block their ability to communicate with their patients; they also need to continue to speak English (or whatever language it is that their patients speak). “Medical” may also offer a convenient way of obscuring, temporizing, and avoiding difficult conversations (“the biopsy indicates a malignant neoplasm” instead of “you have cancer”). But there needs to be a place for them to learn.

So what is wrong with the places where we are teaching them now? Most often, allopathic (i.e., “MD”) medical schools are part of an “academic health center” (AHC), combined with a teaching hospital. They have large biomedical research enterprises, with many PhD faculty who, if they are good and lucky, are externally funded by the National Institutes of Health (NIH). Some or many of them spend part of their time teaching the “basic science” material (biochemistry, anatomy, physiology, microbiology, pharmacology, pathology) that medical students need to learn. By “need to learn” we usually mean “what we have always taught them” or “what they need to pass the national examination (USMLE Step 1) that covers that material”. This history goes back 100 years, to the Flexner Report of 1910. Commissioned by the Carnegie Foundation at the urging of the AMA, educator Abraham Flexner evaluated the multitude of medical schools, recommended closing the many that were little more than apprenticeship programs without a scientific basis, and recommended that medical schools follow the model of Johns Hopkins: part of a university (in the German tradition), grounded in science, and built around a core curriculum of the sciences. This has been the model ever since.

However, 100 years later, these medical schools and the AHCs of which they are a part have grown to enormous size, concentrating huge basic research facilities (Johns Hopkins alone receives over $300 million a year in NIH grants) and tertiary and quaternary medical services: high-tech, high-complexity treatment for rare diseases or complex manifestations of more common ones. They have often lost their focus on the health of the actual community of which they are a part. This was a reason for two rounds of creating “community-based” medical schools, which use non-university, or “community”, hospitals: the first in the 1970s and the second in the 2000s. Some of these schools have maintained a focus on community health, to a greater or lesser degree, but many have largely abandoned those missions as they have sought to replicate the Hopkins model and become major research centers. The move of many schools away from community was the impetus for the “Beyond Flexner” conference held in Tulsa in 2012 (see Beyond Flexner: Taking the Social Mission of Medical Schools to the next level, June 16, 2012) and for a number of research studies focused on the “social mission” of medical schools.

The fact is that most doctors who graduate from medical school will not practice in a tertiary AHC, but rather in the community, although the other fact is that a disproportionate number of them will choose specialties that are of little or no use in many communities that need doctors. They will, if they can (i.e., if their grades are high enough), often choose subspecialties that can only be practiced in the high-tech setting of the AHC or of the relatively small number of other very large metropolitan hospitals, often with large residency training programs. As they look around at the institution in which they are being educated, they see an enormously skewed mix of specialties. For example, 10% of doctors may be anesthesiologists, and there may well be more cardiologists than primary care physicians. While this is not the mix in the world of practice, and still less the mix that we need for an effectively functioning health system, it is the world in which they are being trained.

The extremely atypical mix of medical specialties in the AHC is not “wrong”; it reflects the atypical mix of patients who are hospitalized there. It is time for another look at the studies that have been done on the “ecology of medical care”, first by Kerr White in 1961 and replicated by the Robert Graham Center of the American Academy of Family Physicians in 2003 (see The role of Primary Care in improving health: In the US and around the world, October 13, 2013), and represented by the graphic reproduced here. The biggest box (1000) is a community of adults at risk, the second biggest (800) is those who have symptoms in a given month, and the tiny one, representing fewer than 1 in 1000 (less than 0.1%), is those hospitalized at an academic teaching hospital. Thus, the population that students mostly learn on is atypical, heavily skewed to the uncommon; it is not representative even of all hospitalized people, not to mention the non-hospitalized ill (and still less the healthy-but-needing-preventive-care) in the community.

Another aspect of educating students in the AHC is that much of the medical curriculum is determined by those non-physician scientists who are primarily researchers. They not only teach medical students, they (or their colleagues at other institutions) write the questions for USMLE Step 1. They are often working at the cutting edge of scientific discovery, but the knowledge that medical students need at this point in their education is much more basic, much more about understanding the scientific method and what constitutes valid evidence. There is relatively little need, at this stage, for students to learn about the current research that these scientists are doing. Even the traditional memorization of lots of details about basic cell structure and function is probably unnecessary; after 5 years of non-use, students likely retain only 10% of what they learn, and even if they do need 10%, or more, in their future careers, there is no likelihood that it will be the same 10%. We have to do a better job of determining what portion of the information currently taught in the “basic sciences” is crucial for all future doctors to know and memorize, and we also need to broaden the definition of “basic science” to include the key social sciences of anthropology, sociology, psychology, and communication, and even many areas of the humanities, such as ethics. This is not likely to happen in a curriculum controlled by molecular biologists.

Medical students need a clinical education in which the clinical conditions they see most often are the ones most common in practice, the presentations they see most often are the most common presentations of those conditions, and the treatments they see implemented are the ones most commonly used. They need to work with doctors who are representative, in skills and focus, of the doctors they will be (and need to be) in practice. Clinical medical education seems to work on the implicit belief that the ability to take care of patients in an intensive care unit necessarily means one is competent to take care of those in the hospital, or that the ability to care for people in the hospital means one can care for ambulatory patients, when in fact these are dramatically different skill sets.

This is not to say that we do not need hospitals and health centers that can care for people with rare, complicated, end-stage, tertiary and quaternary disease. We do, and they should have the mix of specialists appropriate to them, more or less the mix we currently have in AHCs. And it is certainly not to say that we do not need basic research that may someday come up with better treatments for disease. We do, and those research centers should be generously supported. But their existence need not be tied to the teaching of medical students. The basic science, social science, and humanities that every future doctor needs to learn can be taught by a small number of faculty members focused on teaching, and do not need to be tied to a major biomedical research enterprise. Our current system is not working; we produce too many doctors who do narrow rescue care and not enough who provide general care. We spend too much money on high-tech care and not enough on addressing the core causes of disease.

If we trained doctors in the right way in the right place we might have a better shot at getting the health system, and even the health, our country needs.
