

  Why had so much progress been made during this fifty-year period? The Second World War drove technological innovation; the post-war years saw a dramatic expansion of academic medicine and biomedical research, particularly in the US. Vannevar Bush’s 1945 report, Science: the Endless Frontier, set the agenda for biomedical research. Bush, formerly dean of engineering at MIT, chaired the National Defense Research Committee, which was established by President Roosevelt in 1940. Roosevelt recognized that science had been crucial to the war effort and wanted the lessons learned to be applied to the development of science in peacetime. Bush’s report emphasized the primacy of ‘basic’ research and government funding. The report led to a huge expansion in scientific research at American universities and a tenfold increase in government funding from the 1940s to the 1960s. Universities, which until then saw their main function as teaching, became the principal location of scientific research. The concentration on ‘basic’ science was not universally welcomed. The great epidemiologist and clinician Alvan Feinstein wrote in 1987: ‘The research changed its orientation. The preclinical sciences became detached from their clinical origins and were converted into “basic biomedical sciences” with goals that… often had no overt relationship to clinical phenomena.’

  Following the foundation of the NHS, there was a major expansion of academic medicine in Britain and the establishment of research-focused teaching hospitals, such as the Royal Postgraduate Medical School (RPMS) at the Hammersmith Hospital. The 1944 Goodenough Report on Medical Education recommended that ‘every medical school should be a university medical school’. This led to a dramatic expansion in the number of academic medical posts, with the creation of more than fifty new clinical professorships. After the war, the RPMS set the agenda for British medical research; the consultants there were not allowed to engage in private practice, and they were essentially full-time clinical researchers. Clinical research was then unhampered by bureaucracy. This came with a downside: many patients were abused – unaware that they were being used as guinea pigs, they underwent invasive and dangerous procedures carried out purely for research purposes. Sir John McMichael was director of the Hammersmith school for twenty years after the war; he pioneered the techniques of cardiac catheterization and liver biopsy, both now routine procedures. In the early 1950s, Alex Paton, then a registrar at the Hammersmith, kept a private diary in which he expressed his concerns: ‘We and anyone else at Hammersmith use subjects for experiments who will not necessarily benefit by them… The beds are really nothing more than an annexe to the medical laboratories.’ The physician Maurice Pappworth drew the public’s attention to these unethical practices in 1967 with his book Human Guinea Pigs, and for his trouble was ostracized by the medical establishment. Of McMichael’s experiments on cardiac catheterization in elderly patients with heart failure, Pappworth wrote: ‘It appears that doctors sometimes forget that those who are most in need of help, sympathy and gentle treatment are not the less sick but the most sick, and that among these the dying and the old have pre-eminent claims.’ Professor (later Dame) Sheila Sherlock was a protégée of McMichael’s at the Hammersmith, and later Professor of Medicine at the Royal Free Hospital. Neither cultivated a bedside manner; Sherlock’s British Medical Journal obituary stated that ‘there was little place for good taste or patients’ feelings’. Pappworth wrote a letter to the Lancet in 1955, in which he referred to Sherlock’s ‘dastardly experiments’ and accused British teaching hospitals of being ‘dominated by ghoulish physiologists masquerading as clinicians’. The Lancet declined to publish.

  Sheila Sherlock’s protégé – and later rival – Roger Williams set up the Liver Unit at King’s College Hospital, which became the largest such unit in Britain. Working with Roy Calne, a surgeon at Addenbrooke’s Hospital in Cambridge, Williams established liver transplantation in Britain. They wrote up their initial experience in the British Medical Journal in 1969, describing the outcome of transplantation in thirteen patients. Only two of the thirteen survived to four months; four died within thirty-six hours of the operation. Many others might have given up at this point, but Williams and Calne kept going. They learned how to prevent rejection of the transplanted organ with new drugs to suppress the body’s immune system; they refined pre- and post-operative care; they established which diseases could be successfully treated by transplantation, and thus improved the selection of patients. Liver transplantation is now a successful and routine treatment, carried out in several British hospitals, and the vast majority of patients survive. Were the British Medical Journal to publish a paper now with these mortality figures, the doctors involved would be ordered by their managers to desist, and might well find themselves before the General Medical Council.

  The British medical establishment resisted the foundation of the NHS, and Aneurin Bevan had to offer generous inducements to the hospital consultants to ensure their co-operation. He allowed them to continue practising privately, and also introduced the financial incentives called merit awards. From 1948 to sometime in the late 1970s, these consultants enjoyed professional and academic freedoms that today’s beleaguered doctors can only dream of. Peter Cotton described the gentlemanly pace of his colleagues in the mid-1970s: ‘Many of my consultant colleagues seemed to spend most of their time up the street in private practice, allowing their chauffeurs to take them to the Middlesex twice a week for a ward round and a pot of tea with the ward sister.’ Many, if not most, contented themselves with Harley Street and the assignation for tea with sister, but the enthusiasts were free to pursue their obsessions. Cotton wrote admiringly of his senior colleague at the Middlesex, Peter Ball:

  He worked two days a week at the hospital, one day at Kew Gardens where he was revising the taxonomy of orchids, one day in private practice, and one at the London Zoo, where he did research on snakes and various parasites. Part of his research with worms took him regularly to Africa, and he actually imported some specimens by swallowing them.

  British medicine’s prestige rose dramatically during the three decades after the Second World War, and research was producing treatments that were demonstrably and dramatically effective. Consultants – particularly those based in the great teaching hospitals – enjoyed almost complete professional and academic freedom. They answered neither to administrators nor to the general public. Their eccentricities and scientific passions were not only tolerated, but actively encouraged. Many, as Peter Cotton noted, abused this freedom to make money, but others, such as his colleague Peter Ball – and Cotton himself – pursued loftier ambitions. Cotton had established ERCP, a major new technique, when he was still – technically – a trainee (senior registrar). This would now be unthinkable. By the mid-1980s, Cotton had a global reputation, and doctors came from all over the world to learn ERCP at his unit in the Middlesex Hospital. He had, however, become disenchanted with the NHS. The hospital, operating on a fixed annual budget, was not impressed by the fame of its young star gastroenterologist or by the many overseas trainees he attracted, and once sent him written instructions ‘to do 25 per cent less procedures next year’. The final straw came when the hospital told him they would no longer allow the overseas trainees to work there (even though they were unpaid) ‘because they attracted patients who they had to feed and bathe’. In 1986, Cotton was appointed Chief of Endoscopy at Duke University in North Carolina, and spent the remainder of his career in the US.

  Power in medicine slowly shifted from the teaching-hospital clinician-aristocrats – doctors like Sir Francis Avery Jones – to the new laboratory-based professional researchers, the Big Science Brahmins. The historian Roy Porter wrote in 1997:

  Today, though one or two transplant surgeons are household names, the real medical power lies in the hands of Nobel Prize-winning researchers, the presidents of the great medical schools, and the boards of multi-billion dollar hospital conglomerates, health maintenance organizations and pharmaceutical companies.

  The great teaching hospitals are now run by managers, and the consultants, although their number has increased dramatically since the mid-1980s, are collectively and individually without influence. Doctors still receive knighthoods and damehoods, but they are bestowed mainly on the academic Brahminate and the committee men and women. I last attended a BSG meeting in Manchester in 2014; the clinician-aristocrats had disappeared. In their place was a collection of demoralized Stakhanovite workers, none of whom had ever lunched at the Athenaeum over roast pheasant and a frisky claret. The golden age had passed.

  4

  Big Bad Science

  The professionalization, industrialization and globalization of medical research was well under way by the time of the BSG’s Golden Jubilee Meeting in 1987, and by the 2000s, the process was complete. From the 1950s to the 1990s, many NHS teaching-hospital consultants carried out significant research, even though they had no formal academic appointment. In the new millennium, this all changed. In the wake of the scandal at Alder Hey Hospital in Liverpool (where the organs of children who had died at the hospital had been retained, for research purposes, without the parents’ knowledge or consent), the bureaucracy around medical research grew to the point where only full-time research professionals, supported by a secretariat to handle all the red tape, could do it. The Thatcher-era health reforms disempowered hospital consultants, who now found themselves at the clinical coalface, at the beck and call of managers: with so many targets to reach, there was no time for research. The academic Brahminate withdrew ever more from the hospitals. Teaching was taken over by specialists in medical education, allowing the researchers to concentrate entirely on grant applications and committee work. The dividing line between academic medicine and industry became so blurred as to be almost invisible.

  The phrase ‘Big Science’ was coined by the physicist Alvin Weinberg to mean the type of science that is laboratory-based, lavishly funded and conducted in large, usually university-based research facilities, overseen by powerful, quasi-feudal, academic managerialists. The research industry draws in vast quantities of public money, and sells itself to politicians and industry as a driver of economic growth. The molecular biologists often use the hackneyed and pompous phrase ‘from bench to bedside’ to bolster their claim that their labours are relevant to real-life patients. Although this type of research has grown massively since the late 1980s, advances which benefit patients have been modest and unspectacular compared to the golden age. The Big Science Brahmins are now so removed from the clinical front line that the phrase rings hollow. The basic science model seems superficially plausible. It is based on the Cartesian idea of the body as an elaborate machine; disease is a malfunction of this machine. To cure disease, you must first understand how the machine works. A 2003 study by John Ioannidis showed the limitations of such ‘mechanistic’ research. Ioannidis is a Greek–American professor of medicine at Stanford Medical School, and the founder and leader of a new discipline known as ‘meta-research’, or research about research. He and his wife Despina – a paediatrician – examined 101 basic science discoveries, published in the top basic science journals (Science, Nature, Cell, etc.) between 1979 and 1983, all of which claimed to have a clinical application. Twenty years later, twenty-seven of these technologies had been tested clinically; five were eventually approved for marketing, of which only one was deemed to have clinical benefit.

  Most of the diseases that kill us now are caused by, and associated with, ageing. We just wear out. Dementia, heart disease, stroke and cancer kill us now, not smallpox and Spanish flu. Medicine can still pull off spectacular rescues of mortally sick young people, but these triumphs are notable for their relative rarity. The other flaw in the Big Science theory is that a great deal of what is laid at medicine’s door to fix has nothing to do with malfunction of the machine; much of the work of GPs is helping people cope not with disease but with living problems, or ‘shit life syndrome’, as some call it. More than 50 per cent of my outpatients have symptoms caused by psychosomatic conditions, such as irritable bowel syndrome, which cannot be elucidated or cured by the molecular biologists. Humans have always experienced stress and distress, but only in the twentieth century did we (at least those of us living in rich countries) decide that the inevitable vicissitudes of living should be reconfigured as medical problems.

  Does medicine still need breakthroughs? Is research still worth doing? Medicine should try its utmost to prevent premature mortality; I would arbitrarily define this as death before the age of eighty. More importantly, medicine should deal better with pain, suffering and disability. I do not believe, however, that better ways of relieving suffering will emerge from molecular medicine. There is a philosophical, moral and existential paradox at the heart of research. Death is the inevitable product of disease, ageing and the body’s breakdown. Research aims to ‘fight’ this, yet we accept, deep within our being, that death is not only inevitable, it is good. Why, then, should we continue to fight this unwinnable and unnecessary battle? Premature mortality has declined dramatically and is now rare; most of us can expect to live into our eighth or ninth decade. Extending longevity beyond that is misguided and dangerous. The increase in human longevity witnessed in the twentieth century is so new, and so dramatic, that as a species we haven’t learned how to deal with it.

  What, then, should medical research do? The Big Science model – find the ‘cause’ and thence the ‘cure’ – should still be applied to some diseases. Crohn’s disease (a chronic inflammatory disorder of the intestine), for example, causes long-term disability in a predominantly young patient population, and often requires permanent treatment with dangerous immunosuppressive drugs. A cure would be an unquestioned benefit. Research, unfortunately, will never help most of what ails mankind: growing old and dying. These are eternal human verities, but we expect medicine to somehow solve this riddle. Epidemiologists and public health doctors would argue that medicine now contributes little to health in developed countries, and that poverty, lack of education and deprivation are now the main drivers of poor health. This is almost certainly true. Although vaccination and antibiotics contributed significantly to the increase in human longevity in the mid-twentieth century, medical care now has little direct influence on the health of a population, accounting for only about 10 per cent of the variation in a population’s health. Furthermore, some have argued persuasively that if we were to simply apply evenly and logically what research has already proven, health care would be transformed.

  Big Science has not delivered the breakthroughs expected of it, yet it consumes the great majority of medical research funding, so other, possibly more productive, types of research have been starved of resources. Why is this model a failure? As my experience of research taught me, Big Science is funded to produce data, rather than original ideas. The biophysicist John Platt wrote in Science in 1964: ‘We speak piously of taking measurements and making small studies that will “add another brick to the temple of science”. Most such bricks just lie around in the brickyard.’ Another limitation of the Big Science model is its assumption that research can be planned and that nothing unexpected will be encountered; yet many of the great scientific discoveries (penicillin, Helicobacter) were unanticipated and serendipitous.

  The doctor and polemicist Bruce Charlton has observed that the culture of contemporary medical research is so conformist that truly original thinkers can no longer prosper in such an environment, and that science selects for perseverance and sociability at the expense of intelligence and creativity:

  Modern science is just too dull an activity to attract, retain or promote many of the most intelligent and creative people. In particular, the requirement for around 10, 15, or even 20 years of post-graduate ‘training’ before even having a chance at doing some independent research of one’s own choosing, is enough to deter almost anyone with a spark of vitality or self-respect; and utterly exclude anyone with an urgent sense of vocation for creative endeavour. Even after a decade or two of ‘training’ the most likely scientific prospect is that of researching a topic determined by the availability of funding rather than scientific importance, or else functioning as a cog in someone else’s research machine. Either way, the scientist will be working on somebody else’s problem – not his own. Why would any serious intellectual wish to aim for such a career?

  Charlton observed that modern medical research is a collective activity, requiring the co-ordination and co-operation of many sub-specialists: being a ‘team-player’ is an essential attribute for such work. He argued that the very best scientists are ‘wasted as team players. The very best scientists can function only as mavericks because they are doing science for vocational reasons.’ Charles Darwin, for example, worked alone, mainly at home, without a university appointment or funding. He had the good fortune of independent means, and worked only on subjects that stimulated his curiosity.