  The laudable goal of making clinical decisions based on evidence can be impaired by the restricted quality and scope of what is collected as ‘best available evidence’. The authoritative aura given to the collection, however, may lead to major abuses that produce inappropriate guidelines or doctrinaire dogmas for clinical practice.

  Evidence-based medicine, Feinstein argued, did not reflect the demographic shift towards older, frailer patients with several chronic diseases (comorbidities), taking as its evidence data from trials on younger patients with single diseases. Such evidence oversimplified the many complex variables of the clinical encounter, and the goals that are important to real people. Feinstein attacked, too, the fascination with meta-analysis, which he called ‘statistical alchemy for the twenty-first century’. Many others have criticized meta-analysis, arguing that researchers combine different types of studies – they are comparing apples and oranges; that they often exclude ‘negative’ studies; and that they often include low-quality studies (‘garbage in, garbage out’). Meta-analysis, however, remains the least worst tool we have.

  Alvan Feinstein’s prediction that evidence-based medicine would lead to prescriptive guidelines came true, with massive over-prescribing, particularly in the elderly. The 2004 NHS contract for GPs heavily incentivized preventive prescribing (for high blood pressure, cholesterol, osteoporosis), contributing to a huge increase in the volume of medication consumed by the British population. Twenty per cent of all adults in Scotland are on five or more long-term medications. In the US, 25 per cent of people in their sixties are on five or more medications, rising to 46 per cent of people in their seventies, and 91 per cent of nursing-home residents. Feinstein also correctly pointed out that the evidence supporting the use of these drugs was based on studies of younger, healthier people, not the kind of patients who ended up taking them. The oldest, sickest people – nursing-home residents – are generally excluded from trials of new drugs, yet they are the most medicated people in our society. Sick and dying people are commonly sent in from nursing homes to the kind of acute general hospital where I work. Most are on ten or more medications, which are continued even when the person has clearly entered the terminal stage of their lives: the average survival of a person entering a nursing home in Ireland is two years. These patients are also far more likely to experience side effects and drug interactions, and to die of these ‘adverse reactions’.

  Over-prescribing (‘polypharmacy’) is now such a major public health issue – particularly in the elderly – that it has, somewhat ironically, become the subject of a new field of research. My brother Denis, an academic geriatrician, has devoted his career to it. Polypharmacy is the direct cause of major side effects and increased mortality, and a huge waste of money. The people most likely to experience side effects of prescribed drugs are those over eighty, with multiple comorbidities and a life expectancy of three years or less. One prescription often leads to another: a drug given for high blood pressure may cause ankle swelling due to fluid retention, leading to another prescription for a diuretic (water tablet), which may cause potassium depletion, leading to a prescription for potassium tablets, which may cause nausea, leading to a prescription for antiemetic drugs, which may cause confusion, and on, and on, a process known as the prescribing cascade. Fifteen per cent of acute admissions to hospital in elderly people are due to drug side effects.

  Taken individually, the prescribing of each drug can be justified on the basis of available evidence: yes, a statin lowers the risk of a heart attack or stroke; yes, a drug to lower blood pressure lowers the risk of stroke; yes, aspirin lowers the risk of a heart attack; yes, this drug for osteoporosis lowers the risk of a fracture; yes, an anticoagulant lowers the risk of stroke, and on, and on. What evidence-based medicine doesn’t tell us is whether the combination of all these drugs, in this specific, individual, unique person, is beneficial or harmful. Statins – taken for high cholesterol – are one of the great triumphs of Pharma. High cholesterol is only one of many factors associated with heart disease, which include smoking, high blood pressure, diabetes and family history. Once started on a statin, the ‘patient’ continues to take it for the rest of their life. As we have seen, the vast majority of people taking statins every day do not benefit, yet people with advanced dementia and other ‘life-limiting’ conditions commonly take statins to lower the risk of a heart attack or stroke in their non-existent future. We are treating populations, not people. The cholesterol awareness campaign has been so successful that very elderly people, and patients with other diseases that will carry them off long before their heart gives out, commonly ask me to check their cholesterol levels. It can be difficult to explain that cholesterol is only one of many risk factors, and that medication to lower it will probably do them no good and might even cause harm. The drug companies know well that the medical outpatients clinic or the GP surgery is not the environment which facilitates these nuanced discussions of benefit and risk. It’s so much easier to write a prescription for a statin. Before the patent expired, Lipitor – a statin – was the biggest-selling drug in the world. Between 1996 and 2012, Lipitor made $125 billion for Pfizer. Meanwhile, in poor countries, millions of people die every year in unnecessary pain because they have no access to morphine.

  GPs are often blamed for this over-prescribing, but there is a cultural expectation – particularly in Ireland and Britain – that a visit to the doctor must conclude with the issuing of a prescription. Doctors also find this gesture a useful means of concluding a prolonged and demanding consultation; a polite way, as one GP put it, of saying ‘now fuck off’. Drugs are commonly prescribed for people with relatively mild, transient depression and anxiety because doctors do not have the time or resources to offer psychological therapies. Both patients and doctors over-estimate the benefits of drugs, and under-estimate their risks. I commonly see patients – usually from nursing homes – who are taking up to twenty prescribed medications. Deprescribing is much more difficult than prescribing, involving, as it does, lengthy discussions on balancing risks and benefits, discussions that many people are either unable or unwilling to have. Older patients may view such deprescribing as a sign that their doctor is giving up on them, while some doctors regard deprescribing as a criticism of the colleague who first prescribed the medication. My brother has developed criteria to detect inappropriate prescribing: for example, giving a patient two drugs which are known to adversely interact with each other. I am more concerned, however, about ‘appropriate’ prescribing: that which, although sanctioned by ‘evidence’ and enforced by guidelines and protocols, is unlikely to benefit the individual patient.

  Kieran Sweeney (1952–2009) was a GP and academic who questioned the philosophical basis of evidence-based medicine. He pointed out – as did many others – that this evidence comes from studies of populations, and

  the results relate to what happens in groups of people, rather than in an individual. Decisions are based on interpretation of the evidence by objective criteria, distant from the patient and the consultation. Subjective evidence is anathema. In this context, evidence-based medicine is almost always doctor-centred; it focuses on the doctor’s objective interpretation of the evidence, and diminishes the importance of human relationships and the role of the other partner in the consultation – the patient.

  Sweeney argued that there is a ‘personal significance’, beyond statistical significance and clinical significance: what matters most to this person now? The role of the doctor, he argued, is to evaluate the available evidence, explore the patient’s aspirations and preferences, and advise accordingly. The doctor’s experience, training and personality will influence this discussion, but ‘the patient’s contribution is more important’.

  Medicine is an applied, not a pure science. Many would say that it is not a science at all: it is a craft, a practice. Even the phrase ‘scientific medicine’ implies that we don’t really believe that medicine is a science. After all, does anyone use the phrase ‘scientific physics’? In many ways, science and medicine are antithetical: doubt is at the very core of science, but doctors who express doubt are not highly regarded by their patients. This reflects the contemporary combination of consumerism in health care and the Cartesian belief that our bodies are machines, and should be mended as efficiently and unfussily as a broken kitchen appliance. The most successful doctors are those who make a clear and unambiguous diagnosis, and who give the patient complete faith in the treatment. This is why complementary and alternative medicine remains so popular. Its practitioners are always absolutely definite as to the cause of their patients’ problems; depending on which church the practitioner belongs to, this could be allergy to yeast, or misaligned vertebrae. It doesn’t matter. What counts is the absolute certainty with which this diagnosis is conveyed. Belief in the efficacy of the treatment is similarly instilled. Most of the problems that prompt people to visit doctors are transient and self-limiting; they get better regardless of what is done. This explains the continuing success of complementary medicine: nature heals, and the homeopath gets the fee and the credit. They are occasionally found out when they take on more serious problems, particularly cancer.

  There is a paradox at the heart of medicine: its intellectual basis is scientific in its ethos, but practice is not. The rational scepticism of David Hume is the basis of scientific thinking, but is a positive handicap for the doctor who, unlike Richard Doll, Petr Skrabanek, Austin Bradford Hill, Archie Cochrane, Thomas McKeown and John Ioannidis, sees real patients. We deal with people, with all of their irrationality, variation, vulnerability and gullibility. Science informs medicine, and medicine looks to science for answers, but they are radically different, often opposing, activities. Although eminent and pompous doctors like to quote the philosopher of science Karl Popper, the Popperian scientist, with ideas of bold conjecture and merciless refutation, does not flourish in medicine. There is little or no evidence to support quite a lot of what we do, and doctors have to work within, and around, this limitation. A project called ‘Clinical Evidence’, sponsored by the British Medical Journal, reviewed 3,000 medical practices, including treatments and tests. It found that a third are effective, 15 per cent are harmful, and 50 per cent are of unknown effectiveness. Medicine adopts new practices quickly, but drops them slowly. Over the last twenty-five years there have been some evidence-based medicine successes, in areas such as surgery and endoscopy, which are outside the baleful influence of pharma. Many useless, once-routine procedures have been abandoned. Most doctors, however, work in a world where managing uncertainty is the greatest skill, and most patients have trivial, self-limiting conditions, or chronic shit life syndrome. Hospital-based general medicine is mainly concerned with the management of frail old people with multiple problems – medical, social and existential. A ‘post-take’ general medical ward round seems a long way from the Popperian idea of science.

  Richard Asher, who somehow managed to be both a Humean sceptic and a clinical doctor, observed that whatever the evidence says, success in medical practice is often due to a combination of real enthusiasm on the part of the doctor and blind faith in the patient. He argued that you cannot fake such enthusiasm: ‘If you admit to yourself that the treatment you are giving is frankly inactive, you will inspire little confidence in your patients, unless you happen to be a remarkably gifted actor, and the results of your treatment will be negligible.’ Asher’s paradox – ‘a little credulity makes us better doctors, though worse research workers’ – is why medicine is so difficult for the Humean sceptic. When I was a very young and inexperienced house officer, a wise rheumatologist, whose outpatients I assisted with, told me: ‘You will find this clinic much easier if you can bring yourself to believe in the concept of soft-tissue rheumatism.’ (‘Soft-tissue rheumatism’ is a blanket term used to describe all manner of muscle and joint aches – often psychosomatic – which cannot be given a definite diagnosis by X-rays or blood tests.) The Spanish philosopher and essayist José Ortega y Gasset, in his Mission of the University (1930), expressed this well: ‘Medicine is not a science but a profession, a matter of practice… It goes to science and takes whatever results of research it considers efficacious; but it leaves all the rest. It leaves particularly what is most characteristic of science: the cultivation of the problematic and doubtful.’

  The founders of evidence-based medicine did not foresee the sins that would be committed in its name. When clinical guidelines first appeared in the 1990s, I welcomed them as a useful educational tool. Gradually, however, guidelines became mandatory protocols. These protocols, we were told, were all ‘evidence-based’, but many would not long withstand the beady gaze of John Ioannidis. In the US, protocols were driven by the insurance companies, and in Britain by the NHS, supported by a vast number of professional bodies and quangos. Although protocols are supposedly evidence-based, there is little evidence that their adoption and implementation have led to any significant improvements. Some believe that protocol-driven care is a prelude to a future when most medical care is provided by paraprofessionals, such as physician assistants and nurse practitioners.

  The Israeli behavioural psychologists Daniel Kahneman and Amos Tversky persuaded the world that humans are inherently flawed, their reasoning subject to systematic cognitive bias and error. The idea of an individual doctor exercising clinical judgement, using unquantifiable attributes such as experience and intuition, has become unfashionable and discredited, yet it is this judgement, this human touch, which is the heart of medicine. Richard Asher defined ‘common sense’ as ‘the capacity to see the obvious even amid confusion, and to do the obviously right thing rather than working to rule, or by dead reckoning’. The most powerful therapy at doctors’ disposal is themselves.

  6

  How to Invent a Disease

  The medical–industrial complex has undermined the integrity of evidence-based medicine. It has also subverted nosology (the classification of diseases) by the invention of pseudo-diseases to create new markets. A couple of years ago, I stumbled across a new pseudo-disease: ‘non-coeliac gluten sensitivity’. I was invited to give a talk at a conference for food scientists on gluten-free foods. I guessed that I was not their first choice. Although I had published several papers on coeliac disease during my research fellowship, I had written only sporadically on the subject since then; I was not regarded as a ‘key opinion leader’ in the field. I was asked to specifically address whether gluten sensitivity might be a contributory factor in irritable bowel syndrome (IBS). This is a common, often stress-related condition that causes a variety of symptoms, such as abdominal pain, bloating and diarrhoea. It is probably the most frequent diagnosis made at my outpatient clinic.

  Coeliac disease is known to be caused by a reaction to gluten in genetically predisposed people, but now many others, despite having negative tests (biopsy, blood antibodies) for coeliac disease, still believe their trouble is caused by gluten. This phenomenon has been given the label of ‘non-coeliac gluten sensitivity’. At the conference, an Italian doctor spoke enthusiastically about this new entity, which she claimed was very common and responsible for a variety of maladies, including IBS and chronic fatigue. I told the food scientists that I found little evidence that gluten sensitivity had any role in IBS, or indeed in anything other than coeliac disease. I listened to several other talks, and was rather surprised that the main thrust of these lectures was commercial. A director from Bord Bia (the Irish Food Board) talked about the booming market in ‘free-from’ foods: not just gluten free, but lactose free, nut free, soya free, and so on. A marketing expert from the local University Business School gave advice on how to sell these products – he even used the word ‘semiotics’ when describing the packaging of gluten-free foods. Non-coeliac gluten sensitivity may not be real, but many people at this conference were clearly invested in its existence. One speaker showed a slide documenting the exponential rise in journal publications on gluten sensitivity, and it reminded me of Wim Dicke, who made the single greatest discovery in coeliac disease – that the disease was caused by gluten – and struggled to get his work published.

  Willem-Karel Dicke (1905–62) was a paediatrician who practised at the Juliana Children’s Hospital in The Hague, and later (after the Second World War) at the Wilhelmina Children’s Hospital in Utrecht. He looked after many children with coeliac disease. This was then a mysterious condition which caused malabsorption of nutrients in food, leading to diarrhoea, weight loss, anaemia and growth failure. Many had bone deformity due to rickets (caused by lack of vitamin D), and death was not uncommon: a 1939 paper by Christopher Hardwick of Great Ormond Street Children’s Hospital in London reported a mortality rate of 30 per cent among coeliac children. Hardwick described how these children died: ‘The diarrhoea was increased, dehydration became intense, and the final picture was that of death from a severe enteritis.’ It had long been suspected that the disease was food-related, and various diets, such as Dr Haas’s banana diet, were tried, but none was consistently effective. During the 1930s, Dicke had heard of several anecdotal cases of coeliac children who improved when wheat was excluded from their diet. Towards the end of the Second World War, during the winter of 1944–5 – the hongerwinter, or ‘winter of starvation’ – Holland experienced a severe shortage of many foods, including bread, and the Dutch were famously reduced to eating tulip bulbs. Dicke noticed that his coeliac children appeared to be getting better when their ‘gruel’ was made from rice or potato flour instead of the usual wheat. He attended the International Congress of Pediatrics in New York in 1947, and although he was a shy and reticent man, he told as many of his colleagues as he could about his observation on wheat and coeliac disease. Years later, Dicke’s colleague and collaborator, the biochemist Jan van de Kamer, wrote: ‘Nobody believed him and he came back from the States very disappointed but unshocked in his opinion.’