Can Medicine Be Cured

  John Ioannidis observed how perverse incentives, and the natural selection described by Smaldino and McElreath, have created a new breed of managerialist-scientist, whose success is measured not by originality of thought or new discoveries, but by the volume of grant income generated and the number of PhD students and post-doctoral scientists (postdocs) employed:

  Of course, those who are the most successful in grantsmanship include many superb scientists. However, they also include a large share (in many places, the majority) of the most aggressive, take-all, calculating managers. These are all very smart people and they are also acting in self-defence: trying to protect their research fiefdoms in uncertain times. But often I wonder: what monsters have we generated through selection of the fittest! We are cheering people to learn how to absorb money, how to get the best PR to inflate their work, how to become more bombastic and least self-critical. These are our science heroes of the 21st century.

  Big Science and Big Pharma have grown ever closer. Astonishingly, most medical researchers do not see this as a conflict of interest, or a threat to their scientific integrity; many of them are irritated that anyone would even bother to question this ‘partnership’. Pharmaceutical firms in Britain have increasingly located themselves adjacent to prestigious academic medical centres, such as Addenbrooke’s Hospital in Cambridge. Large US medical centres like the Cleveland Clinic strongly encourage their medical staff to collaborate with industry. GlaxoSmithKline proudly announced in 2016 that senior biomedical academics would join the ‘Immunology Catalyst Sabbatical Programme’, ‘designed to embed academic scientists in GSK laboratories’. In 2000, Dr Marcia Angell, then editor of the New England Journal of Medicine (by common consent the world’s most prestigious general medical journal), wrote an editorial entitled ‘Is academic medicine for sale?’, in which she warned against the increasingly unhealthy relationship between medical researchers and industry, particularly Big Pharma. Shortly afterwards, Dr Thomas J. Ruane wrote to the journal: ‘Is academic medicine for sale? No, the current owner is very happy with it.’

  The last decade has seen the emergence of a new type of research facility, independent of government and university. These ‘cathedral-sized industrial campuses’ are funded either by pharmaceutical companies or by billionaire philanthropists such as Eli Broad and Mark Zuckerberg. Zuckerberg and his paediatrician wife, Priscilla Chan, are planning to spend $3 billion on medical research, with the modest aim to ‘cure, prevent or manage all diseases’. (Some commentators have pointed out that $3 billion is a rather small sum for such an ambition.) These centres are beginning to dominate the hugely profitable biotechnology sector. A typical product of such a campus is Novartis’s genetically engineered CAR (chimeric antigen receptor) T-cell therapy for childhood acute lymphoblastic leukaemia, which costs $475,000 per patient. The science writer Jim Kozubek has warned that this new biotech is a malign force, widening the gap between rich and poor:

  Biotech, rapidly becoming ever more sophisticated, may be the most powerful cultural force the world has known – and it looks increasingly unfair. Forms of eugenics, in vitro fertilization, and the transformations of our very genes and cells into profitable biologic medicines for investor-first culture are already being normalized, and inequalities are therefore accelerating. Indeed, the ‘artificial world’ of biotech, rather than an equitable cultivating force in society that promotes access to medicines and health for the poor and disenfranchised, is enhancing the wealth of elite scientists and their lawyers, while making medicine far more expensive and harder to afford.

  ‘Philanthrocapitalism’ – the funding of medical research by the likes of Zuckerberg, Broad and Bill Gates – is a powerful new force in global health. The Bill and Melinda Gates Foundation has done much good, but some have argued that such organizations lack accountability, and that they are used as a shield to deflect criticism of the industries (Microsoft, Facebook) that generated their wealth. Philanthrocapitalism is not new: Rockefeller, Ford and Carnegie cited their charitable activities when responding to criticism of their business methods and treatment of employees. Some new plutocrats, such as the radical libertarian and Trump supporter Peter Thiel, a co-founder of PayPal, are funding Big Science in the hope that it can buy them the one thing their billions currently cannot: immortality. Many see such philanthrocapitalism as a malign influence on both health care and medical research. The AIDS activist Gregg Gonsalves expressed concern about the Gates Foundation: ‘Depending on what side of bed Gates gets out of in the morning, it can shift the terrain of global health… it’s not a democracy. It’s not even a constitutional monarchy. It’s about what Bill and Melinda want.’ The Foundation is keen on partnership with pharmaceutical corporations, employing numerous former industry executives. A study published in the Lancet in 2009 showed that most of the Foundation’s grants went to commercial organizations, and most of the grants to NGOs went to those in high-income countries. David McCoy, Professor of Global Public Health at Queen Mary University of London, says: ‘Appealing to the megarich to be more charitable is not a solution to global health problems. We need a system that does not create so many billionaires and, until we do that, this kind of philanthropy is either a distraction or potentially harmful to the need for systemic change to the political economy.’

  The Human Genome Project (fully completed in 2003) was thought to be the greatest breakthrough of Big Science. James Watson, co-discoverer of the DNA double helix, described it as ‘the ultimate tool for understanding ourselves at the molecular level… we used to think our fate was in our stars. Now we know, in large measure, our fate is in our genes.’ There were, in fact, two rival genome projects: one was carried out by an international public consortium led by the US and headed by Francis Collins, and the other by Celera, a biotechnology company led by the maverick entrepreneur Craig Venter. In 1999, Collins wrote in the New England Journal of Medicine that ‘the idea captured the public imagination… in the manner of the great expeditions – those of Lewis and Clark, Sir Edmund Hillary, and even Neil Armstrong.’ The completion of a ‘rough draft’ of the genome was announced on 26 June 2000 at the White House by Bill Clinton, who was joined by Tony Blair on a satellite link. Clinton and Blair declared that all genome information should be free. Collins and Venter announced that the two rival genome projects would co-operate. Clinton, with his great talent for telling people what they want to hear, declared: ‘Without a doubt, this is the most important, most wondrous map ever produced by mankind… Today, we are learning the language in which God created life’, and confidently predicted that the Human Genome Project would ‘revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases’. Blair – who had always struggled with science and technology – piously agreed: ‘Today’s developments are almost too awesome fully to comprehend.’ Francis Collins was slightly overcome by the occasion: ‘It’s a happy day for the world. It is humbling for me and awe-inspiring to realize that we have caught the first glimpse of our own instruction book, previously known only to God. What a profound responsibility it is to do this work. 
Historians will consider this a turning point.’ In an article published in the Journal of the American Medical Association in February 2001, Collins predicted that by 2020, ‘new gene-based “designer drugs” will be introduced to the market for diabetes mellitus, hypertension, mental illness, and many other conditions… every tumour will have a precise molecular fingerprint determined, cataloguing the genes that have gone awry, and therapy will be individually targeted to that fingerprint.’ Although the print and broadcast media reported these hyperbolic claims uncritically, there were a few dissenters within the academy. Neil Holtzman of Johns Hopkins Medical School and Theresa Marteau of King’s College London wrote in the New England Journal of Medicine shortly after the White House ceremony:

  Differences in social structure, lifestyle, and environment account for much larger proportions of disease than genetic differences. Although we do not contend that the genetic mantle is as imperceptible as the emperor’s new clothes were, it is not made of the silks and ermines that some claim it to be. Those who make medical and science policies in the next decade would do well to see beyond the hype.

  In 2010, some years after the hype had evaporated, Monika Gisler, from the Swiss science university ETH Zurich, wrote a paper in which she characterized the Human Genome Project as an example of a ‘social bubble’: ‘The hypes fuelling the bubble during its growth have not been followed by real tangible outcomes… the consensus of the scientific community is that it will take decades to exploit the fruits of the HGP [Human Genome Project].’

  Collins’s predictions have not come to pass. Practical applications of the HGP have been modest – a great disappointment to those who had heralded it as the greatest scientific achievement of any age. The psychiatrist Joel Paris observed that when we are told answers are around the corner, that is where they tend to stay. Some of the most eminent figures of American molecular medicine have come clean. The renowned cancer biologist Robert Weinberg admitted that the clinical applications of the Human Genome Project ‘have been modest – very modest compared to the resources invested’. Harold Varmus, former director of the National Institutes of Health and doyen of American cancer research, wrote in the New England Journal of Medicine that ‘only a handful of major changes… have entered routine medical practice’, most of them the result of discoveries that preceded the completion of the Human Genome Project. ‘Genomics’, he said, ‘is a way to do science, not medicine.’ In 2009, Francis Collins, along with twenty-six other geneticists, wrote a review paper for Nature, in which they acknowledged that despite all the effort and money spent, geneticists had identified only a small fraction of the genetic basis of the common human diseases. ‘It is fair to say’, Collins admitted, ‘that the Human Genome Project has not yet directly affected the health care of most individuals.’ Craig Venter, too, confessed: ‘There is still some way to go before this capability can have a significant effect on medicine and health.’

  Although it failed to deliver the breakthroughs predicted of it, the Human Genome Project has been one of the main drivers of the new age of ‘Big Data’. David Pye, scientific director of the Kidscan Children’s Cancer Research Charity, warns:

  the quantity of data available to researchers is fast becoming a problem. Over the next few years, the computing resources needed to store all the genomic data will be mind boggling (almost 40 exabytes) – far exceeding the requirements of YouTube (one to two exabytes per year) and Twitter (0.02 exabytes per year). Finding the nugget of information that is vital to the production of an effective cure in this mountain of information is looking ever less likely. [An exabyte is 10¹⁸, or one quintillion, bytes.]

  The failure of Big Science was predicted by the Australian virologist and Nobel Laureate Sir Macfarlane Burnet (1899–1985). His book Genes, Dreams and Realities was a sensation when it appeared in 1971. He argued that ‘the contribution of laboratory science has virtually come to an end… almost none of modern basic research in the medical sciences has any direct bearing on the prevention of disease or on the improvement of medical care’. Burnet believed that the challenges of the future would not be infectious diseases, but the diseases of civilization, degeneration and old age, and that these would not be conquered like the infectious diseases were during the golden age. Many were outraged by his book, but his eminence ensured that he was taken seriously. His fellow Nobel Laureate, the immunologist Sir Peter Medawar, described Genes, Dreams and Realities as an ‘extraordinary lapsus mentis’. In the New York Review of Books in 1980, he wrote: ‘As an antidote to Burnet’s spiritless declaration I roundly declare that within the next ten years remedies will be found for multiple sclerosis, juvenile diabetes, and at least two forms of cancer at present considered somewhat intractable.’ History sided with Burnet, not Medawar, none of whose bold predictions and round declarations came to pass.

  The decadence of contemporary biomedical science has a historical parallel in the medieval pre-Reformation papacy. Both began with high ideals. Both were taken over by careerists who corrupted these ideals, while simultaneously paying lip service to them. Both saw the trappings of worldly success as more important than the original ideal. Both created a self-serving high priesthood. The agenda for the profession is set by an academic elite (the hierarchy of bishops and cardinals), while the day-to-day work is done by low-status GPs and hospital doctors (curates, monks). This elite, despite having little to do with actual patient care, is immensely powerful in the appointment of the low-status doctors. Orthodoxy is, in part, established by consensus conferences (church councils). The elite is self-serving, and recruits to its ranks people with similar values and beliefs. The elite is respected by laypeople and has the ear of politicians and princes. The elite collects research funding from laypeople and governments (tithes). This elite is rarely, if ever, challenged, claiming that its authority comes from a higher power (God/Science).

  John Ioannidis argues that society at large should lower its expectations: ‘Science is a noble endeavour, but it’s also a low-yield endeavour. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.’ Real science is so hard that it can only be done by a small minority of people who combine high intelligence, passionate curiosity and a commitment to truth. Real science cannot be planned and carried out by committees of bureaucrats and careerists. Contemporary biomedical research has become a danger to both society and medicine. It is a danger because it is scientifically corrupt, and because it serves its own needs, not those of society. Research that has no function other than the production of data and the advancement of careers is self-evidently dangerous. Big Science, for all its boosterism, has been a crushing disappointment. It was inevitably so; most of the major discoveries had already been made during the golden age.

  5

  The Medical Misinformation Mess

  In 1948, Francis Avery Jones (then just Dr, not yet Sir Francis) recruited a young doctor called Richard Doll to work with him at the Central Middlesex Hospital. Doll joined the hospital’s newly established Statistical Research Unit, and worked initially on peptic ulcer disease, showing that Jones’s bland diet (milky tea, bramble jelly, sponge cake) was of no benefit. Doll later remarked: ‘I have often thought that perhaps that was my most important contribution to gastroenterology, and certainly to public welfare: namely that it was quite unnecessary to have a bland diet if you had a peptic ulcer.’ I wonder how Sir Francis reacted to the news that his sponge cake and bramble jelly diet – although doubtless appetizing – was clinically futile? Sir Austin Bradford Hill (1897–1991), professor of medical statistics and epidemiology at the London School of Hygiene and Tropical Medicine, advised Doll on the statistical design of these ulcer trials. Hill was part of the group convened by the Medical Research Council which conducted the first ever randomized controlled trial in human subjects. It provided ‘the clearest possible proof that tuberculosis could be halted by streptomycin’. Hill had set out the principles of clinical-trial design in a series of articles published in the Lancet in 1937. These principles are adhered to still, and the streptomycin trial was one of the triumphs of post-war British medicine – and the model for all such trials in the future. Hill had a personal stake in the trial: he had served as a pilot during the First World War and was invalided out when he developed chest tuberculosis. He spent two years in hospital and had to abandon his ambition to study medicine; he instead took a degree in economics from the University of London by correspondence. When he had recovered, he went to work with the medical statistician Major Greenwood.
Hill was a wise and witty man, and cheerfully admitted to the limitations of medical statistics; he liked to poke fun ‘at that most sacred cow, the prospective double blind randomised controlled trial’. He told the story of a conversation with a patient recruited to such a trial: ‘Doctor, why did you change my pills?’ asked the patient. ‘What makes you think that I have?’ replied the doctor. ‘Well, last week when I threw them down the loo they floated, this week they sink.’

  In the late 1940s, Doll turned his attention to cigarette smoking and lung cancer, and worked on this with Hill. It is hard to comprehend now, but at that time smoking was not regarded as dangerous to health, and more than 80 per cent of the adult male population were smokers. Doll and Jones had earlier investigated the effects of smoking on peptic ulcers, but could not reach a definite conclusion because nearly all men smoked, whether or not they had a peptic ulcer; there simply weren’t enough non-smokers to make a valid comparison. In a paper published in the British Medical Journal in 1950, Doll and Hill showed that smokers had a much higher risk of lung cancer, and that the more they smoked, the higher the risk. Association, of course, does not always mean causation, so they followed this with a prospective study of smoking habits and death from lung cancer in doctors. They collected data from doctors on their smoking habits, followed them up over the next three years to determine how many developed lung cancer, and then established whether those who did were smokers. This study proved beyond all reasonable doubt that smoking causes lung cancer. Doll’s demonstration has saved millions of lives, and he is regarded by many as the greatest medical researcher never to have won the Nobel Prize.