I recently finished a 2-year stint as an American Society for Microbiology Distinguished Lecturer. It’s an excellent program–ASM pays all travel expenses for lecturers, who speak at ASM Branch meetings throughout the country. I was able to attend Branch meetings from California and Washington in the west to Massachusetts in the east, and as far south as El Paso, Texas, with many in between. Each Lecturer selects several topics to speak on, and each Branch chooses which of those it wants to hear. Mine included basic research (zoonotic disease, antibiotic resistance) as well as science outreach and advocacy topics (zombies, vaccines).
My talk on vaccines covered vaccine hesitancy and denial, the concerns some parents have regarding vaccination, and the ways social media and celebrities have contributed to the spread of vaccine misinformation. Inevitably, someone would ask in the Q&A or approach me afterward: “But what can I do? I don’t feel I know enough about why people reject vaccines, and feel helpless to combat the fears and misinformation that are out there.” These were audiences of microbiologists and other infectious disease specialists–people who are very likely to be educated about vaccines and vaccine-preventable diseases, but who may not have followed the saga of disgraced former physician Andrew Wakefield, aren’t familiar with the claims of the current anti-vaccine documentary, Vaxxed, or haven’t encountered other common anti-vaccine talking points.
To help fill this gap, I recently published a paper in Open Forum Infectious Diseases, “Vaccine Rejection and Hesitancy: A Review and Call to Action.” As the title suggests, in it I give a brief overview of some of the figures in the anti-vax movement and the arguments they commonly use. I don’t go into rebuttals directly within the paper, but the supplemental information includes a sampling of the anti-vax literature as well as several published rebuttals that interested individuals can look up.
I also briefly review the literature on vaccine hesitancy. Who fears or rejects vaccines, why do they do so, and how might we reach them to change their minds? This is an area where many individuals, even if they’re educated about vaccines and infectious disease, lack a lot of background. As I note in the paper, many science-minded people still think that simply educating people about vaccines will be enough. While accurate information is indeed important, for many individuals on the vaccine-hesitant spectrum, it’s not only about misinformation, but also about group identity, previous experience with the health care field, and much more.
Still, vaccine advocates can get involved in a number of ways. One of the easiest is simply to discuss your own vaccine history in order to normalize it. I regularly post pictures of my own vaccinations on social media (including my public Facebook and Twitter accounts), and those of my kids*. In over 17 years of parenthood, their vaccinations have all been…boring. These “uneventful vaccination” stories are the ones that rarely get told, as the media focuses on “vaccine injury” stories, in which the injuries may or may not actually be caused by vaccines. Those interested in promoting vaccines can write letters to the editor, work with local physicians to speak with hesitant families, or get political about vaccine exemptions; there are many ways we can work to encourage vaccination and keep our children and our communities healthy (again, explored in more detail in the manuscript).
I hope this paper will serve as a starting point for those who want to be vaccine advocates, but just aren’t sure they know enough background, or don’t know where or how to jump in. Whether you’re an expert in the area or not, everyone can do small things to encourage vaccination and demonstrate trust in vaccines. Those of us working in the area thank you in advance for your help.
I’ve written about these types of claims before. The first one–a claim that antimicrobial peptides were essentially “resistance proof”–was proven to be embarrassingly wrong in a laboratory test. Resistance not only evolved, but it evolved independently in almost every instance tested (using E. coli and Pseudomonas species), taking only 600-700 generations–a relative blip in microbial time. Oops.
A very similar claim made the rounds in 2014, and the newest one is out today–a report of a “super vancomycin” that, as noted above, could be used “without fear of resistance emerging.” (The article’s title literally claims “‘Magical’ antibiotic brings fresh hope to battle against drug resistance”–a claim made on top of the “no resistance” one attributed to senior author Dale Boger in the Scripps press release.) The argument is that, because the modified vancomycin uses 3 different ways to kill the bacteria, “Organisms just can’t simultaneously work to find a way around three independent mechanisms of action. Even if they found a solution to one of those, the organisms would still be killed by the other two.”
A grand claim, but history suggests otherwise. It was once argued that bacteria could not evolve resistance to bacteriophage, because the ancient interaction between viruses and their bacterial hosts must surely have already exploited and overcome any available defense. A plethora of phage-resistance mechanisms are now known.
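The flaw in the “can’t beat three mechanisms simultaneously” logic is that selection doesn’t require simultaneity: resistance to even one mechanism confers a survival advantage, and the steps compound. A toy deterministic model (all parameters are hypothetical round numbers; this sketches the logic of sequential selection, not vancomycin biology) illustrates how stepwise mutants can sweep a population:

```python
# Toy model: a drug kills via 3 independent mechanisms; each resistance
# allele lets a cell escape one of them. Parameters (population size,
# mutation rate, per-mechanism kill probability) are hypothetical.

def generations_to_triple_resistance(pop_size=1e7, mu=1e-6,
                                     kill=0.9, threshold=0.5,
                                     max_gen=10_000):
    """Return the generation at which cells resistant to all three
    mechanisms exceed `threshold` of the population, or None.

    counts[r] = expected number of cells carrying r of the 3
    resistance alleles (deterministic expectations, not stochastic).
    """
    counts = [pop_size, 0.0, 0.0, 0.0]
    for gen in range(1, max_gen + 1):
        # Selection: a cell escapes each mechanism it is still
        # susceptible to with probability (1 - kill).
        survivors = [counts[r] * (1 - kill) ** (3 - r) for r in range(4)]
        total = sum(survivors)
        if total == 0:
            return None
        # Regrowth back to carrying capacity.
        scale = pop_size / total
        counts = [c * scale for c in survivors]
        # Mutation: each still-susceptible locus converts with
        # probability mu per generation (expected values).
        new = [0.0] * 4
        for r in range(4):
            gained = counts[r] * mu * (3 - r)
            new[r] += counts[r] - gained
            if r < 3:
                new[r + 1] += gained
        counts = new
        if counts[3] / pop_size > threshold:
            return gen
    return None
```

With these toy defaults, single-mechanism mutants arise, sweep because they survive tenfold better, and then serve as the launchpad for double and triple mutants–majority triple resistance appears within tens of generations, despite the vanishingly small odds of three simultaneous mutations. Setting `mu=0` (no mutation) returns `None`. The point is purely qualitative: sequential selection circumvents the “simultaneous” requirement.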
Within the paper itself, the limitations are much more clearly laid out. Discussing usage of the antibiotic, the authors note of these conventional semisynthetic vancomycin analogs:
“However, their use against vancomycin-resistant bacteria (e.g., VRE and VRSA), where they are less potent and where only a single and less durable mechanism of action remains operative, likely would more rapidly raise resistance, not only compromising its future use but also, potentially transferring that resistance to other organisms (e.g., MRSA).”
So as they acknowledge, not really so resistance-proof at all–only if they’re used under perfect conditions and without any vancomycin resistance genes already present. What are the odds of that once this drug is released? (Spoiler alert: very low).
Alexander Fleming, who won the 1945 Nobel Prize in Physiology or Medicine, tried to sound the warning that the usefulness of antibiotics would be short-lived as bacteria adapted, but his warnings were (and still are?) largely ignored. There is no “magic bullet;” there are only temporary solutions, and we should have learned by now not to underestimate our bacterial companions.
Part of this post previously published here and here.
HIV’s supposed “Patient Zero” in the U.S., Gaetan Dugas, is off the hook! He wasn’t responsible for our outbreak!
This is presented as new information.
It is not, and I think by focusing on the “exoneration” of Dugas, a young flight attendant and one of the earliest diagnosed cases of AIDS in the U.S., these articles (referencing a new Nature paper) are missing the true story in this publication–that Dugas was really a victim of Shilts and the media, and remains so, no matter how many times the scientific evidence has cleared his name.
First, the idea that Dugas served to 1) bring HIV to the U.S. and 2) spark the epidemic and infect enough people early on that most of the initial cases could be traced back to him is simply false. Yes, this was the hypothesis based on some of the very early cases of AIDS, and the narrative promoted in Randy Shilts’s best-selling 1987 book, “And the Band Played On.” But based on the epidemiology of first symptomatic AIDS cases, and later our understanding of the virus behind the syndrome, HIV, we quickly understood that one single person in the late 1970s could not have introduced the virus and spread it rapidly enough to lead to the level of infections we were seeing by the early 1980s. Later understanding of the virus’s African origin and its global spread made the idea of Dugas as the epidemic’s originator in America even more impossible.
When we think of Dugas’s role in the epidemiology of HIV, we could possibly classify him as, at worst, a “super-spreader”–an individual who is responsible for a disproportionate amount of disease transmission. Dugas acknowledged sexual contact with hundreds of individuals between 1979 and 1981–but his numbers were similar to those of other gay men interviewed, averaging 227 per year (range 10-1560). And while Shilts portrayed Dugas as a purposeful villain, actively and knowingly spreading HIV to his sexual partners, that does not jibe with our scientific knowledge of HIV/AIDS or with the assistance Dugas provided to scientists studying the epidemic. Dugas worked with researchers to identify as many of his partners as he could (~10% of his estimated 750), as the scientific and medical community struggled to figure out whether AIDS stemmed from a sexually-transmitted infection, as several lines of evidence suggested. There’s no evidence Dugas was maliciously infecting others, though that was the reputation he received. Dugas passed away from complications of AIDS in March of 1984–weeks before the discovery of HIV was announced to the general public.
Furthermore, the information in the new publication is not entirely novel. Molecular analyses carried out in part by Michael Worobey, also an author on the new paper, showed almost a decade ago that Dugas could not have been the true “Patient Zero.” The 2007 paper, “The emergence of HIV/AIDS in the Americas and beyond,” reached the same conclusions as the new paper: HIV entered the U.S. from the Caribbean, probably Haiti, and was circulating in the U.S. by the late 1960s–when Dugas was only about 16 years old, and long before his career as a flight attendant traveling internationally. So this 2007 molecular analysis should have been the nail in the coffin of the Dugas-as-Patient-Zero idea.
But apparently we’ve forgotten that paper, or other work that has followed the evolution of HIV over the 20th century.
What is unique about the new publication is that it included a sample from Dugas himself, via plasma he donated in 1983, along with other samples banked since the late 1970s. The new paper demonstrated that Dugas’s sample is not in any way unique, nor is it a “basal” virus–one of the earliest in the country, from which others would diverge. Instead, it was representative of what was already circulating among others infected with HIV at that time. In the supplemental information, the authors also demonstrated how the notation for Dugas in scientific notes changed: from Patient 057, to Patient O (for “Outside California”), to Patient 0/“Zero” in the published manuscript–which Shilts then identified as Dugas and ran with in his narrative.
The media then extended Shilts’s ideas, further solidifying the assertion that Dugas was the origin of the U.S. epidemic, and in fact that he was outright evil. The supplemental material notes that Shilts initially didn’t want the media campaign to focus on Dugas, but was convinced by his editor, who suggested the Dugas/Patient Zero narrative would draw more attention than drier critiques of the Reagan administration’s policy and inaction in response to the AIDS epidemic.
And the media certainly talked about it. A 1987 edition of U.S. News and World Report included a dubious quote attributed to Dugas: “‘I’ve got gay cancer,’ the man allegedly told bathhouse patrons after having sex with them. ‘I’m going to die, and so are you.’” NPR’s story adds: “The New York Post ran a huge headline declaring ‘The Man Who Gave Us AIDS.’ Time magazine jumped in with a story called ‘The Appalling Saga Of Patient Zero.’ And 60 Minutes aired a feature on him. ‘Patient Zero. One of the first cases of AIDS. The first person identified as the major transmitter of the disease,’ host Harry Reasoner said.”
This is the real scandal and lingering tragedy of Dugas. His story was used to stoke fear of HIV-infected individuals, and especially gay men, as predators seeking to take others down with them. His story was used in part to justify criminalization of HIV transmission. So while science has exonerated him again and again, will the public–and the media–finally follow?
Previous research suggested Ebola could persist in the semen for 40 to 90 days. But that window has been far exceeded in this epidemic. A probable case of sexual transmission occurred approximately six months after the patient’s initial infection last year in Liberia. Another study found evidence of Ebola in the semen of 25% of surviving men tested seven to nine months after infection. And it takes only a single transmission to kick off a fresh recurrence of the disease.
A recent paper extended this window of virus persistence in the semen even longer–to over 500 days. It also explains how new outbreaks began after countries had been declared Ebola-free–so where did the virus come from?
In a convergence of old-fashioned “shoe leather” epidemiology–the tracing of cases–and viral genomics, two independent lines of evidence led to the identification of the same individual: a man who had been a confirmed EVD case in 2014, and who had sexual contact with one of the new cases. Author Nick Loman discussed via email:
The epidemiologists told us independently that they had identified a survivor and we were amazed when we decoded the metadata to find that case was indeed the same person. The sequencing and epidemiology is tightly coordinated via Guinea’s Ministry of Health who ran National Coordination for the Ebola outbreak and the World Health Organisation.
It shows that the genomics and epidemiology works best when working hand-in-hand. If we’d just had the genomics or the epidemiology we’d still have an element of doubt.
The sequencing results also suggested that it was likely that the new viral outbreak was caused by this survivor, and unlikely that the outbreak was due to another “spillover” of the virus from the local animal population, according to author Andrew Rambaut:
If the virus was present in bats and jumped to humans again in 2016, it might be genetically similar to the viruses in the human outbreak but not have any of the mutations that uniquely arose in the human outbreak (it would have its own unique mutations that had arisen in the bat population since the virus that caused human epidemic).
It might be possible that the virus jumped from humans to some animal reservoir in the region and then back to humans in 2016 but because we have the virus sequence from the patient’s acute disease 15 months earlier we can see that it [is] essentially exactly the same virus. So this makes it certain the virus was persisting in this individual for the period.
So the virus–persisting in the survivor’s semen for at least 531 days–sparked a new wave of cases. Ebola researcher Daniel Bausch noted elsewhere that “The virus does seem to persist longer than we’ve ever recognized before. Sexual transmission still seems to be rare, but the sample size of survivors now is so much larger than we’ve ever had before (maybe 3,000-5,000 sexually active males versus 50-100 for the largest previous outbreak) that we’re picking up rare events.”
And we’re now actively looking for those rare events, too. The Liberia Men’s Health Screening Program already reports detection of Ebola virus in the semen at 565 days following symptoms, suggesting we will need to remain vigilant about survivors in both this and any future EVD epidemics. The challenges are clear–we need to investigate EVD survivors as patients, research participants, and possible viral reservoirs–each of which comes with unique difficulties. By continuing to learn as much as we can from this outbreak, perhaps we can contain future outbreaks more quickly–and prevent others from igniting.
[Obvious warning is obvious: potential spoilers for A Song of Ice and Fire novels/Game of Thrones TV series below].
While no one will claim that George R.R. Martin’s epic series, “A Song of Ice and Fire,” is historically accurate, there are a number of historical parallels–particularly to medieval Europe–that can be drawn from the characters and plotlines. While most of those relate to epic battles or former monarchs and other royalty, another of Martin’s characters, so to speak, is the disease greyscale (1).
Greyscale is a contagious disease that seems to come in at least two distinct forms: greyscale, an endemic, slow-acting, highly contagious illness that can affect either adults or children; and the grey plague, a rapidly spreading epidemic form that can wipe out entire swaths of cities in a short period of time. Both versions of the illness have a high fatality rate (no exact figures are given, but it seems to be close to 100%, especially in adults). Recovery from greyscale confers immunity to grey plague, so the two seem to be caused either by the same microbe or by very closely related ones.
The Epidemiology of Greyscale
Greyscale is a disfiguring disease. As its name suggests, it transforms the skin into a hardened, scaly tissue. As the skin dies, it becomes grey in color with permanent cracks and fissures. Infection that spreads across the face can cause blindness.
Like many diseases we consider to be “childhood” diseases (measles, mumps, smallpox, chickenpox, etc.), children seem to be spared the worst of the disease and are the most likely to recover from the illness, though recovery still appears to be quite rare. The disease is most common in Essos, but can also be found occasionally throughout Westeros, including north of the Wall (more on that below).
Greyscale is believed to be transmitted primarily person-to-person via direct skin contact. We see this in the books with the infection of Jon Connington and on the TV show with Jorah Mormont, as both characters are transporting/protecting Tyrion Lannister and apparently are exposed to the pathogen during a battle with the Stone Men (2, 3). The Stone Men are victims in the last stage of greyscale infection, when the skin is entirely calcified and there is involvement of muscle, bone, and internal organs, including the brain. Late signs of greyscale infection include violent insanity, leading sufferers to attack anyone who comes near. Because the Stone Men are highly feared as sources of the disease, greyscale appears to be contagious for the entire duration of infection, from the development of symptoms to near-death.
If a person has been exposed to greyscale, but is not yet showing symptoms, they can check for impending infection by pricking their toes and fingers each day. Once they’re no longer able to feel the knife, that’s bad news–greyscale infection is likely, as insensitivity to touch is one of the early signs. Once the scaling begins, the victim no longer feels any pain in the affected areas, making the Stone Men essentially invulnerable to pain.
The incubation period of greyscale seems to be very short. As soon as Jorah and Tyrion realize they are safe and the Stone Men are defeated, Jorah rolls up his sleeve and we see that the initial small patch of greyscale has already appeared.
Another prominent victim of greyscale, Shireen Baratheon, is thought to have acquired greyscale via contact with a fomite (an inanimate object that serves as a vehicle to transmit an infectious agent between people)–in her case, a beloved wooden doll clothed in Baratheon House colors from when she was an infant. Her father, Stannis, implies that this may have been a form of bioterrorism–he received the doll from a Dornish trader on Dragonstone. He tells his daughter, “No doubt he’d heard of your birth, and assumed new fathers were easy targets” (S05E04). “I still remember how you smiled when I placed that doll in your cradle, and you pressed it to your cheek,” where evidence of greyscale is still present (4).
A number of remedies have been proposed to treat greyscale, but none of them are proven effective. They include treating it with boiling water containing limes; chopping off infected limbs; religious means/magic; and perhaps fire–in A Dance with Dragons, Tyrion touches a Stone Man with his torch, and the Stone Man shrieks in pain (even though bone showing through his skin apparently doesn’t bother him). Whether fire could be a cure is unclear.
Also in A Dance with Dragons, we read of Tyrion’s musings on treating greyscale: “He had heard it said that there were three good cures for greyscale: axe and sword and cleaver. Hacking off afflicted parts did sometimes stop the spread of the disease, Tyrion knew, but not always. Many a man had sacrificed one arm or foot, only to find the other going grey. Once that happened, hope was gone.” As such, the infectious agent seems to enter the bloodstream and spread throughout the body at some point during the infection, after which local measures such as amputation are no longer useful. Other home remedies, such as cleansing the infected area with vinegar, are also employed. In fact, Jon Connington, once he realizes he’s been infected, soaks his hand in bad wine instead of vinegar, believing that a request for vinegar would be an obvious “tell” that he has the disease.
In the TV series (S05E04), Stannis says to Shireen regarding her infection, “I called in every Maester on this side of the world, every healer, every apothecary. They stopped the disease and saved your life.” However, no details are given on the show regarding how it was stopped (medicine? magic?), or whether the same means could be used on an adult instead of an infant. When Daenerys asks Jorah if there is a cure, he tells her simply that he doesn’t know, and she directs him to leave, find one, and return to her.
Largely, those with greyscale are shunned and sent elsewhere, especially to the ruins of Valyria (5), where a whole colony of Stone Men live. Shireen asks Stannis, “Are you ashamed of me, Father?”, understanding that her obvious greyscale scars are a sign of stigma for their entire family. Stannis tells his daughter, “Everyone advised me to send you to the ruins of Valyria to live out your short life with the Stone Men before the sickness spread throughout the castle. I told them all to go to hell.” (Father of the Year before that whole burning stuff, Stannis!)
Similarly, both the books and show note the existence of greyscale beyond the wall among the Wildlings–and that the free folks’ response to greyscale infection is exile and/or death. In the books, a wildling named Val sees Shireen, and notes Shireen has a condition they call “the grey death,” which is always fatal in children–because they’re given either hemlock, a pillow, or a blade rather than be allowed to live. She also suggests that greyscale may become quiescent and return later, saying “The grey death sleeps, only to wake again. The child [Shireen] is not clean.”
On the TV version, the wildling Gilly takes the place of Val, and while she is not as frightened of Shireen’s greyscale, she notes she’s also had experience with the illness. She tells the tale of two of her sisters, who contracted greyscale (exactly how, we’re not told). Though her father did not kill them, as Val suggested, Gilly noted that he “made them move out of the keep, into the hut outside. None of us were allowed to go near them, but we heard them, especially at night. They started to sound not like themselves.” Gilly saw them again “only once, at the end. They were covered with it. Their faces, their arms. They acted like animals. My father had to drag them out to the woods on a rope.” Shireen doesn’t find out what happened to them after that, but we can guess it’s not good.
What are some real-life parallels?
Clearly greyscale is another invention of Martin’s that doesn’t quite match up to any real infectious disease (6), and I’ll leave that linked article to summarize some of the pros and cons of the alternative diagnoses. But given the other historical parallels, leprosy (Hansen’s disease) is probably the closest real-life affliction to greyscale, due to the route of transmission (I’ll elaborate on that below), symptoms, incubation period, and particularly the cultural response to those who are affected.
Like those with leprosy, sufferers of greyscale can become disfigured, are considered “unclean” and shuffled off to the far corners of the map, feared and then ignored by their family and friends. Connington, when hiding his infection, noted that “Queer as it seemed, men who would cheerfully face battle and risk death to rescue a companion would abandon that same companion in a heartbeat if he were known to have greyscale”–a similar phenomenon to what still can happen today with stigmatized diseases such as leprosy. A case of greyscale is a source of stigma for both the sufferer (even if they survive, like Shireen) and for the family, as there will always be those who fear contagion.
Though evidence is gathering that leprosy is actually transmitted via the respiratory route (like its cousin, tuberculosis), for centuries people believed it could be spread by touch, as greyscale is. So even though the transmission route for the two diseases really isn’t the same, the *presumption* that leprosy can be spread by touch is still incredibly common. The lengthy period between infection and outward symptoms of the affliction is also similar, taking years from exposure to the final stages of infection that we see in the Stone Men. Leprosy can also take years or decades to progress, and while untreated leprosy is not typically a cause of death itself, it can lead to death indirectly due to secondary infections and other issues.
One of the early signs of leprosy is also numbness in an affected area as nerves are damaged by the infection, as Tyrion tried to evaluate after his exposure to the Stone Men, as well as a general thickening and stiffness of the skin. It doesn’t get to the level that’s seen with the Stone Men–one of the biggest problems with leprosy is actually secondary infections, which can lead to loss of digits or even whole limbs rather than a whole-body calcification of the skin–but many of the hallmarks of greyscale are very similar to leprosy.
While leprosy is now treatable with antibiotics, it wasn’t all that long ago that we had our own leper colonies in the U.S. (you can read about one of them here, also on a near-deserted island where the afflicted were largely left to fend for themselves with some occasional governmental assistance, similar to Valyria/the Sorrows). Martin himself even notes that Valyria is “like a leper colony.” Leprosy, and its stigma, remains an issue in some countries still today, and the purposeful isolation of those who have leprosy and exclusion from society persists.
However, while there are many similarities, leprosy doesn’t have an epidemic form equivalent to the grey plague. Described in A Dance with Dragons, it’s suggested that the grey plague wiped out half of Oldtown in the southwest of Westeros, and was only stopped by closing the gates and preventing anyone from entering or leaving. And like the Black Plague, the grey plague’s arrival in Pentos (a city in Essos) came by ship, and its spread into the city was possibly aided by rats. So is there an airborne form of greyscale that causes the grey plague? Could it be similar to Yersinia pestis, the bacterium that causes the Black Plague: transmitted by rats and fleas (or skin to skin in the case of greyscale) in its more mild form, but occasionally ending up in the lungs of an unfortunate victim and spread via the air after that, causing massive epidemics? Is it zoonotic, spread via rats? Will we see the grey plague on the TV series or not?
While comparisons to other real infections are interesting, my real question is–what is Martin going to do with greyscale? How does it feature into the larger end game, when we move beyond just a human “Game of Thrones” into the battle for humanity itself against the White Walkers and their army of undead wights? With all the time spent on the affliction in both the books and particularly in the show, there has to be some payoff somewhere, right?
In some ways, the wights beyond the wall and Stone Men are similar–undead, or nearly-dead, aggressive hunters of humans, with no sense of humanity left. When we last saw Jorah in the TV version, he had confessed his affliction to Daenerys, and she sent him off to find a cure. Will he find Dany after her arrival in Westeros and bring with him an army of (now healthy?) Stone Men–healed by fire perhaps, to fight against those brought back to life by ice? Will he return to Valyria–an area largely abandoned except as a place of exile for the Stone Men since The Doom a thousand years ago–and learn the truth of what happened there? Could Valyria provide a key to ending both greyscale and perhaps also the White Walkers? Or is the haunting poem Tyrion and Jorah recited as they rowed down the Rhoyne toward the ruins of the city foreshadowing what’s going to happen to Westeros?
(1) The information provided on greyscale in this article is a mix of material from the books and the show. Note that the show, to my recollection, hasn’t delved into the grey plague, so information on that malady comes exclusively from the books. Also note that some of the victims of greyscale differ between the books and the show (e.g., Jorah Mormont taking Jon Connington’s place in the TV version).
(2) Though Jorah denies any contact with the Stone Men initially, and it isn’t 100% clear if he was touched during the scene, he does back off from Daenerys when she moves toward him in S06E05, when he discloses his condition (which is now all the way up his forearm). This suggests he does believe he acquired it through direct contact with a Stone Man.
(3) Though these sufferers are uniformly called Stone Men, and the ones seen on-screen appear to be male, presumably there are also Stone Women. Possibly loss of hair as the skin calcifies could lead to a more androgynous look.
(4) I should note there are some alternative views about exactly how Shireen’s greyscale infection was acquired, and about the use of greyscale as a biological weapon.
(5) Or on “the Sorrows” in the novels.
(6) I don’t agree with several things in that article, written by a dermatologist. It concludes based mainly on symptoms and a bit on epidemiology that greyscale is something more like smallpox or HPV and largely rules out a leprosy-like illness. It also notes the potential for an infectious agent that’s only infectious to those with an underlying genetic susceptibility, but I don’t think there’s much evidence to suggest that.
Find other posts in today’s carnival on the science of Game of Thrones!
Yesterday, two articles were released showing that MCR-1, the plasmid-associated gene that provides resistance to the antibiotic colistin, has been found in the United States. And not just in one place, but in two distinct cases: a woman with a urinary tract infection (UTI) in Pennsylvania, reported in the journal Antimicrobial Agents and Chemotherapy, and a positive sample taken from a pig’s intestine as part of the National Antimicrobial Resistance Monitoring System (NARMS), which tracks resistant bacteria related to retail meat products. Not surprising, not unexpected, but still, not good.
Colistin is an old antibiotic. Dating back to the 1950s, it’s been used sparingly over the decades because it can cause serious damage to the kidneys and nervous system. It’s also typically administered intravenously in humans, so you can’t just pop a colistin pill and be sent home from the doctor. Newer preparations appear to be safer, and because of the problem with antibiotic resistance in general and limited treatment options for multidrug-resistant Gram-negative infections in particular, colistin has seen a new life in the last decade or so as a last line of defense against some of these almost-untreatable infections.
Because of its sparing use in humans, resistance has not been much of an issue until recently. And while human use is relatively rare compared to other types of antibiotics, in animals, the story is different. Because colistin is old and cheap, it’s used as an additive to feed in Chinese livestock, to make them grow faster and fatter. (We do this here in the U.S. too, but using different antibiotics than colistin). So, as would be expected, use of this antibiotic led to the evolution and spread of resistance, conferred by the MCR-1 gene. By the time researchers first detected this resistance, it was already present in 20% of the pigs they tested near Shanghai, and in 15% of the raw meat samples they tested. In this case, the gene is on a plasmid, which makes it easier to spread to other types of bacteria. To date, most of the reports of MCR-1 have been in E. coli, but it’s also been found in Salmonella and Klebsiella pneumoniae–all gut bacteria that can be spread from animals via contaminated food products, or person-to-person when someone carrying the bacterium doesn’t wash their hands after using the bathroom.
So a question becomes, how exactly did it get here? And that’s very difficult to say right now. The hospital where the human case was reported notes that the patient reported no travel history in the past 5 months (so it’s unlikely that she traveled to China, for instance, and picked up the gene or a bacterium carrying it there). The hospital says they’ve not found other MCR-1-positive isolates from other patients, but also that they’ve only been testing specimens for 3 weeks, so…yeah. Hard to say. People and animals (like the tested pig) can carry E. coli or other species that harbor MCR-1 in their gut without becoming ill, so it may have been in the population for a while (as they’ve seen in Brazil) before it came to the attention of medical researchers. Perhaps it’s been circulating in some of our meat products, or spreading in a chain of minuscule transfers of shit from person to person to person to person, for longer than we realize. Or both.
I was asked on Twitter yesterday, “Should I panic today or put that off until next week?” I’m not an advocate of panic myself, but I do think this is yet another concern and another hit on our antibiotic arsenal. It’s not widespread in this country and as mentioned, colistin is luckily not a first-line drug, so it won’t affect all *that* many people–for now, at least.
There are already papers out there showing bacteria that carry both NDM-1 (or related variants) and MCR-1 genes. NDM-1 is a gene that provides resistance to another class of last-resort antibiotics, the carbapenems. (Maryn McKenna has covered this extensively on her blog.) When carbapenems fail, treatment with colistin sometimes works. But if the bacterium is resistant to both colistin and carbapenems, well…not good. That hasn’t been reported yet in the U.S., but it’s only a matter of time, as McKenna notes.
It doesn’t mean that we’re out of antibiotics (yet) or that everyone who has one of these resistant infections will be unable to find a treatment that works (yet). But we’re inching ever closer to those days, one resistant bacterium at a time.
As you’ve probably seen, unless you’ve been living in a cave, Zika virus is the infectious disease topic du jour. Once an obscure virus, it’s now the newest scare, and interest has skyrocketed just in the past few weeks:
I have a few pieces already on Zika, so I won’t repeat myself here. The first is an introductory primer to the virus, answering the basic questions–what is it, where did it come from, what are its symptoms, why is it concerning? The second focuses on Zika’s potential risk to pregnant women, and what is currently being advised for them.
I want to be clear, though–currently, we aren’t 100% sure that Zika virus is causing microcephaly, the condition that is most concerning in this recent outbreak. The circumstantial evidence appears to be pretty strong, but we still lack good data on two fronts: 1) how common microcephaly really was in Brazil (or other affected countries) prior to the outbreak–microcephaly seems to have increased dramatically, but some of those cases are not confirmed, and others don’t seem to be related to Zika; and 2) if Zika really is causing microcephaly, how it could be doing so, whether the timing of infection makes a difference, and whether women who are infected asymptomatically also risk medical problems in their developing fetuses.
The first question needs good epidemiological data for answers, which can be procured in a few ways. Babies born with microcephaly, and their mothers, can be tested for Zika virus infection in several ways: finding traces of the virus itself; finding antibodies to the virus (suggesting a past infection, though one can’t know its exact timing); and asking about known infections during pregnancy. Each approach has advantages and limitations. Detecting the virus or its genetic material is the gold standard, but the virus may only be present in body fluids for a short time, so if you miss that window, a false negative could result. This could be coupled with serology to look for past infection–but then you can’t be 100% certain the infection occurred during pregnancy, though with the apparently recent introduction of Zika into the Americas, it’s likely that any infection would be fairly recent.
Serology coupled with an infection in pregnancy that has symptoms consistent with Zika (headache, muscle/joint pain, rash, fever) would be a step up from this, but has some additional problems. Other viral infections can be similar in symptoms to Zika (dengue, chikungunya, even influenza if the patient is lacking a rash), so tests to rule those out should also be done. On the flip side, about 80% of Zika infections show no symptoms at all–so a woman could still have come into contact with the virus and have positive serology, but she wouldn’t have any recollection of infection.
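To get a feel for how much each ascertainment strategy might miss, here is a minimal back-of-the-envelope sketch in Python. The ~80% asymptomatic figure comes from the text above; the PCR and serology detection rates are purely illustrative assumptions, not real test performance data.

```python
# Back-of-the-envelope ascertainment sketch for Zika infection in pregnancy.
# The ~80% asymptomatic figure comes from the text; the test sensitivities
# below are illustrative assumptions, not real performance data.

def detected_fraction(n_infected, p_symptomatic=0.2,
                      pcr_sensitivity=0.4, serology_sensitivity=0.9):
    """Estimate how many of n_infected infections each strategy would find."""
    # Symptom recall alone: only symptomatic infections can be remembered.
    by_recall = n_infected * p_symptomatic
    # PCR alone: limited by the short window the virus is detectable.
    by_pcr = n_infected * pcr_sensitivity
    # Serology: catches past infection regardless of symptoms,
    # but cannot time the infection to pregnancy.
    by_serology = n_infected * serology_sensitivity
    return by_recall, by_pcr, by_serology

recall, pcr, serology = detected_fraction(1000)
print(f"Of 1000 infections: {recall:.0f} recalled, "
      f"{pcr:.0f} by PCR, {serology:.0f} by serology")
```

The point of the sketch is simply that symptom recall alone leaves the large asymptomatic majority invisible, which is why combining approaches matters.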
None of this is easy to carry out, but it needs to be done in order to establish with some level of certainty that Zika is the cause of microcephaly in this area. In the meantime, there are a few other possibilities to consider. One is that another virus, such as rubella–a known cause of multiple congenital issues, including microcephaly–is circulating there; this could explain why they’re seeing cases of microcephaly in Brazil, while none have been reported thus far in Colombia. Another is that there is no real increase in microcephaly at all–that, for some reason, people have just recently started paying more attention to it and associated it with the Zika outbreak in the area–what we call a surveillance bias.
This is a fast-moving story, and we probably won’t have any solid answers to these questions for some time. In the interim, I think it’s prudent to treat this as a real possibility, and to raise awareness of the effects this virus *may* have on the developing fetus, so that women can take precautions as they’re able. Public health is about prevention, and there have certainly been cases in the past of links between A and B that fell apart under further scrutiny. Zika/microcephaly may be one, but for now, it’s an unfortunate case where “more research is needed” is about the best answer one can give.
I’ve been involved in a few discussions of late on science-based sites around yon web on antibiotic resistance and agriculture–specifically, the campaign to get fast food giant Subway to stop using meat raised on antibiotics, and a graphic by CommonGround using Animal Health Institute data, suggesting that agricultural animals aren’t an important source of resistant bacteria. Discussing these topics has shown me there’s a lot of misunderstanding of issues in antibiotic resistance, even among those who consider themselves pretty science-savvy.
I think this is partly an issue of, perhaps, hating to agree with one’s “enemy.” Vani Hari, the “Food Babe,” recently also plugged the Subway campaign, perhaps leading some skeptics to doubt the science on antibiotics and agriculture? Believe me, I am the farthest thing from a “Food Babe” fan and have criticized her many times on my Facebook page, but unlike her ill-advised and unscientific campaigns against things like fake pumpkin flavoring in coffee or “yoga mat” chemicals in Subway bread, this is one issue that actually has scientific support–stopped clocks and all that. Nevertheless, I think some people get bogged down in a lot of exaggeration or misinformation on the topic.
So, some thoughts. Please note that in many cases, my comments will be an over-simplification of a more complex problem, but I’ll try to include nuance when I can (without completely clouding the issue).
First–why is antibiotic resistance an issue?
Since the development of penicillin, we have been in an ongoing “war” with the bacteria that make us ill. Almost as quickly as antibiotics are used, bacteria are capable of developing or acquiring resistance to them. These resistance genes are often present on transmissible pieces of DNA–plasmids, transposons, phage–which allow them to move between bacterial cells, even those of completely different species, and spread that resistance. So, once it emerges, resistance is very difficult to keep under control. As such, much better to work to prevent this emergence, and to provide conditions where resistant bacteria don’t encounter selection pressures to maintain resistance genes (1).
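As a toy illustration of that selection dynamic–not a model of any real bacterial population, and with invented fitness values–here is a sketch showing a resistant fraction racing upward while a drug is present, then declining only very slowly once the drug is withdrawn:

```python
# Toy model of a resistant fraction under and after antibiotic selection.
# All fitness values are invented for illustration only.

def resistant_fraction(generations, start=0.01, fitness_with_drug=5.0,
                       fitness_without_drug=0.98):
    """Track the resistant fraction; the drug is present for the first half
    of the generations, then withdrawn (leaving a small fitness cost)."""
    frac = start
    for gen in range(generations):
        # Relative fitness of resistant cells vs. susceptible ones.
        w = fitness_with_drug if gen < generations // 2 else fitness_without_drug
        frac = (frac * w) / (frac * w + (1 - frac))
    return frac

# Under selection the resistant fraction approaches fixation; even with a
# fitness cost afterward, it declines only slowly from there.
print(f"resistant fraction after selection + cost phase: "
      f"{resistant_fraction(20):.4f}")
```

The asymmetry is the point: a strong selective sweep takes only a handful of generations, while decay under a small fitness cost takes far longer–one reason prevention beats cleanup.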
In our 75-ish years of using antibiotics to treat infections, we’ve increasingly found ourselves losing this war. As bacterial species have evolved resistance to our drugs, we’ve come back with either brand-new drugs in different classes of antibiotics, or slight tweaks to existing drugs so that they can escape the mechanisms bacteria use to get around them. And the bacteria are killing us. In the US alone, antibiotic-resistant bacteria cause about 2 million infections per year and about 23,000 deaths–plus tens of thousands of additional deaths from diseases that are complicated by antibiotic-resistant infections. These infections cost at least $20 billion per year.
But we’re running out of these drugs. And where do the vast majority come from in any case? Other microbes–fungi, other bacterial species–so in some cases, that means there are also pre-existing resistance mechanisms to even new drugs, just waiting to spread. It’s so bad right now that even the WHO has sounded the alarm, warning of the potential for a “post-antibiotic era.”
This is some serious shit.
Where does resistance come from?
Resistant bacteria can be bred anytime an antibiotic is used. As such, researchers in the field tend to focus on two large areas: use of antibiotics in human medicine, and in animal husbandry. Human medicine is probably pretty obvious: humans get drugs to treat infections in hospital and outpatient settings, and in some cases, to protect against infection if a person is exposed to an organism–think of all the prophylactic doses of ciprofloxacin given out after the 2001 anthrax attacks, for example.
In human medicine, there is still much debate about 1) the proper dosing of many types of antibiotics–what is the optimal length of time to take them to ensure a cure, but also reduce the chance of incubating resistant organisms? This is an active area of research; and 2) when it is proper to prescribe antibiotics, period. For instance, ear infections. These cause many sleepless nights for parents, a lot of time off work and school, and many trips to clinics to get checked out. But do all kids who have an ear infection need antibiotics? Probably not. A recent study found that “watchful waiting” as an alternative to immediate prescription of antibiotics worked about as well as drug treatment for nonsevere ear infections in children–one data point among many that antibiotics are probably over-used in human medicine, and particularly for children. So this is one big area of interest and research (among many in human health) when it comes to trying to curb antibiotic use and employ the best practices of “judicious use” of antibiotics.
Another big area of use is agriculture (2). Just as in humans, antibiotics in ag can be used for treatment of sick animals, which is completely justifiable and accepted–but there are many divergences as well. For one, animals are often treated as a herd–if a certain threshold of animals in a population become ill, all will be treated in order to prevent an even worse outbreak of disease in a herd. Two, antibiotics can be, and frequently are, used prophylactically, before any disease is present–for example, at times when the producer historically has seen disease outbreaks in the herd, such as when animals are moved from one place to another (moving baby pigs from a nursery facility to a grower farm, as one example). Third, they can be used for growth promotion purposes–to make animals fatten up to market weight more quickly. The latter is, by far, the most contentious use, and the “low hanging fruit” that is often targeted for elimination.
From practically the beginning of this practice, there were people who spoke out against it, suggesting it was a bad idea, and that the use of these antibiotics in agriculture could lead to resistance which could affect human health. A pair of publications by Stuart Levy et al. in 1976 demonstrated this was more than a theoretical concern, and that antibiotic-resistant E. coli were indeed generated on farms using antibiotics, and transferred to farmers working there. Since this time, literally thousands of publications on this topic have demonstrated the same thing, examining different exposures, antibiotics, and bacterial species. There’s no doubt, scientifically, that use of antibiotics in agriculture causes the evolution and spread of resistance into human populations.
Why care about antibiotic use in agriculture?
A quick clarification on a common point of confusion–I’m not discussing antibiotic *residues* in meat products as a result of antibiotic use in ag (see, for example, the infographic linked above). In theory, antibiotic residues should not be an issue, because all drugs have a withdrawal period that farmers are supposed to adhere to prior to sending animals off to slaughter; these guidelines were developed so that antibiotics will not show up in an animal’s meat or milk. The real public health concern is the resistant bacteria, which *can* be transmitted via these routes.
Agriculture comes up many times for a few reasons. First, because people have the potential to be exposed to antibiotic-resistant bacteria that originate on farms via food products that they eat or handle. Everybody eats, and even vegetarians aren’t completely protected from antibiotic use on farms (I’ll get into this below). So even if you’re far removed from farmland, you may be exposed to bacteria incubating there via your turkey dinner or hamburger.
Second, because the vast majority of antibiotic use, by weight, occurs on farms–and many of these are the very same antibiotics used in human medicine (penicillins, tetracyclines, macrolides). It’s historically been very difficult to get good numbers on this use, so you may have seen claims that as much as 80% of all antibiotic use in the U.S. occurs on farms. A better number is probably 70% (described here by Politifact), which excludes a type of antibiotic called ionophores–these aren’t used in human medicine (3). So a great deal of selection for resistance is taking place on farms, but has the potential to spread into households across the country–and almost certainly has. Recent studies have also demonstrated that resistant infections transmitted through food don’t always stay in your gut–they can also cause serious urinary tract infections and even sepsis. Studies from my lab and others (4) examining S. aureus have identified livestock as a reservoir for various types of this bacterium–including methicillin-resistant subtypes.
How does antibiotic resistance spread?
In sum–in a lot of different ways. Resistant bacteria, and/or their resistance genes, can enter our environment–our water, our air, our homes via meat products, our schools via asymptomatic colonization of students and teachers–just about anywhere bacteria can go, resistance genes will tag along. Kalliopi Monoyios created this schematic for the above-mentioned paper I wrote earlier this year on livestock-associated Staphylococcus aureus and its spread, but it really holds for just about any antibiotic-resistant bacterium out there:
And as I noted above, once it’s out there, it’s hard to put the genie back in the bottle. And it can spread in such a multitude of different ways that it complicates tracking of these organisms, and makes it practically impossible to trace farm-origin bacteria back to their host animals. Instead, we have to rely on studies of meat, farmers, water, soil, air, and people living near farms in order to make connections back to these animals.
And this is where even vegetarians aren’t “safe” from these organisms. What happens to much of the manure generated on industrial farms? It’s used as fertilizer on crops, bringing resistant bacteria and resistance genes along with it–into our air when manure is aerosolized (as it is in some, but not all, crop applications) and into our soil and water. And, as noted below, antibiotics themselves can be used in horticulture as well.
So isn’t something already being done about this? Why is this still a problem?
Kind of, but it’s not enough. Scientists and advocates have been trying to do something about this topic since at least 1969, when the UK’s Swann report on the use of Antibiotics in Animal Husbandry and Veterinary Medicine was released. As noted here:
One of its recommendations was that the only antimicrobials that should be permitted as growth promotants in animals were those that were not depended on for therapy in humans or whose use was not likely to lead to resistance to antimicrobials that were important for treating humans.
And some baby steps have been made previously, restricting use of some important types of antibiotics. More recently in the U.S., Federal Guidelines 209 and 213 were adopted in order to reduce the use of what have been deemed “medically-important” antibiotics in the livestock industry. These are a good step forward, but truthfully are only baby steps. They apply only to the use of growth-promotant antibiotics (those for “production use” as noted in the documents), and not other uses including prophylaxis. There also is no mechanism for monitoring or policing individuals who may continue to use these in violation of the guidelines–they have “no teeth.” As such, there’s concern that use for growth promotion will merely be re-labeled as use for prophylaxis.
Further, even now, we still have no data on the breakdown of antibiotic use in different species. We know over 32 million pounds were used in livestock in 2013, but we have no idea how much of that went to pigs versus cattle, etc.
We do know that animals can be raised using lower levels of antibiotics. The European Union has not allowed growth promotant antibiotics since 2006. You’ll read different reports of how successful that has been (or not); this NPR article has a balanced review. What’s pretty well agreed-upon is that, to make such a ban successful, you need good regulation and a change in farming practices. Neither of these will be in place in the U.S. when the new guidance mechanisms go into place next year–so will this really benefit public health? Uncertain. We need more.
So this brings me back to Subway (and McDonald’s, and Chipotle, and other giants that have pledged to reduce the use of antibiotics in the animals they buy). Whatever large companies do, consumers are demonstrating that they hold the cards to push this issue forward–much faster than the FDA has been able to (remember, it took the agency 40 freaking years just to get these voluntary guidelines in place). Buying USDA-certified organic or meat labeled “raised without antibiotics” is no 100% guarantee of antibiotic-resistant-bacteria-free meat, unfortunately, because contamination can be introduced during slaughter, packing, or handling–but on-farm studies of animals, farmers, and the farm environment have typically found lower levels of antibiotic-resistant bacteria on organic/antibiotic-free farms than on their “conventional” counterparts (one example here, looking at farms that were transitioning to organic poultry farming).
Nothing is perfect, and biology is messy. Sometimes reducing antibiotic use takes a long time to have an impact, because resistance genes aren’t always quickly lost from a population even when the antibiotics have been removed. Sometimes a change may be seen in the bacteria animals are carrying, but it takes longer for human bacterial populations to change. No one is expecting miracles, or a move to more animals raised antibiotic-free to be a cure-all. And it’s not possible to raise every animal as antibiotic-free in any case; sick animals need to be treated, and even on antibiotic-free farms, there is often some low level of antibiotic use for therapeutic purposes. (These treated animals are then supposed to be marked and cannot be sold as “antibiotic-free”). But reducing the levels of unnecessary antibiotics in animal husbandry, in conjunction with programs promoting judicious use of antibiotics in human health, is a necessary step. We’ve waited too long already to take it.
(1) Though we know that, in some cases, resistance genes can remain in a population even in the absence of direct selection pressures–or they may be on a cassette with other resistance genes, so by using any one of those selective agents, you’re selecting for maintenance of the entire cassette.
(2) I’ve chosen to focus on use in humans & animal husbandry, but antibiotics are also used in companion animal veterinary medicine and even for aquaculture and horticulture (such as for prevention of disease in fruit trees). The use in these fields is considerably smaller than in human medicine and livestock, but these are also active areas of research and investigation.
(3) This doesn’t necessarily mean they don’t lead to resistance, though. In theory, ionophores can act just like other antibiotics and co-select for resistance genes to other, human-use antibiotics, so their use may still contribute to the antibiotic resistance problem. Studies from my lab and others have shown that the use of zinc–an antimicrobial metal used as a dietary supplement on some pig farms–can co-select for antibiotic resistance; in our case, for methicillin-resistant S. aureus.
(4) See many more of my publications here, or a Nature profile about some of my work here.
I’ve been working on livestock-associated Staphylococcus aureus and farming now for almost a decade. In that time, work from my lab has shown that, first, the “livestock-associated” strain of methicillin-resistant S. aureus (MRSA) that was found originally in Europe and then in Canada, ST398, is present in the United States in pigs and farmers; that it’s present here in raw meat products; and that “LA” S. aureus can be found not only in the agriculture-intensive Midwest, but also in tiny pig-producing states like Connecticut. With collaborators, we’ve also shown that ST398 can be found in unexpected places, like Manhattan, and that the ST398 strain appears to have originated as a “human” type of S. aureus which subsequently was transmitted to and evolved in pigs, obtaining additional antibiotic-resistance genes while losing some genes that help the bacterium adapt to its human host. We also found a “human” type of S. aureus, ST5, far more commonly than expected in pigs originating in central Iowa, suggesting that the evolution of S. aureus in livestock is ongoing, and is more complicated than just ST398 = “livestock” Staph.
However, with all of this research, there’s been a big missing link that I repeatedly get asked about: what about actual, symptomatic infections in people? How often do S. aureus that farmers might encounter on the farm make them ill? We tried to address this in a retrospective survey we published previously, but that research suffered from all the problems that retrospective surveys do–recall bias, low response rate, and the possibility that those who responded did so *because* they had more experience with S. aureus infections, thus making the question more important to them. Plus, because it was asking about the past, we had no way to know, even if they did report a prior infection, whether it was due to ST398 or another type of S. aureus.
So, in 2011, we started a prospective study that was just published in Clinical Infectious Diseases, enrolling over 1,300 rural Iowans (mostly farmers of some type, though we did include individuals with no farming exposures as well, and spouses and children of farmers) and testing them at enrollment for S. aureus colonization in the nose or throat. Like previous studies done by our group and others in the US, we found that pig farmers were more likely to be carrying S. aureus that were resistant to multiple antibiotics, and especially to tetracycline–a common antibiotic used while raising pigs. Surprisingly, we didn’t find any difference in MRSA colonization among groups, but that’s likely because we enrolled relatively small-scale farmers, rather than workers in concentrated animal feeding operations (CAFOs) like we had examined in prior research, who are exposed to many more animals living in more crowded conditions (and possibly receiving more antibiotics).
What was unique about this study, besides its large size, was that we then followed participants for 18 months to examine the development of S. aureus infections. Participants sent us a monthly questionnaire telling us whether they had a possible Staph infection; describing the infection if there was one, including physician diagnosis and treatment; and, when possible, sending us a sample of the infected area for bacterial isolation and typing. Over the course of the study, which followed people for over 15,000 “person-months” in epi-speak, 67 of our participants reported developing over 100 skin and soft tissue infections. Some of them were “possibly” S. aureus–sometimes participants didn’t go to the doctor, but had a skin infection that matched the handout we had given them showing pictures of what Staph infections commonly look like. Others were cellulitis, which often can’t be definitively confirmed as caused by S. aureus without more invasive tests. Forty-two of the infections were confirmed as S. aureus, either by a physician or at the lab from a swab sent by the participant.
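For context, the crude infection rate those numbers imply can be computed directly. This quick sketch uses the rounded figures reported above (roughly 100 infections over roughly 15,000 person-months), so the exact rate in the paper may differ slightly:

```python
# Crude incidence rate from the follow-up numbers reported above.
# Person-time (~15,000 person-months) and ~100 infections are rounded
# figures from the text; the exact counts in the paper may differ.

def incidence_rate(events, person_months, per=1000):
    """Events per `per` person-months of follow-up."""
    return events / person_months * per

rate = incidence_rate(events=100, person_months=15_000)
print(f"~{rate:.1f} skin/soft tissue infections per 1,000 person-months")
```

Person-time denominators like this are the standard way epidemiologists compare infection rates across cohorts followed for different lengths of time.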
Of the swabs we received that were positive, 3/10 were found to be ST398 strains–and all of those were in individuals who had contact with livestock. A fourth individual who also had contact with pigs and cows had an ST15 infection. Individuals lacking livestock contact had infections with more typical “human” strains, such as ST8 and ST5 (usually described as “community-associated” and “hospital-associated” types of Staph). So yes, ST398 is causing infections in farmers in the US–and very likely, these are flying under the radar, because 1) farmers really, really don’t like to go to the doctor unless they’re practically on their deathbed, and 2) even if they do, and even if the physician diagnoses and cultures S. aureus (which is not incredibly common–many diagnoses are made on appearance alone), there are very limited programs in rural areas to routinely type S. aureus. Even in Iowa, where invasive S. aureus infections were previously state-reportable, we know that fewer than half of the samples even from these infections ever made it to the State lab for testing–and for skin infections? Not even evaluated.
As warnings are sounded all over the world about the looming problem of antibiotic resistance, we need to rein in the denial of antibiotic resistance in the food/meat industry. Some positive steps are being made–just the other day, Tyson foods announced they plan to eliminate human-use antibiotics in their chicken, and places like McDonald’s and Chipotle are using antibiotic-free chicken and/or other meat products in response to consumer demand. However, pork and beef still remain more stubborn when it comes to antibiotic use on farms, despite a recent study showing that resistant bacteria generated on cattle feed yards can transmit via the air, and studies by my group and others demonstrating that people who live in proximity to CAFOs or areas where swine waste is deposited are more likely to have MRSA colonization and/or infections–even if it’s with the “human” types of S. aureus. The cat is already out of the bag, the genie is out of the bottle, whatever image or metaphor you prefer–we need to increase surveillance to detect and mitigate these issues, better integrate rural hospitals and clinics into our surveillance nets, and work on mitigation of resistance development and on new solutions for treatment cohesively and with all stakeholders at the table. I don’t think that’s too much to ask, given the stakes.
Because of the anti-science views she has expressed, and their potential to do real harm, I’ve noted previously that I’m very uncomfortable with Bialik being used as any kind of an ambassador for science and STEM education. And of course, anti-vaccine advocates have seized on her education and anti-vaccine stance as proof of their own correctness:
i would like to dispel the rumors about my stance on vaccines. i am not anti-vaccine. my children are vaccinated. there has been so much hysteria and anger about this issue and i hope this clears things up as far as my part.
…which is great, from my point of view. I’d really like to see Bialik advocate for vaccines, as she is firmly in the “crunchy” camp that all too often has a reputation for eschewing vaccines.
So did she really change her mind and her stance? If so, why? Or is she just jumping on the “I’m not anti-vaccine” bandwagon like Jenny McCarthy and others who claim not to be anti-vaccine, but at the same time spew vaccine fear and misinformation? Are her kids fully vaccinated, or did they only have the ones she mentioned previously (such as polio for international travel)? Is she walking back statements that are basically anti-vaccine talking points, and removing her support of anti-vaccine doctors like Bob Sears and Lauren Feder (or her own pediatrician, Jay Gordon)?
I really hope so. But I won’t hold my breath, and I’ll take her statements that she’s “not anti-vaccine” with a big grain of salt. After all, that statement, itself, is often an anti-vaccine talking point.