HIV’s supposed “Patient Zero” in the U.S., Gaetan Dugas, is off the hook! He wasn’t responsible for our outbreak!
This is presented as new information.
It is not, and I think by focusing on the “exoneration” of Dugas, a young flight attendant and one of the earliest diagnosed cases of AIDS in the U.S., these articles (referencing a new Nature paper) miss the true story in this publication: that Dugas was really a victim of Shilts and the media, and remains so, no matter how many times the scientific evidence has cleared his name.
First, the idea that Dugas served to 1) bring HIV to the U.S. and 2) spark the epidemic and infect enough people early on that most of the initial cases could be traced back to him is simply false. Yes, this was the hypothesis based on some of the very early cases of AIDS, and the narrative promoted in Randy Shilts’s best-selling 1987 book, “And the Band Played On.” But based on the epidemiology of first symptomatic AIDS cases, and later our understanding of the virus behind the syndrome, HIV, we quickly understood that one single person in the late 1970s could not have introduced the virus and spread it rapidly enough to lead to the level of infections we were seeing by the early 1980s. Later understanding of the virus’s African origin and its global spread made the idea of Dugas as the epidemic’s originator in America even more impossible.
When we think of Dugas’s role in the epidemiology of HIV, we could possibly classify him as, at worst, a “super-spreader”–an individual who is responsible for a disproportionate amount of disease transmission. Dugas acknowledged sexual contact with hundreds of individuals between 1979 and 1981–but his numbers were similar to those of other gay men interviewed, who averaged 227 partners per year (range 10-1560). And while Shilts portrayed Dugas as a purposeful villain, actively and knowingly spreading HIV to his sexual partners, that does not jibe with either our scientific knowledge of HIV/AIDS or with the assistance Dugas provided to scientists studying the epidemic. Dugas worked with researchers to identify as many of his partners as he could (~10% of his estimated 750), as the scientific and medical community struggled to figure out whether AIDS stemmed from a sexually-transmitted infection, as several lines of evidence suggested. There’s no evidence Dugas was maliciously infecting others, though that was the reputation he received. Dugas passed away from complications of AIDS in March of 1984–weeks before the discovery of HIV was announced to the general public.
Furthermore, the information in the new publication is not entirely novel. Molecular analyses carried out in part by Michael Worobey, also an author on the new paper, showed almost a decade ago that Dugas could not have been the true “Patient Zero.” The 2007 paper, “The emergence of HIV/AIDS in the Americas and beyond,” had the same conclusions as the new paper: HIV entered the U.S. from the Caribbean, probably Haiti, and was circulating in the U.S. by the late 1960s–when Dugas was only about 16 years old, and long before his career as a flight attendant traveling internationally. So this 2007 molecular analysis should have been the nail in the coffin of the Dugas-as-Patient-Zero ideas.
But apparently we’ve forgotten that paper, or other work that has followed the evolution of HIV over the 20th century.
What is unique about the new publication is that it included a sample from Dugas himself, via plasma he donated in 1983, along with other samples banked since the late 1970s. The new paper demonstrated that Dugas’s sample is not in any way unique, nor is it a “basal” virus–one of the earliest in the country, from which others would diverge. Instead, it was representative of what was already circulating among others infected with HIV at that time. In the supplemental information, the authors also demonstrated how the notation for Dugas in scientific notes changed from Patient 057 to Patient O (for “Outside California”) to Patient 0/“Zero” in the published manuscript–which Shilts then identified as Dugas and ran with in his narrative.
The media then extended Shilts’s ideas, further solidifying the assertion that Dugas was the origin of the U.S. epidemic, and in fact that he was outright evil. The supplemental material notes that Shilts didn’t want the focus of the media campaign initially to be about Dugas, but was convinced by his editor, who suggested the Dugas/Patient Zero narrative would result in more attention than the drier critiques of policy and inaction in response to the AIDS epidemic by the Reagan administration.
And the media certainly talked about it. A 1987 edition of U.S. News and World Report included a dubious quote attributed to Dugas: “‘I’ve got gay cancer,’ the man allegedly told bathhouse patrons after having sex with them. ‘I’m going to die, and so are you.’” NPR’s story adds: “The New York Post ran a huge headline declaring ‘The Man Who Gave Us AIDS.’ Time magazine jumped in with a story called ‘The Appalling Saga Of Patient Zero.’ And 60 Minutes aired a feature on him. ‘Patient Zero. One of the first cases of AIDS. The first person identified as the major transmitter of the disease,’ host Harry Reasoner said.”
This is the real scandal and lingering tragedy of Dugas. His story was used to stoke fear of HIV-infected individuals, and especially gay men, as predators seeking to take others down with them. His story was used in part to justify criminalization of HIV transmission. So while science has exonerated him again and again, will the public–and the media–finally follow?
Previous research suggested Ebola could persist in the semen for 40 to 90 days. But that window has been exceeded in this epidemic by a considerable margin. A probable case of sexual transmission occurred approximately six months after the patient’s initial infection last year in Liberia. Another study found evidence of Ebola in the semen of 25% of surviving men tested seven to nine months after infection. And it takes only a single transmission to kick off a fresh recurrence of the disease.
A recent paper extended this window of virus persistence in the semen even longer–to over 500 days. It also explains how new outbreaks began in countries that had already been declared Ebola-free–so where did the virus come from?
In a convergence of old-fashioned “shoe leather” epidemiology–the tracing of cases–and viral genomics, two independent lines of evidence led to the identification of the same individual: a man who had been confirmed as an EVD case in 2014 and had sexual contact with one of the new cases. Author Nick Loman discussed via email:
The epidemiologists told us independently that they had identified a survivor and we were amazed when we decoded the metadata to find that case was indeed the same person. The sequencing and epidemiology is tightly coordinated via Guinea’s Ministry of Health who ran National Coordination for the Ebola outbreak and the World Health Organisation.
It shows that the genomics and epidemiology works best when working hand-in-hand. If we’d just had the genomics or the epidemiology we’d still have an element of doubt.
The sequencing results also suggested that it was likely that the new viral outbreak was caused by this survivor, and unlikely that the outbreak was due to another “spillover” of the virus from the local animal population, according to author Andrew Rambaut:
If the virus was present in bats and jumped to humans again in 2016, it might be genetically similar to the viruses in the human outbreak but not have any of the mutations that uniquely arose in the human outbreak (it would have its own unique mutations that had arisen in the bat population since the virus that caused human epidemic).
It might be possible that the virus jumped from humans to some animal reservoir in the region and then back to humans in 2016, but because we have the virus sequence from the patient’s acute disease 15 months earlier we can see that it [is] essentially exactly the same virus. So this makes it certain the virus was persisting in this individual for the period.
So the virus–persisting in the survivor’s semen for at least 531 days–sparked a new wave of cases. Ebola researcher Daniel Bausch noted elsewhere that “The virus does seem to persist longer than we’ve ever recognized before. Sexual transmission still seems to be rare, but the sample size of survivors now is so much larger than we’ve ever had before (maybe 3,000-5,000 sexually active males versus 50-100 for the largest previous outbreak) that we’re picking up rare events.”
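Bausch’s sample-size point can be made concrete with a quick back-of-envelope calculation. Below is a minimal Python sketch; the per-survivor transmission probability is an invented round number purely for illustration, not an estimate from any study:

```python
# Why a larger survivor cohort makes rare transmission events far more
# likely to be observed: P(at least one event) = 1 - (1 - p)^n,
# assuming independent survivors. The value of p is hypothetical.

def prob_at_least_one_event(p_per_survivor: float, n_survivors: int) -> float:
    """Probability of observing at least one transmission event."""
    return 1 - (1 - p_per_survivor) ** n_survivors

p = 0.001  # assumed 1-in-1,000 chance a given survivor transmits sexually

# Roughly: largest previous outbreak vs. the West African epidemic
for n in (100, 4000):
    print(f"n={n}: P(>=1 event) = {prob_at_least_one_event(p, n):.2f}")
```

With these assumed numbers, a 100-survivor outbreak would most likely show no sexual-transmission events at all, while a 4,000-survivor cohort would almost certainly show at least one–the same event rate, just a much bigger net.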
And we’re now actively looking for those rare events, too. The Liberia Men’s Health Screening Program already reports detection of Ebola virus in the semen at 565 days following symptoms, suggesting we will need to remain vigilant about survivors in both this and any future EVD epidemics. The challenges are clear–we need to investigate EVD survivors as patients, research participants, and possible viral reservoirs–each of which comes with unique difficulties. By continuing to learn as much as we can from this outbreak, perhaps we can contain future outbreaks more quickly–and prevent others from igniting.
[Obvious warning is obvious: potential spoilers for A Song of Ice and Fire novels/Game of Thrones TV series below].
While no one will claim that George R.R. Martin’s epic series, “A Song of Ice and Fire,” is historically accurate, there are a number of historical parallels that can be drawn from the characters and plotline–particularly from medieval Europe. While most of those relate to epic battles or former monarchs or other royalty, another of Martin’s characters, so to speak, is the disease greyscale (1).
Greyscale is a contagious disease that seems to come in at least two distinct forms: greyscale, an endemic and slow acting, highly contagious illness that can affect either adults or children; and the grey plague, a rapidly-spreading epidemic that can wipe out entire swaths of cities in a short period of time. Both versions of the illness have a high fatality rate (no exact details are given, but it seems to be close to 100%, especially in adults). Recovery from greyscale makes one immune to outbreaks of grey plague, so they seem to be caused either by the same microbe or ones which are very closely related.
The Epidemiology of Greyscale
Greyscale is a disfiguring disease. As its name suggests, it transforms the skin into a hardened, scaly tissue. As the skin dies, it becomes grey in color with permanent cracks and fissures. Infection that spreads across the face can cause blindness.
Like many diseases we consider to be “childhood” diseases (measles, mumps, smallpox, chickenpox, etc.), children seem to be spared the worst of the disease and are the most likely to recover from the illness, though recovery still appears to be quite rare. The disease is most common in Essos, but can also be found occasionally throughout Westeros, including north of the Wall (more on that below).
Greyscale is believed to be transmitted primarily person-to-person via direct skin contact. We see this in the books with the infection of Jon Connington and on the TV show with Jorah Mormont, as both characters are transporting/protecting Tyrion Lannister and apparently are exposed to the pathogen during a battle with the Stone Men (2, 3). The Stone Men are victims in the last stage of greyscale infection, when the skin is entirely calcified and there is involvement of muscle, bone, and internal organs, including the brain. Late signs of infection include violent insanity, leading sufferers to attack anyone who comes near. As the Stone Men are highly feared as sources of the disease, greyscale appears to be contagious for the entire duration of infection, from the development of symptoms to near-death.
If a person has been exposed to greyscale, but is not yet showing symptoms, they can check for impending infection by pricking their toes and fingers each day. Once they’re no longer able to feel the knife, that’s bad news–greyscale infection is likely, as insensitivity to touch is one of the early signs. Once the scaling begins, the victim no longer feels any pain in the affected areas, making the Stone Men essentially invulnerable to pain.
The incubation period of greyscale seems to be very short. As soon as Jorah and Tyrion realize they are safe and the Stone Men are defeated, Jorah rolls up his sleeve and we see that the initial small patch of greyscale has already appeared.
Another prominent victim of greyscale, Shireen Baratheon, is thought to have acquired greyscale via contact with a fomite (an inanimate object that serves as a vehicle to transmit an infectious agent between people)–in her case, a beloved wooden doll clothed in Baratheon House colors from when she was an infant. Her father, Stannis, implies that this may have been a form of bioterrorism: he had received the doll from a Dornish trader on Dragonstone. He tells his daughter, “No doubt he’d heard of your birth, and assumed new fathers were easy targets” (S05E04). “I still remember how you smiled when I placed that doll in your cradle, and you pressed it to your cheek,” where evidence of greyscale is still present (4).
A number of remedies have been proposed to treat greyscale, but none of them are proven effective. They include treating it with boiling water containing limes; chopping off the infected limbs; religious means/magic; and maybe fire–in A Dance with Dragons, Tyrion touches a Stone Man with his torch, and the Stone Man shrieks in pain (even while having bone showing through his skin, which apparently doesn’t bother him). Whether fire could be a cure is unclear.
Also in A Dance with Dragons, we read of Tyrion’s musings on treating greyscale: “He had heard it said that there were three good cures for greyscale: axe and sword and cleaver. Hacking off afflicted parts did sometimes stop the spread of the disease, Tyrion knew, but not always. Many a man had sacrificed one arm or foot, only to find the other going grey. Once that happened, hope was gone.” As such, the infectious agent seems to enter into the bloodstream and spread throughout the body at some point during the infection, and at this point, local measures such as amputation are no longer useful. Other home remedies, such as cleansing the infected area with vinegar, are also employed. In fact, Jon Connington, once he realizes he’s been infected, soaks his hand in bad wine instead of vinegar, because he believes that if he asks for vinegar, it will be an obvious “tell” that he has the disease.
In the TV series (S05E04), Stannis says to Shireen regarding her infection, “I called in every Maester in this side of the world, every healer, every apothecary. They stopped the disease and saved your life.” However, no details are given on the show regarding how it was stopped (medicine? magic?), or if a mechanism exists that could be used on an adult instead of an infant. When Daenerys asks Jorah if there is a cure, he tells her simply that he doesn’t know, and she directs him to leave, find one, and return to her.
Largely, those with greyscale are shunned and sent elsewhere, especially to the ruins of Valyria (5) where a whole colony of Stone Men live. Shireen asks Stannis, “Are you ashamed of me, Father?”, understanding that her obvious greyscale scars are a sign of stigma for their entire family. Stannis tells his daughter, “Everyone advised me to send you to the ruins of Valyria to live out your short life with the Stone Men before the sickness spread throughout the castle. I told them all to go to hell.” (Father of the Year before that whole burning stuff, Stannis!)
Similarly, both the books and show note the existence of greyscale beyond the wall among the Wildlings–and that the free folks’ response to greyscale infection is exile and/or death. In the books, a wildling named Val sees Shireen, and notes Shireen has a condition they call “the grey death,” which is always fatal in children–because they’re given either hemlock, a pillow, or a blade rather than be allowed to live. She also suggests that greyscale may become quiescent and return later, saying “The grey death sleeps, only to wake again. The child [Shireen] is not clean.”
On the TV version, the wildling Gilly takes the place of Val, and while she is not as frightened of Shireen’s greyscale, she notes she’s also had experience with the illness. She tells the tale of two of her sisters, who contracted greyscale (exactly how, we’re not told). Though her father did not kill them as Val suggested, Gilly notes that he “made them move out of the keep, into the hut outside. None of them were allowed to go near them, but we heard them, especially at night. They started to sound not like themselves.” Gilly saw them again “only once, at the end. They were covered with it. Their faces, their arms. They acted like animals. My father had to drag them out to the woods on a rope.” Shireen doesn’t find out what happened to them after that, but we can guess it’s not good.
What are some real-life parallels?
Clearly greyscale is another invention of Martin’s that doesn’t quite match up to any real infectious disease (6), and I’ll leave that linked article to summarize some of the pros and cons of the alternative diagnoses. But given the other historical parallels, leprosy (Hansen’s disease) is probably the closest real-life affliction to greyscale, due to the route of transmission (I’ll elaborate on that below), symptoms, incubation period, and particularly the cultural response to those who are affected.
Like those with leprosy, sufferers of greyscale can become disfigured, are considered “unclean” and shuffled off to the far corners of the map, feared and then ignored by their family and friends. Connington, when hiding his infection, noted that “Queer as it seemed, men who would cheerfully face battle and risk death to rescue a companion would abandon that same companion in a heartbeat if he were known to have greyscale”–a similar phenomenon to what still can happen today with stigmatized diseases such as leprosy. A case of greyscale is a source of stigma for both the sufferer (even if they survive, like Shireen) and for the family, as there will always be those who fear contagion.
Though evidence is gathering that leprosy is actually transmitted via the respiratory route (like its cousin, tuberculosis), for centuries people believed it could be spread by touch, as greyscale is. So even though the transmission route for the two diseases really isn’t the same, the *presumption* that leprosy can be spread by touch is still incredibly common. The lengthy period between infection and outward symptoms of the affliction is also similar, taking years from exposure to the final stages of infection that we see in the Stone Men. Leprosy can also take years or decades to progress, and while untreated leprosy is not typically a cause of death itself, it can lead to death indirectly due to secondary infections and other issues.
One of the early signs of leprosy is also numbness in an affected area as nerves are damaged by the infection–the very sign Tyrion checked for after his exposure to the Stone Men–along with a general thickening and stiffening of the skin. It doesn’t reach the level seen in the Stone Men–one of the biggest problems with leprosy is actually secondary infections, which can lead to loss of digits or even whole limbs rather than a whole-body calcification of the skin–but many of the hallmarks of greyscale are very similar to leprosy.
While leprosy is now treatable with antibiotics, it wasn’t all that long ago that we had our own leper colonies in the U.S. (you can read about one of them here, also on a near-deserted island where the afflicted were largely left to fend for themselves with some occasional governmental assistance, similar to Valyria/the Sorrows). Martin himself even notes that Valyria is “like a leper colony.” Leprosy, and its stigma, remains an issue in some countries still today, and the purposeful isolation of those who have leprosy and exclusion from society persists.
However, while there are many similarities, leprosy doesn’t have an epidemic form equivalent to the grey plague. In A Dance with Dragons, it’s suggested that the grey plague wiped out half of Oldtown in the southwest of Westeros, and was only stopped by closing the gates and preventing anyone from entering or leaving. And like the Black Plague, the grey plague arrived in Pentos (a city in Essos) by ship, and its spread into the city was possibly aided by rats. So is there an airborne form of greyscale that causes the grey plague? Could it be similar to Yersinia pestis, the bacterium that causes the Black Plague: transmitted by rats and fleas (or skin-to-skin in the case of greyscale) in its milder form, but occasionally ending up in the lungs of an unfortunate victim and spreading via the air after that, causing massive epidemics? Is it zoonotic, spread via rats? Will we see the grey plague on the TV series or not?
While comparisons to other real infections are interesting, my real question is–what is Martin going to do with greyscale? How does it figure into the larger endgame, when we move beyond just a human “Game of Thrones” into the battle for humanity itself against the White Walkers and their army of undead wights? With all the time spent on the affliction in both the books and particularly in the show, there has to be some payoff somewhere, right?
In some ways, the wights beyond the wall and Stone Men are similar–undead, or nearly-dead, aggressive hunters of humans, with no sense of humanity left. When we last saw Jorah in the TV version, he had confessed his affliction to Daenerys, and she sent him off to find a cure. Will he find Dany after her arrival in Westeros and bring with him an army of (now healthy?) Stone Men–healed by fire perhaps, to fight against those brought back to life by ice? Will he return to Valyria–an area largely abandoned except as a place of exile for the Stone Men since The Doom a thousand years ago–and learn the truth of what happened there? Could Valyria provide a key to ending both greyscale and perhaps also the White Walkers? Or is the haunting poem Tyrion and Jorah recited as they rowed down the Rhoyne toward the ruins of the city foreshadowing what’s going to happen to Westeros?
(1) The information provided on greyscale in this article is a mix of literature from the books and the show. Note that the show, to my recollection, hasn’t delved into the grey plague, so information on that malady comes exclusively from the books. Also note that some of the victims of greyscale differ between the books and the show (e.g., Jorah Mormont taking Jon Connington’s place in the TV version).
(2) Though Jorah denies any contact with the Stone Men initially, and it isn’t 100% clear if he was touched during the scene, he does back off from Daenerys when she moves toward him in S06E05, when he discloses his condition (which is now all the way up his forearm). This suggests he does believe he acquired it through direct contact with a Stone Man.
(3) Though these sufferers are uniformly called Stone Men, and the ones seen on-screen appear to be male, presumably there are also Stone Women. Possibly loss of hair as the skin calcifies could lead to a more androgynous look.
(4) I should note there are some alternative views about exactly how Shireen’s greyscale infection was acquired, and about the use of greyscale as a biological weapon.
(5) Or on “the Sorrows” in the novels.
(6) I don’t agree with several things in that article, written by a dermatologist. It concludes based mainly on symptoms and a bit on epidemiology that greyscale is something more like smallpox or HPV and largely rules out a leprosy-like illness. It also notes the potential for an infectious agent that’s only infectious to those with an underlying genetic susceptibility, but I don’t think there’s much evidence to suggest that.
Find other posts in today’s carnival on the science of Game of Thrones!
Yesterday, two articles were released showing that MCR-1, the plasmid-associated gene that provides resistance to the antibiotic colistin, has been found in the United States. And not just in one place, but in two distinct cases: a woman with a urinary tract infection (UTI) in Pennsylvania, reported in the journal Antimicrobial Agents and Chemotherapy, and a positive sample taken from a pig’s intestine as part of the National Antimicrobial Resistance Monitoring System (NARMS), which tracks resistant bacteria related to retail meat products. Not surprising, not unexpected, but still, not good.
Colistin is an old antibiotic. Dating back to the 1950s, it’s been used sparingly over the decades because it can cause serious damage to the kidneys and nervous system. It’s also typically administered intravenously in humans, so you can’t just pop a colistin pill and be sent home from the doctor. Newer preparations appear to be safer, and because of the problem with antibiotic resistance in general and limited treatment options for multidrug-resistant Gram-negative infections in particular, colistin has seen a new life in the last decade or so as a last line of defense against some of these almost-untreatable infections.
Because of its sparing use in humans, resistance has not been much of an issue until recently. And while human use is relatively rare compared to other types of antibiotics, in animals the story is different. Because colistin is old and cheap, it’s used as an additive to feed in Chinese livestock, to make them grow faster and fatter. (We do this here in the U.S. too, but using antibiotics other than colistin.) So, as would be expected, use of this antibiotic led to the evolution and spread of resistance, conferred by the MCR-1 gene. By the time researchers first detected this resistance, it was already present in 20% of the pigs they tested near Shanghai, and in 15% of the raw meat samples they tested. In this case, the gene is on a plasmid, which makes it easier to spread to other types of bacteria. To date, most of the reports of MCR-1 have been in E. coli, but it’s also been found in Salmonella and Klebsiella pneumoniae–all gut bacteria that can be spread from animals via contaminated food products, or person-to-person when someone carrying the bacterium doesn’t wash their hands after using the bathroom.
So a question becomes: how exactly did it get here? That’s very difficult to say right now. The hospital where the human case was reported notes that the patient reported no travel history in the past 5 months (so it’s unlikely that she traveled to China, for instance, and picked up the gene or a bacterium carrying it there). The hospital says they’ve not found other MCR-1-positive isolates from other patients, but also that they’ve only been testing specimens for 3 weeks, so…yeah. Hard to say. People and animals (like the tested pig) can carry E. coli or other species that harbor MCR-1 in their gut without becoming ill, so it may have been in the population for a while (as they’ve seen in Brazil) before it came to the attention of medical researchers. Perhaps it’s been circulating in some of our meat products, or spreading in a chain of minuscule transfers of shit from person to person to person, for longer than we realize. Or both.
I was asked on Twitter yesterday, “Should I panic today or put that off until next week?” I’m not an advocate of panic myself, but I do think this is yet another concern and another hit on our antibiotic arsenal. It’s not widespread in this country and as mentioned, colistin is luckily not a first-line drug, so it won’t affect all *that* many people–for now, at least.
There are already papers out there showing bacteria that have both NDM-1 (or related variants) and MCR-1 genes. NDM-1 is a gene that provides resistance to another class of last-resort antibiotics, the carbapenems. (Maryn McKenna has covered this extensively on her blog.) When carbapenems fail, treatment with colistin sometimes works. But if the bacterium is resistant to both colistin and carbapenems, well…not good. That hasn’t been reported yet in the U.S., but it’s only a matter of time, as McKenna notes.
It doesn’t mean that we’re out of antibiotics (yet) or that everyone who has one of these resistant infections will be unable to find a treatment that works (yet). But we’re inching ever closer to those days, one resistant bacterium at a time.
As you’ve probably seen, unless you’ve been living in a cave, Zika virus is the infectious disease topic du jour. From an obscure virus to the newest scare, interest in Zika has skyrocketed in just the past few weeks.
I have a few pieces already on Zika, so I won’t repeat myself here. The first is an introductory primer to the virus, answering the basic questions–what is it, where did it come from, what are its symptoms, why is it concerning? The second focuses on Zika’s potential risk to pregnant women, and what is currently being advised for them.
I want to be clear, though–currently, we aren’t 100% sure that Zika virus is causing microcephaly, the condition that is most concerning in this recent outbreak. The circumstantial evidence appears to be pretty strong, but we don’t have good data on 1) how common microcephaly really was in Brazil (or other affected countries) prior to the outbreak–microcephaly seems to have increased dramatically, but some of those cases are not confirmed, and others don’t seem to be related to Zika; and 2) if Zika really is causing microcephaly, how it could be doing so, whether the timing of the infection makes a difference, and whether women who are infected asymptomatically are at risk of medical problems in their developing fetuses.
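The baseline question can be illustrated with simple arithmetic: whether a reported surge is real depends on the expected background count, which was poorly characterized. Both numbers in this Python sketch are invented round figures for illustration only, not Brazilian surveillance data:

```python
# Hypothetical illustration: even with no outbreak at all, a large
# birth cohort times a small background rate yields hundreds of
# microcephaly cases per year. Both inputs below are assumptions.

births_per_year = 3_000_000    # assumed annual births in a large country
baseline_rate = 2 / 10_000     # assumed background microcephaly rate per birth

expected_baseline_cases = births_per_year * baseline_rate
print(expected_baseline_cases)
```

The point of the sketch: without a solid estimate of that baseline, it’s hard to say how much of an apparent increase reflects the virus versus better counting.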
The first question needs good epidemiological data for answers, and that can be procured in a few ways. First, babies born with microcephaly, and their mothers, can be tested for Zika virus infection. This testing can take several forms: finding traces of the virus itself; finding antibodies to the virus (suggesting a past infection–though one can’t know its exact timing); and asking about known infections during pregnancy. Each approach has advantages and limitations. Detecting the virus or its genetic material is the gold standard, but the virus may only be present in body fluids for a short time, so if you miss that window, a false negative could result. This could be coupled with serology to look for past infection–but then you can’t be 100% certain that the infection occurred during pregnancy–though given the apparently recent introduction of Zika into the Americas, it’s likely that any infection would be fairly recent.
Serology coupled with an infection in pregnancy that has symptoms consistent with Zika (headache, muscle/joint pain, rash, fever) would be a step up from this, but has some additional problems. Other viral infections can be similar in symptoms to Zika (dengue, chikungunya, even influenza if the patient is lacking a rash), so tests to rule those out should also be done. On the flip side, about 80% of Zika infections show no symptoms at all–so a woman could still have come into contact with the virus and have positive serology, but she wouldn’t have any recollection of infection.
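The serology caveats above can be made concrete with a quick Bayes’-rule sketch of positive predictive value (PPV): how much a positive test actually means depends on the test’s specificity and the background prevalence. All sensitivity, specificity, and prevalence numbers here are invented for illustration, not real Zika assay figures:

```python
# PPV = P(truly infected | positive test), by Bayes' rule.
# All input numbers are hypothetical illustrations.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value given test characteristics and prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same assumed sensitivity and prevalence, but cross-reactivity with
# related flaviviruses (e.g., dengue antibodies) lowers specificity:
print(ppv(0.95, 0.99, 0.10))  # highly specific assay
print(ppv(0.95, 0.80, 0.10))  # cross-reactive assay (assumed)
```

With these made-up numbers, dropping specificity from 0.99 to 0.80 cuts the PPV from over 90% to well under half–most positives become false positives, which is exactly why ruling out dengue and chikungunya matters.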
None of this is easy to carry out, but it needs to be done in order to establish with some level of certainty that Zika is the cause of microcephaly in this area. In the meantime, there are a few other possibilities to consider. One is that another virus, such as rubella–a known cause of multiple congenital issues, including microcephaly–is circulating there. This could explain why they’re seeing cases of microcephaly in Brazil while none have been reported thus far in Colombia. Another is that there is no real increase in microcephaly at all–that, for some reason, people have just recently started paying more attention to it and associated it with the Zika outbreak in the area–what we call a surveillance bias.
This is a fast-moving story, and we probably won’t have any solid answers to these questions for some time. In the interim, I think it’s prudent to take this as a possibility, and raise awareness of the potential this virus *may* have on the developing fetus, so that women can take precautions as they’re able. Public health is about prevention, and there have certainly been cases in the past of links between A and B that fell apart under further scrutiny. Zika/microcephaly may be one, but for now, it’s an unfortunate case where “more research is needed” is about the best answer one can currently give.
I’ve been involved in a few discussions of late on science-based sites around yon web on antibiotic resistance and agriculture–specifically, the campaign to get fast food giant Subway to stop using meat raised on antibiotics, and a graphic by CommonGround using Animal Health Institute data, suggesting that agricultural animals aren’t an important source of resistant bacteria. Discussing these topics has shown me there’s a lot of misunderstanding of issues in antibiotic resistance, even among those who consider themselves pretty science-savvy.
I think this is partly an issue of, perhaps, hating to agree with one’s “enemy.” Vani Hari, the “Food Babe,” recently also plugged the Subway campaign, perhaps making skeptics now skeptical of the issue of antibiotics and agriculture? Believe me, I am the farthest thing from a “Food Babe” fan and have criticized her many times on my Facebook page, but unlike her ill-advised and unscientific campaigns against things like fake pumpkin flavoring in coffee or “yoga mat” chemicals in Subway bread, this is one issue that actually has scientific support–stopped clocks and all that. Nevertheless, I think some people get bogged down in a lot of exaggeration or misinformation on the topic.
So, some thoughts. Please note that in many cases, my comments will be an over-simplification of a more complex problem, but I’ll try to include nuance when I can (without completely clouding the issue).
First–why is antibiotic resistance an issue?
Since the development of penicillin, we have been in an ongoing “war” with the bacteria that make us ill. Almost as quickly as antibiotics are used, bacteria are capable of developing or acquiring resistance to them. These resistance genes are often present on transmissible pieces of DNA–plasmids, transposons, phage–which allow them to move between bacterial cells, even those of completely different species, and spread that resistance. So, once it emerges, resistance is very difficult to keep under control. As such, it’s much better to work to prevent this emergence, and to provide conditions where resistant bacteria don’t encounter selection pressures to maintain resistance genes (1).
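A toy simulation can illustrate the selection-pressure point. The growth rates below are invented for illustration, not measured values, but they show how quickly an initially rare resistant subpopulation takes over once the drug penalizes susceptible cells:

```python
# Toy model (illustrative parameters, not real data): antibiotic use as a
# selection pressure favoring a rare resistant subpopulation.
def simulate(generations, antibiotic_present, r_frac=0.001):
    s_frac = 1.0 - r_frac
    for _ in range(generations):
        # Susceptible cells grow well without the drug, poorly with it;
        # resistant cells pay a small fitness cost but survive the drug.
        s_growth = 0.2 if antibiotic_present else 1.0
        r_growth = 0.9
        s_frac *= 1 + s_growth
        r_frac *= 1 + r_growth
        total = s_frac + r_frac
        s_frac, r_frac = s_frac / total, r_frac / total
    return r_frac  # final fraction of the population that is resistant

print("resistant fraction, no drug:", round(simulate(20, False), 5))
print("resistant fraction, drug   :", round(simulate(20, True), 5))
```

Without the drug, the slight fitness cost keeps resistance rare; with it, the resistant lineage dominates within a few dozen generations, which is the logic behind removing unnecessary selection pressure.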
In our 75-ish years of using antibiotics to treat infections, we’ve increasingly found ourselves losing this war. As bacterial species have evolved resistance to our drugs, we keep coming back with either brand-new drugs in different classes of antibiotics, or with slight tweaks to existing drugs so that they can escape the mechanisms bacteria use to get around them. And resistant bacteria are killing us. In the US alone, they cause about 2 million infections and 23,000 deaths per year–plus tens of thousands of additional deaths from diseases that are complicated by antibiotic-resistant infections. They cost at least $20 billion per year.
But we’re running out of these drugs. And where do the vast majority come from in any case? Other microbes–fungi, other bacterial species–so in some cases, that means there are also pre-existing resistance mechanisms to even new drugs, just waiting to spread. It’s so bad right now that even the WHO has sounded the alarm, warning of the potential for a “post-antibiotic era.”
This is some serious shit.
Where does resistance come from?
Resistant bacteria can be bred anytime an antibiotic is used. As such, researchers in the field tend to focus on two large areas: use of antibiotics in human medicine, and in animal husbandry. Human medicine is probably pretty obvious: humans get drugs to treat infections in hospital and outpatient settings, and in some cases, to protect against infection if a person is exposed to an organism–think of all the prophylactic doses of ciprofloxacin given out after the 2001 anthrax attacks, for example.
In human medicine, there is still much debate about 1) the proper dosing of many types of antibiotics–what is the optimal length of time to take them to ensure a cure, but also reduce the chance of incubating resistant organisms? This is an active area of research; and 2) when it is proper to prescribe antibiotics, period. For instance, ear infections. These cause many sleepless nights for parents, a lot of time off work and school, and many trips to clinics to get checked out. But do all kids who have an ear infection need antibiotics? Probably not. A recent study found that “watchful waiting” as an alternative to immediate prescription of antibiotics worked about as well as drug treatment for nonsevere ear infections in children–one data point among many that antibiotics are probably over-used in human medicine, and particularly for children. So this is one big area of interest and research (among many in human health) when it comes to trying to curb antibiotic use and employ the best practices of “judicious use” of antibiotics.
Another big area of use is agriculture (2). Just as in humans, antibiotics in ag can be used for treatment of sick animals, which is completely justifiable and accepted–but there are many divergences as well. First, animals are often treated as a herd–if a certain threshold of animals in a population become ill, all will be treated in order to prevent an even worse outbreak of disease in the herd. Second, antibiotics can be, and frequently are, used prophylactically, before any disease is present–for example, at times when the producer historically has seen disease outbreaks in the herd, such as when animals are moved from one place to another (moving baby pigs from a nursery facility to a grower farm, as one example). Third, they can be used for growth promotion purposes–to make animals fatten up to market weight more quickly. This last use is, by far, the most contentious, and the “low-hanging fruit” that is often targeted for elimination.
From practically the beginning of this practice, there were people who spoke out against it, suggesting it was a bad idea, and that the use of these antibiotics in agriculture could lead to resistance which could affect human health. A pair of publications by Stuart Levy et al. in 1976 demonstrated this was more than a theoretical concern: antibiotic-resistant E. coli were indeed generated on farms using antibiotics, and transferred to farmers working there. Since then, literally thousands of publications on this topic have demonstrated the same thing, examining different exposures, antibiotics, and bacterial species. There’s no doubt, scientifically, that use of antibiotics in agriculture causes the evolution and spread of resistance into human populations.
Why care about antibiotic use in agriculture?
A quick clarification on a common point of confusion–I’m not discussing antibiotic *residues* in meat products as a result of antibiotic use in ag (see, for example, the infographic linked above). In theory, antibiotic residues should not be an issue, because all drugs have a withdrawal period that farmers are supposed to adhere to prior to sending animals off to slaughter. These guidelines were developed so that antibiotics will not show up in an animal’s meat or milk. The real issue of concern for public health is the resistant bacteria, which *can* be transmitted via these routes.
Agriculture comes up many times for a few reasons. First, because people have the potential to be exposed to antibiotic-resistant bacteria that originate on farms via food products that they eat or handle. Everybody eats, and even vegetarians aren’t completely protected from antibiotic use on farms (I’ll get into this below). So even if you’re far removed from farmland, you may be exposed to bacteria incubating there via your turkey dinner or hamburger.
Second, because the vast majority of antibiotic use, by weight, occurs on farms–and many of these are the very same antibiotics used in human medicine (penicillins, tetracyclines, macrolides). It’s historically been very difficult to get good numbers on this use, so you may have seen claims that as much as 80% of all antibiotic use in the U.S. occurs on farms. A better number is probably 70% (described here by Politifact), which excludes a type of antibiotic called ionophores–these aren’t used in human medicine (3). So a great deal of selection for resistance is taking place on farms, but has the potential to spread into households across the country–and almost certainly has. Recent studies have also demonstrated that resistant infections transmitted through food don’t always stay in your gut–they can also cause serious urinary tract infections and even sepsis. Studies from my lab and others (4) examining S. aureus have identified livestock as a reservoir for various types of this bacterium–including methicillin-resistant subtypes.
How does antibiotic resistance spread?
In sum–in a lot of different ways. Resistant bacteria, and/or their resistance genes, can enter our environment–our water, our air, our homes via meat products, our schools via asymptomatic colonization of students and teachers–just about anywhere bacteria can go, resistance genes will tag along. Kalliopi Monoyios created this schematic for the above-mentioned paper I wrote earlier this year on livestock-associated Staphylococcus aureus and its spread, but it really holds for just about any antibiotic-resistant bacterium out there:
And as I noted above, once it’s out there, it’s hard to put the genie back in the bottle. And it can spread in such a multitude of different ways that it complicates tracking of these organisms, and makes it practically impossible to trace farm-origin bacteria back to their host animals. Instead, we have to rely on studies of meat, farmers, water, soil, air, and people living near farms in order to make connections back to these animals.
And this is where even vegetarians aren’t “safe” from these organisms. What happens to much of the manure generated on industrial farms? It’s used as fertilizer on crops, bringing resistant bacteria and resistance genes along with it–into our air when manure is aerosolized (as it is in some, but not all, crop applications), and into our soil and water. And, as noted below, antibiotics themselves can be used in horticulture as well.
So isn’t something being done about this? Why do we still need to worry?
Kind of, but it’s not enough. Scientists and advocates have been trying to do something about this topic since at least 1969, when the UK’s Swann report on the use of Antibiotics in Animal Husbandry and Veterinary Medicine was released. As noted here:
One of its recommendations was that the only antimicrobials that should be permitted as growth promotants in animals were those that were not depended on for therapy in humans or whose use was not likely to lead to resistance to antimicrobials that were important for treating humans.
And some baby steps have been made previously, restricting use of some important types of antibiotics. More recently in the U.S., Federal Guidelines 209 and 213 were adopted in order to reduce the use of what have been deemed “medically-important” antibiotics in the livestock industry. These are a good step forward, but truthfully are only baby steps. They apply only to the use of growth-promotant antibiotics (those for “production use” as noted in the documents), and not other uses including prophylaxis. There also is no mechanism for monitoring or policing individuals who may continue to use these in violation of the guidelines–they have “no teeth.” As such, there’s concern that use for growth promotion will merely be re-labeled as use for prophylaxis.
Further, even now, we still have no data on the breakdown of antibiotic use in different species. We know over 32 million pounds were used in livestock in 2013, but we have no idea how much of that went to pigs versus cattle, etc.
We do know that animals can be raised using lower levels of antibiotics. The European Union has not allowed growth promotant antibiotics since 2006. You’ll read different reports of how successful that has been (or not); this NPR article has a balanced review. What’s pretty well agreed-upon is that, to make such a ban successful, you need good regulation and a change in farming practices. Neither of these will be in place in the U.S. when the new guidance mechanisms go into place next year–so will this really benefit public health? Uncertain. We need more.
So this brings me back to Subway (and McDonald’s, and Chipotle, and other giants that have pledged to reduce use of antibiotics in the animals they buy). Whatever large companies do, consumers are demonstrating that they hold the cards to push this issue forward–much faster than the FDA has been able to do (remember, it took them 40 freaking years just to get these voluntary guidelines in place). Buying USDA-certified organic or meat labeled “raised without antibiotics” is no 100% guarantee that you’ll have antibiotic-resistant-bacteria-free meat products, unfortunately, because contamination can be introduced during slaughter, packing, or handling–but on-farm studies of animals, farmers, and the farm environment have typically found lower levels of antibiotic-resistant bacteria on organic/antibiotic-free farms than on their “conventional” counterparts (one example here, looking at farms that were transitioning to organic poultry farming).
Nothing is perfect, and biology is messy. Sometimes reducing antibiotic use takes a long time to have an impact, because resistance genes aren’t always quickly lost from a population even when the antibiotics have been removed. Sometimes a change may be seen in the bacteria animals are carrying, but it takes longer for human bacterial populations to change. No one is expecting miracles, or a move to more animals raised antibiotic-free to be a cure-all. And it’s not possible to raise every animal as antibiotic-free in any case; sick animals need to be treated, and even on antibiotic-free farms, there is often some low level of antibiotic use for therapeutic purposes. (These treated animals are then supposed to be marked and cannot be sold as “antibiotic-free”). But reducing the levels of unnecessary antibiotics in animal husbandry, in conjunction with programs promoting judicious use of antibiotics in human health, is a necessary step. We’ve waited too long already to take it.
(1) Though we know that, in some cases, resistance genes can remain in a population even in the absence of direct selection pressures–or they may be on a cassette with other resistance genes, so by using any one of those selective agents, you’re selecting for maintenance of the entire cassette.
(2) I’ve chosen to focus on use in humans & animal husbandry, but antibiotics are also used in companion animal veterinary medicine and even for aquaculture and horticulture (such as for prevention of disease in fruit trees). The use in these fields is considerably smaller than in human medicine and livestock, but these are also active areas of research and investigation.
(3) This doesn’t necessarily mean they don’t lead to resistance, though. In theory, ionophores can act just like other antibiotics and co-select for resistance genes to other, human-use antibiotics, so their use may still contribute to the antibiotic resistance problem. Studies from my lab and others have shown that the use of zinc, for instance–an antimicrobial metal used as a dietary supplement on some pig farms–can co-select for antibiotic resistance; in our case, for methicillin-resistant S. aureus.
(4) See many more of my publications here, or a Nature profile about some of my work here.
I’ve been working on livestock-associated Staphylococcus aureus and farming now for almost a decade. In that time, work from my lab has shown that, first, the “livestock-associated” strain of methicillin-resistant S. aureus (MRSA) that was found originally in Europe and then in Canada, ST398, is in the United States in pigs and farmers; that it’s present here in raw meat products; that “LA” S. aureus can be found not only in the agriculture-intensive Midwest, but also in states with tiny pig industries, like Connecticut. With collaborators, we’ve also shown that ST398 can be found in unexpected places, like Manhattan, and that the ST398 strain appears to have originated as a “human” type of S. aureus which subsequently was transmitted to and evolved in pigs, obtaining additional antibiotic-resistance genes while losing some genes that help the bacterium adapt to its human host. We also found a “human” type of S. aureus, ST5, way more commonly than expected in pigs originating in central Iowa, suggesting that the evolution of S. aureus in livestock is ongoing, and is more complicated than just ST398 = “livestock” Staph.
However, with all of this research, there’s been a big missing link that I repeatedly get asked about: what about actual, symptomatic infections in people? How often do S. aureus that farmers might encounter on the farm make them ill? We tried to address this in a retrospective survey we published previously, but that research suffered from all the problems that retrospective surveys do–recall bias, low response rate, and the possibility that those who responded did so *because* they had more experience with S. aureus infections, thus making the question more important to them. Plus, because it was asking about the past, we had no way to know, even if participants did report a prior infection, whether it was due to ST398 or another type of S. aureus.
So, in 2011, we started a prospective study that was just published in Clinical Infectious Diseases, enrolling over 1,300 rural Iowans (mostly farmers of some type, though we did include individuals with no farming exposures as well, and spouses and children of farmers) and testing them at enrollment for S. aureus colonization in the nose or throat. Like previous studies done by our group and others in the US, we found that pig farmers were more likely to be carrying S. aureus that were resistant to multiple antibiotics, and especially to tetracycline–a common antibiotic used while raising pigs. Surprisingly, we didn’t find any difference in MRSA colonization among groups, but that’s likely because we enrolled relatively small-scale farmers, rather than workers in concentrated animal feeding operations (CAFOs) like we had examined in prior research, who are exposed to many more animals living in more crowded conditions (and possibly receiving more antibiotics).
What was unique about this study, besides its large size, was that we then followed participants for 18 months to examine development of S. aureus infections. Participants sent us a monthly questionnaire telling us whether they had a possible Staph infection; describing the infection if there was one, including physician diagnosis and treatment; and, when possible, sending us a sample of the infected area for bacterial isolation and typing. Over the course of the study, which followed people for over 15,000 “person-months” in epi-speak, 67 of our participants reported developing over 100 skin and soft tissue infections. Some of them were “possibly” S. aureus–sometimes participants didn’t go to the doctor, but they had a skin infection that matched the handout we had given them showing pictures of what Staph infections commonly look like. Other times they were cellulitis, which often can’t be definitively confirmed as caused by S. aureus without more invasive tests. Forty-two of the infections were confirmed as S. aureus, either by a physician or at the lab from a swab sent by the patient.
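As a back-of-the-envelope check (the arithmetic here is mine, using the rounded figures quoted above), the crude incidence rate of reported infections can be computed directly:

```python
# Crude incidence from the rounded study figures quoted in the text:
# ~100 skin and soft tissue infections over ~15,000 person-months.
person_months = 15_000
infections = 100

rate = infections / person_months  # infections per person-month
print(f"{rate * 1000:.1f} infections per 1,000 person-months")
print(f"{rate * 12 * 100:.1f} infections per 100 person-years")
```

That works out to roughly 7 infections per 1,000 person-months of follow-up, a crude rate that ignores who was infected (some participants had more than one infection) and any differences between exposure groups.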
Of the swabs we received that were positive, 3/10 were found to be ST398 strains–and all of those were in individuals who had contact with livestock. A fourth individual who also had contact with pigs and cows had an ST15 infection. Individuals lacking livestock contact had infections with more typical “human” strains, such as ST8 and ST5 (usually described as “community-associated” and “hospital-associated” types of Staph). So yes, ST398 is causing infections in farmers in the US–and very likely, these are flying under the radar, because 1) farmers really, really don’t like to go to the doctor unless they’re practically on their deathbed, and 2) even if they do, and even if the physician diagnoses and cultures S. aureus (which is not incredibly common–many diagnoses are made on appearance alone), there are very limited programs in rural areas to routinely type S. aureus. Even in Iowa, where invasive S. aureus infections were previously state-reportable, we know that fewer than half of the samples even from these infections ever made it to the State lab for testing–and for skin infections? Not even evaluated.
As warnings are sounded all over the world about the looming problem of antibiotic resistance, we need to rein in the denial of antibiotic resistance in the food/meat industry. Some positive steps are being made–just the other day, Tyson foods announced they plan to eliminate human-use antibiotics in their chicken, and places like McDonald’s and Chipotle are using antibiotic-free chicken and/or other meat products in response to consumer demand. However, pork and beef still remain more stubborn when it comes to antibiotic use on farms, despite a recent study showing that resistant bacteria generated on cattle feed yards can transmit via the air, and studies by my group and others demonstrating that people who live in proximity to CAFOs or areas where swine waste is deposited are more likely to have MRSA colonization and/or infections–even if it’s with the “human” types of S. aureus. The cat is already out of the bag, the genie is out of the bottle, whatever image or metaphor you prefer–we need to increase surveillance to detect and mitigate these issues, better integrate rural hospitals and clinics into our surveillance nets, and work on mitigation of resistance development and on new solutions for treatment cohesively and with all stakeholders at the table. I don’t think that’s too much to ask, given the stakes.
Measles has come to the happiest place on Earth. As of this writing, a total of 32 cases of measles have been linked to Disneyland visits that took place between December 17th and 20th. About 75% of the cases identified to date were not vaccinated, either because they chose to forgo vaccines or because they were too young, and at least 6 have been hospitalized.
A measles outbreak is a public health disaster, which can cost into the millions of dollars in health resources. You can be sure that public health workers in California and beyond are working overtime trying to identify cases, educate those who were possibly exposed about how dangerous measles can be, and implement practices so that those who may have been exposed to measles don’t further put others at risk. This includes avoiding public places, and practices such as calling ahead to a doctor’s office so possible cases can be ushered into private rooms rather than languishing in the waiting room. A clinic in La Mesa recently closed because of a potential measles exposure. An unvaccinated South Pasadena woman, Ylsa Tellez, received a quarantine order after her younger sister was diagnosed with measles. Tellez is fighting the order and “taking immune-boosting supplements” instead.
Why such extreme measures on the part of public health?
Measles is highly contagious. It’s spread by air, and so contagious that if an infected person enters a room, leaves, and an unvaccinated person enters the room hours later, they still can contract measles. Remember a few months back, when that figure was circulating showing that Ebola wasn’t particularly easy to spread? Well, measles very much is. The basic reproductive rate for Ebola is around 2, meaning on average each infected person will cause an additional 2 infections in susceptible individuals.
And what’s the reproductive number for measles?
Eighteen. Eight. Teen. I’m not exaggerating when I say that it is literally one of the most contagious diseases we know of. On average, if you have 10 susceptible individuals exposed to a measles patient, 9 will end up getting sick.
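That difference in R0 translates directly into how much of a population must be immune to block sustained transmission. Under the standard simple assumption of homogeneous mixing, the herd immunity threshold is 1 - 1/R0:

```python
# Herd immunity threshold under the simple homogeneous-mixing assumption:
# the fraction of a population that must be immune so that, on average,
# each case infects fewer than one susceptible person.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

for disease, r0 in (("Ebola", 2), ("measles", 18)):
    print(f"{disease}: R0 = {r0}, immunity needed = {herd_immunity_threshold(r0):.0%}")
```

For Ebola that is 50% of the population; for measles it is about 94%, which is why even small pockets of unvaccinated people are enough to sustain measles outbreaks.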
How do we break the cycle of transmission? Vaccination is one way–if one has been vaccinated for measles, chances are very low (but not zero, because nothing is perfect) that they will contract measles. Beyond vaccination, the next-best intervention is to keep those who are infected away from everyone else. The way we do this is by quarantining them.
In public health terms, quarantine specifically refers to the separation of individuals who have been exposed to an infectious agent, *but are not yet ill themselves,* from the rest of society. That way, they’re unable to spread the infection to others. Quarantine makes the most sense when individuals can transmit the infection before they realize they’re sick, which is exactly the case with measles. Infected individuals can spread the virus fully 4 days before the characteristic rash starts to appear, and continue to spread it for another 4 or so days after the rash begins—potentially infecting a lot of people. The problem is, like Ylsa Tellez, they’ll feel fine while they’re out there in the general population. They don’t even have to be coughing or sneezing to spread it (symptoms which can appear prior to the rash)—they can just be breathing (something many of us like to do on a regular basis), and still contaminate their environment with the measles virus.
The difference in transmissibility also makes measles a very different situation from Ebola. Public health officials almost universally condemned quarantine for Ebola exposures, for two reasons: 1) Ebola wasn’t highly transmissible, and isn’t airborne like measles is; and 2) because Ebola isn’t efficiently transmitted until late in the infection when the patient is very ill and likely bedridden. Quarantining Ebola patients was a political stunt, not a public health necessity.
This is why states have the legal authority to enforce quarantine for infectious diseases: it reduces the risk that asymptomatic, potential disease-spreaders will act as “Typhoid Marys” (Mary Mallon, the original, was likewise an asymptomatic spreader of a deadly disease), which is in the public interest. And while unvaccinated Tellez feels “attacked” and her mother thinks people are being “not nice” when they demand that Tellez submit to quarantine, their choice not to vaccinate has already put many others at risk of disease, and is resulting in the quarantine of many other exposed individuals as well. In the 2011 Utah measles outbreak, 184 people were quarantined and thousands of contacts traced, at an expense of approximately $300,000. The Disneyland outbreak has already spread into 4 states (California, Utah, Washington, and Colorado). Quarantine is one of our tools to stem the epidemic. In our recent outbreak among Ohio Amish, most willingly submitted to quarantine, and over 10,000 doses of the MMR vaccine were administered. Quarantine is undoubtedly a difficult prospect to face, but perhaps if Tellez and others had been vaccinated in the first place, they, and we, wouldn’t be in this situation.
I’ve been asked several times about this NY Post article on the CDC’s “admission” that a sneeze could spread Ebola. The Post (which, I should note, is the least credible newspaper in New York City, for those not familiar with the paper) suggests that the CDC has changed their tune regarding the spread of Ebola.
Except, they haven’t, and this is a ridiculous, trumped-up non-story, passed along not only by the Post but by others of the typical suspects like conspiracy theorist extraordinaire Mike Adams, aka “The Health Ranger” of Natural News.
Here’s what the NY Post claims:
“Droplet spread happens when germs traveling inside droplets that are coughed or sneezed from a sick person enter the eyes, nose or mouth of another person,” the poster states.
Nass slammed the contradiction.
“The CDC said it doesn’t spread at all by air, then Friday they came out with this poster,” she said. “They admit that these particles or droplets may land on objects such as doorknobs and that Ebola can be transmitted that way.”
Of course, no poster is linked in their article, so I feel like I’m playing a game of telephone, trying to figure out just what has been added.
The NY Post article is basically messing up the definition of “airborne,” as I and others have discussed ad nauseam. The kind of contact the NY Post describes above isn’t “airborne,” as measles or chickenpox are, where one can come into a space that had been occupied by an infected person, breathe in the suspended virus, and get ill. With Ebola, you have to have *direct contact* with a person’s secretions. So their entire story (not surprisingly, given their tabloid-y nature) is based on either a purposeful or accidental incorrect definition of just what it means to be “airborne.”
Adams takes it one step further, suggesting that CDC not only misinformed, but revised history; that a poster was “scrubbed” from CDC’s site because it supported “airborne” transmission.
From what I can tell, Adams claims this poster (which he saved) was removed from the CDC site, and replaced by this file. Adams claims that the latter is “entirely empty,” so he may have tried the link before it went live? I have no idea. In any case, the two documents are almost identical in content. Both note that droplet spread can happen, when “germs traveling inside droplets that are coughed or sneezed from a sick person enter the eyes, nose, or mouth of another person” in the first poster, and “droplets that are coughed or sneezed from a sick person splash the eyes, nose, or mouth of another person” in the second poster.
Wow, that’s a sinister difference there.
You can see that both documents still show a picture of doorknobs as possible fomites for transmission (possible in theory, but they’d have to be heavily contaminated by a person late in the disease). It appears that CDC just did a minor redesign of the poster, with the first having an emphasis just on Ebola and the second version trying to be more of an explainer on “air vs. droplet spread,” with Ebola as the example. The content is almost exactly the same: the first portion defines “airborne” spread; the second “droplet” spread; the third focuses on how one protects oneself from getting sick; and the final one clarifies that Ebola is not spread by air, but it could be by droplets. There are minor wording changes as I noted above, but that’s it.
This is nothing new. There’s never been a conspiracy to suggest that droplet transmission can’t happen–but the CDC and others have tried to emphasize that droplet transmission is still direct contact. That’s what people like Adams don’t want to accept. They assume that because those droplets travel via air, it’s “airborne,” using a layman’s term instead of one accepted and used by the scientific community. Now, granted, I understand this can be a source of confusion, as scientific terms frequently are. Virologist Ian Mackay has even solicited ideas for other terms to describe such transmission, to make the difference clearer to the general public. But either way, the usage has been clear from the beginning, and I guarantee Adams understands the difference. He just doesn’t care.
And now I just spent a half hour of my life to uncover that vast governmental conspiracy-that-wasn’t. Not that it will stop Adams or the NY Post from misinforming and driving fear of the virus and distrust of the government, because *that’s what they do.* Adams is making a pretty penny, I’m sure, off of his absurd Pandemic Prevention kits (only $99 or $199! Bargain!). Perhaps I should get into a different and more lucrative business, because if you believe shtick from Adams or the Post over the CDC or, hey, a trained epidemiologist like myself, I just may have a shiny bridge to sell you.
My Great-Grandpa and Granny Beck were, in some ways, ahead of their time. My Grandpa’s mom and step-dad both went through scandalous divorces and then switched partners with another couple: Granny Orpha married Wade, and my Grandpa’s dad, Lee, married Wade’s ex-wife, Edna. Orpha and Wade raised 5 of Orpha’s boys together, and had a daughter after the divorce/remarriage.
By the time I was born, my Granny Beck was in her 80s, and I have only vague recollections of going over to visit her at her home. But I remember hearing about her cooking. I was a picky eater anyway, and my mom once told me she was always afraid to eat Granny Beck’s stew, because it could be rabbit, it could be ‘possum, it could be squirrel, it could be groundhog…you just never knew. I never ate anything over there.
Grandpa Beck used to have coon dogs, and would bring home anything that the dogs would catch. My great-aunt affirmed my mom’s recollection of Granny Beck’s cooking (and Grandpa Beck’s eating):
My mom did cook some pretty weird things. We always had wild game such as rabbit and pheasant, but I do remember when she cooked a raccoon (I didn’t try it!). My dad was the one that would eat anything, and I do mean anything! We used to bring him such things as chocolate covered ants, pickled pigs feet, and pickled rooster combs. He loved them!
Over the weekend, my neighbor sent along some meat packages for us. He had recently gotten back from another hunt and bagged his third deer of the season (you’re allowed four per year in my county). He was grilling when my partner stopped over on the way home, and sent some ground deer (I think–I’ve not opened the package yet), deer steaks, and a still-warm hunk of a deer heart, well done.
All of this is to say that we can eat some really weird things here in the “civilized,” first-world, developed United States.
Why bring this up now? The current Ebola outbreak has brought out all kinds of biased to outright racist views of Africa and disease. Because it’s postulated that the outbreak started with the consumption of or contact with an infected animal—possibly a fruit bat, which the index family noted they do hunt—people have come out of the woodwork to pontificate on how those in Guinea and other countries “brought this on themselves” because of their consumption of “bushmeat,” and that they’re so uneducated and backwards to eat that in the first place–because really, how could people eat that stuff, especially when it could be diseased?
“Is it time that we drag ignorant, superstitious third world Africans kicking and screaming into the 21st century or should we stop giving aid to Africa and let them fend for themselves? Would the later propel the former?”
Even though we do the same. damn. thing. in the United States.
“Bushmeat” is the name given to pretty much any kind of wild game hunted in Africa: bats (obviously a concern given their possible role in Ebola spread and maintenance of the virus), primates, birds, duikers, lizards, crocodiles, various rodents, even elephants, and more.
What do we call “bushmeat” in the US? Or just about everywhere else?
Just “wild game,” or some variation thereof.
In the U.S., we hunt thousands of deer, elk, pheasant, turkey, rabbit, and other animals every year. There are even wild game restaurants that cater to those tastes (though many “wild game” species are actually farmed to some degree). Yet even the bushmeat page at the United States Fish and Wildlife Service ignores the hunting that goes on in the United States, noting that:
Here in the United States, we have laws that control the preparation, consumption, and trade of meat, ensuring that animals are treated appropriately, kept healthy, and sold legally. This is not the case in some countries in Africa and other parts of the world.
This seems to refer mostly to domestically-raised meats, as it’s much harder to police the treatment, health, and sale of hunted animals. Though one needs a license to hunt many animals and generally to fish, laws vary from state to state. Here in Ohio, though a hunting license or permit needs to be obtained for most types of hunting or trapping, and there may be limits on the number of animals of certain species one can kill per season (such as deer and turkey), for most animals there’s merely a daily limit (6 squirrels, 4 rabbits, etc. per day). For other animals, including fox, raccoon, skunk, opossum, weasel, crow, groundhog, and coyote, there is no daily bag limit. So one could, conceivably, live fairly well on a diet of just wild game, given the time and inclination to do so.
Of course, most people in the U.S. don’t get our food this way. We look at Daryl Dixon of The Walking Dead and his squirrel-hunting prowess as something that could carry one through the zombie apocalypse, but not school lunches for a family of 4. We think it’s awesome when he finds an opossum in a cupboard and proclaims, “Dinner!” I’m sure many readers have their own apocalypse survival plans, which likely involve some kind of wild food source.
But in modern-day Africa, such hunting is somehow “barbaric” and “backward,” regardless of whether it is for sustenance or trade.
Though Ebola has not been identified in wild animals in the US, our animals are far from disease-free. No wild (or domesticated) animal is. We certainly can find Tularemia and Pasteurella in rabbits; deer can carry tuberculosis, Brucella, and Hepatitis E, and maintain transmission of Lyme disease and potentially Ehrlichia. Other zoonotic pathogens that could be acquired from a variety of wild animals include Campylobacter, E. coli, plague (mainly in the Southwestern United States), Cryptosporidium, Giardia, avian influenza from waterfowl, rabies (more likely from handling than ingestion), hantavirus, Trichinella, Leptospira, Salmonella, Histoplasma, and I’m sure many more from handling or consumption of wild animals.
So perhaps rather than looking to countries in Africa and judging their food consumption habits as they relate to infection, we should turn a mirror to our own. If we don’t judge Granny Beck for her wild game consumption, neither should we judge those a continent away.