HIV’s supposed “Patient Zero” in the U.S., Gaétan Dugas, is off the hook! He wasn’t responsible for our outbreak!
This is presented as new information.
It is not. And I think that by focusing on the “exoneration” of Dugas, a young flight attendant and one of the earliest diagnosed cases of AIDS in the U.S., these articles (referencing a new Nature paper) are missing the true story in this publication: that Dugas was really a victim of Shilts and the media, and remains so, no matter how many times the scientific evidence has cleared his name.
First, the idea that Dugas served to 1) bring HIV to the U.S. and 2) spark the epidemic and infect enough people early on that most of the initial cases could be traced back to him is simply false. Yes, this was the hypothesis based on some of the very early cases of AIDS, and the narrative promoted in Randy Shilts’s best-selling 1987 book, “And the Band Played On.” But based on the epidemiology of first symptomatic AIDS cases, and later our understanding of the virus behind the syndrome, HIV, we quickly understood that one single person in the late 1970s could not have introduced the virus and spread it rapidly enough to lead to the level of infections we were seeing by the early 1980s. Later understanding of the virus’s African origin and its global spread made the idea of Dugas as the epidemic’s originator in America even more impossible.
When we think of Dugas’s role in the epidemiology of HIV, we could possibly classify him as, at worst, a “super-spreader”: an individual who is responsible for a disproportionate amount of disease transmission. Dugas acknowledged sexual contact with hundreds of individuals between 1979 and 1981, but his numbers were similar to those of other gay men interviewed, who averaged 227 partners per year (range 10-1560). And while Shilts portrayed Dugas as a purposeful villain, actively and knowingly spreading HIV to his sexual partners, that jibes with neither our scientific knowledge of HIV/AIDS nor the assistance Dugas provided to scientists studying the epidemic. Dugas worked with researchers to identify as many of his partners as he could (~10% of his estimated 750), as the scientific and medical community struggled to figure out whether AIDS stemmed from a sexually transmitted infection, as several lines of evidence suggested. There’s no evidence Dugas was maliciously infecting others, though that was the reputation he received. Dugas passed away from complications of AIDS in March of 1984, weeks before the discovery of HIV was announced to the general public.
Furthermore, the information in the new publication is not entirely novel. Molecular analyses carried out in part by Michael Worobey, also an author on the new paper, showed almost a decade ago that Dugas could not have been the true “Patient Zero.” The 2007 paper, “The emergence of HIV/AIDS in the Americas and beyond,” reached the same conclusions as the new paper: HIV entered the U.S. from the Caribbean, probably Haiti, and was circulating in the U.S. by the late 1960s–when Dugas was only about 16 years old, and long before his career as a flight attendant traveling internationally. So this 2007 molecular analysis should have been the nail in the coffin of the Dugas-as-Patient-Zero idea.
But apparently we’ve forgotten that paper, or other work that has followed the evolution of HIV over the 20th century.
What is unique about the new publication is that it included a sample from Dugas himself, via plasma he donated in 1983, along with other samples banked since the late 1970s. The new paper demonstrated that Dugas’s sample is not in any way unique, nor is it a “basal” virus–one of the earliest in the country, from which others would diverge. Instead, it was representative of what was already circulating among others infected with HIV at that time. In the supplemental information, the authors also demonstrated how the notation for Dugas in scientific notes changed from Patient 057 to Patient O (for “Outside California”) to Patient 0/”Zero” in the published manuscript–a designation Shilts then attached to Dugas and ran with in his narrative.
The media then extended Shilts’s ideas, further solidifying the assertion that Dugas was the origin of the U.S. epidemic, and indeed that he was outright evil. The supplemental material notes that Shilts initially didn’t want the media campaign to focus on Dugas, but was convinced by his editor, who suggested the Dugas/Patient Zero narrative would draw more attention than drier critiques of the Reagan administration’s policy and inaction in response to the AIDS epidemic.
And the media certainly talked about it. A 1987 edition of U.S. News and World Report included a dubious quote attributed to Dugas: “‘I’ve got gay cancer,’ the man allegedly told bathhouse patrons after having sex with them. ‘I’m going to die, and so are you.’” NPR’s story adds: “The New York Post ran a huge headline declaring ‘The Man Who Gave Us AIDS.’ Time magazine jumped in with a story called ‘The Appalling Saga Of Patient Zero.’ And 60 Minutes aired a feature on him. ‘Patient Zero. One of the first cases of AIDS. The first person identified as the major transmitter of the disease,’ host Harry Reasoner said.”
This is the real scandal and lingering tragedy of Dugas. His story was used to stoke fear of HIV-infected individuals, and especially gay men, as predators seeking to take others down with them. His story was used in part to justify criminalization of HIV transmission. So while science has exonerated him again and again, will the public–and the media–finally follow?
[Obvious warning is obvious: potential spoilers for A Song of Ice and Fire novels/Game of Thrones TV series below].
While no one will claim that George R.R. Martin’s epic series, “A Song of Ice and Fire,” is historically accurate, there are a number of historical parallels that can be drawn from the characters and plotline–particularly from medieval Europe. While most of those relate to epic battles or former monarchs or other royalty, another of Martin’s characters, so to speak, is the disease greyscale (1).
Greyscale is a contagious disease that seems to come in at least two distinct forms: greyscale, an endemic, slow-acting, highly contagious illness that can affect either adults or children; and the grey plague, a rapidly spreading epidemic form that can wipe out entire swaths of cities in a short period of time. Both versions of the illness have a high fatality rate (no exact figures are given, but it seems to be close to 100%, especially in adults). Recovery from greyscale makes one immune to outbreaks of grey plague, so the two seem to be caused either by the same microbe or by very closely related ones.
The Epidemiology of Greyscale
Greyscale is a disfiguring disease. As its name suggests, it transforms the skin into a hardened, scaly tissue. As the skin dies, it becomes grey in color with permanent cracks and fissures. Infection that spreads across the face can cause blindness.
Like many diseases we consider to be “childhood” diseases (measles, mumps, smallpox, chickenpox, etc.), children seem to be spared the worst of the disease and are the most likely to recover from the illness, though recovery still appears to be quite rare. The disease is most common in Essos, but can also be found occasionally throughout Westeros, including north of the Wall (more on that below).
Greyscale is believed to be transmitted primarily person-to-person via direct skin contact. We see this in the books with the infection of Jon Connington and on the TV show with Jorah Mormont, as both characters are transporting/protecting Tyrion Lannister and apparently are exposed to the pathogen during a battle with the Stone Men (2, 3). The Stone Men are victims in the last stage of greyscale infection, where the skin is entirely calcified and there is involvement of muscle, bone, and internal organs, including the brain. Late signs of greyscale infection include violent insanity, leading sufferers to attack anyone who comes near. As these Stone Men are highly feared as sources of the disease, greyscale appears to be contagious for the entire duration of infection, from the development of symptoms to near-death.
If a person has been exposed to greyscale, but is not yet showing symptoms, they can check for impending infection by pricking their toes and fingers each day. Once they’re no longer able to feel the knife, that’s bad news–greyscale infection is likely, as insensitivity to touch is one of the early signs. Once the scaling begins, the victim no longer feels any pain in the affected areas, making the Stone Men essentially invulnerable to pain.
The incubation period of greyscale seems to be very short. As soon as Jorah and Tyrion realize they are safe and the Stone Men are defeated, Jorah rolls up his sleeve and we see that the initial small patch of greyscale has already appeared.
Another prominent victim of greyscale, Shireen Baratheon, is thought to have acquired the disease via contact with a fomite (an inanimate object that serves as a vehicle to transmit an infectious agent between people)–in her case, a beloved wooden doll, clothed in Baratheon House colors, from when she was an infant. Her father, Stannis, implies that this may have been a form of bioterrorism: he received the doll from a Dornish trader on Dragonstone. He tells his daughter, “No doubt he’d heard of your birth, and assumed new fathers were easy targets” (S05E04). “I still remember how you smiled when I placed that doll in your cradle, and you pressed it to your cheek”–the cheek where evidence of greyscale is still present (4).
A number of remedies have been proposed to treat greyscale, but none of them are proven effective. They include treating it with boiling water containing limes; chopping off infected limbs; religious means or magic; and maybe fire–in A Dance with Dragons, Tyrion touches a Stone Man with his torch, and the Stone Man shrieks in pain (even with bone showing through his skin, which apparently doesn’t bother him). Whether fire could be a cure is unclear.
Also in A Dance with Dragons, we read of Tyrion’s musings on treating greyscale: “He had heard it said that there were three good cures for greyscale: axe and sword and cleaver. Hacking off afflicted parts did sometimes stop the spread of the disease, Tyrion knew, but not always. Many a man had sacrificed one arm or foot, only to find the other going grey. Once that happened, hope was gone.” As such, the infectious agent seems to enter into the bloodstream and spread throughout the body at some point during the infection, and at this point, local measures such as amputation are no longer useful. Other home remedies, such as cleansing the infected area with vinegar, are also employed. In fact, Jon Connington, once he realizes he’s been infected, soaks his hand in bad wine instead of vinegar, because he believes that if he asks for vinegar, it will be an obvious “tell” that he has the disease.
In the TV series (S05E04), Stannis says to Shireen regarding her infection, “I called in every Maester in this side of the world, every healer, every apothecary. They stopped the disease and saved your life.” However, no details are given on the show regarding how it was stopped (medicine? magic?), or if a mechanism exists that could be used on an adult instead of an infant. When Daenerys asks Jorah if there is a cure, he tells her simply that he doesn’t know, and she directs him to leave, find one, and return to her.
Largely, those with greyscale are shunned and sent elsewhere, especially to the ruins of Valyria (5), where a whole colony of Stone Men live. Shireen asks Stannis, “Are you ashamed of me, Father?”, understanding that her obvious greyscale scars are a mark of stigma for their entire family. Stannis tells his daughter, “Everyone advised me to send you to the ruins of Valyria to live out your short life with the Stone Men before the sickness spread throughout the castle. I told them all to go to hell.” (Father of the Year before that whole burning stuff, Stannis!)
Similarly, both the books and show note the existence of greyscale beyond the Wall among the Wildlings–and that the free folks’ response to greyscale infection is exile and/or death. In the books, a wildling named Val sees Shireen and notes that Shireen has a condition they call “the grey death,” which is always fatal in children–because they’re given hemlock, a pillow, or a blade rather than being allowed to live. She also suggests that greyscale may become quiescent and return later, saying “The grey death sleeps, only to wake again. The child [Shireen] is not clean.”
On the TV version, the wildling Gilly takes the place of Val, and while she is not as frightened of Shireen’s greyscale, she notes she’s also had experience with the illness. She tells the tale of two of her sisters, who contracted greyscale (exactly how, we’re not told). Though her father did not kill them as Val suggested, Gilly notes that he “made them move out of the keep, into the hut outside. None of them were allowed to go near them, but we heard them, especially at night. They started to sound not like themselves.” Gilly saw them again “only once, at the end. They were covered with it. Their faces, their arms. They acted like animals. My father had to drag them out to the woods on a rope.” Shireen doesn’t find out what happened to them after that, but we can guess it’s not good.
What are some real-life parallels?
Clearly greyscale is another invention of Martin’s that doesn’t quite match up to any real infectious disease (6), and I’ll leave that linked article to summarize some of the pros and cons of the alternative diagnoses. But given the other historical parallels, leprosy (Hansen’s disease) is probably the closest real-life affliction to greyscale, due to the route of transmission (I’ll elaborate on that below), symptoms, incubation period, and particularly the cultural response to those who are affected.
Like those with leprosy, sufferers of greyscale can become disfigured, are considered “unclean” and shuffled off to the far corners of the map, feared and then ignored by their family and friends. Connington, when hiding his infection, noted that “Queer as it seemed, men who would cheerfully face battle and risk death to rescue a companion would abandon that same companion in a heartbeat if he were known to have greyscale”–a similar phenomenon to what still can happen today with stigmatized diseases such as leprosy. A case of greyscale is a source of stigma for both the sufferer (even if they survive, like Shireen) and for the family, as there will always be those who fear contagion.
Though evidence is gathering that leprosy is actually transmitted via the respiratory route (like its cousin, tuberculosis), for centuries people believed it could be spread by touch, as greyscale is. So even though the transmission route for the two diseases really isn’t the same, the *presumption* that leprosy can be spread by touch is still incredibly common. The lengthy period between infection and outward symptoms of the affliction is also similar, taking years from exposure to the final stages of infection that we see in the Stone Men. Leprosy can also take years or decades to progress, and while untreated leprosy is not typically a cause of death itself, it can lead to death indirectly due to secondary infections and other issues.
One of the early signs of leprosy is also numbness in an affected area as nerves are damaged by the infection–the sign Tyrion checked for after his exposure to the Stone Men–along with a general thickening and stiffening of the skin. It doesn’t reach the level seen in the Stone Men (one of the biggest problems with leprosy is actually secondary infections, which can lead to loss of digits or even whole limbs, rather than a whole-body calcification of the skin), but many of the hallmarks of greyscale are very similar to leprosy.
While leprosy is now treatable with antibiotics, it wasn’t all that long ago that we had our own leper colonies in the U.S. (you can read about one of them here, also on a near-deserted island where the afflicted were largely left to fend for themselves with some occasional governmental assistance, similar to Valyria/the Sorrows). Martin himself even notes that Valyria is “like a leper colony.” Leprosy, and its stigma, remains an issue in some countries still today, and the purposeful isolation of those who have leprosy and exclusion from society persists.
However, while there are many similarities, leprosy doesn’t have an epidemic form equivalent to the grey plague. In A Dance with Dragons, it’s suggested that the grey plague wiped out half of Oldtown in the southwest of Westeros, and was only stopped by closing the gates and preventing anyone from entering or leaving. And like the Black Death, the grey plague’s arrival in Pentos (a city in Essos) came by ship, and its spread into the city was possibly aided by rats. So is there an airborne form of greyscale that causes the grey plague? Could it be similar to Yersinia pestis, the bacterium that causes plague: transmitted by rats and fleas (or skin to skin in the case of greyscale) in its milder form, but occasionally ending up in the lungs of an unfortunate victim and spreading via the air after that, causing massive epidemics? Is it zoonotic, spread via rats? Will we see the grey plague on the TV series or not?
While comparisons to other real infections are interesting, my real question is–what is Martin going to do with greyscale? How does it figure into the larger endgame, when we move beyond just a human “Game of Thrones” into the battle for humanity itself against the White Walkers and their army of undead wights? With all the time spent on the affliction in both the books and particularly in the show, there has to be some payoff somewhere, right?
In some ways, the wights beyond the Wall and the Stone Men are similar–undead, or nearly-dead, aggressive hunters of humans, with no sense of humanity left. When we last saw Jorah in the TV version, he had confessed his affliction to Daenerys, and she sent him off to find a cure. Will he find Dany after her arrival in Westeros and bring with him an army of (now healthy?) Stone Men–healed by fire perhaps, to fight against those brought back to life by ice? Will he return to Valyria–an area largely abandoned except as a place of exile for the Stone Men since the Doom centuries ago–and learn the truth of what happened there? Could Valyria provide a key to ending both greyscale and perhaps also the White Walkers? Or is the haunting poem Tyrion and Jorah recited as they rowed down the Rhoyne toward the ruins of the city foreshadowing what’s going to happen to Westeros?
(1) The information provided on greyscale in this article is a mix of literature from the books and the show. Note that the show, to my recollection, hasn’t delved into the grey plague, so information on that malady comes exclusively from the books. Also note that some of the victims of greyscale differ between the books and the show (e.g., Jorah Mormont taking Jon Connington’s place in the TV version).
(2) Though Jorah denies any contact with the Stone Men initially, and it isn’t 100% clear if he was touched during the scene, he does back off from Daenerys when she moves toward him in S06E05, when he discloses his condition (which is now all the way up his forearm). This suggests he does believe he acquired it through direct contact with a Stone Man.
(3) Though these sufferers are uniformly called Stone Men, and the ones seen on-screen appear to be male, presumably there are also Stone Women. Possibly loss of hair as the skin calcifies could lead to a more androgynous look.
(4) I should note there are some alternative views about exactly how Shireen’s greyscale infection was acquired, and about the use of greyscale as a biological weapon.
(5) Or on “the Sorrows” in the novels.
(6) I don’t agree with several things in that article, written by a dermatologist. It concludes based mainly on symptoms and a bit on epidemiology that greyscale is something more like smallpox or HPV and largely rules out a leprosy-like illness. It also notes the potential for an infectious agent that’s only infectious to those with an underlying genetic susceptibility, but I don’t think there’s much evidence to suggest that.
Find other posts in today’s carnival on the science of Game of Thrones!
My Great-Grandpa and Granny Beck were, in some ways, ahead of their time. My Grandpa’s mom and step-dad both went through scandalous divorces and then switched partners with another couple: Granny Orpha married Wade, and my Grandpa’s dad, Lee, married Wade’s ex-wife, Edna. Orpha and Wade raised 5 of Orpha’s boys together, and had a daughter after the divorce/remarriage.
By the time I was born, my Granny Beck was in her 80s, and I have only vague recollections of going over to visit her at her home. But I remember hearing about her cooking. I was a picky eater anyway, and my mom once told me she was always afraid to eat Granny Beck’s stew, because it could be rabbit, it could be ‘possum, it could be squirrel, it could be groundhog…you just never knew. I never ate anything over there.
Grandpa Beck used to have coon dogs, and would bring home anything that the dogs would catch. My great-aunt affirmed my mom’s recollection of Granny Beck’s cooking (and Grandpa Beck’s eating):
My mom did cook some pretty weird things. We always had wild game such as rabbit and pheasant, but I do remember when she cooked a raccoon (I didn’t try it!). My dad was the one that would eat anything, and I do mean anything! We used to bring him such things as chocolate covered ants, pickled pigs feet, and pickled rooster combs. He loved them!
Over the weekend, my neighbor sent along some meat packages for us. He had recently gotten back from another hunt and bagged his third deer of the season (you’re allowed four per year in my county). He was grilling when my partner stopped over on the way home, and sent some ground deer (I think–I’ve not opened the package yet), deer steaks, and a still-warm hunk of a deer heart, well done.
All of this is to say that we can eat some really weird things here in the “civilized,” first-world, developed United States.
Why bring this up now? The current Ebola outbreak has brought out all kinds of biased, even outright racist, views of Africa and disease. Because it’s postulated that the outbreak started with the consumption of or contact with an infected animal—possibly a fruit bat, which the index family noted they do hunt—people have come out of the woodwork to pontificate on how those in Guinea and other countries “brought this on themselves” through their consumption of “bushmeat,” and on how uneducated and backwards they must be to eat that in the first place–because really, how could people eat that stuff, especially when it could be diseased?
“Is it time that we drag ignorant, superstitious third world Africans kicking and screaming into the 21st century or should we stop giving aid to Africa and let them fend for themselves? Would the later propel the former?”
Even though we do the same. damn. thing. in the United States.
“Bushmeat” is the name given to pretty much any kind of wild game hunted in Africa: bats (obviously a concern, given their possible role in Ebola spread and maintenance of the virus), primates, birds, duikers, lizards, crocodiles, various rodents, even elephants, and more.
What do we call “bushmeat” in the US? Or just about everywhere else?
Just “wild game,” or some variation thereof.
In the U.S., we hunt thousands of deer, elk, pheasant, turkey, rabbit, and other animals every year. There are even wild game restaurants that cater to those tastes (though many “wild game” species are actually farmed to some degree). Yet even the bushmeat page at the United States Fish and Wildlife Service ignores the hunting that goes on in the United States, noting that:
Here in the United States, we have laws that control the preparation, consumption, and trade of meat, ensuring that animals are treated appropriately, kept healthy, and sold legally. This is not the case in some countries in Africa and other parts of the world.
This seems to refer mostly to domestically-raised meats, as it’s much harder to police the treatment, health, and sale of hunted animals. Though one needs a license to hunt many animals and generally to fish, laws vary from state to state. Here in Ohio, though a hunting license or permit needs to be obtained for most types of hunting or trapping, and there may be limits on the number of animals of certain species one can kill per season (such as deer and turkey), for most animals, there’s merely a daily limit (6 squirrels, 4 rabbits, etc. per day). For other animals, including fox, raccoon, skunk, opossum, weasel, crow, groundhog, and coyote, there is no daily bag limit. So one could, conceivably, feed themselves fairly well on just a diet of wild game if they had the time and inclination to do so.
Of course, most people in the U.S. don’t get their food this way. We look at Daryl Dixon of The Walking Dead and his squirrel-hunting prowess as something that could carry one through the zombie apocalypse, but not something that could supply school lunches for a family of 4. We think it’s awesome when he finds an opossum in a cupboard and proclaims, “Dinner!” I’m sure many readers have their own apocalypse survival plans, which likely involve some kind of wild food source.
But in modern-day Africa, such hunting is somehow “barbaric” and “backward,” regardless of whether it is for sustenance or trade.
Though Ebola has not been identified in wild animals in the U.S., our animals are far from disease-free. No wild (or domesticated) animal is. We certainly can find tularemia and Pasteurella in rabbits; deer can carry tuberculosis, Brucella, and hepatitis E, and maintain transmission of Lyme disease and potentially Ehrlichia. Other zoonotic pathogens that could be acquired from handling or consuming a variety of wild animals include Campylobacter, E. coli, plague (mainly in the southwestern United States), Cryptosporidium, Giardia, avian influenza from waterfowl, rabies (more likely from handling than ingestion), hantavirus, Trichinella, Leptospira, Salmonella, Histoplasma, and I’m sure many more.
So perhaps rather than looking to countries in Africa and judging their food consumption habits as they relate to infection, we should turn a mirror to our own. If we don’t judge Granny Beck for her wild game consumption, neither should we judge those a continent away.
Yambuku, Zaire, 1976. A new disease was spreading through the population. Patients were overcome by headaches and bloody diarrhea. The disease was spreading through entire families and wiping them out.
Eight hundred and twenty-five kilometers to the northeast, a similar epidemic was reportedly raging across the border in Maridi, Sudan. Were these outbreaks connected? Despite enormous challenges navigating both the logistics of crossing a landscape of unpaved and unmarked roads and the political difficulties of entering and collecting samples in an area marked by recent civil strife, samples were finally collected and shipped to the World Health Organization for testing.
All told, these outbreaks caused 602 cases and 431 deaths. The Zaire outbreak wasn’t stopped until the hospital was closed, because 11 of its 17 workers (65%) had died of the disease. Investigators went door-to-door in 550 villages in the Yambuku area to find and isolate new cases. Roadblocks were set up to restrict access to the area.
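For readers who want to check the arithmetic behind these numbers, the proportions work out as follows (a quick sketch using only the case and death counts quoted above):

```python
# Figures from the 1976 Zaire and Sudan outbreaks, as reported above.
cases, deaths = 602, 431          # combined cases and deaths
staff_total, staff_deaths = 17, 11  # Yambuku hospital workers

cfr = deaths / cases              # case fatality proportion
staff_loss = staff_deaths / staff_total

print(f"Combined case fatality: {cfr:.0%}")            # ~72%
print(f"Yambuku hospital staff deaths: {staff_loss:.0%}")  # 65%
```

A crude calculation, of course: it pools two outbreaks caused by two different viruses, whose individual fatality rates differed considerably.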
In Sudan, a number of cases were traced to workers in a cotton factory (probably due to bat exposure) and their families. The epidemic increased when one case went to the Maridi hospital, and the virus then was transmitted within that hospital. Note what the write-up describes:
“The hospital served as an efficient amplifier from which the virus was disseminated throughout the town. The number of cases gradually increased until mid-September and at the end of the month there was a large number of cases, particularly in hospital staff. The number of cases declined in early October, possibly as a result of the use of protective clothing. A considerable increase in the number of cases was observed in late October and early November, which may have been partly due to a lack of protective clothing when supplies ran out in mid-October.”
In Maridi, the doctor-in-charge, along with 61 members of the nursing staff, came down with Ebola. Thirty-three of them died. Eight additional deaths occurred among the ancillary and cleaning staff. This outbreak was only contained because, again, the hospital was made safer via extensive training and the use of good personal protective equipment, and cases were identified in the town by going door-to-door. Buy-in from local officials was obtained, which is critical–while families may not trust outsiders, they more often will listen to local leaders. Cases were isolated in their homes or taken to the hospital. Eventually every village in a 30-mile radius of Maridi was screened, and the outbreak burned out.
Now imagine you’re looking at this in real time, via 24-hour news networks, from halfway across the world. You’re hearing news reports of cases spiking. Healthcare workers are contracting the disease. You don’t have all the information but you’re coming to your own conclusion that the virus must be mutating in Sudan.
You would, however, be wrong. These outbreaks were actually separate epidemics (and led to the identification of Zaire ebolavirus and Sudan ebolavirus, respectively), but collectively, that was a lot of Ebola virus disease in 1976–the deadliest single year for Ebola until 2014, in fact. It took an enormous effort on the ground in these two areas to stop the outbreaks.
Though not wholly analogous to today’s West African epidemic, there are lessons here to take away. There is a steep learning curve for dealing with Ebola, and besides a single case from the Ivory Coast, Ebola has not historically been a West African disease. Guinea, Liberia, and Sierra Leone do not have a great history of governmental stability, and are still recovering from civil wars, government coups, and a general lack of stable national leadership. Infrastructure is also substandard, as early reports on the main hospital in Conakry, Guinea, noted. Each country seems to be dealing with this largely on its own, without solid cross-border cooperation, and since the borders tend to be porous in any case, patients and those incubating Ebola have been able to travel and move the virus into new areas. The public in general does not understand the disease, and in some cases has kept doctors out with knives and machetes, accusing physicians of murdering their loved ones and bringing Ebola to their villages.
It’s reasons like these–structural and sociological issues, by and large–that have led the WHO to declare the outbreak “out of control.” As far as has been reported, there is nothing particularly notable about the virus itself, which is very closely related to previous Zaire ebolavirus isolates. The infection rate in healthcare workers–about 60 out of 1300 total cases reported at the time–is actually quite low, given the conditions they’re working in and the lack of experience most of them would have had with Ebola. (Again, in Sudan, it was 61 out of 284 cases–so 21% of the total cases were doctors and nurses–versus about 5% in this outbreak.)
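That healthcare-worker comparison is simple division, using only the case counts quoted above:

```python
# Healthcare workers as a share of total cases, figures from the text above.
sudan_hcw, sudan_total = 61, 284            # Maridi, Sudan, 1976
west_africa_hcw, west_africa_total = 60, 1300  # West Africa, at time of writing

print(f"Sudan 1976: {sudan_hcw / sudan_total:.0%} of cases in healthcare workers")  # ~21%
print(f"West Africa 2014: {west_africa_hcw / west_africa_total:.0%}")               # ~5%
```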
The outbreaks in these countries are bad currently, but for the future, we can look at Uganda as a model. The first outbreak in that country, beginning in 2000, resulted in 425 cases and 224 deaths. The second outbreak, in 2008, resulted in 149 cases and 37 deaths. In 2011, there was a single case with no secondary spread. In 2012, one outbreak caused 11 cases and 4 deaths, and a second outbreak later that year caused 6 cases and 3 deaths. It’s probably impossible to stop Ebola from spilling over into the human population, but Uganda has done a great job responding. They are able to detect suspected cases early via their biosafety level 4 lab in Gulu. They alert local authorities if something is suspected, then send a task force to assist with containment. They communicate effectively with the public about what people can do, and about how effective treatment in hospitals can lower the mortality rate. They work with community leaders when a quarantine needs to be put in place. These things can all be employed in West Africa as well, but it takes time and a lot of commitment to get such networks up and running. We need this cooperation as much as we need PPE, and even more than we need “secret serums,” because it is only by preventing new cases that this epidemic will finally die out.
I can hardly do Dr. William Foege justice with a short introduction. He is one of the scientists who led the global smallpox eradication efforts. He developed the concept of ring vaccination, which targeted vaccination to those individuals around a known case of smallpox. This concept really made eradication possible, as it eliminated the need for universal vaccination. Following the success of the smallpox campaign, he has worked tirelessly to increase global vaccination rates. He led the effort to provide low-cost treatments for river blindness, resulting in an immense reduction in that disease in Africa. To read some of his amazing stories from his time in the field, check out his memoir and chronicle of smallpox’s demise, “House on Fire: The Fight to Eradicate Smallpox.” He’s currently a fellow at the Gates Foundation after serving with the Carter Center and spending time as director of the CDC. I could go on and on about his positions and awards, but suffice it to say, he’s a man who knows his stuff.
Dr. Foege was generous enough to answer my questions on vaccination and on smallpox in particular, after I ran across a particularly egregious anti-vaccine article which suggested that use of the vaccine actually increased smallpox cases, and did nothing to eradicate the virus. He also discusses what vaccine supporters can do to promote vaccination.
TS: You have certainly encountered resistance to vaccination in your day, and much misinformation about and fear of vaccinations. How did you overcome this misinformation and fear when working to eradicate smallpox, and later in your career to increase rates of vaccination overall in the developing world?
WF: The first anti-vaccination movements appeared immediately after Edward Jenner introduced smallpox vaccine in 1796. I start from the premise that parents want to do the right thing for their children, and when they don’t it is because of misinformation rather than evil intent. While resistance was found in many countries, the fear of smallpox overcame many of the problems. When people observed a decreased risk of the disease in those vaccinated it was a powerful message. Now parents don’t always see the disease impact and therefore have a harder time weighing risks.
An overwhelming concern in recent years has been autism. We know the studies are good showing no increased risk in children who have been vaccinated, but parents are often left with the first impression from when that concern was voiced some years ago, and have no way of following the literature. They have been poorly served by Andrew Wakefield. We now know that his article in the Lancet was more than poor science: he fabricated data and received money from a lawyer. He lost his license, and yet he continues to give talks to parents looking for an answer to autism. So he not only lost his license but he lost his way.
The research, by the way, is pointing to something that happens in the second trimester of pregnancy as a cause of autism, not something that happens after birth.
TS: Unfortunately, there is rampant misinformation today even in developed countries. A common anti-vaccine message is that “vaccines didn’t reduce” measles, polio, etc. A recent post even claims that “Small pox had greatly declined before the vaccine, increased after the vaccine in westernized countries, and was effectively eradicated in third-world countries due to the surveillance and containment quarantine program. The small pox vaccine was actually flawed, deadly, and ineffective, killing many and inflicting even more with serious adverse reactions. Small pox eventually exterminated itself when people had access to clean water, good food, clean living conditions, and proper hygiene.” (Source) As one who led smallpox eradication efforts in Africa and India, can you address the claim that what led to the elimination of smallpox was hygiene and quarantine rather than vaccination?
WF: I would never speak against clean water, good food, clean living conditions and proper hygiene, but that is not what eradicated smallpox. Smallpox was almost a universal disease in Europe at the time the vaccine was developed in 1796. Even in the 20th century there were an estimated 300 million deaths from smallpox worldwide. The surveillance/containment strategy was based on using vaccine and getting it to the people at immediate risk because they were in the vicinity of people with smallpox. Quarantine of smallpox patients in their homes was part of the strategy in order to reduce the number of people they could expose. Visitors were allowed into their homes but only after being vaccinated. The science of vaccine preventing smallpox is so abundantly clear that it is difficult to imagine one looking at the evidence and reaching the conclusions cited above. The same is true of measles and polio. Measles killed large numbers of children in Africa and Asia before vaccine was introduced. The number of measles deaths was over 3 million a year in the early 1960’s and has been reduced by over 90% with the introduction of vaccine. The people who ascribe the reductions of cases in measles, polio and smallpox to hygiene provide proof to Mark Twain’s comment that people who don’t read have no advantage over people who can’t read.
TS: Another common anti-vaccine claim is that “herd immunity is a myth.” Would smallpox eradication have succeeded if this was the case?
WF: Herd immunity is a complex subject. There is no percentage of protection that will automatically protect others. For example, 90% smallpox vaccination in a state such as Bihar, India, would still allow more susceptible people per square mile than 10% smallpox vaccination in most of the United States. Measles virus is so contagious that it seeks out susceptible children with tenacity. Introduce a person with smallpox to a room of susceptible children and only about a third will have smallpox one incubation period later. Do the same with a case of measles and 80% will have the disease one incubation period later.
However, the concept is correct that increasing the number of persons protected in the population will decrease the chances that an organism will be passed on. Not only is the vaccinated person protected but they can’t pass on the disease to others. At high levels of coverage the remaining population receives great protection. With surveillance/containment in smallpox eradication, our objective was to vaccinate all contacts of a person with smallpox and in that sense attempt to get 100% of close contacts protected, a form of herd immunity in a small defined group.
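Dr. Foege’s point about contagiousness and coverage can be made semi-quantitative with the textbook herd-immunity threshold, 1 - 1/R0: the fraction of a population that must be immune so that each case infects, on average, fewer than one other person. A minimal sketch, using rough literature estimates of R0 rather than any figures from the interview:

```python
# Textbook herd-immunity threshold: 1 - 1/R0. The R0 values below are
# rough literature estimates (my assumptions, not from the interview).

def herd_immunity_threshold(r0):
    """Fraction immune needed to push the effective reproduction number below 1."""
    return 1 - 1 / r0

rough_r0 = {
    "smallpox": 6,   # often quoted as roughly 5-7
    "measles": 15,   # often quoted as roughly 12-18
}

for disease, r0 in rough_r0.items():
    print(f"{disease}: R0 ~ {r0} -> ~{herd_immunity_threshold(r0):.0%} immune needed")
```

The far higher threshold for measles is one way to see why, as he puts it, the virus “seeks out susceptible children with tenacity”–and, as he notes, these are only population averages: local density and mixing mean no single percentage automatically protects everyone.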
TS: You were instrumental in increasing vaccination rates in developing countries in years past. What are your thoughts on those who sit in a position of privilege in the U.S., eschewing vaccines and declaring vaccine-preventable diseases “harmless” while many of them have never seen a case of polio or measles?
WF: It would be nice to be protected from both the diseases and the small risks of vaccines. But it doesn’t work that way. The social contract requires that we all participate or the diseases will come back. Every pregnant woman in this country can expect that her child will not have Congenital Rubella Syndrome. Why? Because other children have been vaccinated against rubella, and while they get no direct benefit, they have stopped the transmission of the virus to a fetus. It is a social contribution they can be proud of providing. Parents who withhold vaccines from their children are doing their children a disservice. They put others at risk if their children get a vaccine-preventable disease, and they have also put their children at increased risk if they want to travel abroad as students or later as adults. It has turned out to be a heavy burden for parents who have lost their children in recent measles outbreaks. Indigenous measles has disappeared in this country, so every case can be traced to importations. But we have continuous importations, and at times we have had large outbreaks because so many children are unvaccinated.
TS: In your campaigns against smallpox, for global vaccinations, and for the distribution of ivermectin to treat river blindness, it seems like you have frequently played the role of diplomat in addition to scientist/physician: bringing together people from opposing “turfs” to come to a mutually beneficial understanding and outcome. Do you have any advice for those of us working on raising vaccine awareness in the current U.S. climate, and how we can work to better play the role of diplomat as well?
WF: First, know what we all want: our children to be healthy. Both sides can’t be right (although, as Lincoln pointed out, they could both be wrong!), so what information is needed to reach a decision? Immunization rates have improved on Vashon Island because parents got together to try to understand why opinions differed. My belief is that Andrew Wakefield so poisoned the well that the facts have to be explained before there can be common ground.
I often tell students that health leadership today is found not in a title but in a person who can make a coalition work effectively. We need many coalitions discussing this at the local level to discover what information people opposed to vaccination need. There has been a movement by some pediatricians to say they can’t care for children who are not immunized. I understand the feeling, but I feel those are exactly the children who need care, because they already have a strike against them: parents who don’t understand the science.
You can make a great contribution by making the science clear, encouraging feedback and providing reassurance to the parents who need support. Autism is such a difficult burden for parents that they want answers. They need help in getting what is known about the science. They are not helped by erroneous information.
Eleven years ago, two scientists made a bet. One wagered that a new type of antimicrobial agent, called antimicrobial peptides, would not elicit resistance from bacterial populations treated with the drugs. Antimicrobial peptides are short proteins (typically 15-50 amino acids in length) that are often positively charged. They are part of our body’s own innate immune system, and are present in other species from bacteria to plants. It is thought that these peptides work primarily by disrupting the integrity of the bacterial cell membrane, often by poking holes in it. Sometimes they work with the host to ramp up the immune response and overwhelm the invading microbe. Because the peptides frequently target the bacterial membrane, it was thought that resistance to these drugs would require a fundamental change in membrane structure, making it an exceedingly rare event. Therefore, these antimicrobial peptides might make an excellent weapon in the fight against multidrug-resistant bacteria.
Additionally, the remarkable diversity of these peptides, combined with the presence of multiple types of peptides with different mechanisms of action present at the infection site, rendered unlikely the evolution of resistance to these molecules (or so some reasoning went). However, evolutionary biologists have pointed out that therapeutic use of these peptides would differ from natural exposure: concentration would be significantly higher, and a larger number of microbes would be exposed. Additionally, resistance to these peptides has been detailed in a few instances. For example, resistance to antimicrobial peptides has been shown to be essential for virulence in Staphylococcus aureus and Salmonella species, but we didn’t *witness* that resistance develop–therefore, it might simply be that those species have physiological properties that render them naturally resistant to many of these peptides, and were never susceptible in the first place.
As Zasloff put it in 2002, studies both in the laboratory and in the clinic confirm that emergence of resistance against antimicrobial peptides is less probable than observed for conventional antibiotics, providing the impetus to develop antimicrobial peptides, both natural and laboratory-conceived, into therapeutically useful agents.
Certainly in the short term, resistance may be unlikely to evolve for reasons described above. However, if these peptides are used over an extended period of time, could the mutations necessary to confer resistance accumulate? This was the question asked in a new study by Dr. Zasloff along with colleagues Gabriel Perron and Graham Bell. Following publication of his 2002 paper where he called evolution of resistance to these peptides “improbable,” Bell challenged Zasloff to test this theory. Zasloff took him up on the offer, and they published their results in Proceedings of the Royal Society.
Zasloff had egg on his face. Resistance not only evolved, but it evolved independently in almost every instance they tested (using E. coli and Pseudomonas species), taking only 600-700 generations–a relative blip in microbial time. Oops.
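To get a feel for why a few hundred generations is plenty of time, here is a toy deterministic model of a resistance allele spreading under serial-passage selection. This is my own illustrative sketch, not the actual Perron/Zasloff/Bell protocol, and the mutation rate and selection coefficient are assumptions:

```python
# Toy deterministic model of resistance evolution under serial passage
# (an illustrative sketch -- not the Perron/Zasloff/Bell experiment).
# Each generation, a tiny fraction of cells mutates to resistance; under
# peptide selection, resistant cells grow (1 + S)-fold faster.

MU = 1e-9   # resistance mutation rate per cell per generation (assumption)
S = 0.1     # selective advantage under the drug (assumption)

freq = 0.0  # frequency of resistant cells in the population
for generation in range(1, 701):
    freq += MU * (1 - freq)                 # new mutants arise
    freq = freq * (1 + S) / (1 + S * freq)  # selection at each passage
    if freq > 0.99:
        break

print(f"resistance exceeds 99% by generation {generation}")
```

With these assumed parameters the sweep completes in a few hundred generations, comfortably within the 600-700 the experiments required; the point is qualitative, not a fit to their data.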
Well, everything old is new again. A very similar claim has been making the rounds recently, originating from the press release for a new paper claiming to have found bacteria’s “Achilles’ heel” and advancing the claim that “Because new drugs will not need to enter the bacteria itself, we hope that the bacteria will not be able to develop drug resistance in future.” A grand claim, but history suggests otherwise. It was once argued that bacteria could not evolve resistance to bacteriophage, since the ancient interaction between viruses and their bacterial hosts surely must have already exploited and overcome any available defense. Today, a plethora of phage-resistance mechanisms are known.
Alexander Fleming, who won the 1945 Nobel Prize in Physiology or Medicine, tried to sound the warning that the usefulness of antibiotics would be short-lived as bacteria adapted, but his warnings were (and still are?) largely ignored. There is no “magic bullet;” there are only temporary solutions, and we should have learned by now not to underestimate our bacterial companions.
Off the southeastern coast of Australia lies a small island that in the 1700s and 1800s was inhabited by the very worst of Europe’s criminals, and is now the only natural home in the world of a species named after the devil himself. Nearly two centuries later, beginning in 1996, Tasmanian devils were going about their nocturnal lifestyle in normal devilish fashion: feasting on small mammals and birds, finding mates and reproducing, occasionally fighting with one another, and so on. (1) Just as criminals divvied up their booty hundreds of years before, the devils were sharing something of their own, only something of far less value. It turns out they were transmitting to one another a rare and contagious form of cancer known as Devil Facial Tumor Disease, or DFTD. An infected devil developed facial tumors and faced 100% mortality, most often due to inability to eat or airway obstruction. Over the last 17 years, this highly contagious and fatal cancer has eliminated over half of the devil population throughout Tasmania. (2)
DFTD is not alone when it comes to transmissible forms of cancer. For over six thousand years, dogs, jackals, wolves, and coyotes across the globe have experienced their own “contagious” cancer in the form of canine transmissible venereal tumor (CTVT), also called Sticker’s sarcoma. (2) CTVT, first described in the mid-1800s, is generally considered the first known malignant cell line. These growths, like DFTD, can spread from one individual to the next, but in the case of CTVT this most commonly occurs during coitus or by licking or biting infected areas; as a result, CTVT lesions usually establish in or near the genitals. CTVT is unusual in that only an estimated 7% of cases metastasize, versus 65% of DFTD cases. CTVT rarely causes severe clinical illness and instead nearly always regresses on its own. (3)
So what is it that makes DFTD and CTVT so “contagious”? Essentially it boils down to host immunity. In the case of DFTD, devils pass on tumor cells when they are in close physical contact with others during mating or fighting. The Tasmanian devil population simply lacks the genetic diversity to immunologically recognize and ward off the tumor, and thus these highly virulent and metastatic cells set up camp in the new host’s tissue and invade in no time. Interestingly, studies have shown that DFTD cells are unique, containing only 13 chromosomes instead of the 14 found in normal devil cells. Technology has also shown that the very same cell line that began the DFTD devastation, thought to be of Schwann cell origin, is the very one being transmitted throughout devil populations today. (2)
In contrast, CTVT, a histiocytic tumor (4), affects placental mammals rather than marsupials; dog populations have much greater genetic diversity and a more advanced capability to detect foreign and potentially invasive cells. This is due to MHC class I molecules (major histocompatibility complex molecules), which help the body’s immune system recognize foreign substances. CTVT transmits so effectively because it downregulates these MHC class I molecules, effectively “hiding” the invasive cells from the body’s immune system. At some point, however, this mechanism is overcome, and in immunologically sound animals the CTVT is recognized and killed by the body. (2)
What about transmissible cancer in humans? The good news is that no comparably contagious killer cancer has been recognized in humans, compared to what devils in the “land down under” are experiencing. The bad news is that there are forms of cancer affecting humans that result from contagious agents: an estimated 15% of tumors worldwide are attributable to infectious pathogens, mainly viruses but also bacteria and parasites. Most documented cases of cancer transmission in humans appear in individual case reports, however, highlighting the rarity and unlikelihood of this occurrence. (2) Nonetheless, it still occurs. Hepatitis B and C viruses, herpesviruses, human immunodeficiency virus (HIV), and papillomaviruses are just a few examples of viruses that can lead to cancer in patients or predispose them to tumor formation. Bacterial etiologies include members of the genera Chlamydia, Helicobacter, Borrelia, and Campylobacter. There are also a few parasites classified as Group I and Group II carcinogens, including members of the genera Schistosoma, Opisthorchis, and Clonorchis. So really, “contagious cancer” in humans is due to contagious or infectious etiologies, not direct transmission of cancer cells, although there are documented and potential exceptions, including cancer spread through tissue grafts, organ transplants, papillomavirus transmission during sexual intercourse, and other isolated events. (1)
At the end of the day, the presence, history, transmission, and pathogenesis of transmissible cancers in Tasmanian devils, dogs, and the few documented human cases provide insight into the immune mechanisms that do, and do not, allow cancer to develop. A key difference here is placental mammals versus marsupials: the genetically diverse dog population is better able to recognize and overcome transmitted tumor cells and other foreign invasions. A better understanding of both CTVT and DFTD has allowed, and will likely continue to allow, researchers better insight into how various types of cancer evade the immune system. (1)
No, this isn’t a clip from a science fiction movie. Although dramatic, this does occur in the brains of some people and animals on our home planet. What is a prion, you ask? Prions are almost as mysterious to the scientists who research them as they are to me, you, and the neighbor down the street. “Prion” is the term for an abnormal, misfolded, and particularly destructive form of a protein found in the brain. Proteins are the building blocks of the muscles and tissues of our bodies, combining to perform different functions. Prions, in their most basic form, are just abnormal proteins. They create damage by causing neighboring proteins (in the brain tissue) to become abnormal and misshapen as well. Like a house of cards, the structure of the brain tissue begins to break down. When pathologists look at tissue samples from brains affected by prions, they find multiple holes, like a cut section of sponge, giving this condition the medical term “spongiform encephalopathy.”
Microscopic picture of affected brain tissue from Wikipedia commons
Computerized model of prion from Wikipedia commons
Diseases caused by prions are generally termed TSEs (transmissible spongiform encephalopathies). Some of these diseases occur spontaneously and some are transferred from one infected animal (or human) to another. We know that prions can be passed from mother to offspring, through bodily fluids, or by ingestion of or exposure to brain or spinal tissue of an infected animal or human. (Zombie fanatics, may I please have your attention back?)
Prions are unique because, like cancer cells, the body can’t recognize them as abnormal and mount an immune response against them. Also, we have no effective way of testing for prions until after the patient has died. A third concern is that prions cannot be destroyed or controlled by the chemicals, medications, and treatments we have available for other diseases. They are very hardy and seem to be unaffected by traditional and even extreme means of disinfection, like ionizing or ultraviolet radiation and even formalin.
The Prion Family
TSE’s have been found in humans, cervids (elk, deer, moose), mink, sheep and goats, and even cats. Listed below are several.
Scrapie was first identified in the United States in 1947 and causes progressive neurologic and behavioral changes in sheep and goats that eventually lead to death. There is no treatment, and the only way to diagnose it is post-mortem examination of brain tissue; however, a test has been developed to identify susceptible live animals. Scrapie is thought to be transferred from mother to offspring during pregnancy.
The USDA began actively monitoring this disease in 2003 and started the RSSS (Regulatory Scrapie Slaughter Surveillance) program. This voluntary program allows flock owners to test for and identify susceptible animals and be compensated for having those animals removed from the flock. Animals are also tested at the time of slaughter, and any infected animals are traced back to their producers, allowing them to further identify infected or exposed animals in their flocks for voluntary removal. Research is ongoing to find better ways to detect infection in live animals. In laboratory environments, scientists have been able to transmit scrapie to other species, but there is no evidence linking its potential spread to humans.
CWD (chronic wasting disease) was first seen in the U.S. in 1967, but it wasn’t until 1978 that a prion was discovered to be the cause. Similar to other TSEs, this disease causes a progressive, fatal neurologic disease in deer, elk, and moose. Much less is known about CWD, but it is thought to be spread by direct contact with saliva, feces, or urine from infected animals. Through surveillance programs started in 2002, the disease has been found in both wild and captive deer, elk, and moose in multiple states. Research and control are much more difficult since the vast majority of these animals are wild. There is no evidence of this condition spreading to humans.
BSE (bovine spongiform encephalopathy), or Mad Cow Disease, has been a hot topic over the past 15 years because of its possible link to humans. Just like scrapie and CWD in their respective species, BSE causes a fatal TSE in cows. The name “Mad Cow disease” came from the symptoms observed in some cows with the disease. This prion disease is transmitted through ingestion of brain or nervous system tissue from infected animals. It used to be common practice to feed ground bone meal from deceased animals to live animals as a mineral source, but since the discovery of this route of transmission, the practice has been almost universally discontinued. Unlike TSEs in other species, BSE has been linked by the medical and research community to disease in humans.
There are several types of prion disease in people. These TSEs are much more studied and researched because they affect humans. All of these conditions are TSEs but vary in how they are transmitted and in their symptoms. CJD (Creutzfeldt-Jakob disease) has been recognized since the 1920s, and Kuru was discovered in cannibalistic tribes in New Guinea in 1957. The origin of prion disease in humans ranges from sporadic (unknown cause) to iatrogenic (inadvertent transplant of infected tissue, like corneal grafts), genetic predisposition (familial CJD), and even ingestion of infected tissue (Kuru, via human cannibalism, and variant CJD, via infected cattle). The onset and duration of symptomatic disease vary among these types, but all are fatal.
Overall, prion disease in humans is rare, with 1-2 deaths per 1 million people worldwide, but it does tend to be more common after the age of 50, with 3.4 deaths per 1 million.
Am I going to get this disease?
Probably not. Statistically, TSE is rare in humans. You are much more likely to develop any number of common types of cancer or even be struck by lightning than to develop prion disease.
Then why are prions a concern?
They are scary because they look normal to your body, we can’t test for them while the person or animal is still alive, and they are not susceptible to any of the treatments we have available. TSEs are rare in people but always fatal.
Is there anything I can do to protect myself?
If you’re a cannibal, please seek intervention right away. For wild game hunters, the CDC recommends consulting state health departments regarding precautions in states that have cases of CWD. You should generally avoid hunting sick or abnormal-appearing animals, reduce exposure to brain or spinal tissue, and wear gloves. Beyond that, there is little you can do to personally limit your risk.
Where can I find more information?
There are many good resources, but there are many bad ones too. Good places to consult are the CDC (Centers for Disease Control and Prevention), APHIS (Animal and Plant Health Inspection Service), and your local and state public health services.
Should I avoid contact with types of animals that can have prion disease?
No, beyond the normal precautions around wildlife. We have no evidence that prions are transmitted from animals to humans other than with BSE, as mentioned above. You are at more risk of being trampled by an animal than of catching prions from it.
Will RG, Ironside JW, Zeidler M, Cousens SN, Estibeiro K, Alperovitch A, Poser S, Pocchiari M, Hofman A, Smith PG (1996). A new variant of Creutzfeldt-Jakob disease in the UK. Lancet 347(9006):921-5.
Johnson RT (2005). Prion diseases. The Lancet Neurology 4(10):635-42.
In many areas of the country, there is a vile bloodsucker that lurks in our forests, our parks, and even our backyards. What concerns us is not what this creature takes but what it leaves in our body after it bites us: corkscrew-shaped bacteria called spirochetes, with the name Borrelia burgdorferi. When the bacteria invade our bodies and cause problems along the way, we call it Lyme disease.
It is Lyme, not “Lymes,” disease, and here’s how it got that name. In the early 1970s, a large number of cases emerged involving children with a “bull’s-eye” rash followed by arthritis, concentrated in a small area in and near Lyme, Connecticut. Initially, the cause of the disease was unknown. A clue to the mystery was that most of the kids lived near wooded areas. After more investigation, ticks that feed on deer were identified as likely suspects. The medical community learned that the “deer tick” transmitted the spirochete bacteria that were likely infecting the children and causing their symptoms. A researcher named Willy Burgdorfer helped identify the organism, and in honor of his contribution the bacterium was named Borrelia burgdorferi. Wouldn’t it be fun to have a nasty bacterium named after you?
Signs of Lyme disease can vary from a mild rash to serious pain and disability. If infected, most people develop a “bull’s-eye” rash because of the inflammation left in the trail of the migrating bacteria: they move away from the bite site, leaving the classic target appearance. When this bug spirals through your joints, organs, and tissues, it can cause damage and a wide range of symptoms, including fever, headache, lethargy, stiffness, and general soreness. In some cases, more serious and long-term problems with swollen joints, arthritis, Bell’s palsy, and even heart disease can result. The symptoms can come and go and may last a lifetime. This is one serious problem if you have the disease-spreading tick in your environment. Most people refer to the species as the “deer tick” or the “black-legged tick,” although its proper name is Ixodes scapularis. Don’t forget about your dogs either. They are also commonly infected in endemic states, can get permanent arthritis, and can rarely even die from the disease.
Lyme disease has been diagnosed in all 50 states but is heavily concentrated in the Northeast and upper Midwest. Approximately 96% of cases come from only 13 states. CDC data by state, maps, and disease-forecast models show a clearly increasing trend. Why such a steep increase in the number of cases? There are probably many reasons. First, surveillance is probably better now than it was 15 years ago; we simply weren’t looking for it as much then. Another contributor is the increased population of the white-footed mouse in some regions. This rodent is a reservoir for the bacteria, meaning it harbors the bug until tick larvae come for a meal. The mice are like a bank filled with Borrelia, ready for every tick to make a withdrawal. Once the tick has the bug in its gut, it is a loaded gun.
The recent increase in the white-footed mouse population may be the result of a cascade of events. “Change one thing. Change everything.” Ohio reported two recent “banner years” for acorn production, with a 36% increase in white oak acorns between 2011 and 2012. A more abundant supply of “mouse food” in the form of acorns could mean more mice, because they reproduce much more efficiently when food is plentiful. More white-footed mice mean more banks filled with Borrelia, and ticks have a much better chance of making a withdrawal with every meal. More ticks with Borrelia mean more animals and humans infected. That’s how more acorns could mean more Lyme disease.
If you are unsure whether Lyme bacteria are in your area, just ask your dog. Some studies suggest that infection rates in local dogs can predict your risk level. For those in the Northeast and upper Midwest, you don’t have to ask; it is ubiquitous in these parts of the country. Veterinarians in many non-endemic states now screen dogs yearly to see if they have ever been infected with Borrelia. If some dogs in your region are positive, you should be more vigilant. The CDC keeps track as well, so check their Lyme page annually.
Preventing Lyme disease can be a very big challenge. For dogs, very effective vaccines are available. So if this is such a potentially devastating disease in people, why don’t we have a vaccine for humans? Well, we did. In the late 1990s, a vaccine was approved by the FDA to aid in the prevention of Lyme disease in humans. While safety and efficacy in the approval studies were good, there were skeptics and strong opponents in the public and the medical community. Some people claimed that the vaccine caused Lyme disease rather than preventing it. Ultimately, the manufacturer withdrew the vaccine from the market, citing poor demand. Allen Steere, the man who first described Lyme disease, also led one of the SmithKline Beecham (SKB) Lyme vaccine trials. He said, “the withdrawal of the SKB vaccine . . . represents the most painful event in our Lyme disease history . . . the vaccine was really withdrawn because of fear and lawsuits, not because of scientific findings.” Some advocates are attempting to rekindle efforts to make a Lyme disease vaccine available again, while others are opposed to the idea.
Tick prevention is our best strategy against Lyme disease. Without a tick bite you cannot get Lyme disease. In fact, even if an Ixodes tick bites you, it takes at least 24-48 hours of attachment to transmit the bacteria into your body. The National Institutes of Health suggest that you follow their fashion advice: tuck your shirt into your light-colored pants, tuck your pants into your socks, and then put tape around the bottom. I’ve never seen this method used in a public place, but I’m sure you will end up on the “People of Wal-Mart” site if you try it. The CDC recommends permethrin on your clothing and DEET on your skin and clothing to repel ticks. The EPA also has a nice online tool. Check yourself every day for ticks during peak months. Deer ticks are tiny, so take some time and inspect your nooks and crannies. The nymph, the second-smallest life stage, is the one that usually infects people. Again, don’t forget about your dog. While he can’t give you Lyme disease, he is also susceptible to it. Ask your veterinarian which flea and tick preventative is right for your pets.
Lyme disease is no longer just a Lyme, Connecticut problem. If it is not yet in your backyard, it could be soon. Lyme disease can have lifelong, debilitating consequences. Arm yourself with information and your body with tick protection. We are not defenseless. Protect yourself. Protect your dog. Please.
Nothing could be worse than watching your seven-year-old lying in a hospital bed fighting for his life after being diagnosed with hemolytic uremic syndrome. Unfortunately, Mary McGonigle-Martin experienced it firsthand as her son, Chris, fought for his life after being poisoned by E. coli O157:H7 found in contaminated raw milk. Like many mothers, Mary was misled into believing the inaccurate “facts” given to her by the farm she purchased raw milk from. Too often across the US, parents are given incorrect information about the safety of the milk they drink, and unfortunately it is often children who pay the price.
Few people today remember a time when raw milk was the only choice. Yet there is now a vocal argument that pasteurization decreases the nutritional value and safety of milk. During the 1800s, Louis Pasteur developed the germ theory, which holds that microbes, like those found in raw milk, can cause infection. He went on to develop the process of pasteurization, which is used to kill bacteria in many foods we consume today, including milk. In the past couple of decades, many consumers have decided that they would rather eat natural and organic foods than those produced by more modern methods. It is frequently believed that these natural foods, such as raw milk, are healthier, which is not the case.
Pasteurizing milk has had many benefits through history. One of its major contributions is the massive reduction in human tuberculosis cases, as the bacterium that causes bovine tuberculosis can also infect humans. Bovine tuberculosis can be spread to humans through contact with an infected animal, but most commonly through ingestion of raw milk. Although the prevalence of tuberculosis in humans in the US has dropped significantly since pasteurization began, a number of other zoonotic pathogens can still be transmitted from milk to humans, including salmonella, campylobacter, listeria, and E. coli O157:H7, all of which can have dangerous or unpleasant consequences or even be fatal. E. coli is possibly the most dangerous, since fewer than 100 organisms are enough to cause infection. It can lead to a dangerous condition called hemolytic uremic syndrome (HUS), which may cause kidney failure. According to the CDC, 148 dairy product-associated outbreaks from 1998 to 2011 were attributed to consumption of raw milk or cheese, resulting in 2,384 illnesses, 284 hospitalizations, and 2 deaths. Today, milk is heated to 161°F for 15 seconds to destroy the bacteria. This is called high-temp, short-time pasteurization. Another form of pasteurization is low-temp, long-time: 145°F for 30 minutes.
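The two regimens above trade temperature against time. A quick sketch converting them to Celsius (the conversion formula is standard; the temperatures and times are the ones quoted in the text):

```python
# The two pasteurization regimens described above, converted to Celsius.

def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

regimens = {
    "HTST (high temp, short time)": (161, 15),       # °F, seconds
    "LTLT (low temp, long time)":   (145, 30 * 60),  # °F, seconds
}

for name, (temp_f, seconds) in regimens.items():
    print(f"{name}: {temp_f}°F ({f_to_c(temp_f):.1f}°C) for {seconds} s")
```

Either way the target is the same: enough heat, held long enough, to kill the pathogens listed above without cooking the milk.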
Proponents of raw milk often argue it has a greater nutritional value than pasteurized milk. According to the CDC, numerous studies have debunked this myth. Many factors are involved when determining the nutritional value of a product. One thing that is analyzed is how readily the body breaks down and utilizes a nutrient. If a mineral or vitamin passes through the body quickly, any loss would be irrelevant. A second analysis is the nutrient’s percent contribution to the recommended daily intake. If people don’t rely on a product as the main source of an essential nutrient, the loss of that nutrient is almost negligible. It is true that some nutrients or enzymes are slightly reduced during pasteurization. For example, lysine is the most relevant essential amino acid found in milk; after heating the milk, only a 1-4% loss of the amino acid was observed [1,5,7]. But as noted above, reduction of nutrients like vitamin C is not considered a significant concern, as milk is not a major source of vitamin C. It would take 20 liters of milk to consume the daily requirement of vitamin C, whether raw or pasteurized [3]. The availability of nutritionally relevant vitamins such as B2 and B12 was found to be affected minimally or not at all by most common heat treatments [2,5]. Finally, while milk is a significant source of calcium and phosphorus, neither is affected by heating the milk [3].
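The 20-liter vitamin C figure is easy to sanity-check with back-of-the-envelope numbers. The daily requirement and per-liter content below are my assumed round values for illustration, not figures taken from the cited references:

```python
# Sanity check of the "20 liters of milk per daily vitamin C requirement"
# claim. Both inputs are assumed illustrative values, not data from the
# references cited in the text.

daily_requirement_mg = 90.0   # assumed adult daily vitamin C requirement
vitamin_c_per_liter_mg = 4.5  # assumed vitamin C content of cow's milk

liters_needed = daily_requirement_mg / vitamin_c_per_liter_mg
print(f"~{liters_needed:.0f} L of milk to meet the daily vitamin C requirement")
```

Under those assumptions the claim checks out: milk is such a poor vitamin C source that any pasteurization loss is beside the point.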
Another raw milk marketing tool some farmers use is to claim that their cattle are grass fed. When cattle were first domesticated, they were raised on a grass diet. As the understanding of animal nutrition grew, farmers discovered that cattle gain weight faster if fed grain, which is economically beneficial. Proponents of grass-fed cattle argue that grain is not their natural food and that faster-than-normal growth is unhealthy. But a grass-fed animal is not necessarily a healthier animal. Nutritionally speaking, it has been hypothesized that grass-fed cattle produce milk with lower fat content. The amount of grain does play some role in fat development, but energy and dietary protein intake and the condition of the pasture being grazed must also be accounted for. In addition, milk fat content is determined by the genetics of the individual cow, whether she has or has had udder infections, and her stage of lactation [3].
In addition, grass-fed cattle don’t necessarily carry a lower bacterial load. Even at the most sanitary facilities, milk is often contaminated during the milking process by fecal material carrying pathogens. According to Dr. Jim Kazmierczak, a public health veterinarian with the Wisconsin Department of Health Services, numerous studies have shown that grass-fed cattle shed E. coli O157:H7 in their feces longer than grain-fed cattle [4], and that “cattle fed a forage diet were O157:H7 culture-positive longer and with higher numbers of bacteria in their feces than cattle fed a grain diet” [6]. Beyond fecal material, E. coli O157:H7 can be found where cattle graze, can survive for many months on environmental reservoirs such as gates, walkways, and water troughs, and is found more frequently during the summer.
We are blessed in the US to have a relatively safe food supply. Sure, there will be occasional food-borne illness outbreaks, but we are fortunate to have the technology, sanitary methods, and capabilities to keep the food we consume as free from disease as possible. The invention of pasteurization reduced the number of illnesses and deaths caused by contaminated dairy products while maintaining the integrity and nutritional value of the milk. But misconceptions about pasteurized milk across the country have led people to make deadly decisions. Mary McGonigle-Martin would never have given her child raw milk had she known it had the potential to harm or kill him. The risks of consuming raw milk are high, and people need to be properly informed before they put their families and themselves in unnecessary danger.
1. Andersson, I., and Öste, R. (1995). Nutritional quality of heat processed liquid milk. In P. F. Fox (Ed.), Heat-induced changes in milk (2nd ed.) (pp. 279–307). Brussels: International Dairy Federation.
2. Burton, H. (1984). Reviews of the progress of dairy science: the bacteriological, chemical, biochemical and physical changes that occur in milk at temperatures of 100–150 °C. Journal of Dairy Research, 51, 341–363.
3. Claeys, W. L., et al. (2013). Raw or heated cow milk consumption: Review of risks and benefits.
4. Hovde, C. J., et al. (1999). Effect of cattle diet on Escherichia coli O157:H7 acid resistance. Appl Environ Microbiol 65:3233–32
5. Schaafsma, G. (1989). Effects of heat treatment on the nutritional value of milk. Bulletin of the International Dairy Federation, 238, 68–70.
6. Van Baale, M. J., et al. (2004). Effect of Forage or Grain Diets with or without Monensin on Ruminal Persistence and Fecal Escherichia coli O157:H7 in Cattle. Appl Environ Microbiol 70:5336–5342.
7. Walstra, P., and Jenness, R. (1984). Dairy chemistry and physics (p. 467). New York: John Wiley & Sons.