Antibiotic resistance: myths and misunderstandings

A pig flying at the Minnesota state fair. Picture by TCS.

I’ve been involved in a few discussions of late on science-based sites around yon web about antibiotic resistance and agriculture–specifically, the campaign to get fast food giant Subway to stop using meat from animals raised on antibiotics, and a graphic by CommonGround, based on Animal Health Institute data, suggesting that agricultural animals aren’t an important source of resistant bacteria. Discussing these topics has shown me there’s a lot of misunderstanding of the issues in antibiotic resistance, even among those who consider themselves pretty science-savvy.

I think this is partly an issue of hating to agree with one’s “enemy.” Vani Hari, the “Food Babe,” recently plugged the Subway campaign as well, which has perhaps made some skeptics newly skeptical of the antibiotics-and-agriculture issue. Believe me, I am the farthest thing from a “Food Babe” fan and have criticized her many times on my Facebook page, but unlike her ill-advised and unscientific campaigns against things like fake pumpkin flavoring in coffee or “yoga mat” chemicals in Subway bread, this is one issue that actually has scientific support–stopped clocks and all that. Nevertheless, I think some people get bogged down in a lot of exaggeration or misinformation on the topic.

So, some thoughts. Please note that in many cases, my comments will be an over-simplification of a more complex problem, but I’ll try to include nuance when I can (without completely clouding the issue).

First–why is antibiotic resistance an issue?

Since the development of penicillin, we have been in an ongoing “war” with the bacteria that make us ill. Almost as quickly as antibiotics are deployed, bacteria develop or acquire resistance to them. These resistance genes are often present on transmissible pieces of DNA–plasmids, transposons, phage–which allow them to move between bacterial cells, even those of completely different species, and spread that resistance. So, once it emerges, resistance is very difficult to keep under control. As such, it’s much better to work to prevent this emergence in the first place, and to provide conditions where resistant bacteria don’t encounter selection pressures to maintain resistance genes (1).
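
To make the selection-pressure point concrete, here’s a toy simulation, entirely my own sketch with made-up parameters rather than anything from the studies discussed here. Two strains compete: a susceptible majority and a tiny resistant minority that pays a small fitness cost for carrying its resistance gene. A brief pulse of antibiotic sweeps resistance to near-fixation, and because the fitness cost is small, the resistant fraction barely declines even after many drug-free generations:

```python
# Toy model of antibiotic selection pressure on a bacterial population.
# Illustrative only: all parameter values are arbitrary assumptions.

def simulate(generations, drug_on, r_freq=0.001, cost=0.02, kill=0.90):
    """Return the resistant fraction after `generations` rounds of growth.

    drug_on -- function of the generation number; True if drug is present.
    cost    -- assumed relative fitness cost of the resistance gene.
    kill    -- assumed fraction of susceptible cells killed by the drug.
    """
    s, r = 1.0 - r_freq, r_freq
    for g in range(generations):
        s_fit, r_fit = 1.0, 1.0 - cost      # resistance costs a little...
        if drug_on(g):
            s_fit *= 1.0 - kill             # ...but the drug hits susceptibles hard
        s, r = s * s_fit, r * r_fit
        s, r = s / (s + r), r / (s + r)     # renormalize to frequencies
    return r

after_pulse = simulate(10, drug_on=lambda g: True)
much_later = simulate(100, drug_on=lambda g: g < 10)  # then 90 drug-free generations
print(f"resistant fraction after a 10-generation pulse: {after_pulse:.3f}")
print(f"resistant fraction 90 drug-free generations on: {much_later:.3f}")
```

This cartoon also previews footnote (1): with only a small fitness cost, simply removing the drug does not quickly remove the resistance.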

In our 75-ish years of using antibiotics to treat infections, we’ve increasingly found ourselves losing this war. As bacterial species have evolved resistance to our drugs, we’ve come back with either brand-new drugs in different antibiotic classes, or slight tweaks to existing drugs so that they can escape the mechanisms bacteria use to get around them. And these resistant bacteria are killing us. In the US alone, antibiotic-resistant bacteria cause about 2 million infections per year and about 23,000 deaths–plus tens of thousands of additional deaths from diseases that are complicated by antibiotic-resistant infections. These infections cost at least $20 billion per year.

But we’re running out of these drugs. And where do the vast majority come from in any case? Other microbes–fungi, other bacterial species–which means that for even brand-new drugs, pre-existing resistance mechanisms may already be out there, just waiting to spread. It’s so bad right now that even the WHO has sounded the alarm, warning of the potential for a “post-antibiotic era.”

This is some serious shit.

Where does resistance come from?

Resistant bacteria can be bred anytime an antibiotic is used. As such, researchers in the field tend to focus on two large areas: use of antibiotics in human medicine, and in animal husbandry. Human medicine is probably pretty obvious: humans get drugs to treat infections in hospital and outpatient settings, and in some cases, to protect against infection if a person is exposed to an organism–think of all the prophylactic doses of ciprofloxacin given out after the 2001 anthrax attacks, for example.

In human medicine, there is still much debate about 1) the proper dosing of many types of antibiotics–what is the optimal length of treatment to ensure a cure, but also to reduce the chance of incubating resistant organisms? This is an active area of research; and 2) when it is proper to prescribe antibiotics at all. For instance, ear infections. These cause many sleepless nights for parents, a lot of time off work and school, and many trips to clinics to get checked out. But do all kids who have an ear infection need antibiotics? Probably not. A recent study found that “watchful waiting” as an alternative to immediate prescription of antibiotics worked about as well as drug treatment for nonsevere ear infections in children–one data point among many suggesting that antibiotics are probably over-used in human medicine, particularly for children. So this is one big area of interest and research (among many in human health) when it comes to trying to curb antibiotic use and employ the best practices of “judicious use” of antibiotics.

Another big area of use is agriculture (2). Just as in humans, antibiotics in ag can be used for treatment of sick animals, which is completely justifiable and accepted–but there are many divergences as well. First, animals are often treated as a herd–if a certain threshold of animals in a population become ill, all will be treated in order to prevent an even worse outbreak of disease. Second, antibiotics can be, and frequently are, used prophylactically, before any disease is present–for example, at times when the producer historically has seen disease outbreaks in the herd, such as when animals are moved from one place to another (moving baby pigs from a nursery facility to a grower farm, as one example). Third, they can be used for growth promotion–to bring animals up to market weight more quickly. The latter is, by far, the most contentious use, and the “low-hanging fruit” that is often targeted for elimination.

From practically the beginning of this practice, there were people who spoke out against it, suggesting it was a bad idea, and that the use of these antibiotics in agriculture could lead to resistance which could affect human health. A pair of publications by Stuart Levy et al. in 1976 demonstrated this was more than a theoretical concern, and that antibiotic-resistant E. coli were indeed generated on farms using antibiotics, and transferred to farmers working there. Since this time, literally thousands of publications on this topic have demonstrated the same thing, examining different exposures, antibiotics, and bacterial species. There’s no doubt, scientifically, that use of antibiotics in agriculture causes the evolution and spread of resistance into human populations.

Why care about antibiotic use in agriculture?

A quick clarification on a common point of confusion–I’m not discussing antibiotic *residues* in meat products as a result of antibiotic use in ag (see, for example, the infographic linked above). In theory, antibiotic residues should not be an issue, because every drug has a withdrawal period that farmers are supposed to adhere to prior to sending animals off to slaughter. These guidelines were developed so that antibiotics will not show up in an animal’s meat or milk. The real issue of concern for public health is the resistant bacteria, which *can* be transmitted via these routes.

Agriculture comes up again and again for a few reasons. First, because people can be exposed to antibiotic-resistant bacteria that originate on farms via food products that they eat or handle. Everybody eats, and even vegetarians aren’t completely protected from antibiotic use on farms (I’ll get into this below). So even if you’re far removed from farmland, you may be exposed to bacteria incubating there via your turkey dinner or hamburger.

Second, because the vast majority of antibiotic use, by weight, occurs on farms–and many of these drugs are the very same antibiotics used in human medicine (penicillins, tetracyclines, macrolides). It’s historically been very difficult to get good numbers on this use, so you may have seen figures as high as 80% of all antibiotic use in the U.S. occurring on farms. A better number is probably 70% (described here by Politifact), which excludes a class of antibiotics called ionophores–these aren’t used in human medicine (3). So a great deal of selection for resistance is taking place on farms, but it has the potential to spread into households across the country–and almost certainly has. Recent studies have also demonstrated that resistant infections transmitted through food don’t always stay in your gut–they can also cause serious urinary tract infections and even sepsis. Studies from my lab and others (4) examining S. aureus have identified livestock as a reservoir for various types of this bacterium–including methicillin-resistant subtypes.

How does antibiotic resistance spread?

In sum–in a lot of different ways. Resistant bacteria, and/or their resistance genes, can enter our environment–our water, our air, our homes via meat products, our schools via asymptomatic colonization of students and teachers–just about anywhere bacteria can go, resistance genes will tag along. Kalliopi Monoyios created this schematic for the above-mentioned paper I wrote earlier this year on livestock-associated Staphylococcus aureus and its spread, but it really holds for just about any antibiotic-resistant bacterium out there:

And as I noted above, once it’s out there, it’s hard to put the genie back in the bottle. And it can spread in such a multitude of different ways that it complicates tracking of these organisms, and makes it practically impossible to trace farm-origin bacteria back to their host animals. Instead, we have to rely on studies of meat, farmers, water, soil, air, and people living near farms in order to make connections back to these animals.

And this is where even vegetarians aren’t “safe” from these organisms. What happens to much of the manure generated on industrial farms? It’s used as fertilizer on crops, bringing resistant bacteria and resistance genes along with it–into our air when manure is aerosolized (as it is in some, but not all, crop applications) and into our soil and water. And as noted below, antibiotics themselves can be used in horticulture as well.

So isn’t something already being done about this? Why are we still talking about it?

Kind of, but it’s not enough. Scientists and advocates have been trying to do something about this topic since at least 1969, when the UK’s Swann report on the use of Antibiotics in Animal Husbandry and Veterinary Medicine was released. As noted here:

One of its recommendations was that the only antimicrobials that should be permitted as growth promotants in animals were those that were not depended on for therapy in humans or whose use was not likely to lead to resistance to antimicrobials that were important for treating humans.

And some baby steps have been made previously, restricting use of some important types of antibiotics. More recently in the U.S., FDA Guidances 209 and 213 were adopted in order to reduce the use of what have been deemed “medically important” antibiotics in the livestock industry. These are a good step forward, but truthfully they are only baby steps. They apply only to the use of growth-promotant antibiotics (those for “production use,” as noted in the documents), and not to other uses, including prophylaxis. There is also no mechanism for monitoring or policing individuals who may continue to use these drugs in violation of the guidelines–they have “no teeth.” As such, there’s concern that use for growth promotion will merely be re-labeled as use for prophylaxis.

Further, even now, we still have no data on the breakdown of antibiotic use by species. We know over 32 million pounds were used in livestock in 2013, but we have no idea how much of that went to pigs versus cattle, etc.

We do know that animals can be raised using lower levels of antibiotics. The European Union has not allowed growth promotant antibiotics since 2006. You’ll read different reports of how successful that has been (or not); this NPR article has a balanced review. What’s pretty well agreed-upon is that, to make such a ban successful, you need good regulation and a change in farming practices. Neither of these will be in place in the U.S. when the new guidance mechanisms go into place next year–so will this really benefit public health? Uncertain. We need more.

So this brings me back to Subway (and McDonald’s, and Chipotle, and other giants that have pledged to reduce use of antibiotics in the animals they buy). Whatever large companies do, consumers are demonstrating that they hold the cards to push this issue forward–much faster than the FDA has been able to do (remember, it took them 40 freaking years just to get these voluntary guidelines in place). Buying USDA-certified organic or meat labeled “raised without antibiotics” is no 100% guarantee that you’ll get meat products free of antibiotic-resistant bacteria, unfortunately, because contamination can be introduced during slaughter, packing, or handling–but on-farm studies of animals, farmers, and the farm environment have typically found lower levels of antibiotic-resistant bacteria on organic/antibiotic-free farms than on their “conventional” counterparts (one example here, looking at farms that were transitioning to organic poultry farming).

Nothing is perfect, and biology is messy. Sometimes reducing antibiotic use takes a long time to have an impact, because resistance genes aren’t always quickly lost from a population even when the antibiotics have been removed. Sometimes a change may be seen in the bacteria animals are carrying, but it takes longer for human bacterial populations to change. No one is expecting miracles, or a move to more animals raised antibiotic-free to be a cure-all. And it’s not possible to raise every animal as antibiotic-free in any case; sick animals need to be treated, and even on antibiotic-free farms, there is often some low level of antibiotic use for therapeutic purposes. (These treated animals are then supposed to be marked and cannot be sold as “antibiotic-free”). But reducing the levels of unnecessary antibiotics in animal husbandry, in conjunction with programs promoting judicious use of antibiotics in human health, is a necessary step. We’ve waited too long already to take it.

Footnotes:

(1) Though we know that, in some cases, resistance genes can remain in a population even in the absence of direct selection pressures–or they may be on a cassette with other resistance genes, so by using any one of those selective agents, you’re selecting for maintenance of the entire cassette.

(2) I’ve chosen to focus on use in humans & animal husbandry, but antibiotics are also used in companion animal veterinary medicine and even for aquaculture and horticulture (such as for prevention of disease in fruit trees). The use in these fields is considerably smaller than in human medicine and livestock, but these are also active areas of research and investigation.

(3) This doesn’t necessarily mean they don’t lead to resistance, though. In theory, ionophores can act just like other antibiotics and co-select for resistance genes to other, human-use antibiotics, so their use may still contribute to the antibiotic resistance problem. Studies from my lab and others have shown that the use of zinc, for instance–an antimicrobial metal used as a dietary supplement on some pig farms–can co-select for antibiotic resistance; in our case, for methicillin-resistant S. aureus.

(4) See many more of my publications here, or a Nature profile about some of my work here.

 

What is the harm in agricultural-use antibiotics?

After this post on antibiotic resistance, many of you may have seen an exchange on Twitter calling me out for being “knee-jerk” about my call to action to do something about the overuse of antibiotics. In that post, I focused on antibiotic use in agriculture, giving only brief mention to human clinical use. There are a number of reasons for this, and while I didn’t discuss them extensively on Twitter, I did want to provide an overview here in order to better explain my position and concern about antibiotic use in agriculture.

How are antibiotics used in animal production?

To start, some background on the issues. Antibiotics are used in agriculture in a number of different ways. As in humans, they’re used to treat disease when animals get sick. This type of use isn’t disputed for the most part–no one wants animals to die from treatable disease, nor do they want sick animals to enter the food chain. Antibiotics can also be used to prevent disease, such as when animals are stressed (as when they’re moved from farm to farm) and disease has a tendency to break out, or if a few animals in the herd are sick and owners want to prevent the rest of the herd from falling ill. This type of use is somewhat controversial; many have argued that it’s only necessary because hygienic conditions on farms aren’t up to snuff–and that if better husbandry were practiced, this prophylactic use could be significantly decreased or eliminated. Others argue that it’s necessary even with good husbandry.

The practice which is most widely disputed is the use of antibiotics for growth promotion. We’ve known for roughly 60 years that animals fed antibiotics at low doses (below the level required for disease treatment) grow to their slaughter weight faster, and therefore with less food input. This is the “low-hanging fruit”: the practice that even some in industry agree could end with pretty much no (or minimal) side effects for the industry, and the practice that the European Union has already ended. It’s also the target of FDA guidance 213, which asks the pharmaceutical industry to voluntarily phase out the use of growth-promotant antibiotics in feed and water given to livestock. Twenty-five of 26 companies have agreed to this already, so again, there’s really not much dispute that this practice will be ending–after over 60 years of use, and 45 years after the UK’s Swann report suggested that rising rates of antibiotic resistance in humans were tied to agricultural antibiotic use. (Maryn McKenna has a great timeline of other developments here).

Why am I (and many others) concerned about the use of antibiotics in agriculture?

First, and most compelling to me, is the fact that between 70 and 80% of all antibiotics used in the United States are used in agriculture. I’m linking to a PolitiFact report because they drill down into the caveats with that number in more detail than I want to go into for this post, but I will note that it’s tough to get good numbers because the industry won’t release them, and that the numbers we do have include drugs that are not used in human medicine–though that doesn’t mean they aren’t important. More on that later.

Second, this is my area of expertise. I study antibiotic-resistant pathogens in the agricultural environment, so naturally this is my interest and where I know the literature best. Third, antibiotic use in agriculture just isn’t as intensively studied when it comes to methods to reduce the antibiotic-resistant microbes that may emerge from this setting. In hospitals and clinics, patients need a prescription to get antibiotics. The amount of antibiotics prescribed is tracked, and those data are available. Hospitals often have stringent infection-control policies in place to reduce the generation and spread of antibiotic-resistant “superbugs.” Hell, there’s enough research on these policies that my colleagues have a blog devoted just to that topic. In human medicine, no one is ignoring the generation and spread of resistant pathogens.

None of these control and monitoring policies are routinely present on livestock farms. Rather, as my colleague Lance Price has noted more than once, if he were trying to create a superbug, farm use of antibiotics–subclinical dosing of thousands of animals at a time–would be an ideal way to do it.

What if we remove “growth promotant” antibiotics?

What remains an issue is what will happen after growth-promotant antibiotic use is stopped. There is already a “natural experiment” going on in the EU, where such antibiotics were banned back in 2006. As I noted here, the results have been mixed when antibiotics have been removed from agricultural practices. Sometimes resistance persists, sometimes it goes down. A modeling paper that examined agricultural antibiotic use suggested that its biggest impact happens before we even detect resistance via surveillance–and that by the time we notice, it may be too late to make much of a difference, which is depressing. So even if antibiotics are banned for growth-promotion purposes, there is a chance that we won’t see much of a dent in antibiotic resistance overall–or if we do, it may take years for resistance to decrease. Some use this as an argument against removal of these sub-therapeutic uses–if we can’t 100% guarantee it will help, why change the status quo?–but at this point, almost anything is better than an ever-increasing arc of resistant bacteria.
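
As a rough illustration of that modeling result, here’s a toy sketch of my own (not the paper’s actual model; the starting prevalence, growth rate, and detection threshold are all invented numbers). If resistance prevalence grows logistically from rare beginnings, almost the entire timeline is spent silently below any realistic surveillance threshold, and prevalence explodes soon after detection:

```python
import math

# Toy logistic model of resistant-strain prevalence over time. Schematic
# only: p0, rate, and DETECT are invented values for illustration.

def prevalence(t, p0=1e-6, rate=0.5):
    """Logistic growth of prevalence from p0 at t=0 (t in arbitrary units)."""
    odds = (p0 / (1 - p0)) * math.exp(rate * t)
    return odds / (1 + odds)

DETECT = 0.05  # assume surveillance reliably flags resistance at 5% prevalence

t = 0.0
while prevalence(t) < DETECT:
    t += 0.1
print(f"detected at t = {t:.1f}, prevalence = {prevalence(t):.3f}")
print(f"ten time units later, prevalence = {prevalence(t + 10):.3f}")
```

In this cartoon, the resistant strain spends roughly the first twenty time units spreading invisibly, then does most of its remaining climb in the ten units after the threshold finally trips.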

Another concern that persists and muddies the waters is that no meaningful reduction in antibiotic use in animals will occur–rather, antibiotics used for growth promotion will simply be repackaged as “prophylactic” use, which will still be allowed under the new guidance. The industry says this won’t happen, but without meaningful and transparent surveillance, how can we know?

Additionally, other sources of low-level antibiotics may still be present on farms and in feed, such as distiller’s grains in animal feed, which may still contain antibiotic residues. And even if antibiotics that are important for human medicine are removed altogether, resistance may still linger or even climb if we allow other classes of antimicrobials (such as ionophores, part of that group I mentioned above that are used in agriculture but not in human medicine) to still be used on farms. Why could this be an issue? Right now, we really don’t know whether any of these drugs co-select for resistance to important human medicines. In some cases, antibiotic resistance genes travel together as cassettes that can move around between bugs, such as on a plasmid or other mobile genetic element. That’s why using tetracycline on a pig farm can select for methicillin resistance–not because the drugs are the same (they’re totally different classes), but because the resistance genes come as a package deal. Is this happening with ionophores? We don’t know. It’s a messy area, and it makes any clear-cut cause-and-effect research very difficult to carry out.
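
The “package deal” logic is easy to sketch in code. In this toy model (again my own, with arbitrary numbers), a plasmid carries linked resistance genes against two unrelated drugs, A and B; dosing with drug A alone enriches resistance to drug B, because the genes travel together:

```python
# Toy illustration of co-selection: a plasmid carries linked resistance
# genes for drug A (tetracycline-like) and drug B (methicillin-like).
# The kill fraction and starting frequencies are arbitrary assumptions.

def expose_to_drug_a(pop, kill=0.9):
    """Drug A kills plasmid-free cells; plasmid carriers are protected."""
    survivors = {
        "plasmid_free": pop["plasmid_free"] * (1 - kill),
        "plasmid_AB": pop["plasmid_AB"],
    }
    total = sum(survivors.values())
    return {k: v / total for k, v in survivors.items()}

pop = {"plasmid_free": 0.99, "plasmid_AB": 0.01}
for _ in range(3):          # three rounds of exposure to drug A only
    pop = expose_to_drug_a(pop)

# Drug B was never used, but resistance to it hitchhiked along:
print(f"fraction resistant to drug B: {pop['plasmid_AB']:.2f}")
```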

To make matters even messier, because there’s so much transport of animals across state, national, and international lines, even if antibiotics are reduced in one place, new resistant bugs could be imported from elsewhere where no reduction in antibiotic use has taken place, mucking up the data and making it appear that antibiotic withdrawal has had no effect.

Furthermore, there is no directive for companies to actually track and report antibiotic usage after growth-promotant antibiotics are removed. We can’t even get good data on the industry as a whole, much less finer-level data describing how much goes to pigs versus cattle, how much is used on Iowa pig farms versus North Carolina ones, or on Smithfield versus Hormel farms, etc. It’s a surveillance nightmare. Even if we did have these data, surveillance of resistant pathogens is quite limited, especially on the farms themselves. Most of the data we have come from NARMS–the National Antimicrobial Resistance Monitoring System–which examines gram-negative pathogens in people, meat samples, and live animals (sampled at slaughterhouses). It’s a start, but what if we don’t see an effect in these organisms, yet would in other commensal organisms, or in the microbiome as a whole? Or in gram-positives like my pet bug, Staphylococcus aureus? NARMS right now would miss those, and so might leave false impressions of how a reduction in antibiotics is really affecting resistance in the bacteria originating on farms.

Soooo….as you can see, this is a messy area. However, as I noted on Twitter, one should look at the totality of the research rather than searching for any particular “smoking gun” publication (a fallacy, I might add, that is employed by many types of science “skeptics”). There have been many, many papers that have shown, usually in ecological studies, that use of antibiotics on the farm is linked to generation of resistant bacteria, and that these bacteria (and associated resistance genes) can spread to humans via food, water, environmental runoff/contamination, air, and other mechanisms. Pew Health has an extensive bibliography of many of these studies here, and it’s barely even scratching the surface when it comes to publications in this field. In the end, though it’s messy, it breaks down to a simple truth: antibiotic use leads to antibiotic resistance, and reduced use is a goal to strive for–be it use in humans or in animals.

Eastern Equine Encephalitis: The Mosquito that bit the Snake

Guest post by Hillary Craddock

Last week a new study regarding Eastern Equine Encephalitis (EEE) was published online (Bingham et al.). EEE is a mosquito-borne virus that can cause serious, and sometimes deadly, disease in humans and equines. In warmer parts of North America, the virus is spread year-round, but in areas where mosquitoes are killed off in the winter, it has been something of a mystery how the virus makes it from year to year. Humans and equines are both dead-end hosts, which means that a mosquito cannot be infected by biting an infected person or horse. Researchers in Alabama found that wild snakes in the Tuskegee National Forest were positive for Eastern Equine Encephalitis virus (EEEV), which could explain how EEE is maintained after the first frosts kill off infected mosquitoes. Essentially, an infected mosquito bites a snake, probably during the summer or early fall, and the snake harbors the virus in its blood over the winter. Then, in the spring, an uninfected mosquito (which overwinters as a larva) bites the snake and acquires the virus. This now-infected mosquito can bite a horse or a human, who can then get sick. (I’m sensing a Chad Gadya theme here. Just me? Ok…)

Amphibians and/or reptiles as the winter reservoir of EEE is not a new research question. A book, Reptiles as possible reservoir hosts for eastern encephalitis virus (which I was unfortunately unable to get my hands on, since apparently only the University of Alberta has an available copy), was published in 1961, and a 1980 study by Smith and Anderson reported that two New England species of turtles could be infected by the virus. Interestingly enough, a 2012 study by Graham et al. (same research group as Bingham et al.) found that, out of 27 species surveyed, only snakes showed high seropositivity (positive for virus antibodies in the blood), while amphibians, turtles, and lizards had low to no seropositivity. A 2004 study by Cupp et al., also in Alabama, found that mosquitoes carrying EEEV had fed on amphibians and reptiles in addition to birds and mammals. Now, it’s all well and good to show that a reptile can act as a host, but just because something can be the host doesn’t mean that it is the host in the actual system. The crucial step was testing the hypothesis in a wild population.

And test they did. The researchers were careful to state that the question of snakes acting as reservoir hosts is “unresolved,” but there is “mounting evidence” that snakes are the winter hosts of the virus. Cottonmouths (Agkistrodon piscivorus) were the most common snake sampled, making up 41% of sampled reptiles. They were also frequently seropositive, with 35.4% testing positive for EEEV antibodies. Of the five species sampled, one other, the copperhead (Agkistrodon contortrix), was also found to be positive. The researchers tested for active infection in addition to antibodies, and found that some snakes were actively infected. This means that, if a mosquito bit one of these snakes, it could possibly acquire the virus and pass it on to other creatures.

So why am I so excited? When I took my first Emerging Infectious Diseases class in college, the professor explained to us that zoonotic infectious diseases were most likely to jump between closely related species. Granted, I’m using the word “close” loosely here. She meant that diseases were far more likely to jump mammal to mammal or bird to mammal than, say, fish to mammal or reptile to mammal. I was also taught that if you can understand how a disease is transmitted, you’re one step closer to controlling it.

Which brings us to the ultimate question–what does this all mean? When we better understand how a disease is transmitted, it’s easier to control it. Further research in other parts of the country is needed to see whether snakes are harboring the virus in the Northeast and Midwest as well, but the implications for disease control are there. If we understand where or when snakes congregate, we might be able to better predict disease dynamics, specifically outbreaks. If the first outbreaks of the summer originate from mosquitoes biting snakes, then it’s possible that scientists could conduct heavier surveillance in areas where snakes are known to congregate. In this case, we have two entire categories of experts–herpetologists (reptile specialists) and wildlife scientists–that public health practitioners can work with to try to control the disease. This paper is amazing because it unlocks a whole new cavalcade of questions and potential solutions.

______________________________________________________________________________________

This post was republished with permission by the author, and was originally published at Mind the Science Gap.

Hillary is a second year master’s student in Epidemiology at the University of Michigan, and she is currently working in influenza research. Her primary interests include zoonotic, emerging, and vector-borne infectious diseases, disaster preparedness and response, and public health practice.

Ebola: Back in the DRC

August, 1976. A new infection was causing panic in Zaire. Hospitals became death zones, as both patients and medical staff succumbed to the disease. Reports of nightmarish symptoms trickled in to scientists in Europe and the US, who sent investigators to determine the cause and stem the epidemic. Concurrently, they would find out, the same thing was happening hundreds of miles to the north in Sudan. In all, 284 would be infected in Sudan, and another 318 in Zaire–over 600 cases (and more than 400 deaths) due to a mysterious new disease in just a few months’ time.

The new agent was Ebola, but remarkably, the outbreaks were unrelated, at least as far as any direct epidemiological links go. No one had brought the virus from Sudan to Zaire, or vice-versa. Molecular analysis showed that the viruses causing the outbreaks were two distinct subtypes, subsequently named for their countries of origin, Ebola Zaire and Ebola Sudan.

While Uganda is currently battling another outbreak of Ebola Sudan, rumors in the past week have suggested that this virus may have spread to the former Zaire (now the Democratic Republic of Congo), where Ebola has reappeared four additional times since its first discovery there in 1976. It’s now been confirmed that Ebola is again present in the DRC, with an (unconfirmed) 6 deaths. However, it’s not related to the Uganda outbreak. Reminiscent of 1976, the strain currently circulating in the DRC is the Bundibugyo subtype, first identified during a 2007-8 outbreak in Uganda, rather than the Sudan type causing the current Ugandan epidemic. Interestingly, every previous outbreak of Ebola in the DRC has been caused by the Zaire type, so the appearance of Bundibugyo is a first–though not altogether surprising, given that the outbreak province borders Uganda.

Is it just coincidence that Ebola has now twice broken out in two different places at the same time, with different viral subtypes? Hard to say. Though we can now say it’s fairly likely that bats are a reservoir host for Ebola and other filoviruses, we can’t say for sure that bats are the *only* reservoir. Indeed, we know that some outbreaks have occurred because the index case was in contact with an infected ape or its meat–were these animals originally infected by a bat, or by another source? How does the ecology of an area affect the chances of an outbreak occurring? Were there reasons that humans might be increasingly exposed to the virus in these different areas–Zaire and Sudan in 1976, DRC and Uganda in 2012–at the same time? Weather conditions? Trade/industry? Host migration or dispersal? We know with another bat-borne virus, Nipah, that changes in farming practices led to increased proximity of fruit bats and farmed pigs–allowing pigs to come into contact with virus-laden bat guano, become infected with Nipah, and subsequently transmit the virus to farmers. Things that may seem completely inconsequential–like the placement of fruit trees–can actually be risk factors for viral emergence. Is there a common factor here, or just bad luck? Only additional hard-won knowledge of filovirus ecology will be able to tell.

Here come the ticks: is global warming leading to an increase in Lyme disease?

This is the last of 16 student posts, guest-authored by Jessica Waters. 

Climatologists have been warning us about the ongoing and impending consequences of global warming for years. But the results of climate change affect more than just polar bears and penguins–if you live anywhere in the northeastern, north-central, or west coast states of the U.S., you could be at a greater risk for contracting Lyme disease.

Lyme disease is an infection with the bacterium Borrelia burgdorferi, which is spread by blacklegged ticks (otherwise known as deer ticks) that feed on the white-footed mouse, also known as the woodmouse, which carries the bacteria. The symptoms of the disease itself include fever, headache, fatigue, and a telltale “bull’s-eye” rash near the site of the tick bite. Left untreated, Lyme disease can spread to affect the joints (causing arthritis), heart, and nervous system–often causing irritability and mood swings.

Lyme disease transmission occurs in a reservoir → vector → host cycle. A reservoir is the habitat in which an infectious agent normally lives, grows, and multiplies–in this case, it is the white-footed mouse. A disease vector is a carrier animal (usually an arthropod) that transfers an infective agent from one host to another–i.e., the blacklegged tick. And the host in this scenario is an organism that harbors an infective agent–us, our pets, and other animals.

Lyme disease is transmitted when a nymphal (young) tick feeds on a B. burgdorferi-carrying white-footed mouse. The contaminated bloodmeal it ingests allows the bacterium to live on in the tick (the vector), and the infected tick can then transmit the bacteria to its next host–a dog, your child, you, or any other animal roaming around in a wooded area.
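
To see how risk multiplies along that chain, here’s a back-of-the-envelope sketch. Every probability below is a placeholder invented for illustration, not a measured value; the point is only the structure of the reservoir → vector → host chain:

```python
# Back-of-the-envelope sketch of the reservoir -> vector -> host chain.
# All three probabilities are invented placeholders, not field estimates.

p_mouse_infected = 0.40     # assumed fraction of mice carrying B. burgdorferi
p_acquire_per_feed = 0.80   # assumed chance a feeding tick picks up the bacterium
p_transmit_to_host = 0.50   # assumed chance an infected, long-attached tick transmits

p_tick_infected = p_mouse_infected * p_acquire_per_feed
p_infection_per_bite = p_tick_infected * p_transmit_to_host

print(f"P(nymphal tick infected)   = {p_tick_infected:.2f}")
print(f"P(infection per tick bite) = {p_infection_per_bite:.2f}")
# Prompt tick removal drives p_transmit_to_host toward zero, which is why
# the tick checks recommended later in this post matter so much.
```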

Nearly a quarter of all Lyme disease cases occur in children, as they play close to the ground, where host-seeking ticks are often waiting. The CDC reports that pet owners and outdoorsy types are also at higher risk, as dogs and people traipsing through thick brush can easily pick up a tick or two without realizing it.

So how does climate change factor into this? According to Rick Ostfeld, an ecologist and small-mammal expert in Millbrook, New York, it all comes down to acorns.

“Acorn abundance gives rodents a jump start on breeding. By the next summer, mice numbers are through the roof.”

This phenomenon gave rise to a “mouse boom” in 2010, followed by a low-acorn year in 2011, and what promises to be a busy summer for public health officials in 2012. As the theory goes, nymphal ticks waking up to a low mouse count (from 2011) will feed on the existing mice and then turn to the next best thing–humans.

While the exact mechanism that causes oak trees to produce more acorns has not yet been identified, studies suggest that plants in warmer climates produce more seeds.

More acorns mean a bumper crop for hungry mice, and milder winters mean higher breeding rates and higher survival rates for the B. burgdorferi-carrying rodents.

Maria Diuk-Wasser, an assistant professor of epidemiology at the Yale School of Public Health, also attributes the increase in Lyme disease to higher average temperatures, but for a different reason.

“One possible way in which temperature may limit tick populations is by increasing the length of their life cycle from two to three years in the north, where it is colder.” As average temperatures increase, climate change could be shortening that life cycle and increasing the production of Lyme disease-carrying ticks.

If both hypotheses prove to be true (and so far, CDC-reported cases of Lyme disease have increased from 15,000 in the mid-1990s to over 40,000 today), an increase in both mouse and tick populations could mean an increased prevalence of Lyme disease in years to come.

It may also be that the number of (geographically) susceptible people will increase as well. Nick Ogden, a zoonoses researcher with the Public Health Agency of Canada, recently published a paper suggesting that the tick-inhabitable regions of North America may be expanding–in Eastern Canada, the tick-inhabitable region is projected to expand from 18% to over 80% by 2020, while average temperatures in Canada have simultaneously increased by 2.5 degrees Fahrenheit over the past 60 years.

While some measures can be taken to prevent Lyme disease infection once a tick has made a meal of you, precautionary measures are the best way to keep you and your loved ones from becoming hosts in the first place.

The CDC recommends using insect repellent, applying pesticides, reducing tick habitat (i.e., cutting back heavy brush in your yard), and wearing long sleeves and pants when in wooded areas. Prompt removal of ticks is also necessary, so continually check exposed skin areas when you are outdoors–the backs of your legs, the back of your neck, the ears of your dog, etc.

One creepy-but-saving grace in tick removal may be that once a tick has landed on you, it will not immediately attach, instead crawling around for up to three hours to find an ideal location to feed. While not pleasant to imagine, this may give you enough time to jump in a hot shower after time outdoors and wash off any unattached ticks. Even attached ticks still require 24 to 36 hours to spread the B. burgdorferi bacteria into your blood–if you remove a tick within 24 hours, you greatly reduce your chances of getting Lyme disease. Attached ticks should be removed gently with tweezers.

If diagnosed early, Lyme disease can be cured with antibiotics. If you find an attached tick, see a general practitioner. You may be offered a single dose of antibiotics if you were bitten by a tick species that carries Lyme disease and the tick has likely been attached for at least 36 hours.

So, perhaps most importantly, if you suspect that you may have been bitten by a tick or have symptoms of Lyme disease – get thee to a doctor, and consider saving the planet from further warming by riding your bike there.

Online Sources:

http://www.cdc.gov/lyme/

http://www.cdc.gov/lyme/transmission/blacklegged.html

http://www.who.int/topics/zoonoses/en/

http://www.huffingtonpost.com/2012/04/04/global-warming-lyme-disease-west-nile_n_1400692.html

http://www.mnn.com/health/fitness-well-being/blogs/experts-predict-major-increase-in-lyme-disease-for-2012

Patrick A. Leighton, Jules K. Koffi, Yann Pelcat, L. Robbin Lindsay, Nicholas H. Ogden. Predicting the speed of tick invasion: an empirical model of range expansion for the Lyme disease vector Ixodes scapularis in Canada.Journal of Applied Ecology, 2012; DOI: 10.1111/j.1365-2664.2012.02112.x

 

Waste not, want not? Poultry “feather meal” as another source of antibiotics in feed

The ecology of antibiotic resistance on farms is complicated. Animals receive antibiotic doses in their food and water, for growth promotion, disease prophylaxis, and treatment. Other chemicals in the environment, such as cleaning products or antimicrobial metals in the feed, may also act as drivers of antibiotic resistance. Antibiotic-resistant organisms may already be present in the environment–in the air, soil, or manure pits within or near the barns. Ecologically, it’s a mess, and that makes it difficult to attribute the evolution and spread of resistance to any one particular variable.

A new paper emphasizes just what a mess it really is, and what animals are exposed to in addition to “just” antibiotics. A team led by Keeve Nachman at the Johns Hopkins University Center for a Livable Future took a different approach to examining farm exposures by looking at “feather meal.” What is feather meal, you may ask? I did when I met with Keeve last month at Hopkins as we discussed his research. Well, feathers are one obvious byproduct of chicken slaughtering, and waste not, want not, right? So feathers are processed into meal, which can then be used in a number of ways–among them as fertilizer, and as an additive to feed for chickens, pigs, fish, and cattle.

We already knew that chickens receive antibiotics in their food and water supplies, just as other farm animals do. It was also known that some antibiotic residues persist on chicken feathers–another potential driver of resistance in farm animals. However, Nachman and colleagues wanted to assess what other chemicals may be present in this feather meal besides antibiotics, and also whether those antibiotic residues persist in the feather meal after processing/treatment of the feathers. As lead author David Love notes:

Why study feather meal? We know that antibiotics are fed to poultry to stimulate growth and to make up for crowded living conditions in poultry houses, but the public does not know what types of drugs are used and in what amounts. It turns out that many of these drugs accumulate in poultry feathers, so by testing feathers we have a non-invasive way of learning about what drugs are actually fed to poultry.

To do this, they examined 12 feather meal samples from the U.S. (n=10) and China (n=2). All 12 samples contained at least one antibiotic residue, and some contained residues of 10 different drugs (both of those were from China). While many of the antibiotics were ones used in poultry farming (or their metabolites), they also found drugs they did not expect. Most significantly, this included residues of fluoroquinolones, which they found in 6 of 10 U.S. feather meal samples. Why is this important? Fluoroquinolone use was banned in U.S. poultry production as of 2005 because of the risk to human health–so where are these residues coming from? The authors offer a few possible explanations:

These findings may suggest that the ban is not being adequately enforced or that other pathways, for example, through use of commodity feed products from livestock industries not covered by the ban, may inadvertently contaminate poultry feed with fluoroquinolones. Furthermore, if feather meal with fluoroquinolone residues is fed back to poultry, this practice could create a cycle of re-exposure to the banned drugs. Unintended antimicrobial contamination of poultry feed may help explain why rates of fluoroquinolone-resistant Campylobacter isolates continue to persist in poultry and commercial poultry meat products half a decade after the ban.

Interestingly, the authors tested whether antibiotic residues at the levels they found could influence bacterial growth, and found that they inhibited the growth of wild-type E. coli but allowed a resistant strain to flourish.
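
That result is easy to picture with a toy growth calculation (my numbers, not the paper’s data): give the wild-type strain a growth rate suppressed by residue-level antibiotic and the resistant strain a near-normal one, and the resistant strain takes over the culture within a day:

```python
# Toy growth comparison: residue-level antibiotic slows a wild-type strain
# while a resistant strain grows nearly unimpeded. Rates are invented.

def grow(n0, rate, hours):
    """Simple compounding growth: n0 * (1 + rate) ** hours."""
    return n0 * (1 + rate) ** hours

HOURS = 12
wild_type = grow(1e3, rate=0.05, hours=HOURS)  # suppressed by residues
resistant = grow(1e3, rate=0.60, hours=HOURS)  # barely affected

total = wild_type + resistant
print(f"wild-type: {wild_type:10.0f} cells ({wild_type / total:.1%})")
print(f"resistant: {resistant:10.0f} cells ({resistant / total:.1%})")
```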

Besides antibiotic residues, a number of other chemicals were also detected, including many I’d never thought to associate with farming. In the U.S. samples, they found caffeine–apparently chickens may be fed coffee pulp and green tea powder, which may account for this finding; acetaminophen (Tylenol), which can be used to treat fevers in poultry just as it can for humans; diphenhydramine (the active ingredient in Benadryl), which apparently is used for anxiety issues in poultry; and norgestimate, a sex hormone. Any kind of health significance to these (either to people or to the animals who are ingesting these via feather meal) is uncertain. In an interview with Nick Kristof in the New York Times, Nachman noted:

“We haven’t found anything that is an immediate health concern,” Nachman added. “But it makes me question how comfortable we are feeding a number of these things to animals that we’re eating. It bewilders me.”

So what we’re seeing here is the presence of antibiotics and other drugs in feather meal, which is spread around as a fertilizer or fed to many species of domestic animals as an additive. It’s difficult to keep up with these additional feed additives–beyond feather meal, many animals also receive distiller’s grains in their diet, ethanol by-products that are another potential source of antibiotic residues.

This, my friends, is a clusterfuck.

Though I’ve focused on the U.S. data here, the paper notes that the Chinese samples are relevant as well–while most feather meal used here is domestically produced, we do import some, and about a quarter of what we import is from China, where antibiotics that are restricted or banned in the U.S. may still be in use. Furthermore, farmers may not even know this is in the feed they’re using, as many mixes are proprietary. (And if farmers don’t know, you can imagine how difficult it is for a researcher to determine if this is playing a role in antibiotic resistance or other public health issues on these farms).

Works cited

Love, D., Halden, R., Davis, M., & Nachman, K. (2012). Feather Meal: A Previously Unrecognized Route for Reentry into the Food Supply of Multiple Pharmaceuticals and Personal Care Products (PPCPs) Environmental Science & Technology, 46 (7), 3795-3802 DOI: 10.1021/es203970e

Climate change and public health

I rarely write about climate change. It’s been hashed out amongst climate scientists, and even many former “climate skeptics” have now changed their tune; I readily accept that climate change is happening, and is happening largely due to human activities. More importantly for my field, climate change is also affecting human health in a number of different ways, from the movement of insect vectors into new areas, to the warming of the seas leading to more extreme weather, to the loss of coral reefs and the freshwater that these reefs protect from the surrounding oceans. It’s an immense field, and it seems that every time I turn around, another paper is published detailing the public health effects of climate change.

Luckily for me, many of these examples have been carefully documented in a recent book by Paul Epstein and Dan Ferber, Changing Planet, Changing Health. Epstein was a maverick in this field: a physician by training who had carried out global health research in several African countries. In his previous position helping to run the Center for Health and the Global Environment at Harvard, he led research into a variety of areas in tropical medicine, including the role of climate in disease epidemiology. Sadly, as I was finishing up this book last night, the New York Times reported that Dr. Epstein had passed away at the age of 67. This is a huge loss to the field, but work in this area will certainly continue, and in the coming years we’re likely to see only more connections between disease and global warming–and more validation of his passions and ideas.

“Changing Planet, Changing Health” is deceptively expansive. It’s a mere 300 pages before notes and index, yet it takes you on a journey investigating the movement of mosquitoes in Africa, cyanide in Honduras, soybean rust in Illinois, pine beetles in Colorado, and even flooding in Cedar Rapids, Iowa. And yet the book never felt disconnected to me–Epstein and Ferber manage to draw the myriad climate-associated threads together into a well-woven tapestry, moving fluidly from one topic to another. In the final chapter, they also discuss what needs to be done to curb this destruction.

Of course, the last chapter is also one of the toughest. While climate change is harming our health in a thousand different ways every day, there’s still denial in many circles that it’s even happening, and none of the solutions to curb it are easy. Furthermore, too many people still see it as “just a polar bear problem” rather than something that actually makes a difference in their lives. This needs to change. Epstein and Ferber succeed in making climate change personal: something everyone who eats and breathes should be concerned about.

What is the Hygiene Hypothesis?

Guest post by Zainab Khan

In most western countries, germs have become synonymous with the idea of something bad that needs to be killed as quickly as possible. However, people have long questioned the validity of this idea; a few decades ago it was hypothesized that insufficient exposure to germs can and does cause insufficient development of an individual’s immune system. Recent studies have shown that this drive to get rid of all germs, and to keep children’s exposure to them at an absolute minimum, may cause more harm than good; over-cleanliness is suspected to be one of the main reasons there are such increased numbers of asthma and allergy sufferers in western countries. Compared to just a generation or two ago, people today also have an increased chance of having or developing allergies. Is this all due to society’s craze over germs?

When talking about allergies, it’s important to have some working knowledge of what happens during an allergic attack. Allergies are an extreme and inappropriate reaction by an individual’s immune system to what is typically a common, harmless stimulus found in a normal environment; the body takes something such as hay, food, or pollen and mounts a hypersensitivity reaction to it. The body activates its white blood cells (the cells that defend the body against foreign invaders, and that typically help humans ward off viruses and bacteria–the flu or an infection, for example), which results in an inflammatory response. This inflammatory response manifests itself in different ways: asthma, eczema, hives, a runny nose or eyes, coughing, etc.

The Hygiene Hypothesis

Over two decades ago, the idea that there is such a thing as too much cleanliness was first proposed by David P. Strachan in his hygiene hypothesis. The idea behind this theory is that a lack of early exposure to the kinds of germs and stimuli people used to encounter is the cause of allergies. In developing nations and in earlier time periods, families tended to be larger than they are today; it was uncommon to have just one or two children. With more children, the older ones expose the younger ones to more germs, and in turn the younger children develop stronger immune systems, primed by all that early stimulation [1,2]. This idea of exposure to other children has also held true for children who attend daycare at an early age: daycare children tend to develop fewer allergies than those who are never in such environments. Research has gone even further, suggesting that children who are exposed to hepatitis A or measles are less likely to have certain types of allergies [3].

Arguments against the hygiene hypothesis emerged from statistics on inner-city African American children in the United States, who have very high rates of asthma. A study showed that many of these children had been sensitized to the common allergens found around them; however, they still developed asthma at the same rate as kids who were not sensitized to the same allergens [4]. It is also well established that some allergies have a genetic component: a child whose parents both have allergies has a 75% chance of also developing them. Genetic links have been found between certain types of allergic responses, which complicates the question of how much immunity is inherited and how much is developed [5].

Although the idea of germ exposure has been building momentum over the last few years, the debate and the research behind it are certainly not complete. If the hygiene hypothesis is true, it opens up another debate: how much exposure, to which kinds of bacteria, and with what degree of caution around children? What exactly are the “right” germs, and how many are too many? In a society obsessed with antibacterial hand soaps, disinfectants, and bottled water, it is going to be quite a challenge to convince people that germs are not all that bad.

Works Cited

1. Am. J. Respir. Crit. Care Med., Volume 164, Number 7, October 2001. The Increase in Asthma Can Be Ascribed to Cleanliness. 1106-1107. Link.

2. Strachan, David. Thorax. Family Size, Infection and Atopy: The First Decade of the ‘Hygiene Hypothesis.’ Link.

3. Matricardi, Paolo; Rosmini, Francesco; Riondino, Silvia; Fortini, Michele; Ferrigno, Luigina; Rapicetta, Maria; Bonini, Sergio. BMJ 2000;320. Exposure to foodborne and orofecal microbes versus airborne viruses in relation to atopy and allergic asthma: epidemiological study. 412-417. Link.

4. Call, R.; Smith, T.; Morris, E.; Chapman, M.; Platts-Mills, T. The Journal of Pediatrics, Volume 121, Issue 6. Risk factors for asthma in inner city children. 862-866. Link.

5. Mackay and Rosen. Volume 344, January 2001. Allergy and Allergic Diseases. 30-37. Link.

Ebola in pigs! [UPDATED]

I’ve mentioned repeatedly how little we know about Ebola ecology–what the reservoir host(s) are, how it’s transmitted to humans (and other species), why it causes outbreaks when it does. We know even less about the Reston subtype of Ebola, which–in contrast to the Zaire, Sudan, Ivory Coast, and Bundibugyo subtypes–originated in Asia and was first found in monkeys imported into the United States for research purposes. It also differs from the other subtypes in that it appears to be only mildly lethal to monkeys, and several human infections have been documented, none of which appear to have caused symptoms.

Now we might have another chance to study Ebola Reston in nature, as Ebola Reston has been found in pigs from the Philippines.