What counts as good evidence?

In medical research, a professional might answer this question as you would expect: evidence can be trusted if it comes from a randomized, controlled, double-blind experiment, which means the evidence is only as strong as the experimental design. And in medicine, it’s possible (and important) to procure this kind of strong evidence.

But when it comes to conservation, it’s a whole different story.

Dr. David Gill (photo from The Nicholas School)

The natural world is complicated, and far beyond our control. When studying the implications of conservation, it’s not so easy to design the kind of experiment that will produce “good” evidence.

David Gill, a professor in Duke’s Nicholas School of the Environment, recently led a study published in the journal Nature that had to define what constitutes good evidence in the realm of marine conservation. Last Wednesday, he made a guest appearance at my Bass Connections meeting to share his work and his perspective on the importance of quality evidence.

Gill’s research centers on evaluating the effectiveness of Marine Protected Areas (MPAs) as a way of protecting marine life. Seven percent of the world’s oceans are currently designated as MPAs, and the goal is to raise that figure to 10 percent by 2020. MPAs arguably have massive effects on ecosystem health and coastal community functioning, but where is the evidence for this claim?

Although past investigations have provided support for creating MPAs, Gill and his team were concerned about the quality of this evidence and about the link between how MPAs are managed and how well they work. Historically, there have been acute gaps in the design of studies on MPA effects: few have included pre-MPA conditions or attempted to control for other factors, and most have been done in hindsight, looking only at the ecological effects within MPA boundaries without any useful baseline data or control sites to compare them to.

As a result of these limitations, the evidence base is weak. Generating good evidence is a massive undertaking when you are attempting to validate a claim by counting several thousand moving fish.

Gill’s measure of ecosystem health includes counting fish. (photo from Avoini)

So is there no way to understand the impacts of MPAs? Should conservation scientists just give up? The answer is no, absolutely not.

To produce better evidence, Gill and his team needed to design a study that would isolate the effects of MPAs. To do this, they needed to account for location biases and other confounding variables such as the biophysical conditions of the environment, the population density of nearby human communities, and the national regulations in each place.

The solution they came up with was to compare observations of current conditions within MPAs to “counterfactual” evidence, which is defined as what would have happened had the MPA not been there. Using statistical matching of MPAs to nearby non-MPA and pre-MPA sites, they were able to obtain high-quality results.
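To make that idea a little more concrete, here is a minimal sketch in Python of what covariate matching can look like in general. This is not the study’s actual pipeline: the covariate names, the made-up data, and the one-nearest-neighbor matching are all simplifying assumptions chosen just to illustrate how an MPA site gets paired with a similar unprotected “counterfactual” site before outcomes are compared.

```python
# Illustrative sketch of counterfactual matching (not the study's real pipeline).
# Pair each MPA site with its most similar non-MPA site on observed covariates,
# then compare the outcome (fish biomass) across the pairs. All data are made up.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

def make_sites(n, protected):
    # Hypothetical site-level covariates and outcome, for illustration only.
    return pd.DataFrame({
        "sst_c": rng.normal(26, 2, n),             # sea surface temperature
        "depth_m": rng.uniform(2, 30, n),          # survey depth
        "human_density": rng.lognormal(3, 1, n),   # nearby population density
        "biomass_kg_ha": rng.lognormal(5, 0.6, n) * (1.3 if protected else 1.0),
        "protected": protected,
    })

mpa = make_sites(200, True)       # sites inside MPAs
non_mpa = make_sites(800, False)  # candidate counterfactual sites

covariates = ["sst_c", "depth_m", "human_density"]

# Standardize covariates so no single variable dominates the distance metric.
scaler = StandardScaler().fit(non_mpa[covariates])
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(non_mpa[covariates]))
_, idx = nn.kneighbors(scaler.transform(mpa[covariates]))
matched = non_mpa.iloc[idx.ravel()].reset_index(drop=True)

# Site-level "impact" estimate: MPA outcome minus its matched counterfactual.
effect = mpa["biomass_kg_ha"].to_numpy() - matched["biomass_kg_ha"].to_numpy()
print(f"Mean estimated effect on biomass: {effect.mean():.1f} kg/ha")
print(f"Share of sites with a positive effect: {(effect > 0).mean():.0%}")
```

The key point of the design is that the comparison group is chosen to resemble the MPA sites on the confounding factors listed above, rather than being whatever unprotected sites happened to be surveyed.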

A happy sea turtle pictured in a marine protected area (photo from the UK Foreign and Commonwealth Office)

The research showed that across 16,000 sampled sites, MPAs had positive ecological impacts on fish biomass at 71 percent of sites. They also discovered that MPAs with adequate staffing had far greater ecological impacts than those without, a useful piece of feedback for future MPA development: it’s probably not worth creating MPAs before there is sufficient funding in place to maintain them.

Gill doesn’t claim that his evidence is flawless; he fully admits to the shortcomings in this study, such as the fact that there is very little data on temperate, coldwater regions — mostly because there are few MPAs in these regions.

The field is ripe for improvement, and he suggests that future research look into the social impacts of MPAs and the implications of these interventions for different species. As the evidence continues to improve, it will be increasingly possible to maximize the win-wins when designing MPAs.

Conservation science isn’t perfect, but neither is medicine. We’ll get there.