Over three days we have posted a collection of blog posts on a topical Forum discussion, published in Issue 2, about the methods used in wildlife conservation and, in particular, the role of dingoes in conservation. When a peer-reviewed Forum critique of another article in the Journal is accepted, it is the Journal’s process to invite the original authors to write a peer-reviewed response to the critique. Both opinions are then presented side by side in an issue so that readers can form their own opinions.
In this post Matt Hayward talks about his recent Forum article ‘Ecologists need robust survey designs, sampling and analytical methods’, which was written in response to a critique from Dale Nimmo and colleagues, ‘Dingoes can help conserve wildlife and our methods can tell’, of a Practitioner’s Perspective from Matt Hayward and Nicky Marlow, ‘Will dingoes really conserve wildlife and can our methods tell?’
Here you can read a post from Dale Nimmo and a post from the Associate Editor, Jacqueline Frair, and her postdoc Paul Schuette. (Please note that although the articles comment on each other, the blog posts were received separately and are not intended as comments on the previous post.)
Debates are not uncommon in the scientific literature, particularly where one study yields results that subsequent studies cannot replicate. In Australia, there has been huge debate about whether the native dingo Canis dingo suppresses the abundance of introduced red foxes Vulpes vulpes and feral cats Felis catus, and therefore benefits native wildlife. This is because two groups of scientists have produced completely opposite results on the topic, often from almost exactly the same study areas and using the same methods. A ping-pong of rebuttals of each group’s findings by the other group has ensued.
Nicky Marlow and I discussed how we could use dingoes to conserve the native wildlife species we were tasked with conserving in our roles as conservation managers. We reviewed the literature and felt the evidence for one management strategy or another was lacking because of these conflicting results, and we contended that the conflict stemmed from the methods used. Essentially, all studies of dingo-fox-cat interactions in Australia have relied upon unvalidated indices. Nicky and I knew most of the scientists involved in producing the primary literature on the subject, and were convinced they honestly believed their results were accurate, despite conflicting with those of others. We then looked at the failings of these indices, particularly how they ignored detectability, and highlighted these problems. We further suggested that ecologists get back into the field to collect more data, ideally via experimental manipulation, and analyse them using robust methods that account for differential detectability (e.g. mark-recapture, distance sampling, occupancy modelling, random encounter model).
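The detectability problem can be illustrated with a toy numerical sketch. All numbers below are hypothetical and are not drawn from any of the studies discussed; the point is simply that if two areas hold identical fox populations but detection probability differs between them, a raw-count index manufactures an apparent difference, whereas a detectability-corrected estimate does not.

```python
# Hypothetical illustration: why raw-count indices mislead when
# detectability differs between treatments.
# Suppose foxes are equally abundant inside and outside a dingo area,
# but denser cover inside the dingo area halves the chance of
# detecting any given fox.

true_abundance = {"dingo_area": 100, "control_area": 100}  # identical truth
detect_prob = {"dingo_area": 0.2, "control_area": 0.4}     # unequal detectability

# Expected index values (e.g. sightings or tracks) = abundance x detection probability
index = {site: true_abundance[site] * detect_prob[site] for site in true_abundance}

# The raw index suggests dingoes halve fox abundance - a pure artefact
# of unequal detectability:
ratio_index = index["dingo_area"] / index["control_area"]                    # 0.5
ratio_truth = true_abundance["dingo_area"] / true_abundance["control_area"]  # 1.0

# Dividing out the detection probability (which is what detectability-based
# methods estimate formally) recovers the truth at both sites:
corrected = {site: index[site] / detect_prob[site] for site in index}
```

In real surveys the detection probabilities are of course unknown, which is exactly why methods such as mark-recapture or occupancy modelling, which estimate detectability from the data themselves, are needed.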
Dale Nimmo and colleagues from Australia disagreed with our assessment. They contended that indices are perfectly adequate and cited, as proof, three examples where indices had been validated.
Concerned that I might have been ignorant of the virtues of unvalidated indices, I asked some of the world’s most esteemed carnivore biologists and statisticians for their views. Many leapt on board to join me in reiterating the weaknesses of indices (including one of the scientists who published a paper that Nimmo et al. held up as evidence for the quality of indices). Others lent verbal support but wished to avoid being drawn into such a controversial debate. The consensus was that indices are indeed weak tools that rely on almost impossible assumptions. We concluded that the key problems are:
- Indices require repeated validation but this is very difficult. Validating an index requires several other unlikely assumptions to be met, not least of which is equal detectability. Indeed, the analysis by Gopalaswamy et al. (2015) suggests validating indices is almost impossible.
- Calibration of indices is not constant across contexts. Indices need to be validated in exactly the same situation that they are applied. For example, you can’t rely on an index of footprints on clay soils in sandy soils. Also, you can’t rely on indices of two species when one species might influence the behaviour of the other.
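The calibration problem can be sketched the same way, again with purely hypothetical numbers: a track-count index calibrated against known density on clay soil produces a badly biased estimate when that same calibration is carried over to sand, where tracks are recorded far less readily.

```python
# Hypothetical illustration: an index calibration does not transfer
# across contexts.
# Calibration on clay soil: tracks are easy to find, and each animal
# per km^2 yields 0.5 tracks per km of transect on average.
CLAY_SLOPE = 0.5

def density_from_index(tracks_per_km, slope=CLAY_SLOPE):
    """Convert a track index to a density estimate using the clay calibration."""
    return tracks_per_km / slope

# On sand, tracks degrade quickly: only 0.1 tracks per km are recorded
# per animal per km^2.
SAND_SLOPE = 0.1
true_density = 10.0                          # animals per km^2 on the sandy site
sand_index = true_density * SAND_SLOPE       # 1.0 tracks per km observed

# Applying the clay calibration to the sandy-site index:
estimate = density_from_index(sand_index)    # 2.0 animals per km^2

# The clay-calibrated index underestimates true density fivefold.
bias_factor = true_density / estimate        # 5.0
```

The direction and size of the error depend entirely on how the unknown detection process differs between the calibration and application contexts, which is why validation in one situation cannot be assumed to hold in another.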
These are not new concerns, but sadly many ecologists regularly ignore them. The apparent ease of recording indices is likely to be misleading, because of the substantial additional work required to interpret the index or to validate it as a reliable measure of relative abundance across a range of conditions. You cannot learn much from methods with big and unmeasured uncertainty that can vary in either direction. Thus, we do not share the apparent optimism of the Editors of the Journal of Applied Ecology in the use of indices.
Corey Bradshaw (a co-author of Nimmo’s) has since accused Nicky and me of cherry-picking citations to prove our point. In our response, we illustrated that Nimmo et al. did likewise. Clearly, the fact that anyone can cherry-pick the literature suggests there is no consensus on the topic, which was the reason Nicky and I wrote the paper in the first place. Hence, ecologists need to collect more data using robust field and data-analysis methods.
Perhaps most concerning to Nicky and me was that, despite our devoting a large section of our paper to extolling the virtues of dingoes irrespective of their mesopredator-suppressive value, this was utterly ignored by Nimmo et al. They have since lumped us into an ‘anti-dingo’ camp, claiming our initial paper was driven by some “political” aversion to dingoes. Notwithstanding the facts that moving the debate from the scientific literature to blogs and news articles politicises it, and that the original Hayward and Marlow paper contained this entire section extolling the virtues of dingoes, our current paper includes the views of international carnivore and statistical experts (Italian, Indian, Kiwi, South African, Australian, British and American). Hopefully the views of these international experts won’t be smeared by Australian perspectives. Our combined view is that the reliance upon indices in Australia’s dingo debate explains the divergent results obtained to date. While blogs and news articles may further fuel the political debate, only well-designed, replicated experiments coupled with robust methods of survey and analysis will resolve this debate (and others like it), and we highlight how such experimental studies should be designed.
To prevent political views from biasing this debate, we urge readers to consider our arguments rather than focusing on the specific species they pertain to. Perhaps more importantly, it is crucial for ecologists to collaborate with statisticians and conservation managers to conduct the large-scale, replicated experimentation needed, coupled with robust data-collection and analytical methods, to produce the primary data to solve this conservation problem once and for all. Indeed, misrepresentations aside, the only thing Nimmo et al. and we disagree on is the validity of indices.
There are lessons to be learned from this exchange for other scientific debates. It is fundamental that robust data and analysis be used to avoid this kind of time-wasting, stressful and diversionary argument. Without robust data and analysis, results will invariably be open to interpretation and scrutiny.
Gopalaswamy, A., Delampady, M., Karanth, K.U., Kumar, N.S. & Macdonald, D.W. (2015) An examination of index-calibration experiments: counting tigers at macroecological scales. Methods in Ecology and Evolution, DOI: 10.1111/2041-210X.12351.