Models are central to much of the work in the various fields that constitute applied ecology. However, models can mislead researchers and decision makers if used incorrectly. Uncertainty is pervasive in applied ecology and, if not handled correctly, can give those using models a false sense of security about their ability to contrast the outcomes of alternative management decisions. In a recent paper in Journal of Applied Ecology, EJ Milner-Gulland and Katriona Shea highlight some of the ways uncertainty can trap modellers, as well as ways to avoid and overcome those traps.
Various sources of uncertainty challenge modelling studies. Some types, such as sampling error, are better understood and accounted for than others, such as uncertainty about which model is correct. Unfortunately, it is often the poorly understood types of uncertainty that are most likely to ‘trap’ models and modellers.
Milner-Gulland and Shea synthesize studies across several fields and identify four common traps that plague modellers. The simplest of these is ignoring important uncertainties outright, or acknowledging uncertainty but doing nothing about it. In some cases, ignoring uncertainty may be appropriate: for example, if it can be shown that the research needed to understand and address the uncertainty would not change the optimal management action, or if a management scheme can be developed that is robust to the uncertainties.
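The check described above — would resolving an uncertainty actually change the chosen action? — can be illustrated with a minimal sketch. All names and numbers here are invented for illustration, not taken from the paper:

```python
# Hypothetical check: would resolving an uncertainty change the optimal
# management action? Actions, hypotheses and payoffs are all invented.

# Expected outcome (e.g. probability of population persistence) of each
# action under two competing hypotheses about, say, a survival rate.
outcomes = {
    "cull":    {"high_survival": 0.8, "low_survival": 0.6},
    "protect": {"high_survival": 0.9, "low_survival": 0.7},
}
belief = {"high_survival": 0.5, "low_survival": 0.5}

def best_action(weights):
    """Action with the highest expected outcome under the given weights."""
    return max(outcomes,
               key=lambda a: sum(weights[h] * outcomes[a][h] for h in weights))

# Optimal action under the current, uncertain belief
print(best_action(belief))  # -> 'protect'

# Optimal action if each hypothesis were known to be true
for h in belief:
    resolved = {k: (1.0 if k == h else 0.0) for k in belief}
    print(h, best_action(resolved))
# 'protect' wins in every case, so resolving this particular uncertainty
# would not change the decision and it may be safe to leave unresolved.
```

In this toy example one action dominates under every hypothesis, which is exactly the situation in which ignoring the uncertainty is defensible; if the best action had differed between hypotheses, the uncertainty would be worth resolving.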
A potentially more serious trap is identifying the ‘wrong’ uncertainty, at the cost of missing the ‘key’ uncertainties. This is particularly likely when “being seen to be doing something” is important for managers. Unfortunately, it is seldom easy to identify which uncertainties are non-trivial from a management viewpoint. In this context, Milner-Gulland and Shea highlight the value of model-based experimentation such as management strategy evaluation, which Nils Bunnefeld and colleagues have shown to be a valuable tool for developing robust management schemes for fish and terrestrial systems. Techniques such as management strategy evaluation could also help avoid ‘decision paralysis’, whereby decision makers postpone decisions out of a desire to resolve all ‘key’ uncertainties before acting.
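The spirit of management strategy evaluation — testing candidate management rules in closed loop against a simulated ‘true’ system — can be sketched with a deliberately simple operating model. This is an illustrative toy, not the method of any particular study; every parameter value is invented:

```python
import random

# Toy closed-loop simulation in the spirit of management strategy
# evaluation: a logistic 'operating model' stands in for the true stock,
# and candidate harvest rules are tested under observation error and
# uncertain productivity. All numbers are illustrative.

def run_mse(harvest_rate, r, k=1000.0, years=50, obs_cv=0.2, seed=1):
    rng = random.Random(seed)
    n = k / 2  # starting biomass
    total_catch = 0.0
    for _ in range(years):
        obs = n * rng.lognormvariate(0.0, obs_cv)  # imperfect survey
        catch = min(harvest_rate * obs, n)         # apply the harvest rule
        n = n - catch + r * n * (1 - n / k)        # logistic growth
        n = max(n, 1.0)                            # avoid negative biomass
        total_catch += catch
    return total_catch, n

# Evaluate each candidate rule across uncertain growth rates, keeping the
# worst-case final biomass as a crude robustness measure.
for hr in (0.1, 0.3, 0.5):
    finals = [run_mse(hr, r=r)[1] for r in (0.2, 0.4, 0.6)]
    print(f"harvest rate {hr}: worst-case final biomass {min(finals):.1f}")
```

Even a toy loop like this shows why the approach helps: a rule that looks attractive under optimistic productivity can collapse the simulated stock under pessimistic productivity, which flags productivity as a key uncertainty for that decision.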
The final trap that Milner-Gulland and Shea focus on is “believing your models”. They identify scenario modelling as a way to quickly assess the consequences of as many uncertainties as possible. Scenario modelling need not be sophisticated: even quickly checking a very wide range of possible uncertainties using a simple model can highlight the areas of uncertainty that warrant more thought.
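A quick scenario scan of the kind described above can be done in a few lines. The model and parameter ranges below are invented purely to illustrate the approach of sweeping a simple model over wide scenario grids:

```python
from itertools import product

# Hypothetical quick scenario scan: sweep wide ranges for each uncertain
# parameter of a deliberately simple population model and see which
# uncertainty actually drives the outcome. All values are invented.

def project(growth, harvest, start=100.0, years=20):
    """Project population size under constant growth and harvest rates."""
    n = start
    for _ in range(years):
        n = n * (1 + growth) * (1 - harvest)
    return n

growth_scenarios = (0.05, 0.10, 0.20)   # uncertain growth rate
harvest_scenarios = (0.02, 0.05, 0.15)  # uncertain harvest rate

for g, h in product(growth_scenarios, harvest_scenarios):
    outcome = "persists" if project(g, h) >= 100 else "declines"
    print(f"growth={g:.2f} harvest={h:.2f}: {outcome}")
```

A scan like this can reveal, for example, that the outcome hinges on whether harvest outpaces growth rather than on either parameter alone, flagging that interaction as the uncertainty needing further attention.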
Establishing groups of scientists from various disciplines, together with the decision makers who will be using the models, to guide modelling studies is one way to avoid many of the traps identified by Milner-Gulland and Shea. Such groups (see, for example, the Ocean Modeling Forum) should also ensure that the objectives of the models address the actual needs of the decision makers, providing them with the consequences of a range of alternative management actions as well as the benefits of various possible research programs.
It is naïve to believe that it will ever be possible to identify all uncertainties (the ‘unknown unknowns’ problem). However, concerns about the inadequate treatment of uncertainty in applied ecology should not be taken as a reason to avoid using models, because all scientists use some form of model, be it a sophisticated mathematical model, a qualitative model or simply a conceptual model. Nevertheless, the value (and reputation) of models will remain low unless modellers acknowledge and address uncertainties. The advice that Milner-Gulland and Shea provide should help the community create models that are appropriate and useful, without imposing a workload that means models are never developed quickly enough to support decision making.