INTRODUCTION – SENSITIVITY ANALYSIS
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs.
In a systematic review, many decisions made along the way are arbitrary or unclear, and it is desirable to show that the review's findings do not depend on them. A sensitivity analysis is a repeat of the primary analysis or meta-analysis in which alternative decisions, or ranges of values, are substituted for decisions that were arbitrary or unclear. For example, if the eligibility of some studies in the meta-analysis is dubious because they do not report full details, a sensitivity analysis may involve undertaking the meta-analysis twice: first including all studies, and second including only those that are definitely known to be eligible. A sensitivity analysis asks the question, “Are the findings robust to the decisions made in the process of obtaining them?”.
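To make this concrete, here is a minimal sketch of that two-pass eligibility check. It assumes each study contributes an effect estimate (e.g. a log odds ratio) and a standard error, uses simple inverse-variance fixed-effect pooling, and the study data are purely illustrative, not taken from any real review.

```python
import math

# Illustrative (hypothetical) study-level data: effect estimate, its standard
# error, and whether eligibility is definitely established.
studies = [
    {"name": "Study A", "effect": -0.35, "se": 0.12, "definitely_eligible": True},
    {"name": "Study B", "effect": -0.20, "se": 0.18, "definitely_eligible": True},
    {"name": "Study C", "effect": -0.55, "se": 0.25, "definitely_eligible": False},  # e.g. abstract only
]

def fixed_effect_pool(subset):
    """Inverse-variance fixed-effect pooling of effect estimates."""
    weights = [1.0 / s["se"] ** 2 for s in subset]
    pooled = sum(w * s["effect"] for w, s in zip(weights, subset)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Primary analysis: all studies; sensitivity analysis: definitely eligible only.
for label, subset in [
    ("All studies", studies),
    ("Definitely eligible only", [s for s in studies if s["definitely_eligible"]]),
]:
    est, se = fixed_effect_pool(subset)
    print(f"{label}: pooled effect = {est:.3f} "
          f"(95% CI {est - 1.96*se:.3f} to {est + 1.96*se:.3f})")
```

If the two pooled estimates and their confidence intervals support the same conclusion, the finding can be reported as robust to the eligibility decision.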
In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance. National and international agencies involved in impact assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples include the modelling guidelines of the European Commission, the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and the US Environmental Protection Agency.
DECISIONS THAT GENERATE A NEED FOR SENSITIVITY ANALYSIS
There are many decision nodes within the systematic review process which can generate a need for a sensitivity analysis. Examples include:
Searching for studies:
· Should abstracts whose results cannot be confirmed in subsequent publications be included in the review?
Eligibility criteria:
· Characteristics of participants: where most, but not all, people in a study meet an age range, should the study be included?
· Characteristics of the intervention: what range of doses should be included in the meta-analysis?
· Characteristics of the comparator: what criteria are required to define usual care to be used as a comparator group?
· Characteristics of the outcome: what time-point or range of time-points are eligible for inclusion?
· Study design: should trials with blinded and unblinded outcome assessment both be included, or should study inclusion be restricted by other methodological criteria?
What data should be analysed?
· Time-to-event data: what assumptions of the distribution of censored data should be made?
· Continuous data: where standard deviations are missing, when and how should they be imputed? Should analyses be based on change scores or on final values?
· Ordinal scales: what cut-point should be used to dichotomize short ordinal scales?
· Cluster-randomized trials: what values of the intraclass correlation coefficient should be used when trial analyses have not been adjusted for clustering? (See the sketch after this list.)
· Cross-over trials: what values of the within-subject correlation coefficient should be used when this is not available in primary reports?
· All analyses: what assumptions should be made about missing outcomes to facilitate intention-to-treat analyses? Should adjusted or unadjusted estimates of treatment effects be used?
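As an illustration of the cluster-randomization point above, the sketch below re-estimates an odds ratio under a range of plausible ICC values. It assumes a simple design-effect adjustment, DE = 1 + (m - 1) × ICC, applied to the effective sample sizes; the trial numbers and average cluster size are hypothetical.

```python
import math

# Hypothetical cluster-randomized trial reported without adjustment for
# clustering: events/total in each arm and the average cluster size.
events_trt, n_trt = 40, 200
events_ctl, n_ctl = 60, 200
avg_cluster_size = 20

def design_effect(icc, m):
    """Design effect for clusters of average size m: DE = 1 + (m - 1) * ICC."""
    return 1.0 + (m - 1) * icc

# Re-analyse under a range of plausible ICC values by shrinking the effective
# sample sizes (and event counts) by the design effect.
for icc in (0.0, 0.01, 0.05, 0.10):
    de = design_effect(icc, avg_cluster_size)
    a, n1 = events_trt / de, n_trt / de
    c, n2 = events_ctl / de, n_ctl / de
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    print(f"ICC={icc:.2f}: OR={math.exp(log_or):.2f}, "
          f"95% CI {math.exp(log_or - 1.96*se):.2f} to {math.exp(log_or + 1.96*se):.2f}")
```

If the conclusion changes materially across plausible ICC values, the review should report that the result is sensitive to this assumption.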
Analysis methods:
· Should fixed-effect or random-effects methods be used for the analysis? (A sketch comparing the two follows this list.)
· For dichotomous outcomes, should odds ratios, risk ratios or risk differences be used?
· For continuous outcomes, where several scales have assessed the same dimension, should results be analysed as a standardized mean difference across all scales or as mean differences individually for each scale?
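As a rough illustration of the fixed-effect versus random-effects choice, the sketch below pools the same data both ways so the two answers can be compared. It uses inverse-variance pooling with a DerSimonian-Laird estimate of the between-study variance, and the effect estimates and standard errors are illustrative.

```python
import math

# Illustrative per-study effects (e.g. log risk ratios) and standard errors.
effects = [-0.30, -0.10, -0.45, 0.05]
ses = [0.15, 0.20, 0.25, 0.18]

def pooled(effects, ses, tau2=0.0):
    """Inverse-variance pooling; tau2 > 0 gives a random-effects weighting."""
    w = [1.0 / (se**2 + tau2) for se in ses]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

# DerSimonian-Laird estimate of the between-study variance tau^2.
w_fe = [1.0 / se**2 for se in ses]
fe_est, _ = pooled(effects, ses)
q = sum(wi * (yi - fe_est) ** 2 for wi, yi in zip(w_fe, effects))
df = len(effects) - 1
c = sum(w_fe) - sum(wi**2 for wi in w_fe) / sum(w_fe)
tau2 = max(0.0, (q - df) / c)

for label, t2 in [("Fixed-effect", 0.0), ("Random-effects (DL)", tau2)]:
    est, se = pooled(effects, ses, t2)
    print(f"{label}: {est:.3f} (95% CI {est - 1.96*se:.3f} to {est + 1.96*se:.3f})")
```

In practice a dedicated meta-analysis package would normally be used; the point here is only that running both models on the same data is itself a sensitivity analysis.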
CONCLUSION
Sensitivity analysis is closely related to uncertainty analysis: while the latter studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify which sources of uncertainty weigh most heavily on the study’s conclusions.
The problem setting in sensitivity analysis also has strong similarities with the field of design of experiments. In a design of experiments, one studies the effect of some process or intervention (the ‘treatment’) on some objects (the ‘experimental units’). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments.
REFERENCES
Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M. and Tarantola, S. (2008). Global Sensitivity Analysis: The Primer. John Wiley & Sons.
Pannell, D.J. (1997). Sensitivity analysis of normative economic models: theoretical framework and practical strategies. Agricultural Economics 16: 139-152.
Bahremand, A. and De Smedt, F. (2008). Distributed hydrological modeling and sensitivity analysis in Torysa Watershed, Slovakia. Water Resources Management 22: 293-408.
Der Kiureghian, A. and Ditlevsen, O. (2009). Aleatory or epistemic? Does it matter? Structural Safety 31(2): 105-112.
Helton, J.C., Johnson, J.D., Salaberry, C.J. and Storlie, C.B. (2006). Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliability Engineering and System Safety 91: 1175-1209.
Tavakoli, S. and Mousavi, A. (2013). Event tracking for real-time unaware sensitivity analysis (EventTracker). IEEE Transactions on Knowledge and Data Engineering 25(2): 348-359.