A framework for meta-analysis of prediction model studies with binary and time-to-event outcomes

Stat Methods Med Res. 2019 Sep;28(9):2768-2786. doi: 10.1177/0962280218785504. Epub 2018 Jul 23.

Abstract

It is widely recommended that any developed diagnostic or prognostic prediction model is externally validated in terms of its predictive performance, as measured by calibration and discrimination. When multiple validations have been performed, a systematic review followed by a formal meta-analysis helps to summarize overall performance across multiple settings, and reveals under which circumstances the model performs suboptimally and may need adjustment. We discuss how to undertake meta-analysis of the performance of prediction models with either a binary or a time-to-event outcome. We address how to deal with incomplete availability of study-specific results (performance estimates and their precision), and how to produce summary estimates of the c-statistic, the observed:expected ratio and the calibration slope. Furthermore, we discuss the implementation of frequentist and Bayesian meta-analysis methods, and propose novel empirically based prior distributions to improve estimation of between-study heterogeneity in small samples. Finally, we illustrate all methods using two examples: meta-analysis of the predictive performance of EuroSCORE II and of the Framingham Risk Score. All examples and meta-analysis models have been implemented in our newly developed R package "metamisc".
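The abstract describes producing a summary estimate of the c-statistic across validation studies. A standard frequentist approach (not the authors' R package "metamisc", which should be consulted for their actual implementation) is to pool study-specific c-statistics on the logit scale with a DerSimonian-Laird random-effects model, using a delta-method approximation for the transformed standard errors. The sketch below illustrates this with hypothetical study results; all numbers are made up for illustration.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis with the DerSimonian-Laird tau^2 estimator."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q statistic and the method-of-moments tau^2 estimate
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight with between-study heterogeneity included
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical study-specific c-statistics and their standard errors
c_stats = [0.72, 0.68, 0.75, 0.70]
ses = [0.02, 0.03, 0.025, 0.04]

# Pool on the logit scale; delta method gives
# var(logit(c)) ~= se(c)^2 / (c * (1 - c))^2
effects = [logit(c) for c in c_stats]
variances = [(s / (c * (1 - c))) ** 2 for c, s in zip(c_stats, ses)]

pooled, se, tau2 = dersimonian_laird(effects, variances)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"Summary c-statistic: {inv_logit(pooled):.3f} "
      f"(95% CI {inv_logit(lo):.3f} to {inv_logit(hi):.3f}); tau^2 = {tau2:.4f}")
```

The same pooling routine applies to the total observed:expected ratio, which is conventionally meta-analysed on the log scale and back-transformed with `exp` instead of the inverse logit.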

Keywords: Meta-analysis; aggregate data; calibration; discrimination; evidence synthesis; prediction; prognosis; systematic review; validation.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Bayes Theorem
  • Calibration
  • Humans
  • Meta-Analysis as Topic*
  • Models, Statistical*
  • Prognosis
  • Research Design*
  • Risk Assessment / methods*
  • Systematic Reviews as Topic