“Prediction is very difficult, especially about the future” – Danish physicist Niels Bohr, also attributed to Yogi Berra, NY Yankees
There is something about knowing the future that appeals to us all. This yearning has created a veritable industry of crystal ball gazers, tarot card readers, and astrologers, to name a few of the charlatans who prey on this innate human longing. From the world of science fiction, the psychohistorian Hari Seldon is even said to have inspired Nobel prize-winning economist Paul Krugman. Looking beyond the mystical methods, however, science has more recently stepped in, using data about past events to model the probability of future ones. Specifically, statistical models have been developed in medicine to predict cardiovascular and other important clinical outcomes from demographic and laboratory values. Despite this promise, and unlike in the non-medical world, these models have found little uptake in routine clinical practice. In a review published in AJKD, Kadatz, Lee, and Levin explain the underlying principles of prediction models in medicine, and in nephrology in particular, and lay out a framework for the work that lies ahead.
A small community near Boston helped create the most popular risk prediction model ever used in medicine. Unlike the Framingham model, however, which is widely used for clinical decision making, prediction models haven’t found much traction in nephrology. Some of the more notable models in nephrology include the Cattran model for progression of membranous nephropathy, the Mehran risk score for contrast-induced acute kidney injury (AKI) (arguably from the cardiovascular world), and more recently the Tangri/Kidney Failure Risk Equation (which has undergone multinational validation in a recent meta-analysis). Why have these and other models not been adopted by health professionals and policy makers in the nephrology world? The heterogeneity of chronic kidney disease (CKD), the static nature of models that do not account for change in risk over time, and uncertainty about their added value over clinical judgment are some of the reasons put forth. In addition, many prediction models, especially in the CKD population, as detailed by Tangri et al in a recent systematic review, lack data on crucial metrics of internal validity, and not all have been externally validated.
Two performance metrics commonly used to evaluate internal validity are discrimination and calibration. The former refers to the ability of the model to assign higher predicted probabilities to individuals who go on to experience the outcome of interest than to those who do not, whereas the latter reflects the overall agreement between predicted and observed outcomes. Additional metrics include the net reclassification index (NRI) and the integrated discrimination index (IDI), and more recently, net benefit (NB) developed using Markov modeling methods. All of these provide information about the clinical value of using these models. External validation refers to assessing the model’s performance in a population different from the one used to derive it. Needless to say, this description makes it apparent that developing and implementing these models requires sophisticated techniques and an expert team. The TRIPOD statement provides a checklist for reporting prediction model research. Indeed, prediction models can even be used to plan and conduct research, as detailed in a previous blog post.
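For readers curious what these two metrics look like in practice, here is a minimal sketch, using entirely hypothetical predicted risks and outcomes (not data from any of the models above): the c-statistic as a measure of discrimination, and a simple predicted-versus-observed comparison across risk tertiles as a crude calibration check.

```python
def c_statistic(pred, outcome):
    """Probability that a randomly chosen patient with the outcome receives
    a higher predicted risk than one without it (ties count as 0.5)."""
    events = [p for p, y in zip(pred, outcome) if y == 1]
    non_events = [p for p, y in zip(pred, outcome) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for n in non_events:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5
    return concordant / pairs

def calibration_by_tertile(pred, outcome):
    """Mean predicted risk vs. observed event rate within thirds of
    predicted risk -- a coarse stand-in for a calibration plot."""
    ranked = sorted(zip(pred, outcome))
    k = len(ranked) // 3
    groups = [ranked[:k], ranked[k:2 * k], ranked[2 * k:]]
    return [(sum(p for p, _ in g) / len(g),   # mean predicted risk
             sum(y for _, y in g) / len(g))   # observed event rate
            for g in groups]

# Hypothetical predicted risks and observed binary outcomes (1 = event)
pred = [0.05, 0.10, 0.15, 0.20, 0.40, 0.55, 0.70, 0.80, 0.90]
outcome = [0, 0, 0, 1, 0, 1, 1, 1, 1]

print(f"c-statistic: {c_statistic(pred, outcome):.2f}")   # prints 0.95
for mean_pred, obs_rate in calibration_by_tertile(pred, outcome):
    print(f"mean predicted {mean_pred:.2f} vs observed {obs_rate:.2f}")
```

A c-statistic of 0.5 is no better than chance and 1.0 is perfect ranking; a well-calibrated model shows predicted and observed rates tracking closely across the tertiles. Note that good discrimination does not guarantee good calibration, which is why both are reported.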
Looking beyond research, however, embedding a model into electronic medical records (EMRs) or apps can minimize clinical inertia and integrate it seamlessly into the clinical workflow. Novel biomarkers, such as suPAR and urinary EGF in CKD and Nephrocheck in AKI, also offer promise for improving existing models.
Swapnil Hiremath, MD
AJKD Blog Contributor