Decision Support & Hospital Monitoring
Author: Cedric Manlhiot
Coauthor(s): Cedric Manlhiot, Conall Morgan, Brigitte Mueller, Varsha Thakur, Vitro Guerra, Luc Mertens, Mark Friedberg, Fraser Golding, Mike Seed, Brian W. McCrindle, Lynne E. Nield
Status: Completed Work
Abstract Winner - Decision Support & Hospital Monitoring
Predictive analytics in routine clinical use: do they affect physicians' prognostication and management?
Background: Predictive analytics are increasingly being provided to physicians as part of decision support tools. However, there is still a paucity of information regarding how these additional data elements influence physicians' perception of the patient, their prognostication, and their care planning. Furthermore, no study has previously investigated the potential negative effects should the predictive analytics provided to physicians be incorrect.
Method: We used data from 127 fetuses with suspected left heart lesions on prenatal echocardiogram (across the entire spectrum of disease, from very mild disease to major heart defects) to generate standardized case report forms. From these, we generated 11 variations of each case report form (~1,200 "cases" total) containing either no predictions, accurate predictions, overly optimistic predictions (probability of positive outcomes +25%, +50%, +75%), or overly negative predictions (probability of negative outcomes +25%, +50%, +75%). Additional variations were created by differentially reporting and manipulating model accuracy. A total of 7 fetal cardiologists, unaware of the study objectives and the manipulation of predictions, received random subsets of the case report forms over a period of a few months (so that they would not realize that duplicate cases with different manipulations existed) and were asked to provide a prognosis for each patient along with management recommendations.
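The probability manipulation described above can be sketched as follows. The abstract does not specify the exact formula used to inflate predictions, so this is a hypothetical illustration assuming a relative shift clipped to the valid probability range; the function name and interpretation are assumptions, not the study's actual implementation:

```python
def shift_probability(p: float, shift: float) -> float:
    """Hypothetical sketch of the study's prediction manipulation:
    inflate a predicted probability by a relative amount (e.g. shift=0.25
    for the +25% variant), clipping the result to [0, 1]. The same
    transform applied to the probability of a positive outcome yields an
    overly optimistic prediction; applied to the probability of a
    negative outcome, an overly pessimistic one."""
    return min(1.0, max(0.0, p * (1.0 + shift)))

# A 40% predicted probability under the +25%, +50%, and +75% variants.
variants = [shift_probability(0.40, s) for s in (0.25, 0.50, 0.75)]
```

Note the clipping step: a large relative shift on an already-high probability (e.g. 0.80 at +75%) would otherwise exceed 1.0.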
Results: Physicians had lower overall performance than our prediction model (model AUC: 0.81; physician AUC: 0.66), and their accuracy improved marginally when they were provided with accurate predictions (AUC: 0.70). Without any predictive analytics, physicians generally underestimated the risk of negative outcomes (single ventricle/mortality) by 25% (predicted/actual ratio: 0.74); this underestimation increased to 45% when accurate predictions became available. The change in the degree of underestimation of the risk of negative outcomes was generally proportional to the percentage by which predictions were negatively or positively inflated. Importantly, adding predictions to the information provided to physicians also affected their intended management: overly positive predictions were associated with fewer recommendations to start prostaglandins at birth and with a lower recommended acuity of hospital at birth. The converse was true for overly negative predictions. Finally, physicians were more likely to ignore predictions with low reported accuracy.
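For readers unfamiliar with the performance measure cited, an AUC such as the 0.81 and 0.66 values above can be computed from outcome labels and predicted probabilities. A minimal sketch using the Mann-Whitney formulation (the probability that a randomly chosen positive case is ranked above a randomly chosen negative case), on hypothetical data rather than the study's:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney interpretation: fraction of
    (positive, negative) pairs where the positive case receives the
    higher score, with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 2 negative and 2 positive outcomes.
example_auc = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect discrimination, which is why the model's 0.81 versus the physicians' 0.66 represents a meaningful gap.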
Conclusions: This study highlights the importance of carefully considering the accuracy of predictive analytics before providing them to physicians as part of decision support tools, as these predictions affect both physicians' perception of risk and their intended management. Proper disclosure of model performance will be essential in this regard. Future research on how physicians interpret and consider predictive analytics is urgently needed as decision support tools increasingly make their way into the clinical arena.