Abstract

Using blood results and administrative data to predict which patients are likely to die in hospital

Author: Vishal Nangalia

A significant proportion of hospitalised patients receive sub-optimal care, which leads to increased mortality. Two factors in this sub-optimal care are: 1) failure to recognise the seriousness of a patient's condition on first presentation to hospital; and 2) failure to recognise a patient's subsequent clinical deterioration while in hospital. Current systems for identifying a deteriorating patient rely primarily on the measurement and interpretation of a patient's vital signs; the measurements are error prone and their interpretation has poor accuracy for predicting in-hospital death. I explored whether an advanced machine learning (ML) approach using blood results and administrative data could better predict in-hospital mortality, both on admission and, subsequently, by revising this prediction after initial treatment.

I examined the mortality rate in the largest dataset of hospitalised patients yet collated, drawn from over fourteen UK National Health Service Trusts comprising more than twenty hospitals, collected between 2005 and 2015. All adult patients who were admitted to hospital and had at least one sodium and haemoglobin measurement were included. Using routinely collected variables (full blood count, serum electrolytes and albumin, administrative data, and key co-morbidities), two machine learning models were created by applying gradient boosted machines: one to predict in-hospital death on admission (ML-Admission) and one to predict it once a second set of blood results was available (ML-TwoTests). Positive predictive value (PPV) thresholds ranging from 1:5 (20%) to 1:3 (33%) were calculated for each model's ability to predict death.

Of the 1,874,325 admissions, 58,843 (3.14%) died in hospital (median length of stay (LoS): 2 days (IQR 0–6); median age: 60 years (IQR 41–75)). Mortality by method of admission (Emergency: 4.9%; Elective: 0.4%; Maternity: 0.07%; p<0.01) substantiates the view that patients admitted as an emergency are significantly more likely to die than those admitted via other routes. The ML-Admission model achieved an AUROC of 93% (log loss: 0.088). Its performance at the PPV thresholds was: PPV 1:5 (sensitivity 76.1%, specificity 90.2%) and PPV 1:3 (sensitivity 49.3%, specificity 96.8%). In other words, for every 1,000 admissions to hospital, approximately 31 die. If the PPV 1:5 threshold (20% mortality risk, i.e. more than six times the mean) were used to escalate a patient's care to a designated medical emergency team (MET), the referral rate would be 119 per 1,000 admissions, and these referrals would include 24 of the 31 patients likely to die.

Of the total, 43.28% of patients (811,268) remained in hospital for a second set of blood tests. They had a higher mean mortality (6.1%), median LoS of 6 days (IQR 3–13), and median age of 66 years (IQR 47–80). The ML-TwoTests model achieved an AUROC of 90.6% (log loss: 0.152). Its performance at the PPV thresholds was: PPV 1:5 (sensitivity 88.2%, specificity 77%) and PPV 1:3 (sensitivity 65%, specificity 91.5%). Thus, for every 1,000 remaining admissions, approximately 61 die. Using the PPV 1:3 threshold (33%, i.e. more than five times the mean), 120 patients would be referred to a MET, and these referrals would include 40 of the 61 patients likely to die.

I have demonstrated a universally applicable and accurate in-hospital mortality predictor for admitted patients.
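Note on the referral figures: they follow directly from the reported operating characteristics. Per 1,000 admissions, expected referrals ≈ deaths × sensitivity + survivors × (1 − specificity), and the deaths captured ≈ deaths × sensitivity. For ML-Admission at PPV 1:5, 31 × 0.761 + 969 × (1 − 0.902) ≈ 24 + 95 ≈ 119 referrals, of whom ≈24 are among the 31 who die; for ML-TwoTests at PPV 1:3, 61 × 0.65 + 939 × (1 − 0.915) ≈ 40 + 80 ≈ 120 referrals, of whom ≈40 are among the 61 who die.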
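Illustrative sketch: the abstract names the variable groups and gradient boosted machines but does not specify the modelling pipeline. The following minimal Python sketch (assuming scikit-learn and hypothetical column names such as "haemoglobin" and "died_in_hospital", which are not taken from the thesis) shows how a model of this kind could be fitted and evaluated against AUROC, log loss and a PPV-based operating point.

# Hypothetical sketch only: library choice, column names and split strategy
# are assumptions for illustration, not the thesis's actual pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score, log_loss, precision_recall_curve
from sklearn.model_selection import train_test_split

# Illustrative feature names mirroring the variable groups in the abstract:
# full blood count, serum electrolytes and albumin, administrative data,
# and key co-morbidities (all assumed to be numeric columns).
FEATURES = [
    "haemoglobin", "white_cell_count", "platelet_count",      # full blood count
    "sodium", "potassium", "urea", "creatinine", "albumin",   # electrolytes / albumin
    "age", "admission_method_code", "admission_weekday",      # administrative data
    "comorbidity_count",                                      # key co-morbidities
]

def fit_and_evaluate(df: pd.DataFrame, target_ppv: float = 0.20):
    """Fit an ML-Admission-style gradient boosted model and report its
    AUROC, log loss, and sensitivity/specificity at a target PPV."""
    X, y = df[FEATURES], df["died_in_hospital"]      # y: 1 = died in hospital
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    model = HistGradientBoostingClassifier()         # gradient boosted trees
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]              # predicted risk of death

    print(f"AUROC:    {roc_auc_score(y_te, p):.3f}")
    print(f"log loss: {log_loss(y_te, p):.3f}")

    # Choose the lowest risk cut-off whose precision (PPV) reaches the target,
    # e.g. 0.20 for the 1:5 escalation threshold described in the abstract.
    precision, recall, thresholds = precision_recall_curve(y_te, p)
    ok = np.where(precision[:-1] >= target_ppv)[0]
    if ok.size:
        i = ok[0]
        pred = p >= thresholds[i]
        sens = recall[i]
        spec = ((~pred) & (y_te.to_numpy() == 0)).sum() / (y_te == 0).sum()
        print(f"PPV >= {target_ppv:.0%} at risk >= {thresholds[i]:.3f}: "
              f"sensitivity {sens:.1%}, specificity {spec:.1%}")
    return model

The same routine could be run on the first blood test per admission (ML-Admission) or on the second test with the target PPV raised to 0.33 (ML-TwoTests); the actual thesis models may differ in implementation, features and validation strategy.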

Funding Acknowledgement: Medical Research Council