The death of administrative data for benchmarking cardiothoracic mortality?
D Pagano1,2, C P Gale3,4

  1. Clinical Director, Quality and Outcome Research Unit, University Hospital Birmingham, Birmingham, UK
  2. School of Clinical and Experimental Medicine, University of Birmingham, Birmingham, UK
  3. Division of Epidemiology and Biostatistics, Leeds Institute of Genetics Health and Therapeutics, University of Leeds, Leeds, UK
  4. Department of Cardiology, York Teaching Hospital NHS Foundation Trust, York, UK

  Correspondence to Dr Chris P Gale, Division of Epidemiology and Biostatistics, University of Leeds, Level 8, Worsley Building, Clarendon Way, West Yorkshire, Leeds LS2 9JT, UK; c.p.gale{at}


The decision to recommend surgical treatment for patients with cardiac surgical disease is based on a multidisciplinary evaluation of the intended benefits and inherent risks of the proposed treatment.1 This is particularly important for patients of advancing age, with multiple comorbidities, who receive complex interventions. Such patients have higher operative risk and, increasingly, form the routine referral base for cardiothoracic operations. Subjective assessment of a patient's clinical risk, undertaken in isolation, is no longer acceptable—objective evaluation is highly recommended, allowing patients and surgeons to make a more informed decision, and giving commissioners, regulators and researchers a mechanism to benchmark standards of access to and outcomes of care.2

It is, therefore, essential that reliable and valid tools are established which predict outcomes after surgery. Higher risk may then be quantified and, where appropriate, patients offered alternative evidence-based therapies. This is relevant to coronary artery and aortic valve disease interventions where, in parallel with established cardiothoracic operations, provider volume-outcome associations are under close scrutiny and some cardiac devices may, in future, be ‘commissioned through evaluation’. Specifically, between- and within-centre surgical outcomes are regularly monitored and compared as part of institutional, national and international quality improvement programmes.2 ,3 Thus, case-mix adjustment that carefully models the full spectrum of baseline patient risk is central to the identification of ‘outlier’ status and the evaluation of the quality of care of operative interventions.4

Siregar et al5 use clinical and administrative data from the majority of hospitals in The Netherlands performing cardiac surgery to report the differences in data quality and model performance indices with and without model recalibration. Clinical data are sourced from their National Cardiac Surgical Registry, analogous to the Society for Cardiothoracic Surgery in Great Britain and Ireland (SCTS) registry, and administrative data from the Dutch administrative database, analogous to the UK Hospital Episode Statistics (HES) database.2 ,3 For the former they use the logistic EuroSCORE and, for the latter, the Hospital Standardised Mortality Ratio (HSMR) derived using automated modelling techniques. Both approaches use indirect standardisation, and the fitted models are compared using the C statistic (discriminative performance) and the Brier score (calibration). The authors suggest that, despite the limitations of their study (incomplete case ascertainment, imperfect linkage of the data sources and the use of inhospital mortality rather than longer-term outcomes), there was evidence to support the notion that administrative data are inferior to clinical data with regard to data coding. They inform us that models based on Dutch administrative data were less well calibrated (probably because of inferior discrimination due to the lack of adequate predictors) and that this affected how ‘outlying’ hospitals were identified. They conclude that caution must be exercised in interpreting analyses that use administrative data to benchmark cardiothoracic inhospital mortality rates at centres in The Netherlands.
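For readers less familiar with these performance indices, the quantities can be sketched in a few lines of code. The predicted risks and outcomes below are invented toy values, not data from the study; under that assumption, the functions show how the C statistic, the Brier score and an indirectly standardised mortality ratio are computed from a model's predicted risks.

```python
# Toy data: predicted risks of death from a model, and observed outcomes
# (1 = died, 0 = survived). All values are invented for illustration.
predicted = [0.02, 0.05, 0.10, 0.20, 0.40, 0.08, 0.03, 0.15]
observed  = [0,    0,    0,    1,    1,    0,    0,    0]

def c_statistic(p, y):
    """Probability that a randomly chosen death received a higher
    predicted risk than a randomly chosen survivor (ties count 0.5)."""
    pairs = concordant = 0.0
    for pi, yi in zip(p, y):
        for pj, yj in zip(p, y):
            if yi == 1 and yj == 0:
                pairs += 1
                if pi > pj:
                    concordant += 1
                elif pi == pj:
                    concordant += 0.5
    return concordant / pairs

def brier_score(p, y):
    """Mean squared difference between predicted risk and outcome."""
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(p)

def smr(p, y):
    """Indirectly standardised mortality ratio: observed deaths divided
    by the deaths expected under the risk model."""
    return sum(y) / sum(p)
```

An SMR above 1 indicates more deaths than the model expected; whether that excess reflects poor care, chance or inadequate case-mix adjustment is exactly the question at issue.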

The avowal that administrative data are not appropriate for the evaluation of clinical care is not novel. There are several reports of the limitations of administrative data for health services research.6–8 Some arguments are academically justified; Mohammed et al reported that the Dr Foster Intelligence Unit HSMR, derived from the Charlson comorbidity index and emergency admissions, was unsafe and methodologically flawed.9 Typically, administrative data are used for non-clinical secondary purposes, and their application to the evaluation of the quality of highly specialised clinical care is an extrapolation of their intended use.10 They also suffer issues of data quality, which varies among institutions. Indeed, in the presence of an established and comprehensive disease-specific clinical registry, administrative data would not be the preferred option for calculating specialist treatment-specific standardised mortality ratios. For the National Health Service in England, HES data are collected for all healthcare episodes for an individual patient across all the acute trusts.3 Potentially, this provides a huge resource for the investigation of clinical services and the quantification of non-fatal cardiovascular outcomes.10 However, the data and methods must be fit for the study of the clinical condition. To date, statistical techniques for the derivation of the HSMR have been limited in their predictive ability, and more sophisticated computing analyses, which take into account additional patient factors and adjust for unexplained statistical variation (a phenomenon known as overdispersion), improve model discrimination and calibration. By relying on a stepwise algorithm, Siregar et al also failed to include variables, such as previous emergency hospitalisation, that are powerful prognostic indicators in administrative datasets.
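The overdispersion adjustment mentioned above can be illustrated with a simplified sketch of a funnel-plot-style comparison (in the spirit of Spiegelhalter's method, but omitting the winsorisation of z scores used in practice). The hospital figures are invented for illustration.

```python
import math

# Hypothetical hospital-level data: (name, expected deaths from a risk
# model, observed deaths). All numbers are invented for illustration.
hospitals = [
    ("A", 120.0, 131), ("B", 80.0, 70), ("C", 200.0, 240),
    ("D", 150.0, 149), ("E", 60.0, 75), ("F", 90.0, 82),
]

# Z score for each hospital on the mortality scale
# (Poisson approximation: variance of observed deaths ~ expected deaths).
z = [(obs - exp) / math.sqrt(exp) for _, exp, obs in hospitals]

# Multiplicative overdispersion factor: mean squared z score.
# If phi > 1 there is more between-hospital variation than the model
# explains, and the control limits are widened by sqrt(phi).
phi = max(1.0, sum(zi * zi for zi in z) / len(z))

limit = 1.96 * math.sqrt(phi)
outliers = [name for (name, _, _), zi in zip(hospitals, z) if abs(zi) > limit]
```

With these toy figures, hospital C would be flagged against naive 95% limits, but no hospital exceeds the limits once they are widened by the overdispersion factor, showing how the adjustment guards against over-calling outliers.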

In contrast, clinical databases are designed by healthcare professionals to facilitate quality improvement, to enable epidemiological and health services research and, more recently, to support commissioning through evaluation. For cardiovascular disease there are a number of national registries which have been used for high impact research.11 In cardiac surgery, several units and many national specialist societies collect specific data for individual patients. These include details of the patient's clinical condition in relation to cardiac surgery, along with postoperative complications and discharge status. Some clinical registries, such as the cardiothoracic datasets, are treatment databases, whereas others, such as the Myocardial Ischaemia National Audit Project, are disease specific.12 The latter allows analyses of the comparative effectiveness of treatment versus no treatment, whereas the former is limited to comparisons of alternative treatments. Typically, the data fields are broad, but explicit to the clinical disease or treatment for which the databases are designed. Consequently, clinical registries may suffer imperfect case ascertainment (especially treatment-based registries, because they do not record the rates of the underlying disease) and missing and invalid data, some of which are missing by design and some through failure to record data accurately at source.13 ,14 In addition, clinical registries do not usually hold information about healthcare episodes other than the specific one for which they were designed, or about longer-term major adverse cerebrovascular and cardiovascular events.

In 2004, Lilford et al15 proclaimed that variation in hospital outcomes was a composite of data quality/definitions, case-mix, healthcare quality and chance. That is, case-mix is only one of the factors associated with the quantification of healthcare outcomes; outcomes may also be influenced by unmeasured or unrecorded factors which carry significant prognostic impact. The consequences of not following this analytical remit were recently highlighted by the premature reporting of UK centre-specific paediatric cardiothoracic mortality rates.16 In addition, outlier status is affected by the type of data (and their quality) and by the statistical methods used to analyse them.17 Siregar et al also show us that model recalibration (through re-estimating the intercept and coefficients and the inclusion of first order interactions) affected the mean standardised mortality ratio but, more importantly, changed which hospitals were identified as ‘outliers’. Calibration drift has other important repercussions.18 It can lead to overprediction of the risk of mortality, and is more pronounced for patients with a higher risk profile. In turn, this could lead to patients being denied a surgical treatment which in reality carries a lower (observed) risk than the (expected) risk calculated from the model. A model that overpredicts risk, when used as a benchmarking tool, may falsely reassure us about clinical performance.19 For example, in some patients the choice between conventional surgery and novel treatment techniques, such as transcatheter aortic valve implantation, relies strongly on accurate risk prediction of surgical outcome. Finally, the authors did not report data missingness, which is known to be associated with mortality—though in a study using Myocardial Ischaemia National Audit Project data this did not affect the distribution of hospitals showing special cause variation.13
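Recalibration of the sort described, re-estimating a model's intercept and slope against local data, can be sketched as follows. The predicted risks and outcomes are invented, and plain gradient descent on the log-loss stands in for the maximum-likelihood fitting a real analysis would use.

```python
import math

# Hypothetical original model risks (e.g. output of a logistic risk score)
# and locally observed outcomes; all values are invented for illustration.
predicted = [0.05, 0.10, 0.20, 0.30, 0.08, 0.35, 0.40, 0.12]
observed  = [0,    0,    0,    1,    0,    0,    1,    0]

def logit(p):
    return math.log(p / (1 - p))

def recalibrate(p, y, lr=0.5, epochs=20000):
    """Re-estimate intercept a and slope b so that
    sigmoid(a + b * logit(p)) fits the local outcomes
    (gradient descent on the mean log-loss)."""
    a, b = 0.0, 1.0
    x = [logit(pi) for pi in p]
    n = len(p)
    for _ in range(epochs):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            pred = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (pred - yi) / n
            gb += (pred - yi) * xi / n
        a -= lr * ga
        b -= lr * gb
    return a, b

a, b = recalibrate(predicted, observed)
recalibrated = [1.0 / (1.0 + math.exp(-(a + b * logit(pi)))) for pi in predicted]

# Standardised mortality ratio (observed/expected deaths) before and after.
smr_before = sum(observed) / sum(predicted)
smr_after = sum(observed) / sum(recalibrated)
```

Because the refitted intercept forces expected deaths to track observed deaths, the recalibrated standardised mortality ratio moves towards 1; this is precisely why recalibration can change which hospitals appear to be outliers.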

What Siregar et al do is emphasise the critical role national electronic healthcare records play in comparative effectiveness research. This form of fundamental prognosis research is contemporary and a priority cardiovascular research area.20 ,21 Yet, in their desire to report their concerns with administrative data (preferring to promote clinical registries for provider comparison), they neglect to see that there may be a third option: a hybrid approach involving the linkage of the two types of national data.22 Until now, cardiothoracic risk models have focused on the prediction of inhospital mortality.19 While this is a pragmatic approach, capturing deaths that are likely to be related to the surgery, it has limitations. Different healthcare systems discharge patients at different stages of their recovery, in some cases to rehabilitation institutions. Figure 1 shows unpublished analyses from the Quality and Outcomes Research Unit in Birmingham, which used HES data linked to the Office for National Statistics death records and revealed high rates of early inhospital death (accelerated failure) alongside a constant low rate of attrition among patients discharged from hospital, some of whose deaths occurred early after discharge and may have been avoidable. The linkage of national clinical databases to administrative databases would provide details of treatment episodes and major adverse cerebrovascular and cardiovascular events over the full patient journey. This is important when alternative treatment strategies to conventional surgery are available and cardiac surgical interventions are reported to be associated with low operative mortality risk and good prognostic benefit.2

Figure 1

Histogram showing all-cause mortality rates up to 90 days following cardiac surgery performed in English National Health Service hospitals from 31 March 2008 through 1 April 2011. Red bars indicate inhospital death and blue bars out-of-hospital death. (Adapted from: Hospital Episode Statistics and the Office for National Statistics (Study Institutional Registration CAB-05663-13).)

Perhaps, then, the manuscript by Siregar et al will encourage readers to reflect on the future of our national cardiovascular outcomes data for the purposes of health services research. Administrative and clinical data sources are limited when considered in isolation. However, if pooled and linked to primary care and patient-reported outcomes, they could offer higher-resolution, longer-term, patient-centred data, and resurrect administrative data for benchmarking cardiothoracic mortality.




  • Contributors CPG and DP drafted, reviewed, amended and approved the manuscript.

  • Funding National Institute for Health Research.

  • Competing interests CPG is funded by the National Institute for Health Research (NIHR/CS/009/004) as a Clinician Scientist and Honorary Consultant Cardiologist.

  • Provenance and peer review Commissioned; internally peer reviewed.
