Abstract
Both versions of the EuroSCORE, the complex logistic and the simpler additive, have been used for comparing institutional performance, but they are no longer robust enough to be used for comparing individual surgeons. It is now time to tighten the standard and incorporate further data to obtain a more holistic picture of the quality of cardiac surgical care in the UK.
In April 2006, the Society for Cardiothoracic Surgery in Great Britain and Ireland, in conjunction with the Healthcare Commission, published cardiac surgical results for individual surgical units and, in some cases, for individual surgeons, on the Healthcare Commission’s website, using the logistic EuroSCORE (European System for Cardiac Operative Risk Evaluation) as the risk adjustment model.1 Publication of these data was well received by the media and the public, firstly because of their transparency and, secondly, because they indicated that all units and surgeons were performing within the acceptable limits defined by EuroSCORE.
However, in this issue of Heart, Bhatti et al2 have demonstrated that in the northwest of England, the EuroSCORE tends to overpredict operative mortality by a factor of 2. This raises the question of whether the EuroSCORE remains a good model for risk adjustment in cardiac surgery. The EuroSCORE was developed during the 1990s, with the aim of devising a simple and pragmatic risk adjustment model, based on easily collectable contemporary European data, that could be used both for preoperative risk assessment and for post-hoc comparison of surgical performance.3 EuroSCORE has two versions: a complex logistic version and a simpler additive version. Both have been shown in a number of publications to provide a good model for risk prediction in cardiac surgery at international4,5 and at hospital or unit level.6 The simultaneous publication of the article by Bhatti et al and the Healthcare Commission survival statistics raises two questions. Firstly, is the EuroSCORE still sufficiently contemporary to be credible? Secondly, is it robust enough for risk adjustment when comparing the performance of individual surgeons rather than hospitals?
Bhatti et al’s finding is supported by a reworking of the data underpinning the Healthcare Commission website and by the Australian national database in Victoria.7 Although there was emerging evidence in 2003 that the EuroSCORE was drifting towards overprediction,8 the magnitude of overperformance by UK surgeons could not have been predicted. We continued to apply the logistic EuroSCORE because it is the only validated European model, it is widely used in UK units and its use for this purpose had been agreed before the collection and merging of national data.9 The fact that EuroSCORE overpredicts mortality is a reflection of sustained improvements in cardiac care nationally. Similar national data from other European countries are not available.10
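To make the calibration point concrete, over-prediction of this kind is usually expressed as an observed-to-expected (O/E) mortality ratio: observed deaths divided by the sum of the model’s predicted probabilities. The sketch below uses purely hypothetical figures to show how an O/E ratio of about 0.5 corresponds to the two-fold over-prediction reported by Bhatti et al; it is an illustration of the calculation, not a re-analysis of their data.

```python
# Minimal sketch (illustrative data only): quantifying over-prediction as an
# observed-to-expected (O/E) mortality ratio. The predicted risks below are
# hypothetical logistic EuroSCORE outputs, not real patient data.

def observed_to_expected(predicted_risks, observed_deaths):
    """Return the O/E ratio: observed deaths divided by the sum of
    model-predicted probabilities (the 'expected' deaths)."""
    expected = sum(predicted_risks)
    return observed_deaths / expected

# Hypothetical unit: 1000 patients, mean predicted risk 6%, 30 observed deaths.
predicted = [0.06] * 1000          # model expects ~60 deaths
observed = 30                      # the unit actually loses 30 patients

ratio = observed_to_expected(predicted, observed)
print(f"O/E ratio = {ratio:.2f}")  # 0.50 -> model over-predicts by a factor of 2
```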
COMPARING SURGICAL OUTCOMES
So how does this relate to the use of EuroSCORE for comparison of individual surgeons’ performance? The most important predictors of outcome are operation type, urgency and age. Most units in the UK have broadly similar proportions of coronary and valve operations, but the variation in case mix is much greater between individual surgeons because of special interests such as off-pump coronary surgery, reparative mitral valve surgery, aortic surgery, and so forth. So when comparing units, the type of operation matters less in the risk adjustment hierarchy than when comparing individual surgeons. This is where EuroSCORE fails. In 2003 in the UK, the in-hospital mortalities for aortic valve replacement (3.2%), mitral valve replacement (5.4%), mitral valve repair (1.3%), combined aortic and mitral valve operations (4.3%) and combined mitral and tricuspid operations (9.1%) rose to 6.8%, 8.6%, 6.2%, 11.8% and 14%, respectively, when combined with coronary surgery, yet in EuroSCORE the influence of each of these operations on the final score is the same. So with similar age and comorbidities, a triple valve replacement with coronary grafts scores the same as an atrial septal defect repair.
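A minimal sketch of that failure follows. In the additive EuroSCORE, procedure complexity enters only through a single "other than isolated CABG" weighting; the two-point weight used here follows the published additive model but should be read as illustrative rather than as an authoritative implementation of the score.

```python
# Sketch of why the additive EuroSCORE cannot discriminate between operations:
# any procedure other than isolated coronary surgery adds the same fixed weight.
# The weight is illustrative, taken from the published additive model.

OTHER_THAN_ISOLATED_CABG = 2   # one flat weight for any non-CABG-only procedure

def operative_component(procedures):
    """Return the operation-related contribution to the additive score.
    `procedures` is a set of strings naming what is done at operation."""
    if procedures == {"CABG"}:
        return 0                          # isolated coronary surgery adds nothing
    return OTHER_THAN_ISOLATED_CABG       # anything else adds the same fixed weight

# Two patients with identical age and comorbidity profiles:
asd_repair = operative_component({"ASD repair"})
triple_valve_plus_grafts = operative_component(
    {"aortic valve", "mitral valve", "tricuspid valve", "CABG"})

print(asd_repair, triple_valve_plus_grafts)   # 2 2 -> the score cannot tell them apart
```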
Thus, a new algorithm is clearly required to enable comparison of performance in the UK. There are several options. The first is simply to recalibrate EuroSCORE by dividing by two. While attractive in its simplicity, this ignores the changing impact of different risk factors over time11,12 and the inability of EuroSCORE to provide a discriminatory weighting for different operations. The second option is to use an international model, and this is almost certainly where we will end up. However, the most widely used models remain proprietary13 or would require a change in our dataset.14 Such an approach would also raise issues of confidence in international data quality. The final and most pragmatic option is to refine and employ operation-specific risk models based on contemporary UK data, such as the UK Bayes model for coronary surgery8 and a generic valve model.15 These could be updated on an annual basis, as the New York registry is.
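For the first option, recalibration need not mean crudely halving every prediction. A standard alternative, sketched below with hypothetical numbers, is to shift the model’s intercept on the logit scale until expected deaths match observed deaths in contemporary local data, leaving the relative weighting of risk factors untouched. This is offered purely as an illustration of the technique, not as the approach adopted nationally.

```python
# Sketch of intercept recalibration under the assumption that we keep the
# existing risk factors and only shift the model's baseline risk.
# All numbers are hypothetical.

import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def recalibrate_intercept(predicted_risks, observed_deaths, tol=1e-6):
    """Find the intercept shift d such that sum(inv_logit(logit(p) + d))
    equals the observed number of deaths (simple bisection search)."""
    lo, hi = -5.0, 5.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        expected = sum(inv_logit(logit(p) + mid) for p in predicted_risks)
        if expected > observed_deaths:
            hi = mid          # shift too large: model still over-predicts
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical cohort where the old model over-predicts roughly two-fold:
predicted = [0.02, 0.04, 0.06, 0.08, 0.10] * 200   # 1000 patients, ~60 expected
shift = recalibrate_intercept(predicted, observed_deaths=30)
recalibrated = [inv_logit(logit(p) + shift) for p in predicted]
print(f"shift = {shift:.2f}, new expected deaths = {sum(recalibrated):.1f}")
```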
The approach to risk adjustment in terms of predictive accuracy, discrimination and frequency of recalibration is determined by what we are trying to achieve. There are three models encapsulating the reasons for publishing this sort of data at an institutional or an individual level.16
The first is a public accountability model, which sees public disclosure as a public responsibility irrespective of the consequences: release of the data, in conjunction with appropriate education and subsequent informed debate, will help clarify important societal issues and also improve standards.
The second is a market-oriented model, which assumes that the provision of comparative data will allow informed and willing consumers to drive quality improvement through selective purchasing or utilisation behaviour. To make valid and fair comparisons the data need to be standardised.
Finally, a professionally driven model assumes healthcare professionals have a desire to monitor and improve standards. This is generally motivated by a desire to retain autonomy in the face of greater governmental regulation. Providing data on variations aids this process, and publication increases provider responsiveness. The data act as a catalyst to identify and solve problems, and publication turns up the heat.
These models are not mutually exclusive, and the publication of cardiac surgical results in the UK has been driven to a variable extent by all three models. However, the choice of logistic EuroSCORE and the mode of presentation on the Healthcare Commission website was primarily to demonstrate compliance with a widely accepted European standard and not to provide graduated, categorical data to facilitate ranking of surgeons under the guise of “patient choice”.
NEW HORIZONS
The venture has been a success. It is now time for us to tighten the standard and add further data on hospital facilities, processes and other outcomes relating to morbidity, such as resternotomy and length of stay, in order to paint a more holistic picture of cardiac surgery in the UK.
What does this signal for cardiology and other specialties? The New York State Department of Health has published an operator-specific angioplasty report since 1995.14 The Chief Medical Officer’s consultation on revalidation,17 coupled with the desire of the Department of Health in England to see publication of unit-specific, specialty-based outcomes to underpin patient choice, will bring urgent pressure to bear on other interventional specialties, including cardiology, to identify useful outcome measures that can be risk adjusted.
REFERENCES
Footnotes
- Published Online First 27 September 2006
- Competing interests: BEK is President of the Society for Cardiothoracic Surgery in Great Britain and Ireland, and a Commissioner on the Healthcare Commission.