NICE guidelines for the use of drug-eluting stents: how do we establish worth?
Dr M A de Belder, The James Cook University Hospital, Marton Road, Middlesbrough TS4 3BW, UK; mark.debelder{at}stees.nhs.uk

The introduction of new healthcare technologies requires an appropriate discourse between clinicians who identify problems to be overcome and industrial partners who provide potential solutions. New products go through a tight regulatory process before they can be widely used. Although this process is somewhat different for medical devices than for drugs, the principles are the same: demonstration of proof of principle and safety, and identification of patients most likely to benefit. However, their use nowadays depends as much on a demonstration of cost effectiveness as on clinical effectiveness. Cost-effectiveness analysis requires quantification of both clinical effect and overall costs of different treatments.

In the UK the balance between clinical effectiveness and cost effectiveness is determined by the National Institute for Health and Clinical Excellence (NICE). NICE has just published its updated technology appraisal following its latest review of the use of drug-eluting stents (DES), and reaffirms that these products should be used in coronary angioplasty in patients with long lesions (>15 mm), or in those with lesions in small vessels (<3 mm), because it is in these groups that the use of DES can be shown to be cost effective. It has, however, added a new statement, indicating that this cost effectiveness is dependent on the price differential between bare metal stents (BMS) and DES.1

The process by which NICE performs its appraisals includes a call for evidence from all interested stakeholders as well as the production of an assessment report (AR) and later, an appraisal consultation document (ACD, in effect the first draft of the guidance). The AR summarises the findings of an independent Assessment Group, who are commissioned by NICE to perform a systematic review of the published data and undertake an economic evaluation based on the clinical evidence. On this occasion, the Appraisals Committee has had to deal with significant disagreements between the external stakeholders and the economic analysts.

QUANTIFYING CLINICAL EFFECTIVENESS

It transpires that quantifying clinical effectiveness has not proved easy. All agree that currently available DES reduce restenosis. There has, though, been considerable debate about the magnitude of this effect and its impact on clinical outcomes. Cost effectiveness is best determined from the results of randomised controlled trials (RCTs), as the randomisation process should remove most sources of bias, and indeed the available RCTs were the major determinants of the 2003 appraisal of DES,2 which approved their use. There was considerable shock, then, when the ACD of the current reappraisal stated that DES were not cost effective and should not be used. What was the basis for this apparently complete U-turn?

The economic analysts had trouble including the results of many of the RCTs in this field on two grounds: (a) trial protocols that required follow-up angiograms (which themselves influence the outcomes measured) and (b) the fact that the case mix in the trials did not represent “real-life” practice. Although they have a point, the latter concern would exclude any further NICE appraisals on anything, as RCTs always result in a degree of case selection. As new stents appear we will need to continue with studies requiring follow-up angiography to determine their biological effect. However, contemporary studies, in general, overcome the impact of the “oculo-stenotic reflex” by mandating a clinical reason for additional revascularisation, and comparative studies of different stents often use clinical follow-up alone. Overall, though, on this occasion the RCT results have not had their usual dominant position in the cost-effectiveness analysis. Having rejected the results of RCTs, the analysts preferred to use “real-life” registries and just one randomised trial to obtain “realistic” rates of the clinical outcomes of interest. This approach is as open to criticism as the rejection of most of the RCTs, partly because registries hardly ever capture sufficient information during follow-up to reflect the true costs of treatment, their results are not as carefully monitored as those of RCTs, and they contain an inherent selection bias. Completeness of data collection for variables influencing costs is fundamental to cost-effectiveness analysis. Where it is lacking, health economists resort to modelling exercises, using best-guess estimates in their calculations. In addition, the effect of DES on certain subsets of lesions (eg, in-stent restenosis) may be lost in this sort of analysis.

Apart from the sources of data, another debate regarded the types and rates of end points to be included in the analysis, as well as the duration of follow-up. An overview of the RCTs to date suggests that within the confines of their protocols, there is no major difference in rates of death or myocardial infarction up to 4 years between BMS and DES. An analysis of some registries, however, suggests that when more complex cases are taken on, there might be an advantage of DES in this regard.3 Although such data became available during the appraisal process, the latest registries were not taken into account. It is also of note that where there was perceived to be no significant difference in certain end points, the costs of such end points were not considered in the analysis. Cost-effectiveness analysis should compare the differences in total absolute costs and absolute outcomes of two different treatments. Excluding some of the costs on this basis will have an impact on the final result. This may be particularly important where several trials or risk-adjusted registries show a trend towards a difference in outcomes. Where a cost-effectiveness model moves away from the results of RCTs and relies more on registry data, then the assumptions made and numbers included in the model must be realistic and believable, and agreed both by economists and clinical advisors.

The economic analysis compared the use of DES versus BMS, because this had been asked for. Was this entirely appropriate? The availability of DES has allowed interventionalists to offer more patients a reasonable alternative to coronary artery bypass grafting (CABG). When should DES be compared with BMS and when should they be compared with CABG? Should cost effectiveness encompass a more complex analysis comparing different groups of patients receiving DES with different comparator treatments depending on the complexity of the clinical and angiographic criteria of these subsets of patients? If so, how should this be done?

QUANTIFYING COST EFFECTIVENESS

Given the apparent difficulties in establishing the impact of DES on clinical outcomes, it is not surprising that determining cost effectiveness is also complex. Incremental cost-effectiveness ratios (ICERs) are the difference in cost between one treatment and another divided by the difference in their clinical effect. In this regard, “effect” is usually determined by quality-adjusted life years (QALYs), which are usually calculated using a number of assumptions (given the lack of information provided by many studies). The models by necessity use “guestimates” rather than hard facts. QALYs are affected much more by lives saved than by quality of life improved, and where a mortality advantage has not been proven, price differentials have to be relatively low to establish cost effectiveness. Moreover, assumptions in determining the costs of treatment are often crude and significant elements may not be included (eg, wider social and economic costs, and many aspects of continuing healthcare costs). These analyses are very sensitive to what is put into them. In addition, there are political moves in the UK to change the threshold at which a new treatment is considered acceptable from £30 000 per QALY to less than £20 000 per QALY.4 5
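As a rough illustration of the ICER arithmetic (all figures are invented for the purpose of the sketch, not drawn from the appraisal), the calculation is simply a ratio of cost and QALY differences:

```python
# Hypothetical ICER calculation; all figures are invented for illustration.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Suppose a DES episode costs £400 more than a BMS episode and yields
# 0.02 extra QALYs per patient:
ratio = icer(cost_new=4400, cost_old=4000, qaly_new=1.52, qaly_old=1.50)
print(f"£{ratio:,.0f} per QALY")  # £20,000 per QALY
```

The sensitivity the editorial describes is visible here: halving the assumed QALY gain doubles the ICER, which is why small changes to modelled inputs can move a product across the acceptability threshold.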

Although there might be difficulty in differentiating the overall impact on future costs between two treatments, surely it is easy enough to include the costs of the actual treatments themselves? Actually, this also is difficult! Should the list price be included, or an average of locally negotiated prices (and if so, at what time point)? Prices of new technologies appear to be very sensitive to market forces and are rapidly driven down by competition. As regards coronary stent technology, one must remember that in the UK BMS cost approximately £1500 when first introduced, compared with current prices for DES, now usually less than £600. Moreover, the current price of BMS (usually less than £300) would almost certainly not have been achieved without the development of the DES market. Of note, percutaneous coronary intervention (PCI) reference costs in the NHS have stayed remarkably constant over time in spite of the shift from the use of BMS to DES, so the NHS is already getting good value for money. Not all developments produce a net increase in cost per individual treatment. We also know that we cannot lump all BMS or all DES together, as the clinical results for different sorts of stents can vary significantly. Should we therefore be doing separate cost-effectiveness analyses for individual stents?

In short, cost-effectiveness analysis is an inexact science. It is therefore not surprising that a number of such analyses of DES come up with different ICERs for these products and therefore different conclusions about whether or not they are cost effective.1 6 7 The situation is made more complex because of the different health systems across the world, and so a product might be determined to be cost effective in one country but not in another. On a parochial level, it would be iniquitous if DES were thought to be cost effective in Scotland, for example, but not in England.

Given all these difficulties, appraisal of a technology cannot depend only on the results of cost-effectiveness analysis.8 The details of the debates that have raged about this particular appraisal by NICE have been available over the past year on their website. Needless to say, the fact that the initial 2003 guidance supporting the use of DES was first to be replaced by guidance that they should not be used at all, only to be converted at the last minute into guidance more or less maintaining the status quo, does lead to the raising of an eyebrow.

The explanation lies in the weight given to the commissioned economic analysis in the AR, which concluded that DES were not cost effective. The final guidance differs because the Appraisal Committee “did not accept all the parameters and assumptions used in the commissioned economic model” (so why did they produce the ACD in the way they did?). The problems raised by the various stakeholders that subsequently influenced the final decision included a concern about the different methodologies used in sequential NICE appraisals; the end points to be used; the rates of these end points with the different treatments; the time period for follow-up; the duration of clopidogrel treatment considered as an additional cost in different subsets of patients; the waiting times for repeat procedures; the lack of important variables that might have been factored into the model; the differential evidence base for the various products; the prices to be used; the down-playing of RCT results; the overemphasis on a local audit to determine likely rates of outcomes; and the weight given to the commissioned economic model versus other evidence. Most commentators felt that the economic analysis commissioned by NICE significantly underestimated the clinical benefits of DES and overestimated the price differential between DES and BMS.

Some judge that the current guidance is a victory for common sense. Others remain angry that the process has disregarded several years’ worth of well-conducted RCTs and that we are no further forward than in 2003, except that a process akin to price fixing has been established. The latter aspect would be a new departure for NICE and actually goes beyond its own remit, stated in its own methods guide (“The Committee is not able to make recommendations on the pricing of technologies to the NHS”).9 It is difficult to understand the rejection of diabetes as an independent factor to be used in the selection of a DES as well as the ignoring of evidence for the treatment of in-stent restenosis and chronic total occlusions. It is also inherently flawed to provide a specific allowable price differential between BMS and DES, without determining the actual prices being paid for BMS (as ICERs are generated from absolute numbers). A better method would be to state the price premium for a range of BMS prices that establish the technology as cost effective within NICE’s stated range of £20 000–£30 000 per QALY. Although most clinicians have agreed that DES should be used preferentially in those patients at highest risk of restenosis, and are probably not cost effective for all-comers based on the costs we have paid over the past few years, it is possible (given further market-driven changes to stent costs) that DES will become a dominant technology based on both clinical and cost effectiveness.
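The suggested alternative, stating, for a range of BMS prices, the premium at which DES remain cost effective within NICE's threshold range, can be sketched in a deliberately simple model (every input below is a hypothetical assumption, not a figure from the appraisal):

```python
# Sketch of the suggested approach: the maximum DES price premium consistent
# with a willingness-to-pay threshold. All inputs are hypothetical.
def max_premium(threshold_per_qaly, qaly_gain, downstream_savings=0.0):
    """Largest DES-minus-BMS price difference keeping the ICER at or below
    the threshold, ie (premium - downstream_savings) / qaly_gain <= threshold."""
    return threshold_per_qaly * qaly_gain + downstream_savings

premium = max_premium(threshold_per_qaly=20000, qaly_gain=0.01,
                      downstream_savings=100)  # eg avoided repeat procedures
for bms_price in (200, 300, 400):
    # In this simple model the allowable premium is independent of the BMS
    # price, but the absolute DES price that remains cost effective is not --
    # which is the editorial's point about needing the actual BMS prices paid.
    print(f"BMS £{bms_price}: DES cost effective up to £{bms_price + premium:.0f}")
```

A full model would of course let the QALY gain and downstream savings vary by patient subset (long lesions, small vessels, diabetes), producing a table of allowable premiums rather than a single number.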

As stakeholders, the British Cardiovascular Society (BCS) and the British Cardiovascular Intervention Society (BCIS) elected not to appeal. However, others activated the appeals process. Some stakeholders, although not formally appealing, went to the lengths of preparing appeals documents, but decided instead to ask for a number of factual errors to be considered. Grounds for the appeal included the perception of an inappropriate evaluation of the submitted evidence, factual errors, errors in process and the belief that NICE had on this occasion overstepped its powers. The appeal has been rejected.

This has been a long and arduous process. If nothing else, it demonstrates how difficult it is to determine the worth of a product, especially when the costs of products and the evidence base shift so rapidly. However complex this is, clinicians and manufacturers have to understand the economic factors but, equally importantly, economic analysts have to understand the clinical factors involved if we are all to engage constructively with the process. Although clinicians concentrate more on clinical issues than the price of stents, we cannot avoid getting involved with this process when faced with a guideline that might have precluded us from using effective technologies. Future research requires appropriate trial design with an emphasis on clinical outcomes and incorporating an economic analysis (accepting that the results will differ over time as prices change), or at the very least must collect and report the relevant data for these analyses to be conducted. Although complex, there are a number of guiding principles, but we remain uncertain about whether the sands are shifting and whether NICE (a) is considering a change to its processes and (b) is entering a phase when it is deliberately exploring value-based pricing.4 In this regard the BCIS and the BCS have sought a meeting with NICE to explore these concerns further and to determine the ground rules for future appraisals. This is relevant for the entire cardiology community. From an interventional cardiologist’s point of view, we certainly need to have an agreed understanding of process, given that NICE will consider a further review of DES in April 2009.


Footnotes

  • Competing interests: MAdB has received travel grants and research funds from, and has sat on advisory boards of, some manufacturers of drug-eluting stents.
