Sir,—It was with some dismay that we read the editorials on the problems and pitfalls of randomised controlled trials (RCTs) for evaluating new procedures in general1 and minimally invasive direct coronary artery bypass (MIDCAB) in particular.2 Many of the opinions offered in the articles arise from problems encountered through poor design, planning, and execution of RCTs rather than from the methodology itself.
The timing of trials of new technologies is problematic. The ideal moment is between standardisation of the new technique and its wide dissemination, but this is often a very brief moment given the understandable enthusiasm for the newest methods. The “tidal wave” of new technology and the tendency to follow the “evaluation bypass” route in its introduction to the health service were two of the problems the National Health Service research and development strategy was set up to tackle. It is doing so by providing funding for scientific evaluation in RCTs, and minimally invasive cardiac surgery is one of the current priorities for research under the health technology assessment programme.
Many technologies have been widely adopted and then later shown to be ineffective or harmful—for example, prophylactic antiarrhythmic drugs in acute myocardial infarction. Nor can we assume that less invasive or apparently more convenient techniques will be more acceptable to patients. The much praised trial comparing laparoscopic and open surgery for cholecystectomy has shown no difference between the two in terms of hospital stay, patient discomfort, return to work, and complications.3 Bonchek1 suggests that meta-analysis of large clinical series can substitute for RCTs, but bias in observational studies may be considerable. Based on observational evidence that β carotene prevents lung cancer, an RCT was undertaken and showed that, on the contrary, the risk of lung cancer was significantly greater in the group assigned to receive β carotene.4 If bias is not recognised, it may be magnified by incorporation into meta-analysis. In comparison, the potential bias from selection of suitable patients in RCTs is less misleading.
RCTs comparing surgical and medical management have particular problems; clinicians must see beyond the interests of their own specialty to ensure better overall patient management, based on good scientific evaluation of new practice. Objective criteria for patient selection are indeed important1 and crossover of patients from the old to the new treatment should be discouraged. The US multicentre trial of PLC Systems’ transmyocardial revascularisation laser treatment for patients with intractable angina was criticised by an FDA advisory committee for having too many “crossovers” from medical to surgical management and too brief a follow up.5 RCTs are costly and time consuming and should be undertaken only when they can provide definitive answers to questions of importance to patients and the health service.
We believe that the time is right to conduct the first MIDCAB trial. Although the technique is still evolving, for most centres the main indication is an isolated proximal stenosis of the left anterior descending artery, and a good comparison is with angioplasty, with or without stents, as clinically indicated in routine practice. Multidisciplinary collaboration is the lynchpin of the successful RCT, not just between different clinical specialties but also with crucial research professionals such as medical statisticians and health economists. In addition, it is desirable that such trials are multicentred to provide rapid and generalisable results. Major research grant institutions and medical journals could play a role in encouraging multicentre and multidisciplinary collaboration to complete trials according to the best scientific practice, as quickly as possible. Editorials should encourage, rather than undermine, the use of scientific methodology to evaluate new procedures.6
This letter was shown to Dr Bonchek, who replies as follows:
I am disappointed that such distinguished scientists as Sharples et al reacted with “dismay” to my editorial. I thought that I had adequately emphasised the core of my message: most influential RCTs have compared drugs with drugs, or operations with operations. In contrast, most trials that have compared evolving operations with drug treatment have had unique problems that impaired their validity. (The reader is asked to review the references in my editorial and particularly reference 8 for historical examples that substantiate these assertions.)
These correspondents accuse me of undermining the use of RCTs for evaluating new procedures in general. I plead innocent to the charge, because my essential message was that well timed and properly designed RCTs that compare new procedures with established ones prevent the waste of scarce resources on ill designed comparisons between apples and oranges (that is, between drugs and surgery). My assertions about the difficulty of selecting the proper time for trials of evolving procedures are echoed by the correspondents’ comment that the timing of trials of new technologies is problematic.
I am also accused of suggesting that meta-analysis of large clinical series can substitute for RCTs. In fact, the summary of my editorial defined the specific circumstances in which I felt that RCTs are likely to help and those in which they are not, after which I said that “meta-analysis of large clinical series can substitute for those randomised studies that are unlikely to be helpful.”
Actually, Sharples et al inadvertently substantiate my assertions. Their examples of useful RCTs were invariably comparisons between different drug regimens (antiarrhythmics in myocardial infarction and β carotene to prevent lung cancer), or between different operations (laparoscopic v open surgery for cholecystectomy). The RCT they propose, minimally invasive coronary bypass (MIDCAB) versus angioplasty of the left anterior descending coronary artery, is a comparison between procedures, a circumstance in which I agree that RCTs can be informative if they adhere to certain design criteria that I was careful to specify. I made no reference to MIDCAB in my editorial, and was unaware of the editorial by Izzat et al until it was published, but their point of view seems unexceptional to me, as it merely advises proceeding deliberately. They acknowledge that RCTs of MIDCAB “will certainly be necessary at some stage.” Surely, there is room for men of good will to disagree about the exact timing of such studies.
This letter was shown to Dr Izzat et al, who reply as follows:
We thank Dr Sharples and colleagues for their interest in our editorial. However, they seem to have misunderstood the purpose of our commentary. We are not against the concept of RCTs, but if these are to be useful to clinicians rather than to statisticians then they have to be generally applicable. The differences between a drug trial and one involving a procedure were clearly highlighted and discussed by Bonchek. It is obvious that the technicalities of the MIDCAB procedure are evolving rapidly, and if a trial is done in the very early stages, before the many technical problems have been overcome, then the procedure is likely to fare badly and be condemned. This is not the way to make progress. Surgeons and interventional cardiologists need time to develop procedures and to overcome the learning curve before submitting their technique to an RCT, especially if the comparison is with drug treatment. An RCT of the MIDCAB procedure will have to be done at some time, but the most important question is when.
Although Sharples et al feel that the time is right to conduct the first MIDCAB trial, the only reason for this appears to be that the indications for MIDCAB are agreed on. However, they admit that the technique is still evolving. We all agree on the indications for MIDCAB; that is not difficult. Deciding when the technique has developed sufficiently to be subjected to a trial is another, more difficult question. We agree with their other comments about multidisciplinary and multicentre trials, but they do not answer the difficult question: when is a technique ready to be subjected to an RCT? It cannot be in the early stages of development.