
Cardiovascular professional societies fall short in providing impartial, clear and evidence-based guidelines
Catherine M Otto1, Peter J Kudenchuk2, David E Newby3

1 Division of Cardiology, University of Washington, Seattle, Washington, USA
2 Department of Medicine/Cardiology, University of Washington, Seattle, Washington, USA
3 Centre for Cardiovascular Sciences, University of Edinburgh, Edinburgh, UK

Correspondence to Professor Catherine M Otto, Division of Cardiology, University of Washington, Seattle, WA 98195, USA; cmotto@uw.edu


Clinical guidelines play an increasingly important role in the care of patients with cardiovascular disease. Approaches to guideline development reflect the need to integrate a complex and ever-expanding evidence base with new treatment options and clinical expertise to formulate recommendations that can then be implemented both by individual healthcare providers and across healthcare systems. All guidelines for a specific disease condition start from the same evidence base, yet guidelines are developed in many different ways, by many different organisations, often addressing the same or overlapping types of cardiovascular disease, typically leading to at least subtle (and sometimes major) divergences in the resultant recommendations.

Professional society recommendations, such as those generated by the European Society of Cardiology (ESC) and by the American Heart Association/American College of Cardiology (AHA/ACC), predominate, but many geographic regions have their own guidelines, tailoring recommendations to specific regional requirements.1 Government agencies and insurance providers also generate guidelines, either directly in published documents or indirectly by restricting reimbursement. Online medical textbooks, such as UpToDate, attempt to integrate and reconcile recommendations from multiple guideline sources, filling any gaps in clinical management with recommendations based on clinical expertise alone. Another approach is to convene an independent group of experts to address new practice-changing evidence rapidly, focusing on a specific question, as done by the BMJ Rapid Recommendations or the MAGIC Evidence Ecosystem Foundation.2 3

Why are there so many guidelines? What are the limitations of our current approach? How can we optimise guideline development to improve care of patients with cardiovascular disease?

All guidelines share two common purposes: first, to review, assess the quality of, summarise and interpret the published evidence base, and second, to provide clear recommendations for patient management. Other goals may differ between guidelines, such as balancing the good of the individual patient against population health, considerations of cost-effectiveness, the scope of the document and whether clinical expertise is used to address issues for which the scientific evidence base is insufficient. Guidelines also differ in the processes used to develop them, including the composition of the writing committee, input from stakeholders, management of potential conflicts of interest and the process for reviewing evidence and developing recommendations. International checklists that summarise best practices for guideline development are available, but current guideline development and publication often fail to meet these standards (figure 1).2 4

Figure 1

Visual summary of reporting criteria for clinical practice guidelines as detailed in the Appraisal of Guidelines for Research and Evaluation (AGREE) checklist.4

In this issue of Heart, Garbi5 presents a detailed description of the National Institute for Health and Care Excellence (NICE) clinical guideline development process. NICE is an independent public body that provides evidence-based guidelines to inform care provided within the English National Health Service. This article provides a thorough and transparent narrative of the process for appointing an advisory committee, determining the scope of each guideline and reviewing the clinical evidence.

Compared with professional society guidelines, NICE gets it right on several fronts but also describes where opportunities lie for everyone to improve:

  1. Keep the cardiovascular specialists out of the room! Ensuring there are no direct financial or industry conflicts of interest is not enough; content experts are subject to implicit bias in interpretation of data, personal clinical practice preferences and academic conflicts of interest. Although experts help frame the questions and have an opportunity to comment later, the evidence is systematically evaluated and analysed by independent experts.

  2. All types of healthcare professionals and other stakeholders are involved in the process (patients, administrators, advocates and members of the public), with support from project managers and information specialists, as well as experts in systematic reviews and health economics. The chair facilitates careful consideration of the data and ensures balanced participation of all committee members; the ultimate decisions are made on a level playing field.

  3. There are strict criteria about how the evidence is systematically brought together, analysed and judged. Professional society guidelines are much less strict and often use the ‘we know the evidence’ argument. Typically, there is no description of the specific processes or methodology for systematic evidence evaluation.

  4. NICE guidance does not try to keep everyone happy, instead making recommendations based on clinical efficacy and cost-effectiveness. NICE guidance aims to reduce health inequalities, promoting the health of the population, not just the individual, and is intended for a wide audience, not just healthcare professionals, using language accessible to patients and families.

  5. Unlike the somewhat secretive process used by professional societies, the NICE process is extremely transparent, and everyone is allowed to see how the recommendations were derived and the conclusions made.

The NICE guideline development process should give us pause to consider more thoughtfully the scope of our professional society guidelines, the composition of our writing committees, the rigour of our evidence review and analysis, our approaches to determining the quality of the evidence and strength of a recommendation, and the intended audience for the recommendations.

A particularly challenging aspect of guideline development is determining the quality (or level) of the evidence underlying each recommendation. The most stringent approach to evaluating clinical evidence is the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system.6 The quality of evidence is rated as high, moderate, low or very low, with explicit criteria used to upgrade or downgrade the rating. Framing the evidence evaluation in a patient, intervention, comparator and outcome (PICO) format ensures that the evidence rating remains linked to the specific patient population and intervention in comparison with alternative treatments. ESC and AHA/ACC guidelines use a similar but less strict approach with three levels of evidence: A (multiple randomised controlled trials (RCTs) or meta-analysis), B (large non-randomised studies or a single RCT) and C (expert consensus, small studies, registries and retrospective data). However, the mere existence of a meta-analysis does not mean the quality, robustness and strength of the evidence are high. An alternative hybrid approach is used in the AHA Guidelines for Emergency Care, which take the recommendations of the International Liaison Committee on Resuscitation (ILCOR), developed using GRADE, and adapt them to the AHA levels of evidence.7
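To make these rating schemes concrete, the sketch below models a PICO-framed question and the two parallel evidence ratings as simple Python data structures. It is purely illustrative: the class and field names are our own, not part of any GRADE, NICE or AHA/ACC tooling, and the example ratings are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class GradeCertainty(Enum):
    """The four GRADE certainty-of-evidence ratings."""
    HIGH = "high"
    MODERATE = "moderate"
    LOW = "low"
    VERY_LOW = "very low"


class AhaAccLevel(Enum):
    """AHA/ACC and ESC levels of evidence, as summarised in the text."""
    A = "multiple RCTs or meta-analysis"
    B = "large non-randomised studies or a single RCT"
    C = "expert consensus, small studies, registries, retrospective data"


@dataclass
class PicoQuestion:
    """A PICO-framed question ties the evidence rating to a specific
    population, intervention, comparator and outcome."""
    patient: str
    intervention: str
    comparator: str
    outcome: str


# Hypothetical example: evidence drawn from a meta-analysis may earn
# level A on the AHA/ACC scale while GRADE rates its certainty as only
# moderate once risk of bias and imprecision are taken into account.
example = {
    "question": PicoQuestion(
        patient="a defined patient population",
        intervention="the treatment under review",
        comparator="the current standard of care",
        outcome="a prespecified clinical outcome",
    ),
    "grade_certainty": GradeCertainty.MODERATE,
    "aha_acc_level": AhaAccLevel.A,
}
```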

Guidelines also use different scales to describe the strength of each recommendation. The GRADE approach provides a transparent process for moving from evidence to recommendations, with additional consideration of uncertainty in the balance between desirable and undesirable effects and of variability in values and preferences. GRADE recommendations are simply categorised as ‘strong’ or ‘weak’, similar to the NICE wording of ‘offer’ versus ‘consider’. AHA/ACC and ESC guidelines insert a class 2a recommendation between class 1 (strong) and class 2b (weak), plus the addition of class 3 recommendations not to do something.
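As a rough illustration only (the alignment is approximate, not an official equivalence), the recommendation-strength vocabularies mentioned above can be laid side by side:

```python
# Approximate side-by-side view of the recommendation-strength wording
# described above; an illustrative mapping, not an official crosswalk.
STRENGTH_WORDING = {
    "GRADE": ["strong", "weak"],
    "NICE": ["offer", "consider"],
    "ESC and AHA/ACC": [
        "class 1 (strong)",
        "class 2a",
        "class 2b (weak)",
        "class 3 (recommendation not to do something)",
    ],
}


def wording_for(system):
    """Return the recommendation-strength wording a guideline system uses,
    ordered from strongest to weakest."""
    return STRENGTH_WORDING[system]


if __name__ == "__main__":
    for system in STRENGTH_WORDING:
        print(system, "->", ", ".join(wording_for(system)))
```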

An additional challenge lies in maintaining consistency in guideline recommendations around the globe. Without such consistency, widely differing treatments might be recommended across geopolitical borders for the same condition (supposedly based on the same evidence), resulting in disharmony and confusion. At the same time, guidelines need to be relevant to individual communities and allow for local adaptation. The science is the same, but its application is decidedly different. One approach to this challenge is to centralise the evidence evaluation process itself as an international effort by contributors from across the globe, thereby assuring greater uniformity in the final consensus product. This objective summary of the evidence base could then be disseminated to national and regional guideline bodies charged with adaptation to local conditions and practice. Such a model already exists. ILCOR was formed in 1992 to provide an opportunity for the world’s guideline bodies, such as the AHA, the European Resuscitation Council and others, to collaborate in the development and dissemination of resuscitation guidelines.8 ILCOR comprises resuscitation experts from across the globe who are charged with the rigorous evaluation of the scientific evidence (using GRADE) and with developing from it a consensus on science with treatment recommendations (CoSTR). The CoSTR is then taken by regional councils, such as the AHA or the European Resuscitation Council, which interpret and apply this guidance when developing their own formal guidelines.

Going forward, there are many issues to address and opportunities to improve clinical guidelines for patients with cardiovascular disease. Our goals should include:

  • A comprehensive and rigorous review and analysis of the evidence is the foundation of all guidelines. We are all looking at the same data; ideally, different professional societies would work together to develop a common evidence database and avoid duplication and redundancy.

  • All guidelines should follow a rigorous transparent process for guideline development based on established standards, such as the AGREE checklist.4

  • Guideline development and writing groups should include a more diverse group of stakeholders, including specialists with expertise in evidence evaluation and cost-effectiveness, physicians with expertise in guideline development, internists or generalists, primary care providers, ethicists, patients and patient-based advocacy groups, and members of the public. Racial/ethnic and sex diversity are also essential. Conflicts of interest should be minimised.8

  • Writing groups must be supported by information specialists, medical writers, experts in cost-effectiveness analysis, ethicists, medical illustrators, publishing professionals and other relevant experts.

  • Guidelines should be updated regularly, with prompt revision when practice-changing new evidence is published or new treatments become available. Efficient and streamlined processes for these updates are needed, perhaps modelled on the focused approach used by MAGIC (Making GRADE the Irresistible Choice).3

  • Guidelines are not ‘one size fits all’. Regional guidelines are often needed to reflect geographic, social, economic, healthcare system and other factors.

  • Improved methods for dissemination of and access to guidelines are central to improving care of patients with cardiovascular disease. Clinicians need easy and quick links to all relevant guidelines for each patient at the point of care, possibly integrated into clinical decision support in the electronic medical record, as sketched below.9
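One possible shape for such integration, sketched here in deliberately simplified form, is a lookup from the coded diagnoses on a patient’s problem list to the relevant guideline documents. The diagnosis codes, guideline titles and function names below are hypothetical placeholders rather than any existing decision-support interface.

```python
# Hypothetical sketch of point-of-care guideline lookup; the diagnosis
# codes and guideline titles are illustrative placeholders, not a real
# electronic medical record or decision-support interface.
GUIDELINE_INDEX = {
    "I35.0": ["ESC/EACTS valvular heart disease guideline",
              "ACC/AHA valvular heart disease guideline"],
    "I50.9": ["ESC heart failure guideline",
              "AHA/ACC/HFSA heart failure guideline"],
}


def relevant_guidelines(problem_list):
    """For each coded diagnosis on the problem list, return the guideline
    documents a clinician might be offered at the point of care."""
    return {code: GUIDELINE_INDEX.get(code, []) for code in problem_list}


if __name__ == "__main__":
    print(relevant_guidelines(["I35.0", "I50.9"]))
```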

Current cardiovascular society guidelines fall short of best practice. We can and must do better.

Footnotes

  • Contributors All authors wrote, edited and approved the final version of this editorial.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Commissioned; internally peer reviewed.
