Invasive coronary angiography (ICA) has served as the “gold standard” for coronary artery disease diagnosis over the past two decades. The recent British Cardiovascular Society Working Group report on the role of non-invasive imaging, published in Heart, proposes that developments in non-invasive testing now challenge the primacy of ICA.1 Non-invasive testing modalities, such as exercise tolerance testing (ETT), myocardial perfusion scintigraphy (MPS), stress echocardiography (SE), cardiac computed tomography (CT) and cardiovascular magnetic resonance (CMR), offer alternative approaches to evaluate aspects of cardiac anatomy and physiology, while avoiding the morbidity, costs and risks of major complications from ICA.
Selecting the optimal diagnostic test for a particular patient requires weighing the benefits and risks of each test under consideration, as well as the benefits and risks of performing no testing at all. An understanding of the risks from ionising radiation can therefore play an important role in patient management. In this editorial, I examine how radiation risk is quantified and how such estimates differ between cardiac diagnostic tests.
RISK AND DOSE
The most common approach to characterising risks associated with ionising radiation, such as malignancies or genetic mutations in a patient’s progeny, is to describe them in terms of dose. The absorbed dose to an organ reflects the amount of energy imparted to the organ, divided by its mass. Absorbed dose does not reflect either the type of radiation or the sensitivity of the organ to radiation, and as such is not itself a measure of risk. An equivalent dose to an organ normalises the absorbed dose using a factor that reflects the relative biological effectiveness of the type(s) of radiation. In the current system of radiological protection that factor is 1 for the x rays and γ rays commonly used in cardiac imaging, and thus in cardiac imaging the equivalent dose to an organ, expressed in sieverts, is numerically equal to its absorbed dose in grays.
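The relationship described above can be sketched in a few lines of code. This is an illustrative example only, not clinical software; the weighting factors shown follow the ICRP recommendations cited in radiological-protection texts (1 for photons), and the dose values are arbitrary:

```python
# Illustrative sketch: equivalent dose from absorbed dose.
# Equivalent dose (mSv) = absorbed dose (mGy) x radiation weighting factor (w_R).
# w_R is 1 for photons (x rays and gamma rays), the radiation used in cardiac imaging.

RADIATION_WEIGHTING = {
    "x_ray": 1.0,   # photons
    "gamma": 1.0,   # photons
    "alpha": 20.0,  # shown for contrast; not relevant to cardiac imaging
}

def equivalent_dose_mSv(absorbed_dose_mGy: float, radiation: str) -> float:
    """Return the equivalent dose in mSv for a given organ absorbed dose in mGy."""
    return absorbed_dose_mGy * RADIATION_WEIGHTING[radiation]

# Because w_R = 1 for x rays, the equivalent dose in millisieverts is
# numerically equal to the absorbed dose in milligrays:
print(equivalent_dose_mSv(10.0, "x_ray"))  # 10.0
```

The example makes concrete why, for photon-based cardiac imaging, absorbed dose and equivalent dose carry the same number despite their different units.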