It is widely recognised that stroke is a major hazard for patients with atrial fibrillation (AF). Antithrombotic therapy is the most important management consideration, owing to the high risk of stroke and the availability of treatments whose efficacy has been proven in clinical trials. Aspirin reduces the risk of stroke by 22% compared with placebo, and oral anticoagulation (OAC) reduces this risk by a further 30–50% compared with aspirin.1 However, OAC also increases the risk of bleeding, which partly offsets its beneficial effect. Another drawback of OAC is that it requires regular blood monitoring to maintain the international normalised ratio (INR) within the optimal therapeutic window, which is between 2.0 and 3.0 for AF.2 Balancing each patient’s risk of stroke against their risk of bleeding should help decide whether treatment with aspirin or OAC is most appropriate.
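These relative risk reductions compound multiplicatively rather than adding together. The short sketch below illustrates the arithmetic; the 5% baseline annual stroke risk is a hypothetical example value chosen for illustration, not a figure from the trials cited.

```python
# Illustrative arithmetic for compounding relative risk reductions.
# The 5% baseline annual stroke risk is a hypothetical example value.
baseline_risk = 0.05

# Aspirin: 22% relative risk reduction versus placebo.
aspirin_risk = baseline_risk * (1 - 0.22)

# OAC: a further 30-50% relative reduction versus aspirin.
oac_risk_low_reduction = aspirin_risk * (1 - 0.30)
oac_risk_high_reduction = aspirin_risk * (1 - 0.50)

print(f"aspirin: {aspirin_risk:.2%}")  # 3.90%
print(f"OAC: {oac_risk_high_reduction:.2%} to {oac_risk_low_reduction:.2%}")
```

The point of the sketch is that a "further 30–50% reduction" is applied to the already-reduced risk on aspirin, not to the baseline risk.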
Several clinical stroke risk factors in AF have been identified. There is a strong predictive value of previous stroke or transient ischaemic attack, and evidence that increasing age, hypertension and heart failure increase the risk. The predictive value of other factors, such as diabetes and female gender, is weaker.3 Using varying combinations of these factors, several groups have developed stroke risk stratification schemes. Although these schemes are widely used, recent evidence suggests that they generate variable risk groups and have limited predictive value.4 5 Figure 1 shows the receiver operating characteristic (ROC) curves for several risk schemes applied to a large patient cohort.5 A value of 0.50 indicates no predictive value and a value of 1.00 means perfect prediction. The observation that the areas under the ROC curves ranged from 0.56 to 0.62 indicates that current schemes have relatively weak predictive power for AF-related thromboembolism. Nevertheless, to provide guidance using the best available evidence, the ACC/AHA/ESC 2006 and ACCP 2008 guidelines mainly based their recommendations on the …
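The schemes compared in figure 1 are simple point scores built from these clinical factors. As a hedged sketch of how such a scheme is applied and evaluated, the fragment below implements a CHADS2-style score (CHADS2 is one widely used scheme of this kind: one point each for congestive heart failure, hypertension, age ≥75 years and diabetes, and two points for prior stroke or transient ischaemic attack) and computes the area under the ROC curve for a small cohort; the patient data are invented for illustration and do not come from the study cited.

```python
def chads2_style_score(chf, hypertension, age, diabetes, prior_stroke_tia):
    """CHADS2-style point score: 1 point each for heart failure,
    hypertension, age >= 75 and diabetes; 2 points for prior stroke/TIA."""
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    score += 2 * int(prior_stroke_tia)
    return score

def roc_auc(scores, outcomes):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    formulation: the probability that a randomly chosen patient with
    the outcome scores higher than one without it (ties count half)."""
    pos = [s for s, y in zip(scores, outcomes) if y]
    neg = [s for s, y in zip(scores, outcomes) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical cohort: (chf, htn, age, dm, prior stroke/TIA), stroke outcome.
cohort = [
    ((False, True, 80, False, True), True),    # score 4, had a stroke
    ((False, False, 62, False, False), False),  # score 0, no stroke
    ((False, True, 68, False, False), True),    # score 1, had a stroke
    ((True, True, 76, False, False), False),    # score 3, no stroke
    ((False, True, 77, False, False), False),   # score 2, no stroke
]
scores = [chads2_style_score(*factors) for factors, _ in cohort]
outcomes = [outcome for _, outcome in cohort]
print(round(roc_auc(scores, outcomes), 3))  # 0.667
```

Because some events occur in low-score patients and some non-events in high-score patients, the toy AUC lands well below 1.00, which is the same pattern that produces the modest 0.56–0.62 range reported for real cohorts.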