It has been a long journey getting cardiovascular disease (CVD) risk prediction recognised as relevant to clinical decision-making. The Framingham Heart Study investigators put CVD risk prediction on the map over 30 years ago with their Framingham CVD risk scores, and Bill Kannel’s seminal 1976 paper advising clinicians to inform their risk management decisions using predicted CVD risk rather than individual risk factor levels has stood the test of time.1 There is now a wealth of supporting evidence that, compared with multifactor CVD risk prediction estimates, individual major CVD risk factors like blood pressure or blood lipid levels are poor predictors not only of a patient’s CVD risk but also of a patient’s potential to benefit from treatment.2 Disappointingly, estimating CVD risk remains the exception rather than the rule in routine clinical practice.3 As a result, CVD risk factor management is poorly targeted4 because risk prediction, like your tax return, is difficult to do in your head.5
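To see why such estimates resist mental arithmetic, the general form of a Framingham-style multivariable equation can be sketched: predicted risk combines several weighted risk factors through a survival function, 1 − S₀^exp(Σβᵢxᵢ − m). The sketch below uses entirely hypothetical coefficients (the weights, baseline survival S₀ and mean linear predictor m are illustrative, not those of any published score) purely to show the structure of the calculation.

```python
import math

def ten_year_cvd_risk(age, sbp, total_chol, hdl, smoker,
                      s0=0.95, mean_lp=12.0):
    """Illustrative 10-year CVD risk (0-1) in the general form
    1 - s0 ** exp(linear_predictor - mean_lp).
    All coefficients are hypothetical, for illustration only."""
    lp = (0.20 * age              # per year of age (hypothetical weight)
          + 0.015 * sbp           # systolic blood pressure, mm Hg
          + 0.005 * total_chol    # total cholesterol, mg/dL
          - 0.010 * hdl           # HDL cholesterol, mg/dL (protective)
          + 0.50 * (1 if smoker else 0))
    return 1 - s0 ** math.exp(lp - mean_lp)

# Two hypothetical patients: risk rises with the combined factor burden,
# even though no single factor differs dramatically between them.
higher = ten_year_cvd_risk(age=55, sbp=140, total_chol=210, hdl=45, smoker=True)
lower = ten_year_cvd_risk(age=45, sbp=120, total_chol=180, hdl=55, smoker=False)
```

Even in this simplified form, the exponentiation and interaction of five inputs make the point: the calculation is trivial for software and impractical at the bedside without it.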
CAUSE FOR OPTIMISM?
But there is cause for optimism. Many clinical guidelines on the management of CVD risk factors now incorporate risk prediction scores. Moreover, separate guidelines for managing hypertension and dyslipidaemia—which should not be considered independently—are being replaced by more clinically relevant CVD prevention guidelines.6 7 However, there remain several barriers to the widespread acceptance of risk prediction as a prerequisite to high-quality clinical management of CVD risk: first, there is continuing debate about how, and in whom, to apply prediction scores; second, the accuracy of existing risk prediction scores is quite modest. Before turning to the problem of accuracy—the subject of a study published in this issue of the journal (see article on page 34)8—some of the other barriers need to be considered.
WHAT RISK PERIOD SHOULD BE USED?
A topic that continues to be debated is the choice …