Abstract
Background Artificial intelligence (AI) is increasingly used in diagnostic imaging. Deep convolutional neural networks (CNNs) can learn from the datasets presented to them and then provide independent interpretations of new cases, but often without traceability of how they reached their conclusions. Such ‘black box’ behaviour is not desirable for clinical use. We propose the concepts of visualising and guiding AI attention, applied to artefact detection in cardiac T1 mapping - a critical quality assurance step for clinically robust T1 determinations.
Method We utilise emerging AI attention-visualisation techniques. Attention visualisation serves as an ‘eye tracker’, revealing where the neural network ‘looks’ when scoring artefacts in T1 mapping, and adds an essential accountability aspect to the CNN by producing additional evidence to validate its decision-making process. Beyond simply observing this perception, we developed a technique to provide direct guidance on the attention of the CNN, telling the machine which region to look at, much as one would train a human observer.
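The abstract does not name the specific visualisation or guidance method, so the following is a minimal illustrative sketch only, assuming a Grad-CAM-style attention map (channel weights from spatially averaged gradients) and a simple guidance penalty on attention falling outside a target region mask; the function names and the mask-based loss are our own assumptions, not the authors' implementation.

```python
import numpy as np

def attention_map(feature_maps, gradients):
    """Grad-CAM-style attention map (an assumed visualisation method):
    channel weights are the spatially averaged gradients; the map is the
    ReLU of the weighted sum of feature maps, normalised to [0, 1]."""
    # feature_maps, gradients: arrays of shape (C, H, W)
    weights = gradients.mean(axis=(1, 2))              # one weight per channel, (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalise to [0, 1]
    return cam

def attention_guidance_loss(attention, region_mask):
    """Hypothetical guidance term: the fraction of total attention that
    falls outside the desired region (e.g. a binary myocardial-segment
    mask), to be added to the task loss during training."""
    outside = attention * (1.0 - region_mask)
    return outside.sum() / max(attention.sum(), 1e-8)

# Toy example: 2 channels of 4x4 features; gradients select channel 0.
fm = np.ones((2, 4, 4))
g = np.zeros((2, 4, 4))
g[0] = 1.0
cam = attention_map(fm, g)        # uniform attention over the 4x4 map

mask = np.zeros((4, 4))
mask[:2] = 1.0                    # desired region: top half of the map
loss = attention_guidance_loss(cam, mask)  # half the attention is outside
```

In a real training loop this penalty would be weighted and added to the classification loss, so that gradient descent steers the network's attention toward the designated myocardial segments while it learns the artefact-scoring task.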
Results We demonstrate an application in automated T1-mapping artefact detection for the 6 AHA segments in mid-ventricular slices (figure 1a). The AI ‘eye tracker’ detected an ill-trained CNN paying attention to features not relevant to the assigned task (figure 1b). A well-trained CNN learned from the training data to pay attention to the corresponding myocardial segments when detecting artefacts, but exhibited distractions that led to suboptimal accuracy (figure 1c). A CNN trained with additional guidance on attention is shown to pay the desired attention to the right structures and to avoid distractions (figure 1d).
Conclusion CNNs designed with the support of attention visualisation, and trained with guidance on attention, can lead to significantly more transparent and accountable AI use in clinical practice.