
9 Train the AI like a human observer: deep learning with visualisation and guidance on attention in cardiac T1 mapping
  1. Qiang Zhang,
  2. Konrad Werys,
  3. Elena Lukaschuk,
  4. Iulia Popescu,
  5. Evan Hann,
  6. Stefan Neubauer,
  7. Vanessa M Ferreira,
  8. Stefan K Piechnik
  1. OCMR, University of Oxford Centre for Clinical Magnetic Resonance Research


Background Artificial intelligence (AI) is increasingly used in diagnostic imaging. Deep convolutional neural networks (CNNs) can learn from datasets presented to them and then provide independent interpretations of new cases, but often without traceability of how they reached their conclusions. Such ‘black box’ behaviour is not desirable for clinical use. We propose the concepts of visualising and guiding AI attention, applied to artefact detection in cardiac T1 mapping - a critical quality-assurance step for clinically robust T1 determination.

Method We utilise emerging AI attention visualisation. This serves as an ‘eye tracker’, revealing where the neural network ‘looks’ when scoring artefacts in T1 mapping, and adds an essential accountability aspect to the CNN by producing additional evidence to validate its decision-making process. Beyond simply observing this perception, we developed a technique to provide direct guidance on the attention of the CNN, telling the machine which region to look at, much as one would train a human observer.
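The abstract does not name the visualisation technique used; a common choice for this ‘eye tracker’ role is a Grad-CAM-style map, which weights the last convolutional layer’s feature maps by the spatially averaged gradients of the artefact score. The sketch below is an illustrative assumption, not the authors’ implementation; the function name `gradcam_attention` and the array shapes are hypothetical.

```python
import numpy as np

def gradcam_attention(feature_maps, gradients):
    """Grad-CAM-style attention map (illustrative sketch, not the paper's method).

    feature_maps, gradients: arrays of shape (channels, H, W), taken from the
    last convolutional layer during a forward/backward pass on the artefact score.
    Returns a (H, W) map in [0, 1] showing where the CNN 'looked'.
    """
    # One importance weight per channel: average the gradient over space
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps, contracting the channel axis -> (H, W)
    cam = np.tensordot(weights, feature_maps, axes=1)
    # Keep only positive evidence for the artefact class
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()  # normalise for overlay on the T1 map
    return cam
```

In practice such a map would be upsampled to the image resolution and overlaid on the T1 map, so a reviewer can check whether the highlighted region coincides with the myocardial segment being scored.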

Results We demonstrate an application to automated T1 mapping artefact detection in the 6 AHA segments of mid-ventricular slices (figure 1a). The AI ‘eye tracker’ detected an ill-trained CNN paying attention to features not relevant to the assigned task (figure 1b). A well-trained CNN learned from the training data to pay attention to the corresponding myocardial segments when detecting artefacts, but the visualisation also indicated distractions leading to suboptimal accuracy (figure 1c). A CNN trained with additional guidance on attention paid the desired attention to the right structures and avoided distractions (figure 1d).
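The abstract does not specify how the attention guidance enters training; one plausible formulation is an auxiliary loss term that penalises attention mass falling outside the AHA segment a human observer would inspect. The sketch below assumes this formulation; `attention_guidance_loss` and the mask convention are hypothetical names introduced here for illustration.

```python
import numpy as np

def attention_guidance_loss(attention, target_mask):
    """Hypothetical attention-guidance penalty (an assumption, not the paper's loss).

    attention:   (H, W) non-negative attention map produced by the CNN.
    target_mask: (H, W) binary mask of the region to 'look at' (e.g. one AHA segment).
    Returns the fraction of attention mass falling off-target, to be added
    (weighted) to the usual classification loss during training.
    """
    total = attention.sum()
    if total == 0:
        return 0.0  # no attention at all: nothing to penalise here
    off_target = (attention * (1.0 - target_mask)).sum()
    return off_target / total
```

Minimising this term alongside the artefact-classification loss would push the network’s attention into the annotated segment, mirroring how a trainee is told which region of the image to examine.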

Abstract 9 Figure 1

Attention visualisation and guidance in detecting T1 mapping artefacts in the 6 AHA segments (a), revealing that (b) an ill-trained CNN looked at features irrelevant to the task, (c) a well-trained CNN highlighted the segments but was distracted by other image features, and (d) with attention guidance during training, the CNN highlighted the segments more accurately

Conclusion CNNs designed with the support of attention visualisation, and trained with guidance on attention, can lead to significantly more transparent and accountable AI use in clinical practice.
