
3 Train the AI like a human observer: deep learning with visualisation and guidance on attention in cardiac T1 mapping
  1. Qiang Zhang,
  2. Konrad Werys,
  3. Elena Lukaschuk,
  4. Iulia Popescu,
  5. Evan Hann,
  6. Stefan Neubauer,
  7. Vanessa M Ferreira,
  8. Stefan K Piechnik
  1. OCMR, University of Oxford Centre for Clinical Magnetic Resonance Research


Background Artificial intelligence (AI) is increasingly used in diagnostic imaging. Deep convolutional neural networks (CNNs) can learn from labelled datasets and then provide independent interpretations of new cases, but often without traceability to how such conclusions were made. This ‘black box’ behaviour is not desirable for clinical applications. We propose a novel framework for visualising and guiding AI attention, using artefact detection in cardiac T1 mapping as an example, a critical quality-assurance step for clinically robust T1 determinations.

Method We utilised an AI attention visualisation framework. This serves as an ‘eye tracker’ and reveals where the neural network ‘looks’ when scoring artefacts. The technique adds an essential accountability aspect to the CNN by producing additional evidence to validate the decision-making process. Beyond simply observing the AI attention maps, we provided additional direct guidance on the attention of the CNN, instructing the machine where to look, similar to training a human operator.
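The abstract does not specify how the attention visualisation is computed; a Grad-CAM-style map is one common way to realise such an ‘eye tracker’. The sketch below is a minimal NumPy illustration under that assumption: the feature maps and the gradient-derived channel weights are hypothetical inputs, standing in for the activations of a convolutional layer and the spatially averaged gradients of the artefact score with respect to each channel.

```python
import numpy as np

def attention_map(feature_maps, channel_weights):
    """Grad-CAM-style attention map: a weighted sum of feature maps,
    rectified and normalised to [0, 1].

    feature_maps:    (C, H, W) activations from a convolutional layer
    channel_weights: (C,) channel importance weights, e.g. the spatially
                     averaged gradients of the artefact score w.r.t.
                     each feature channel
    """
    cam = np.tensordot(channel_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)         # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()          # normalise to [0, 1]
    return cam

# Toy example: 3 channels of 4x4 activations
rng = np.random.default_rng(0)
fmaps = rng.random((3, 4, 4))
weights = np.array([0.5, -0.2, 0.8])
cam = attention_map(fmaps, weights)
print(cam.shape)  # (4, 4)
```

In practice the low-resolution map would be upsampled to the image size and overlaid on the T1 map, which is how the panels in figure 1 can be read.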

Results We demonstrate an application in automated T1-mapping artefact detection in the 6 AHA segments of mid-ventricular slices (figure 1a). The AI ‘eye tracker’ detected an ill-trained CNN that was not paying attention to the myocardium (figure 1b). A well-trained CNN learned from the training data to pay attention to the 6 myocardial segments, but was distracted by other image features (red arrows, figure 1c) and showed inaccuracies (yellow arrows). The proposed solution is a CNN trained with additional guidance to pay attention to the correct structures and avoid distractions (figure 1d).
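The abstract likewise leaves the guidance mechanism unspecified; one plausible realisation is an auxiliary loss term that penalises attention mass falling outside the target myocardial segments, steering the CNN away from the distractions seen in figure 1c. The sketch below assumes a binary myocardium mask and a weighting hyperparameter `lam`, both hypothetical.

```python
import numpy as np

def attention_guidance_penalty(cam, myocardium_mask, lam=1.0):
    """Auxiliary term added to the task loss: attention falling outside
    the target myocardial segments is discouraged.

    cam:             (H, W) attention map with values in [0, 1]
    myocardium_mask: (H, W) binary mask of the 6 AHA segments
    lam:             weighting of the guidance term (hypothetical
                     hyperparameter, tuned on validation data)
    """
    outside = cam * (1.0 - myocardium_mask)   # attention off-target
    return lam * outside.mean()

# Toy example: attention concentrated inside the mask incurs no penalty
cam = np.zeros((4, 4)); cam[1:3, 1:3] = 1.0
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0
print(attention_guidance_penalty(cam, mask))  # 0.0
```

During training, this penalty would simply be summed with the artefact-classification loss, so that gradient descent optimises where the network looks as well as what it predicts.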

Abstract 3 Figure 1

Attention maps as the CNN ‘eye tracker’ in detecting T1-mapping artefacts in the 6 AHA segments (a), which reveal that (b) an ill-trained CNN looked at features not relevant to the task; in comparison, (c) a well-trained CNN correctly identified the segments, but with distraction (red arrows) and low accuracy (yellow arrows); (d) a CNN trained with attention guidance looked at the target myocardial segments more accurately.

Conclusion CNNs designed with both visualisation of perception and guidance of attention towards relevant anatomical structures can lead to significantly more transparent and accountable AI, and therefore greater reliability in clinical practice.
