FastMRI

Reconstructing High-Fidelity MRIs from Partial Data

Team 16: Anant Joshi, Kanksha Zaveri, Mason Lilly, Sarmishta Velury, Shlok Gujar

Project Summary

Given an undersampled MRI scan, can we use a GAN to reconstruct the missing information?

We used pix2pix - a conditional GAN for paired image-to-image translation - to reconstruct undersampled MRI data.

Motivation

MRIs are extremely useful, but they require copious sensor data and take a long time to perform. This makes them expensive and hard on patients.

MRI Image

Capture less data = Faster, cheaper MRI

It is possible to perform a faster scan by capturing less data, but this causes traditional MRI reconstruction techniques to produce blurry images.


The FastMRI dataset

The FastMRI dataset provided by NYU Langone allows researchers to study this problem. It is an anonymised, paired dataset consisting of k-space data (the raw MRI measurements) and the corresponding fully reconstructed images from more than 1,500 knee MRI scans.

Undersampling

The FastMRI authors provide a masking function that is applied to the k-space data to simulate an undersampled MRI scan.



from common.subsample import MaskFunc
import data.transforms as T  # fastMRI transform utilities (module path assumes the original fastMRI repo layout)

# Keep 4% of the central k-space columns and undersample to simulate 8x acceleration
mask_func = MaskFunc(center_fractions=[0.04], accelerations=[8])

# slice_kspace2 is a single k-space slice loaded from the dataset
masked_kspace, mask = T.apply_mask(slice_kspace2, mask_func)
The subsampled image can be viewed by applying an inverse Fourier transform to the masked k-space, then taking the magnitude (absolute value) of the complex result to get a real-valued image.
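
As a rough sketch (using NumPy directly rather than the fastMRI helpers, and assuming the masked k-space has been converted to a complex NumPy array named masked_kspace_np):

import numpy as np

# masked_kspace_np: one complex-valued k-space slice (hypothetical name, converted from the tensor above)
image_complex = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(masked_kspace_np)))  # centred 2D inverse FFT
image = np.abs(image_complex)  # the magnitude is the real-valued image that gets displayed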

Baseline Models

FastMRI provides two baseline models for comparison:

Classical Method: uses expectation-maximization to iteratively minimize the reconstruction error.

Deep Learning Method: uses a U-Net, an encoder-decoder network consisting of several convolutions followed by deconvolutions, with "skip" connections between corresponding layers (a minimal sketch follows the figure below).


U-Net Architecture
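
To make the idea concrete, here is a minimal two-level U-Net sketch in PyTorch. It is illustrative only, not the fastMRI baseline's actual architecture, which is deeper and more carefully tuned.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal two-level U-Net: convolve, downsample, upsample, with one skip connection."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 2, stride=2), nn.ReLU())
        self.out = nn.Conv2d(ch * 2, 1, 1)  # the skip connection doubles the channel count

    def forward(self, x):
        e = self.enc(x)                             # full-resolution encoder features
        d = self.up(self.down(e))                   # downsample, then "deconvolve" back up
        return self.out(torch.cat([e, d], dim=1))   # "skip" connection: concatenate and project

recon = TinyUNet()(torch.randn(1, 1, 320, 320))     # e.g. one 320x320 MRI slice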

Our Approach

pix2pix - a Conditional GAN for image translation


GANs - Brief Overview

A GAN uses a pair of competing neural networks: a Generator and a Discriminator.

The discriminator is trained to recognize whether or not a given image comes from the training dataset. The generator is trained to generate images that “fool” the discriminator.

GANs have shown widespread success in image generation and translation tasks.

GAN Structure
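
A minimal sketch of the two competing objectives in PyTorch (toy networks and data, not our actual training code):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy generator/discriminator on flat vectors, just to show the two competing objectives
netG = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
netD = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

real = torch.randn(8, 32)            # stand-in for a batch of real training images
fake = netG(torch.randn(8, 16))      # the generator maps noise to candidate images

# Discriminator objective: score real samples as 1 and generated samples as 0
pred_real, pred_fake = netD(real), netD(fake.detach())
d_loss = F.binary_cross_entropy(pred_real, torch.ones_like(pred_real)) + \
         F.binary_cross_entropy(pred_fake, torch.zeros_like(pred_fake))

# Generator objective: make the discriminator score its output as 1, i.e. "fool" it
g_loss = F.binary_cross_entropy(netD(fake), torch.ones_like(pred_fake))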

Pix2pix Overview

Pix2pix is an extension of a traditional GAN called a “Conditional GAN”, in which the generator is fed an input image in addition to noise. Whereas a traditional GAN merely generates images in a target domain, this allows a conditional GAN to translate between two domains (in our case, from k-space to MRI images).

In addition to training the generator to fool the discriminator, pix2pix also adds an extra loss term that compares generated images to their “intended” translations. This allows pix2pix to be more stable than traditional GANs, which are notoriously difficult to train.

Pix2Pix
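
A sketch of the generator's combined objective (toy networks, shown only to illustrate the loss structure; the weight of 100 on the L1 term follows the pix2pix paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy conditional generator and discriminator (hypothetical stand-ins, not our real networks)
netG = nn.Conv2d(1, 1, 3, padding=1)                      # maps the input image to a reconstruction
netD = nn.Sequential(nn.Conv2d(2, 1, 1), nn.Sigmoid())    # scores an (input, output) pair

undersampled = torch.rand(4, 1, 64, 64)                   # paired batch: the network input ...
target = torch.rand(4, 1, 64, 64)                         # ... and its ground-truth reconstruction

fake = netG(undersampled)                                 # translation conditioned on the input image
pred_fake = netD(torch.cat([undersampled, fake], dim=1))  # the discriminator also sees the input

# Adversarial term (fool the discriminator) plus the extra L1 reconstruction term
gan_loss = F.binary_cross_entropy(pred_fake, torch.ones_like(pred_fake))
l1_loss = F.l1_loss(fake, target)
g_loss = gan_loss + 100.0 * l1_loss                       # lambda = 100, as in the pix2pix paper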

Why Pix2Pix

Pix2pix is particularly good at image to image translation when you have paired data.

While other GAN variants are also well suited to super-resolution-style tasks, they are generally less stable to train.

They are also more computationally intensive. Pix2pix was more feasible with the machines we had available.


Results and Visualisation

Quantitative Results

Metric    Classical Baseline    U-Net Baseline    Ours (10 epochs)
NMSE      0.0479                0.0154            0.0263
SSIM      0.0588                0.0636            0.0602

*NMSE: Normalized Mean Square Error

*SSIM: Structural Similarity Index
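
For reference, a rough sketch of how these metrics can be computed (using NumPy and scikit-image on synthetic data; the fastMRI code also ships its own evaluation utilities):

import numpy as np
from skimage.metrics import structural_similarity

# Synthetic example: gt is a ground-truth slice, pred a noisy "reconstruction" of it
gt = np.random.rand(320, 320)
pred = gt + 0.05 * np.random.randn(320, 320)

# NMSE: squared reconstruction error normalised by the energy of the ground truth
nmse = np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2

# SSIM: structural similarity between the two slices
ssim = structural_similarity(gt, pred, data_range=gt.max() - gt.min())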

Visualisation

[Figures: target images, subsampled inputs, and reconstructions at epoch 1 and epoch 9; training curves for D_fake, D_real, G_loss, and G_L1, including SSIM-loss runs and stable vs. unstable comparisons]

Discussion

Some artifacts are visible, indicative of impending mode collapse.

We got our best results using a low learning rate and a PixelGAN discriminator (instead of a PatchGAN); the sketch below illustrates the difference.
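
A rough illustration of the two discriminator styles (not the exact pix2pix implementation): a PixelGAN uses only 1x1 convolutions, so it scores each pixel independently, while a PatchGAN uses strided 4x4 convolutions, so each score covers a larger image patch.

import torch.nn as nn

# PixelGAN-style discriminator: per-pixel real/fake scores via 1x1 convolutions
pixel_gan = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid(),
)

# PatchGAN-style discriminator: each output score sees a larger receptive field (a patch)
patch_gan = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, kernel_size=4, padding=1), nn.Sigmoid(),
)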

Though the key structures are visible, the contrast and coloring are not great.

With the computers available to us, we only trained for 10 epochs - most GANs train for far longer.

Caveat

So far, no existing loss metric captures the level of detail required for medical-grade MRI reconstruction.

While our method looks promising, it may not be real-world-ready until better metrics are defined.

Conclusion

Subsampled MRI reconstruction is a difficult problem and an area of active research. We show improved performance compared to the classical baseline, approaching the fully trained U-Net baseline in just 10 epochs. We believe that with additional resources, we could train this model to state-of-the-art performance over the full 200 epochs.

Achievements

We reconstructed MRIs from undersampled data.

Our images captured the shape and structure of knees well.

The images are imperfect, but could improve with more training.

Future work

In the future, we would train the model to completion (200 epochs). We would also like to try other loss metrics.