MRI isotropic volume reconstruction from 2D slices

Hello everyone,
I have a simple task to achieve: I have 3 sets of anisotropic MRI data per subject (= 2D MRI slices along one axis) and I want to perform volume reconstruction to obtain a single 3D isotropic volume of the brain.
Can SimpleITK perform this task? I couldn’t find it in the documentation.
If not, could someone tell me which software can perform volume reconstruction?
(For example, NiftyMIC performs exactly the task I am referring to, only it is designed for fetal MRI.)
Thanks in advance,
Fleur

Hello @Fleur,

Welcome to SimpleITK!

Short answer is yes, please see the make_isotropic function in this Jupyter notebook.

I highly recommend that you skim the notebooks in our notebook repository to familiarize yourself with all the functionality the toolkit provides, or if you are willing to invest some time in learning SimpleITK in a methodical way, go over the online tutorial which includes code/data/videos…

Thank you for your help @zivy .
I managed to perform some kind of volume reconstruction with the following steps:
-reading my 3 sets of 2D MRI slices
-applying make_isotropic to them
-resampling 2 of the images onto the grid of the 1st one
-intensity windowing the 3 images
-blending the 3 images together.
The issue I have now is that the reconstructed image suffers from a huge quality loss compared to the original 2D slices, particularly along the 2 axes that I resampled to match the space of the 1st one. In addition, the reconstructed image size is now 256x256x128, when I was expecting 256x256x256.
Is there something I did wrong?

Hello @Fleur,

After reading your original post again, I realized that what you are describing is not “make a volume isotropic”; you want something more complex:

  1. Input - 3 anisotropic MRI volumes of the same object.
  2. Output - isotropic volume that combines information from the three volumes.

If the above is correct, you will have to:

  1. Perform registration between the three volumes (use them as is after reading).
  2. Use the registration results to resample all three to a common, isotropic grid.
  3. Blend the three volumes.

All of these steps can be done with ITK/SimpleITK, but require understanding of registration and resampling.

Why are you surprised that the final volume has a size of 256x256x128? That is likely the size of the first volume onto which you resampled. This doesn’t mean that it isn’t isotropic; to see if it is or isn’t, you’ll have to print the spacing.

Finally, when working with images, try to avoid multiple resamplings: the make_isotropic step after reading the images can be skipped, since you resample them again onto the first volume anyway. You just need to resample all three images onto an isotropic grid once.

If you are comfortable programming then you can continue down this path, otherwise I would recommend trying 3D Slicer which gives you access to a lot of functionality via a graphical user interface.


This problem is not at all simple and is a major research topic in MRI. On top of properly registering the images, they have to be deblurred. Taken as a whole, this is the super-resolution problem. There is open-source code available that relies heavily on SimpleITK and Python-wrapped ITK. The GitHub project is here:


This is a hard problem and a frequently asked question on the Slicer forum. We keep recommending NiftyMIC, but so far we haven’t received any feedback that somebody has used it successfully.

@Nick_Rubert Have you tried to use it? Can you show a few illustrations? What is your overall impression (difficulty in setting this up, tuning parameters, robustness, computation time, etc.)?

It should not be hard to create a Slicer extension that provides a convenient GUI for running this processing, but it would only be worth the effort if the algorithm works well enough overall (gives good results for most data sets without parameter tuning, within a few tens of minutes at most on an average computer).

@lassoan

I’m a physicist working at a children’s hospital and we’ve been using this algorithm for about the last year to reconstruct fetal brain images with generally very good results.

The next couple of sentences are for the totally uninitiated stumbling on this post. NiftyMIC is designed to work on MR brain images only. Super-resolution algorithms need to make assumptions about a system’s point spread function (PSF), and the PSF assumed by NiftyMIC is unlikely to correspond to the PSF of some other imaging system. This algorithm also only reconstructs a region of interest from the original images, and uses a CNN-based segmentation tuned to find fetal brains to segment the super-resolution ROI.

NiftyMIC does work very well out of the box for reconstructing fetal brains. There are only a handful of parameters to tune and I’ve found the reconstructions are pretty robust to parameter choice. This algorithm wants as many 2D stacks as you can get so more important than the post-processing parameters is the number of scans you have to work with in the first place.

I’ve run the algorithm using the disk image provided by the project with Oracle VM Virtual Box and I’ve also run the algorithm on a PC natively running Ubuntu 20.0, though if you go this route you need to be comfortable compiling ITK from scratch.


Thanks for the information, this should help people to decide if they want to give this a try.

Fetal brain MRI is a much harder problem than the usual adult MRI, where patient motion between the scans is negligible. For adult scans, motion compensation is probably not necessary, and the automatic segmentation may be replaced by simple manual/semi-automatic segmentation.

About installation on Windows: Using docker may be a good option, too. Docker on Windows can now use WSL2 (Windows Subsystem for Linux), so it is much easier to install (no need for tweaking UEFI/BIOS settings to enable Hyper-V, etc.) and overall integrates much better with the operating system. I was able to download the image using docker pull renbem/niftymic (it downloaded 5GB, which is a lot, but should be tolerable) and run it using docker run renbem/niftymic niftymic_reconstruct_volume_from_slices (it just printed usage information, I did not try to do any real processing).

@zivy thank you for your answer, I will try coding the steps you suggested.

@Nick_Rubert Indeed this problem is not simple, but I was surprised to find a dedicated pipeline for fetal MRIs while having difficulty finding one for adults - the problem being less complicated in that case.
I already use NiftyMIC and I’ve had good results with fetal MRIs. I actually intended to use it on my non-fetal MRIs, but as I’m using the virtual machine provided by NiftyMIC, my computer does not have enough RAM to run the algorithm on adult MRIs. Installing NiftyMIC directly on Ubuntu is quite difficult and I was advised against it.

@zivy : I am wondering, what is the difference between resampling the images to a common isotropic grid (using the transformation calculated during registration), and resampling 2 of the images onto the 1st one (after having made it isotropic)? Should we expect the same results?
I’ve tried both options and the 1st one does not seem to work - the resampled brains lie mostly outside of the new image.

# Resampling image2 to the common grid.
# "transform" is the transform computed when registering image2 to image1.
grid = sitk.GridSource(outputPixelType=sitk.sitkFloat64, size=out_size,
                       sigma=(0.5, 0.5, 0.5), gridSpacing=[1.0, 1.0, 1.0],
                       gridOffset=image1.GetOrigin(), spacing=[1.0, 1.0, 1.0])

resampled2 = sitk.Resample(image2, grid, transform, sitk.sitkLinear, 0.0)

EDIT: my bad, I tried setting the grid origin and direction to those of image1 and now it works!

I also tried deblurring the output volume using unsharp masking. With amount = 5, threshold = 10 and scaling = (1, 1, 1), I manage to obtain a sharper image, but the contrast is greatly diminished. Why is that?
I also noticed that increasing the threshold makes the brain less sharp than the original image…

Thanks in advance!

Hello @Fleur,

After you resampled all three original images into a common isotropic grid, how did you blend them? If the registration error is large enough, when you blend via averaging you will likely get a blurrier image.

I would start by evaluating the registration accuracy. If it is sufficiently accurate then the issue has to be coming from the blending. Does median blending look any better than mean blending (the common default approach)?