3D Registration of Anatomical and Functional MRI Brain Images

Hello,

I have anatomical MR and functional MR images of a patient with a brain tumor. I want to transform the low-resolution functional MR image so that it matches in shape with the anatomical MR image. The resultant image should be a fusion of the two (containing information from both images).

The size of the anatomical MR image is 256 × 256 × 28.
The size of the functional MR image is 256 × 256 × 28 × 30.

One slice each from the two modalities is shown below.

             Anatomical MR                         Functional MR

Please suggest which algorithm I should use. Also, if there is existing code that performs this task, please let me know.

Hello @debapriya,

What do you mean by “matches in shape”? Generally speaking, if this is the same patient and the same time point the two objects in the image have the same size and shape (they occupy the same sized region in physical space).

If the two images are aligned (registered) via the identity transformation, then all you need to do is resample the low-resolution image onto the high-resolution image grid using the identity transformation. If they are not aligned, you need to first register them (likely rigid registration, but this is just a guess because I don't know the relationship between these images) and then resample, this time with the transformation obtained from the registration.

Please familiarize yourself with the toolkit's fundamental concepts and see the registration overview. The notebooks repository may also be of interest, in particular the registration series (the 6* notebooks), the Transforms and Resampling notebook, and the results visualization notebook.

Thank you @zivy. The information was helpful.

In order to develop some concepts, I was trying to register the two images shown below.

As the fixed image is tilted, I would expect the transformed and resampled moving image also to be tilted, after registration. However, after resampling the moving image with the final_transform, the resulting image is not tilted, rather it is similar to the initial unregistered moving image, as shown below.

What is the reason behind this?
I have used rigid registration with MattesMutualInformation metric and Gradient Descent optimization.
The code for resampling is

moving_resampled = sitk.Resample(
    moving_image,
    fixed_image,
    final_transform,
    sitk.sitkLinear,
    0.0,
    moving_image.GetPixelID(),
)

The code for registration is

registration_method = sitk.ImageRegistrationMethod()

# Similarity metric settings.
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)

registration_method.SetInterpolator(sitk.sitkLinear)

# Optimizer settings.
registration_method.SetOptimizerAsGradientDescent(
    learningRate=1.0,
    numberOfIterations=100,
    convergenceMinimumValue=1e-6,
    convergenceWindowSize=10,
)
registration_method.SetOptimizerScalesFromPhysicalShift()

# Don't optimize in-place, we would possibly like to run this cell multiple times.
registration_method.SetInitialTransform(initial_transform, inPlace=False)

# Connect all of the observers so that we can perform plotting during registration.
registration_method.AddCommand(sitk.sitkStartEvent, rgui.start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rgui.end_plot)
registration_method.AddCommand(
    sitk.sitkMultiResolutionIterationEvent, rgui.update_multires_iterations
)
registration_method.AddCommand(
    sitk.sitkIterationEvent, lambda: rgui.plot_values(registration_method)
)

final_transform = registration_method.Execute(fixed_image, moving_image)

# Always check the reason optimization terminated.
print("Final metric value: {0}".format(registration_method.GetMetricValue()))
print(
    "Optimizer's stopping condition, {0}".format(
        registration_method.GetOptimizerStopConditionDescription()
    )
)

Hello @debapriya,

The first thing to check in this scenario is whether the registration even succeeded. What is the final transformation? Also, what was the optimizer’s stopping condition?

The code does not include the initialization of initial_transform, but I assume that step is done and that the transform is not the identity afterwards.

Hello @zivy

The final transformation is

itk::simple::CompositeTransform
 CompositeTransform (000001AEF6750080)
   RTTI typeinfo:   class itk::CompositeTransform<double,3>
   Reference Count: 1
   Modified Time: 147311
   Debug: Off
   Object Name: 
   Observers: 
     none
   Transforms in queue, from begin to end:
   >>>>>>>>>
   Euler3DTransform (000001AEF6453500)
     RTTI typeinfo:   class itk::Euler3DTransform<double>
     Reference Count: 1
     Modified Time: 147151
     Debug: Off
     Object Name: 
     Observers: 
       none
     Matrix: 
       0.99954 0.0275151 -0.0127827 
       -0.0273175 0.999508 0.0153844 
       0.0131998 -0.0150282 0.9998 
     Offset: [9.06745, 6.8524, 13.9616]
     Center: [-0.214844, -1.21484, 21.5]
     Translation: [8.75929, 7.18963, 13.9727]
     Inverse: 
       0.99954 -0.0273175 0.0131998 
       0.0275151 0.999508 -0.0150282 
       -0.0127827 0.0153844 0.9998 
     Singular: 0
     Euler's angles: AngleX=-0.0150287 AngleY=-0.0132016 AngleZ=-0.0275217
     m_ComputeZYX = 0
   End of MultiTransform.
<<<<<<<<<<
   TransformsToOptimizeFlags, begin() to end(): 
      1 
   TransformsToOptimize in queue, from begin to end:
   End of TransformsToOptimizeQueue.
<<<<<<<<<<
   End of CompositeTransform.
<<<<<<<<<<

With 100 iterations, the stopping condition was

Final metric value: -0.5304509757680418
Optimizer's stopping condition, GradientDescentOptimizerv4Template: Maximum number of iterations (100) exceeded.

I increased the iterations to 500. Then the stopping condition was

Final metric value: -0.5277902472209781
Optimizer's stopping condition, GradientDescentOptimizerv4Template: Convergence checker passed at iteration 128.

initial_transform is

initial_transform = sitk.CenteredTransformInitializer(
    fixed_image,
    moving_image,
    sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY,
)
Hello @debapriya,

Based on the rotation matrix in the final transformation, it looks like the optimization ended at a local minimum far from the actual one (the rotation is very close to the identity matrix, with differences of less than 2 degrees). Try increasing the sampling percentage via SetMetricSamplingPercentage; it is currently set at 1%, which is low. Try 10% and see if the result improves.

Thank you @zivy.
With a sampling percentage of 10% and 1000 iterations, I got the following result.

Stopping criteria:

Final metric value: -0.5617761390226094
Optimizer's stopping condition, GradientDescentOptimizerv4Template: Convergence checker passed at iteration 577.

Final_transform

itk::simple::CompositeTransform
 CompositeTransform (000002427F3E8BA0)
   RTTI typeinfo:   class itk::CompositeTransform<double,3>
   Reference Count: 1
   Modified Time: 1348919
   Debug: Off
   Object Name: 
   Observers: 
     none
   Transforms in queue, from begin to end:
   >>>>>>>>>
   Euler3DTransform (000002427E769F70)
     RTTI typeinfo:   class itk::Euler3DTransform<double>
     Reference Count: 1
     Modified Time: 1348759
     Debug: Off
     Object Name: 
     Observers: 
       none
     Matrix: 
       0.989314 0.136303 -0.0517703 
       -0.134421 0.990185 0.0382643 
       0.0564777 -0.0308964 0.997926 
     Offset: [7.797, 5.57976, 14.9376]
     Center: [-0.214844, -1.21484, 21.5]
     Translation: [6.52064, 6.44324, 14.9184]
     Inverse: 
       0.989314 -0.134421 0.0564777 
       0.136303 0.990185 -0.0308964 
       -0.0517703 0.0382643 0.997926 
     Singular: 0
     Euler's angles: AngleX=-0.0309013 AngleY=-0.0565348 AngleZ=-0.136795
     m_ComputeZYX = 0
   End of MultiTransform.
<<<<<<<<<<
   TransformsToOptimizeFlags, begin() to end(): 
      1 
   TransformsToOptimize in queue, from begin to end:
   End of TransformsToOptimizeQueue.
<<<<<<<<<<
   End of CompositeTransform.
<<<<<<<<<<

Transformed Images:

This is a better transformation.

Now, I am trying to register another brain image pair. However, for this pair I do not have any metadata, so SimpleITK uses default values for the spacing, origin and direction cosines.

Values for reference image are:
Ref_Data

Values for moving image are:
Mov_Data

Registration results are
Stopping Condition:

Final metric value: -0.38442490465178913
Optimizer's stopping condition, GradientDescentOptimizerv4Template: Convergence checker passed at iteration 45.

Final transform:

itk::simple::CompositeTransform
 CompositeTransform (000001DDD38B2F10)
   RTTI typeinfo:   class itk::CompositeTransform<double,3>
   Reference Count: 1
   Modified Time: 375645
   Debug: Off
   Object Name: 
   Observers: 
     none
   Transforms in queue, from begin to end:
   >>>>>>>>>
   Euler3DTransform (000001DDB4549F50)
     RTTI typeinfo:   class itk::Euler3DTransform<double>
     Reference Count: 1
     Modified Time: 375485
     Debug: Off
     Object Name: 
     Observers: 
       none
     Matrix: 
       0.999998 -0.00139968 -0.00123681 
       0.00135567 0.99939 -0.0348902 
       0.00128489 0.0348884 0.99939 
     Offset: [0.631528, -0.262781, 3.31637]
     Center: [127.5, 127.5, 13.5]
     Translation: [0.43615, -0.638696, 7.92024]
     Inverse: 
       0.999998 0.00135567 0.00128489 
       -0.00139968 0.99939 0.0348884 
       -0.00123681 -0.0348902 0.99939 
     Singular: 0
     Euler's angles: AngleX=0.0348955 AngleY=-0.00128567 AngleZ=0.00140053
     m_ComputeZYX = 0
   End of MultiTransform.
<<<<<<<<<<
   TransformsToOptimizeFlags, begin() to end(): 
      1 
   TransformsToOptimize in queue, from begin to end:
   End of TransformsToOptimizeQueue.
<<<<<<<<<<
   End of CompositeTransform.
<<<<<<<<<<

Visual output:

What is the reason for this failure? Is it only the lack of metadata?
The sampling percentage is 10%.

Hello @debapriya,

Most likely the missing metadata is causing the issue. Note that without metadata the spacing defaults to isotropic (1.0, 1.0, 1.0). Most often the spacing differs along the scan direction, i.e. (s1, s1, s2) with s1 < s2, which makes a big difference.

To see if this is indeed the case, open the image in ITK-SNAP or Slicer and look at the sagittal or coronal views; you will see that the anatomy appears distorted. This artificial isotropic spacing is equivalent to using a viewer that isn't aware of the metadata, similar to what is illustrated in the fundamental concepts documentation referred to at the top of this discussion.

I am trying to register another image pair.

Metadata information of the fixed and moving images are respectively

spacing:   (1.7200000286102295, 1.7200000286102295, 1.7200000286102295)
origin:    (0.0, 0.0, 0.0)
direction: (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)
size:      (128, 128, 47)

and

spacing:   (3.5, 1.0, 0.859375)
origin:    (-114.23728942871094, -133.84988403320312, 22.01392364501953)
direction: (-1.7088175806908158e-05, 0.7071032644682752, 0.707110318371585, -2.9318385449587944e-06, -0.7071102978164786, 0.7071032438421095, 0.9999999998496992, 1.0009971617977067e-05, 1.4156338083462928e-05)
size:      (256, 256, 28)

The Slicer view of the fixed and moving images respectively are shown below

As I understand, the moving image is distorted. However, I do not understand the reason for this distortion. Is the metadata wrong? If so, what could be the source of the error?

Hello @debapriya,

The spacing of the moving image looks corrupt. Usually images are isotropic in one plane and have a different spacing along the third axis. In the second image each axis has a different spacing, which is likely an error. You will need to go back to the original data and see what is going on there.

Thank you for the information, @zivy. I could finally register the images.

However, as you can see, the transformed image is a little wider and shorter than the fixed image. Vertical and horizontal scaling might align the two images better. How can I incorporate scaling into the existing code?

Existing Code:

initial_transform = sitk.CenteredTransformInitializer(
    fixed_image,
    moving_image,
    sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY,
)

registration_method = sitk.ImageRegistrationMethod()

# Similarity metric settings.
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.1)

registration_method.SetInterpolator(sitk.sitkLinear)

# Optimizer settings.
registration_method.SetOptimizerAsGradientDescent(
    learningRate=1.0,
    numberOfIterations=1000,
    convergenceMinimumValue=1e-6,
    convergenceWindowSize=10,
)
registration_method.SetOptimizerScalesFromPhysicalShift()

# Don't optimize in-place, we would possibly like to run this cell multiple times.
registration_method.SetInitialTransform(initial_transform, inPlace=False)

# Connect all of the observers so that we can perform plotting during registration.
registration_method.AddCommand(sitk.sitkStartEvent, rgui.start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rgui.end_plot)
registration_method.AddCommand(
    sitk.sitkMultiResolutionIterationEvent, rgui.update_multires_iterations
)
registration_method.AddCommand(
    sitk.sitkIterationEvent, lambda: rgui.plot_values(registration_method)
)

final_transform = registration_method.Execute(fixed_image, moving_image)

# Always check the reason optimization terminated.
print("Final metric value: {0}".format(registration_method.GetMetricValue()))
print(
    "Optimizer's stopping condition, {0}".format(
        registration_method.GetOptimizerStopConditionDescription()
    )
)

Hello @debapriya,

An overview of all the transformation types supported by SimpleITK/ITK is given in this notebook. You are currently using Euler3DTransform. To allow for scaling you can use the ComposeScaleSkewVersor3DTransform.

By using the ComposeScaleSkewVersor3DTransform you add degrees of freedom to the transformation, accounting for scaling. Having said that, the specific transformation type to use depends on knowledge of the physical world: if it is known that the object in the image does not change scale, you should not use a transformation that includes scaling, even if the registration results look better.

This is known as overfitting; see this notebook, which illustrates the problem in the context of registration.

Thank you for this information @zivy

Hello,

I am registering two brain images of a patient with a tumor. I also have an ROI image outlining the tumor. The ROI image is a label image with two intensity values.

After obtaining the transformation using fixed and moving images, I am applying the transformation on the ROI image. However, the transformed ROI image is an all black image.

I have tried resampling the ROI image with Nearest Neighbor as well as Linear interpolation. The result is the same.

What is the reason for this and how can I fix it?

Hello @debapriya,

You are asking for help without providing relevant details. Just saying "I've performed registration and the transformed ROI image is an all black image" is not a useful description of the problem.
There could be issues with:

  1. Registration failing (converging to a local minimum that is far from the one that aligns the images).
  2. Resampling (using the wrong transformation direction or grid definition).
  3. Visualization (which you already encountered in your other question).

You need to confirm success of each sub-step before you can move on to the next step.

Having said this, I will hazard a guess at the issue just based on the images:

The ROI image looks like a contour and not the object itself. When the ROI signal is sparse (high frequency), you will often encounter aliasing when resampling. See this question and the guidance on how to resolve this specific issue.

If this does not address the problem, you will need to provide a more complete description.

Hello @zivy,

Based on your suggestion, I am trying to convert the contour image to a distance map. However I do not know the code to do so. Please help.

I did not provide details because I have already faced the issues that you mentioned earlier in this thread, so I assume they are already taken care of. However, below is the code and the relevant screenshots. Please let me know if anything is erroneous.

Original Fixed, Moving and ROI image:

Registration Code:

initial_transform = sitk.CenteredTransformInitializer(
    fixed_image,
    moving_image,
    sitk.Euler3DTransform(),
    #sitk.ComposeScaleSkewVersor3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY,
)

registration_method = sitk.ImageRegistrationMethod()

# Similarity metric settings.
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.1)

registration_method.SetInterpolator(sitk.sitkLinear)
#registration_method.SetInterpolator(sitk.sitkNearestNeighbor)

# Optimizer settings.
registration_method.SetOptimizerAsGradientDescent(
    learningRate=1.0,
    numberOfIterations=1000,
    convergenceMinimumValue=1e-6,
    convergenceWindowSize=10,
)
registration_method.SetOptimizerScalesFromPhysicalShift()

# Setup for the multi-resolution framework.
# registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
# registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])
# registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

# Don't optimize in-place, we would possibly like to run this cell multiple times.
registration_method.SetInitialTransform(initial_transform, inPlace=False)

#Connect all of the observers so that we can perform plotting during registration.
registration_method.AddCommand(sitk.sitkStartEvent, rgui.start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rgui.end_plot)
registration_method.AddCommand(
    sitk.sitkMultiResolutionIterationEvent, rgui.update_multires_iterations
)
registration_method.AddCommand(
    sitk.sitkIterationEvent, lambda: rgui.plot_values(registration_method)
)

final_transform = registration_method.Execute(fixed_image, moving_image)

# Always check the reason optimization terminated.
print("Final metric value: {0}".format(registration_method.GetMetricValue()))
print(
    "Optimizer's stopping condition, {0}".format(
        registration_method.GetOptimizerStopConditionDescription()
    )
)

moving_resampled = sitk.Resample(
    moving_image,
    fixed_image,
    final_transform,
    sitk.sitkLinear,
    0.0,
    moving_image.GetPixelID(),
)

roi_resampled = sitk.Resample(
    ROI_image,
    fixed_image,
    final_transform,
    #sitk.sitkLinear,
    #sitk.sitkNearestNeighbor,
    sitk.sitkLabelGaussian,
    0.0,
    ROI_image.GetPixelID(),
)

gui.MultiImageDisplay(
    image_list=[fixed_image, moving_image, moving_resampled],
    title_list=["fixed", "moving", "transformed"],
    figure_size=(8, 4),
    window_level_list=[fixed_window_level, moving_window_level, moving_window_level],
);

Output:

Code for displaying the transformed ROI:

gui.MultiImageDisplay(
    image_list=[fixed_image, moving_image, ROI_image, roi_resampled],
    title_list=["fixed", "moving", "ROI", "Transformed ROI"],
    figure_size=(8, 4),
    window_level_list=[fixed_window_level, moving_window_level, ROI_window_level, ROI_window_level],
);

Output

Hello @debapriya,

Code to compute the distance map and then obtain the transformed mask from it is below:

# Compute distance map on original binary segmentation (outside of object values are
# greater than zero, inside less than zero and on the border they are zero)
ROI_image_distance_map = sitk.SignedMaurerDistanceMap(
    ROI_image, squaredDistance=False, useImageSpacing=True
)

# Resample distance map
ROI_image_distance_map_resampled = sitk.Resample(
    ROI_image_distance_map,
    fixed_image,
    final_transform,
    sitk.sitkLinear,
    0.0,
    ROI_image_distance_map.GetPixelID(),
)

# Threshold resampled distance map to obtain the resampled binary mask
ROI_image_resampled = ROI_image_distance_map_resampled <= 0

Thank you @zivy. I used your code, but the output is still an all-black image. This time the intensity range of the transformed ROI image is 1 to 1, instead of 0 to 0 as in the earlier case.

Hello @debapriya,

You should check that the ROI_image_distance_map is correct. If it looks good, confirm that ROI_image_distance_map_resampled is correct.