Multiresolution Registration with 2D Affine Transformation on pairs of 2D images

Hello, I’m a beginner to registration (1 month experience) and semi-beginner to Python (4 months).

I need to do multiresolution registration in Python with 2D Affine transformation on pairs of 2D fixed and 2D moving images. No images have more than 2 dimensions. I’m currently using SimpleITK 1.2.4 (ITK 4.13), but eventually want to do this in full ITKv4.

The fixed images are binary masks, and I wish to transform the moving images to align with them. The fixed images are generally of higher resolution and occupy a larger physical space than the moving images. The moving images can be of any imaging modality, though none are MRI. The moving images have features of varying intensity but are strongly geometrically similar.
All images have different X and Y sizes and could have different X and Y spacing, so each may occupy a different physical 2D space. All images may also have different pixel types.

To set up the code, I’ve largely copied the following sources…

… but I’m stuck getting a 2D Affine Transformation to work, partly because the source examples are Euler 3D or some other transformation. Here’s my process and code, with some questions included as comments:

Dependencies:

import os
import itertools
import SimpleITK as sitk
import matplotlib.pyplot as plt
import numpy as np
import gui
from ipywidgets import interact, fixed
from multiprocessing.pool import ThreadPool
from functools import partial
from IPython.display import display, HTML, clear_output
# Callbacks for plotting registration progress.
import registration_callbacks

First, I need to define the initial transform and align the centers of the fixed and moving images. For 2D AffineTransform, my understanding is that it requires SetCenter(centerpoint) and not SetOrigin(). I am uncertain where and how to do this for the fixed and moving images simultaneously (if that is required), since they occupy different spaces:

# Make sure to give centerpoint of SetCenter(centerpoint) in world coordinates.
initial_transform = sitk.AffineTransform(sitk.CenteredTransformInitializer(
    fixed_image, 
    moving_image,
    sitk.AffineTransform(2), #AffineTransform(2) or AffineTransform()?
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
    # for centering, something like this? (commented out, since it is not valid code):
    # sitk.SetCenter([fixed_X/2, moving_X/2], [fixed_Y/2, moving_Y/2])

Second, I configure the registration as in the sources above. This code executes without error after defining initial_transform as above (with the sitk.SetCenter line commented out):

# This is the registration configuration which we use in all cases. The only parameter that we vary 
# is the initial_transform. 
def multires_registration(fixed_image, moving_image, initial_transform):

    # Registration framework setup.
    registration_method = sitk.ImageRegistrationMethod()

    # Interpolator (bspline is slowest).
    registration_method.SetInterpolator(sitk.sitkLinear)

    # Similarity metric settings.
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
    registration_method.SetMetricSamplingPercentage(0.01)

    # Optimizer settings.
    registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10)

    # Scale the step size differently for each parameter.
    registration_method.SetOptimizerScalesFromPhysicalShift()

    # Set the initial moving and optimized transforms in ITKv4.
    optimized_transform = sitk.AffineTransform(2)  # dimension must be given; AffineTransform() is not valid
    registration_method.SetMovingInitialTransform(initial_transform)
    registration_method.SetInitialTransform(optimized_transform, inPlace=False)

    # Setup for the multi-resolution framework.
    registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
    registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])
    registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

    # Connect all of the observers so that we can perform plotting during registration.
    # NOTE: the next 4 lines may have to be moved outside this function definition.
    registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
    registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
    registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, registration_callbacks.metric_update_multires_iterations) 
    registration_method.AddCommand(sitk.sitkIterationEvent, lambda: registration_callbacks.metric_plot_values(registration_method))

    # Possibly need to cast fixed and moving images to e.g. sitk.sitkFloat32.
    final_transform = registration_method.Execute(fixed_image, moving_image)

    print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
    print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
    print(final_transform)

    return (final_transform, registration_method.GetMetricValue())

Third, I want to find the best initial rotation of the (now centered) moving image from the set of angles: 0, pi/2, pi, 3*pi/2. I don’t expect any moving image to be mirrored, so there’s no need for X- or Y-flipped cases. The code below currently throws the error ‘MattesMutualInformationImageToImageMetricv4(000001DDD7E4F770): Too many samples map outside moving image buffer’. I suspect this is due to the uncentered image pairs that I’m having trouble with in the First step.

# This function evaluates the metric value in a thread-safe manner
def evaluate_metric(current_rotation, tx, f_image, m_image):
    print(current_rotation)
    registration_method = sitk.ImageRegistrationMethod()
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
    registration_method.SetMetricSamplingPercentage(0.01)
    registration_method.SetInterpolator(sitk.sitkLinear)
    current_transform = sitk.AffineTransform(tx)
    #from sitk.AffineTransform: Rotate (int axis1, int axis2, double angle, bool pre=false)
    current_transform.Rotate(axis1=0, axis2=1, angle=current_rotation)
    registration_method.SetInitialTransform(current_transform)
    res = registration_method.MetricEvaluate(f_image, m_image)
    return res

# 2D orientations
orientations_list = [0,np.pi/2,np.pi, 3*np.pi/2]
p = ThreadPool(len(orientations_list))
all_metric_values = p.map(partial(evaluate_metric, 
                              tx = initial_transform, 
                              f_image = fixed_image,
                              m_image = moving_image),
                              orientations_list)
print(all_metric_values)
best_orientation = orientations_list[np.argmin(all_metric_values)]
print('best orientation: {}'.format(best_orientation))
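
My plan is then to fold the winning angle into the initial transform before the full registration. A minimal sketch, assuming initial_transform is the sitk.AffineTransform defined in the First step:

# Apply the best orientation as an in-plane rotation of the initial transform,
# using the same Rotate(axis1, axis2, angle) call as in evaluate_metric above.
initial_transform.Rotate(0, 1, best_orientation)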

Fourth, I create the reference domain based on the largest possible space in this dataset:

# Create reference domain
dimension=2
# Create the reference image with a zero origin, identity direction cosine matrix and dimension     
reference_origin = np.zeros(dimension)
reference_direction = np.identity(dimension).flatten()
reference_size = [512]*dimension # Arbitrary sizes, smallest size that yields desired results.

# Physical image size corresponds to the largest physical size in the training set, or any other arbitrary size, where 'size' = (image_canvas_length-1)*pixel_length in each dimension

#largestphysicalspace(sources) returns [x,y] = [5855.0, 5195.0]
reference_physical_size = largestphysicalspace(sources)
print(reference_physical_size)

reference_spacing = [ phys_sz/(sz-1) for sz,phys_sz in zip(reference_size, reference_physical_size) ]
# Load ith pair of fixed_image and moving_image as target_pair[0] and target_pair[1], and as sitk.sitkFloat32 (pixel type 8)
i = 0
target_pair=loadtargetpair(sources, i, 8)
#Assign the [ith] moving_image [index 1] as the reference_image. [Index 7] contains the pixel type (integer) of the original image:
reference_image = sitk.Image(reference_size, sources[1][i][7])
reference_image.SetOrigin(reference_origin)
reference_image.SetSpacing(reference_spacing)
reference_image.SetDirection(reference_direction)

# Always use the TransformContinuousIndexToPhysicalPoint to compute an indexed point's physical coordinates as
# this takes into account size, spacing and direction cosines. For the vast majority of images the direction 
# cosines are the identity matrix, but when this isn't the case simply multiplying the central index by the 
# spacing will not yield the correct coordinates resulting in a long debugging session. 
reference_center = np.array(reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize())/2.0))

Fifth, I wish to perform the registration and show the transformed moving_image:

final_transform,_ = multires_registration(fixed_image, moving_image, initial_transform)

#Resample the moving image given the optimized transform, and display
moving_resampled = sitk.Resample(moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelID())
# display_images_with_alpha is defined in the Utility Functions of the Jupyter notebook
# 60_Registration_Introduction; it expects an image_z slider for 3D volumes, and
# fixed_image.GetSize()[2] does not exist for these 2D images, so a 2D variant without
# the z slice (display_2d_images_with_alpha, used later in this thread) is needed.
interact(display_2d_images_with_alpha, alpha=(0.0,1.0,0.05), fixed = fixed(fixed_image), moving=fixed(moving_resampled))

I know I’m stuck at initializing an Affine 2D transform with centering, and would appreciate suggestions on how to do that. But what else am I missing to get the spine of this registration process working? I realize that my pixel types are a bit mixed up in the code… you can assume all are type sitk.sitkFloat32 for simplicity.
Thank you greatly for your time!

Hello @ScottP,

Welcome to SimpleITK/Python/registration; tackling all three at once is not going to be easy, but we’ll try to make it manageable.

First, to debug a registration you need to visualize the images after each step; this will tell you whether you have a bug. Second, the Too many samples map outside moving image buffer exception does indicate that the images no longer overlap: as you suspected, the initial alignment is off. Also, in your code the final mapping is a combination of two transforms (initial_transform, optimized_transform), so you need to compose the two after the registration, which your code doesn’t do (see Version 1.1 in this notebook).
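
For reference, the composition after registration follows this pattern (the same two lines used in Version 1.1 of the notebook):

# Compose the transforms so that resampling with final_transform applies
# the initial alignment and the optimized correction together.
final_transform = sitk.Transform(optimized_transform)
final_transform.AddTransform(initial_transform)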

Let’s simplify, start with a basic 2D/2D affine registration example. To run the following code as is you will need to obtain the two images photo.dcm and cxr.dcm.

Once you understand how this code works, you should be able to generalize it for your case, or ask a specific question (one question at a time please :wink:):

import SimpleITK as sitk

def multires_registration(fixed_image, moving_image, initial_transform):    
    registration_method = sitk.ImageRegistrationMethod()
    registration_method.SetInterpolator(sitk.sitkLinear)
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
    registration_method.SetMetricSamplingPercentage(0.01)
    registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
    registration_method.SetOptimizerScalesFromPhysicalShift() 
    registration_method.SetInitialTransform(initial_transform, inPlace=True)
    registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
    registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])
    registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

    registration_method.Execute(fixed_image, moving_image)
    
    print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
    print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
    
# Read the images and modify so that they are 2D, the original size is [x,y,1] and we want [x,y].
fixed_image = sitk.ReadImage('cxr.dcm', sitk.sitkFloat32)[:,:,0]
moving_image = sitk.ReadImage('photo.dcm', sitk.sitkFloat32)[:,:,0]

# Display the original images and the resampled moving_image (onto the
# fixed_image grid using the identity transformation).
sitk.Show(fixed_image, 'fixed')
sitk.Show(moving_image, 'moving')
sitk.Show(sitk.Resample(moving_image, fixed_image, sitk.Transform()), 'identity transform')

# Centered 2D affine transform and show the resampled moving_image using this transform.
registration_transform = sitk.CenteredTransformInitializer(fixed_image, 
                                                      moving_image, 
                                                      sitk.AffineTransform(2), 
                                                      sitk.CenteredTransformInitializerFilter.GEOMETRY)
sitk.Show(sitk.Resample(moving_image, fixed_image, registration_transform), 'initial affine transform')


# Register using 2D affine initial transform that is overwritten
# and show the resampled moving_image using this transform.
multires_registration(fixed_image, moving_image, registration_transform)
sitk.Show(sitk.Resample(moving_image, fixed_image, registration_transform), 'final affine transform')

Thanks so much for your help, @zivy! :slight_smile: I’ve combined the initial_transform and optimized_transform after registration to get the following 2D Affine registration to work, using your ‘basic’ example and Version 1.1 notebook link:

def multires_registration(fixed_image, moving_image, initial_transform):
        
    registration_method = sitk.ImageRegistrationMethod()
    registration_method.SetInterpolator(sitk.sitkBSpline)
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
    registration_method.SetMetricSamplingPercentage(0.01)
    registration_method.SetOptimizerAsGradientDescent(learningRate=0.5, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
    registration_method.SetOptimizerScalesFromPhysicalShift() 
    registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [2,1]) 
    registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [1,0]) 
    registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

    optimized_transform = sitk.AffineTransform(2)
    registration_method.SetMovingInitialTransform(initial_transform)   
    registration_method.SetInitialTransform(optimized_transform, inPlace=False)

    registration_method.AddCommand(sitk.sitkStartEvent, registration_callbacks.metric_start_plot)
    registration_method.AddCommand(sitk.sitkEndEvent, registration_callbacks.metric_end_plot)
    registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, registration_callbacks.metric_update_multires_iterations) 
    registration_method.AddCommand(sitk.sitkIterationEvent, lambda: registration_callbacks.metric_plot_values(registration_method))

    optimized_transform = registration_method.Execute(fixed_image, moving_image)
        
    print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
    print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
    print('Optimized Transform from Multires:')
    print(optimized_transform)

    return optimized_transform


fixed_image = sitk.ReadImage('D:/Registration_tests/USAF-1951 fixed.tif', sitk.sitkFloat32)
moving_image = sitk.ReadImage('D:/Registration_tests/USAF-1951 moving three-quarter resolution.tif', sitk.sitkFloat32)

initial_transform = sitk.CenteredTransformInitializer(  fixed_image, 
                                                        moving_image, 
                                                        sitk.AffineTransform(2),
                                                        sitk.CenteredTransformInitializerFilter.GEOMETRY)
sitk.Show(fixed_image, 'fixed')
sitk.Show(moving_image, 'moving')
sitk.Show(sitk.Resample(moving_image, fixed_image, sitk.Transform()), 'identity transform')
sitk.Show(sitk.Resample(moving_image, fixed_image, initial_transform), 'initial affine transform')
print('Parameters: ' + str(initial_transform.GetParameters()))
print('FixedParameters: ' + str(initial_transform.GetFixedParameters()))

optimized_transform = multires_registration(fixed_image, moving_image, initial_transform)
final_transform = sitk.Transform(optimized_transform)
final_transform.AddTransform(initial_transform)

sitk.Show(sitk.Resample(moving_image, fixed_image, final_transform), 'final affine transform')
moving_resampled = sitk.Resample(moving_image, fixed_image, final_transform, sitk.sitkBSpline, 0.0, moving_image.GetPixelID())
interact(display_2d_images_with_alpha, alpha=(0.0,1.0,0.1), fixed = fixed(fixed_image), moving=fixed(moving_resampled))

This scales up the moving image size appropriately when I use fixed_image=cxr.dcm[:,:,0] and moving_image=photo.dcm[:,:,0] (same results as your example), but I have not been able to get the correct scaled-up outcome using this USAF-1951 fixed.tif, which is more representative of the blocky, almost two-level images that I need to process. Here are the image results using the original chart as fixed_image, and a 3/4-resolution copy of the same chart as moving_image.

What might be causing a lower-resolution moving_image not to be scaled up to align with the fixed_image?

Hello @ScottP,

Happy to help.

Thanks for sharing an image that is more representative of what you are working with. Your task is harder than most if you want to use intensity based registration. It may be more appropriate to identify corresponding points in the two images and perform point based registration (LandmarkBasedTransformInitializerFilter).

With intensity based registration the issue is that you have very high frequency content (what you refer to as blocky). That means that it is easy for the registration to fail (get stuck in local minima) because the terrain in parameter space has a very small convergence range. The common solution is to blur the images with larger kernels (in your code these are the smoothingSigmas) and possibly use more levels in the pyramid. The blurring at the higher levels of the pyramid smooths the optimized function, removing local minima, and increasing the convergence range. In lower levels of the pyramid there is less blurring but you are already inside the narrow convergence range.

In the extreme (working with high frequency binary images), instead of a blurring pyramid you can replace the original images with their distance maps (e.g. SignedMaurerDistanceMapImageFilter, DanielssonDistanceMapImageFilter); a sketch follows the list:

  1. Input binary images fixed_image, moving_image.
  2. Compute distance maps for both images, fixed_distance_map, moving_distance_map.
  3. Register fixed_distance_map, moving_distance_map.
  4. Apply the estimated transformation as if you had registered fixed_image, moving_image.
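
A minimal sketch of steps 2-4, assuming binary inputs and reusing your multires_registration (here assumed to return the final composite transform):

# Distance maps computed in physical units, with the same settings for both images.
distance_filter = sitk.SignedMaurerDistanceMapImageFilter()
distance_filter.SquaredDistanceOff()
distance_filter.UseImageSpacingOn()

fixed_distance_map = distance_filter.Execute(sitk.Cast(fixed_image, sitk.sitkUInt8))
moving_distance_map = distance_filter.Execute(sitk.Cast(moving_image, sitk.sitkUInt8))

# Register the distance maps...
final_transform = multires_registration(fixed_distance_map, moving_distance_map, initial_transform)

# ...and apply the estimated transformation to the original moving image.
moving_resampled = sitk.Resample(moving_image, fixed_image, final_transform,
                                 sitk.sitkLinear, 0.0, moving_image.GetPixelID())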

Hi @zivy,

I now have permission to post samples of my actual fixed and moving images :dart:

Thanks for the helpful suggestions! The screenshots below show registration results using SignedMaurerDistanceMapImageFilter distance maps with smoothingSigmas [2,1,0] and pyramid shrinkFactors [4,2,1]. Results were the same without smoothingSigmas and the pyramid.

As seen in the output matrix below, I’m getting almost-unitary scaling each time registration is run (whether the fixed image is bigger than the moving image, or vice versa), so there’s still something funny with the scaling, in addition to the final transform results. You can also see the incorrect scaling in the initial_affine_transform and final_affine_transform images above.

The fixed image is always binary, but the moving image’s high frequency content is variable - it could equally look like it does in the above example, or like this:

Given that the moving image’s high frequency content could vary by about this much, do you think non-intensity-based registration is the way to go? And are there feature-based registration methods that do not require landmark points?
Landmarking is possible for my application if it can be automated… Before attempting registration, I tried to use template matching to get the location of an alignment marker on the moving image given its known location on the fixed image, but this method was intensity-based and the results were not very repeatable.


Segmenting these polygons, then using their center points as landmarks seems like the most reliable way to go.


Hi @ScottP,

Thanks for sharing the images; I’ve never seen anything quite like these, a far cry from medical images.

Assumptions:

  0. 2D images.
  1. Fixed image is binary, with polygons and lines.
  2. Moving image is grayscale, with polygons and some “line like” structures.
  3. The transformation mapping between the two is affine. From looking at the two images and the shapes of the polygons, the transformation looks like a similarity transform and not affine: the polygons appear to retain their aspect ratio and there is no shearing. Caveat: this conclusion is based on visual inspection of a single pair of images. The transformation type is critical when using a feature/point based registration, as the features/points need to be invariant to the transformation type to enable automatic pairing.

Suggested solution below is applicable if the transform is affine or similarity.

Modify the binary image before registering: remove the thin structures while retaining the polygons using a morphological opening operation, then follow the distance map solution (your distance maps should look much more similar than the current ones - though I find the fixed map visually pleasing).
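
A sketch of the opening step; the kernel radius is illustrative and needs to be tuned so it is larger than the half-width of the lines but smaller than the polygons:

# Opening = erosion followed by dilation: thin structures vanish, polygons survive.
opening = sitk.BinaryMorphologicalOpeningImageFilter()
opening.SetKernelType(sitk.sitkBall)
opening.SetKernelRadius(5)      # illustrative value, tune to your line widths
opening.SetForegroundValue(1)   # the "on" value of your binary mask
fixed_polygons_only = opening.Execute(fixed_image)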

Before you continue, I suggest that you confirm the transformation type relating the two images using manual paired point registration; see this notebook section on “Register using manual initialization”. After you obtain the paired points using the GUI there is a call to LandmarkBasedTransformInitializer with a VersorRigid3DTransform; change it to a 2D AffineTransform. Look at the results and see whether the matrix is affine or closer to similarity. This will also give you an idea of how the best-case registration is expected to look (use the result to resample…).
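
A sketch of that check with a 2D affine transform; the point lists are flat [x1, y1, x2, y2, ...] physical coordinates, and the values below are placeholders for the points you collect with the GUI:

fixed_points = [10.0, 10.0, 100.0, 12.0, 50.0, 80.0]   # placeholder coordinates
moving_points = [3.0, 2.5, 30.0, 3.0, 15.0, 24.0]      # placeholder coordinates
affine_estimate = sitk.LandmarkBasedTransformInitializer(sitk.AffineTransform(2),
                                                         fixed_points,
                                                         moving_points)
print(affine_estimate)  # inspect the matrix: close to similarity, or truly affine?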


Hi @zivy,

Following the notebook you mentioned, here’s the final 2D Affine matrix from manual Landmark registration:

0.16024      -0.00058562
0.000703648   0.160461

So you’re correct - it is mainly a similarity transform, and that’s true of all the other pairs of input images. However, I do expect some small nonisotropic scaling and skewing components in the matrix, because in my system the imaging axis is slightly off-angle with respect to the normal of the flat specimen surface, in both the X and Y directions. (This is what motivated me to pursue registration in the first place; previously I sequentially rotated, isotropically scaled, and translated the moving image, but the mismatch to the fixed image was always worse than one pixel of the original moving image, and I need a mapping accuracy of at least one pixel across the image.) In SimpleITK I’ve been implicitly approximating these small off-angle (actually projection) components as affine nonisotropic scale and skew. Perhaps a full projective transform would be best.

Following your suggestion, I removed all thin structures from the fixed image (and additionally removed the brighter thin structures from the moving image), then ran the same 2D Affine registration that gave the results in my previous post. Result images are below; the distance maps are more similar than in my previous post, as you intuited, but the scaling in the initial transform is still incorrect.

Before I get further into landmark registration and segmentation as @dzenanz recommended (good idea! I’d probably start with binary watershedding the polygons), it seems important to first get a reasonable scale in the initial transform of intensity-based registrations. In fact, the manual landmark registration result above is the first time I’ve seen non-unity, correct scaling. I suspect that my unreasonable initial scales in intensity-based registrations are due to big resolution differences between fixed and moving images, e.g. 3000 vs 1920 pixel widths, combined with an implicit assumption of 1:1 fixed:moving pixel spacing… I was attempting to use SetSpacing in the “Fourth” step of my original post to ensure similar physical spaces occupied by the fixed and moving images, but I’m still having trouble seeing how that would fit into my current 2D Affine routine. Or maybe my focus here is on the wrong problem? :stuck_out_tongue:

Thank you very much again for your guidance, Ziv.

Are you just visualizing the distance maps of the fixed and moving images in an inverted way, or are they actually inverted? Having them with the same sign would probably help even with the MMI metric, and would enable other metrics. If your spacings are correct, then even mean squares should work. If the spacing metadata in your images is not correct, putting in even approximate values would help too (e.g. one has 1.0, the other 1.5, even if only relative). And then, if a transform with the original spacing is required, adjust the transform’s scaling.
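
Something along these lines (a sketch; fixed_binary/moving_binary and registration_method are stand-ins for your variables):

# Same sign convention for both distance maps...
distance_filter = sitk.SignedMaurerDistanceMapImageFilter()
distance_filter.UseImageSpacingOn()
distance_filter.InsideIsPositiveOn()
fixed_dm = distance_filter.Execute(fixed_binary)
moving_dm = distance_filter.Execute(moving_binary)

# ...then a mean squares metric instead of MMI.
registration_method.SetMetricAsMeanSquares()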

Hi @dzenanz,

Thanks very much for your suggestions.

Those distance maps from the SignedMaurerDistanceMapImageFilter were made with InsideIsPositiveOff(), but I get the same error below using InsideIsPositiveOn() to invert both maps (shown in the pic).

In the new code below, I’ve manually set proper spacing for the fixed and moving images, resampled the moving image based on the fixed image so that both images start with the same scale and resolution, and chosen (0,0) as the origin for both images. Here are the results after the initial registration is defined, just before the error is thrown:

Executing the code below results in the error

itk::ERROR: MattesMutualInformationImageToImageMetricv4(000001DD86709000): Joint PDF summed to zero

which comes from line

optimized_transform = registration_method.Execute(fixed_image, moving_image)

within the multires_registration method defined in my 2nd post in this thread.

New code:

import numpy as np
import SimpleITK as sitk
from scipy import stats  # needed for stats.mode below

dimension = 2

moving_image = sitk.ReadImage('D:/Registration_tests/moving_image.tif', sitk.sitkFloat32)
fixed_image = sitk.ReadImage('D:/Registration_tests/fixed_image.tif', sitk.sitkFloat32)
sitk.Show(fixed_image, 'fixed_image')
sitk.Show(moving_image, 'moving_image')

# Set mm/pxl spacing
fixed_mm_per_pxl = [2.696/2892, 2.696/2892]
moving_mm_per_pxl = [2.696/1856, 2.696/1856]
fixed_image.SetSpacing(fixed_mm_per_pxl)
moving_image.SetSpacing(moving_mm_per_pxl)

# Set Direction Cosine Matrix
reference_direction = np.identity(dimension).flatten()
fixed_image.SetDirection(reference_direction)
moving_image.SetDirection(reference_direction)

# Set Origin (in physical coordinates)
reference_origin = np.zeros(dimension)
fixed_image.SetOrigin(reference_origin)
moving_image.SetOrigin(reference_origin)

# Setup Distance Filter
distance_filter = sitk.SignedMaurerDistanceMapImageFilter()
distance_filter.SquaredDistanceOff()
distance_filter.UseImageSpacingOn()
distance_filter.InsideIsPositiveOn() # produces same result with distance_filter.InsideIsPositiveOff()

# Set Background Pixel Value of the fixed image for Distance Filter and Execute
fixed_np = sitk.GetArrayFromImage(fixed_image)
fixed_modal_value, _ = stats.mode(fixed_np, axis=None)
fixed_modal_value = float(fixed_modal_value)
print('fixed_image modal value = {}'.format(fixed_modal_value))
distance_filter.SetBackgroundValue(fixed_modal_value)

fixed_distance_map = distance_filter.Execute(sitk.Cast(fixed_image, sitk.sitkUInt32))
sitk.Show(fixed_distance_map, 'fixed_distance_map')

# Set initial scale and center of moving_image resampled on the fixed_image space
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(fixed_image)
transform_scale=sitk.ScaleTransform(dimension)
transform_scale.SetScale([3.7279751037344404/2.787512931034483, 2.7957482710926698/2.1193232758620693]) # fixed_image_space/moving_image_space, where e.g. x_space = (pxls_width - 1) * x_mm_per_pxl 
transform_scale.SetCenter(reference_origin) # center of the image to be resampled
resample.SetDefaultPixelValue(0)
resample.SetInterpolator(sitk.sitkBSpline)
inv_transform_scale=transform_scale.GetInverse()
resample.SetTransform(inv_transform_scale)

resampled_moving_image = resample.Execute(moving_image)
sitk.Show(resampled_moving_image, 'resampled_moving_image')

#Set Background Pixel Value of the moving image for Distance Filter and Execute
moving_np = sitk.GetArrayFromImage(moving_image)
moving_modal_value, _ = stats.mode(moving_np, axis=None)
moving_modal_value=float(moving_modal_value)
print('moving_image modal value = {}'.format(moving_modal_value))
distance_filter.SetBackgroundValue(moving_modal_value)

resampled_moving_distance_map = distance_filter.Execute(sitk.Cast(resampled_moving_image, sitk.sitkUInt32))
sitk.Show(resampled_moving_distance_map, 'resampled_moving_distance_map')

print('resampled_moving_distance_map.GetSpacing()={}'.format(resampled_moving_distance_map.GetSpacing()))
print('fixed_distance_map.GetSpacing()={}'.format(fixed_distance_map.GetSpacing()))

# Set Initial transform
initial_affine=sitk.AffineTransform(dimension)
initial_affine.SetCenter([0,0]) #Error insensitive to choice of center (0,0) or (image_width/2, image_height/2)
print('initial_affine Parameters: ' + str(initial_affine.GetParameters()))
print('initial_affine FixedParameters: ' + str(initial_affine.GetFixedParameters()))

initial_transform = sitk.CenteredTransformInitializer(  fixed_distance_map,
                                                        resampled_moving_distance_map,
                                                        initial_affine,
                                                        sitk.CenteredTransformInitializerFilter.GEOMETRY)
print('initial_transform Parameters: ' + str(initial_transform.GetParameters()))
print('initial_transform  FixedParameters: ' + str(initial_transform.GetFixedParameters()))

sitk.Show(sitk.Resample(resampled_moving_distance_map, 
                        fixed_distance_map, 
                        initial_transform, 
                        sitk.sitkLinear, 
                        0.0, 
                        moving_image.GetPixelID()),'initial_transform of resampled_moving_distance_map')

# Definition of multires_registration method found in my 2nd post in this thread
optimized_transform = multires_registration(fixed_distance_map, resampled_moving_distance_map, initial_transform)

# Compose transformations after registration
final_transform = sitk.Transform(optimized_transform)
inv_final_transform = final_transform.GetInverse()
new_origin = inv_final_transform.TransformPoint(reference_origin) # Get transformed origin
final_transform.AddTransform(initial_transform)

I understand that ‘Joint PDF summed to zero’ often happens when the fixed and moving images do not overlap in physical space, but they do overlap after I set the initial transform (see pic), so I’m not sure what’s causing this error.

Your setup looks good to me. With that setup you should be able to use the mean squares metric instead of MMI, and it should be both faster and more robust for registering distance maps.

Origin is one of the corners (top-left or maybe bottom-left), so you might want to set reference_origin to -w/2,-h/2.
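
Expressed in physical units, that would be something like this (a sketch; image, w, h, sx, sy are stand-in names):

w, h = image.GetSize()
sx, sy = image.GetSpacing()
# Place the physical center of the image at (0,0).
image.SetOrigin((-0.5 * (w - 1) * sx, -0.5 * (h - 1) * sy))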

You may also want to print the value of GetMetricNumberOfValidPoints in the callback at each iteration to monitor the overlap.
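
For example (a sketch, reusing the registration_method from your multires_registration):

registration_method.AddCommand(
    sitk.sitkIterationEvent,
    lambda: print('valid points: {0}'.format(registration_method.GetMetricNumberOfValidPoints())))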

I just got back to looking at this today. Where I left off, registration_method.Execute was failing due to the MMI error “Joint PDF summed to zero”. I have now replaced MMI with the mean squares metric, with the result that multi-level registration fails but single-level registration completes, albeit with poor results.

Specifically, multi-level registration now fails to find GetMetricNumberOfValidPoints > 0 after the first iteration (thanks for the tip @blowekamp!). This occurs with any pyramid, e.g.:

registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [1,0])

In the registration results below, the red box after the empty plot shows this behaviour.

However, single-level registration (e.g. shrinkFactors=[1] and smoothingSigmas=[0]) successfully runs to completion, with output below:

I don’t like that increasing metric plot, and indeed the final transformed moving image is still not well-aligned with the fixed image, which is a binary mask. Their multiplication result is below; note the largest offsets in the polygons at the bottom-left and top-right.


Single-level registration partly worked though, since the misalignment between the initially transformed moving image and the fixed binary mask is even worse, by a few pixels (multiplication result below):

The poor result from single-level registration might be attributed to the increasing metric, which I had originally hoped to avoid by using a multilevel pyramid… Any thoughts as to why GetMetricNumberOfValidPoints gives zero for the 2nd to Nth multilevel iterations?

And @dzenanz, when the fixed and moving image origins are set to (0,0), the moving image is resampled correctly onto the fixed image physical space. But when their origins are ±(w/2, h/2), the resampled moving image is uniform, with all pixel values equal to ResampleImageFilter.SetDefaultPixelValue, and so registration fails. I’m surprised that using the midpoint of the images as their origin results in this behaviour, but using (0,0) doesn’t seem to affect the registration problems that I encountered in this post.

Corrections & Update:

I was accidentally registering distance maps in my last post; I am now registering the actual fixed and resampled moving images. All the (0,0) instances of SetCenter and SetOrigin have been replaced with (-w/2, -h/2) of each image, as @dzenanz suggested, except that ScaleTransform.SetCenter([0,0]) is still needed to resample the moving image with its (-w/2, -h/2) origin.

The result is that single-level registration now works to well within one pixel of error on the example images that I’ve posted so far :partying_face:! But multilevel registration still fails due to the same problem as in my previous post: no valid points found on the 2nd to Nth iterations.

For example, two-level registration with

registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [1,0])

gives (note the first two lines of text in the screenshot):


and the final transformed moving image has all pixel values = 0.

But any number of single-level registrations, like these three

registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [1,1,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [0,0,0]) 

gives successful & accurate registration:

Why do I need multilevel registration if single-level is working well?
Because the polygons are not always 100% filled with white. See the pics below; polygons are typically filled between 10% and 100%. I thought that a smoothing pyramid could help align polygons that aren’t filled to the same shape, but are still located in the same general vicinity of the fixed and moving images. Registering with these variable shapes is a challenge, and any help with the smoothing-pyramid approach or alternative methods would be appreciated!

Part of fixed image mask:


Same part of moving image with a typical instance of partly filled polygons:

You may want to limit the maximum step size of the optimizer with the maximumStepSizeInPhysicalUnits parameter. This will likely cause an increase in the required number of iterations.
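
For example (a sketch; the step size of 1.0 physical units and the iteration count are illustrative):

registration_method.SetOptimizerAsGradientDescent(
    learningRate=1.0,
    numberOfIterations=300,  # more iterations to compensate for smaller steps
    convergenceMinimumValue=1e-6,
    convergenceWindowSize=10,
    maximumStepSizeInPhysicalUnits=1.0)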

In this code, we initialize with an affine transform, then the optimization process starts. How does the algorithm know to continue with affine optimization? Does it make sense to change the type of transformation during the optimization process? Thanks

Hello @Moha_data,

What you are observing is the use of function overloading. The registration algorithm is not aware of the type of transformation it is optimizing; it uses the transformation’s GetParameters() and SetParameters() methods, so all transformation types are used in the same way. The effect of the parameter values does depend on the specific transformation, which is determined by the transform passed to the SetInitialTransform method.

In some cases registration is more stable if you run it in two steps: start with a lower-dimensional parameter space (i.e. rigid) and then, in a second step, run the registration using a higher-dimensional parameter space (i.e. affine) with the previous result as the initial transformation. The transforms notebook illustrates how to move from a lower-dimensional transform to a higher one; a sketch follows.
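
A sketch of the two-step approach, assuming the in-place multires_registration from the basic example above and SimpleITK’s downcasting transform constructors:

# Stage 1: rigid (Euler2D), optimized in place.
rigid_tx = sitk.Euler2DTransform(sitk.CenteredTransformInitializer(
    fixed_image, moving_image, sitk.Euler2DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
multires_registration(fixed_image, moving_image, rigid_tx)

# Stage 2: promote the rigid result to an affine transform and refine.
affine_tx = sitk.AffineTransform(2)
affine_tx.SetMatrix(rigid_tx.GetMatrix())
affine_tx.SetTranslation(rigid_tx.GetTranslation())
affine_tx.SetCenter(rigid_tx.GetCenter())
multires_registration(fixed_image, moving_image, affine_tx)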


Hi @zivy, in this case we initialize with an affine transform. If we use inPlace=False, does it use the rigid body to continue? Thanks

Hello @Moha_data,

The inPlace parameter does not have an effect on the type of transformation being optimized, rigid or affine; that is determined by the actual type of the initial_transform. This parameter just tells the registration framework whether it should make a copy of the initial_transform and modify that during the optimization, or modify the original.

Multi-stage registration, starting with rigid and then moving to affine, needs to be explicitly implemented by the developer; it isn’t part of the registration framework.
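
To make the inPlace semantics concrete (a sketch):

tx = sitk.AffineTransform(2)
registration_method.SetInitialTransform(tx, inPlace=True)
registration_method.Execute(fixed_image, moving_image)
# tx itself now holds the optimized parameters.

tx2 = sitk.AffineTransform(2)
registration_method.SetInitialTransform(tx2, inPlace=False)
optimized = registration_method.Execute(fixed_image, moving_image)
# tx2 is unchanged; 'optimized' is the modified copy.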
