Maintain physical location with registration?

Hello -

I’m trying to register a CT head to a CT head template, but I want the final registered image to maintain the same physical location as the original head CT. I used the method from the registration introduction notebook (http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/60_Registration_Introduction.html). The registration works well, aligning the pixels to the template, but it also moves the physical location of the image to the template (I’m verifying the position by viewing in 3D Slicer). Fingers crossed, but is there an option to maintain the original location? If not, does anyone have advice on how to recalculate an Origin and Direction to get back? Simply applying the final_transform to the origin point and direction vectors doesn’t seem to work. Appreciate any advice - this has stumped me for a while.

Thanks,
Justin

Does using TransformPoint etc. on the inverse of final_transform work?

Hello @radiplab,

I’m not sure what you are expecting from registration. I suspect you need some clarification with respect to the process, so here goes.

Registration
Input: fixed_image, moving_image
Output: T, a transformation that maps points from the fixed image coordinate system to the moving image coordinate system - T(p_fixed_image) = p_moving_image

Application
Now that we have the transformation, we can do many things. Two common applications:

  1. Resample the moving_image onto the fixed_image grid, combining the intensity information into one image. This is useful in multi-modality scenarios. For instance, you can create a combined MR-CT image where soft tissue is well defined by the MR and bone by the CT. In this case there is no preference as to which image is the fixed one and which is the moving one.
  2. Transform a segmentation defined in one image space to another (based on your terminology, “template image”, I suspect this is what you want). In this case, because you want to map the segmentation from your template image to the new image, the template image should be the moving_image. Or, as @dzenanz suggested, if you have things the other way round, just call T.GetInverse() and apply that transformation to map the segmentation.

If these don’t cover your scenario, you will need to provide more specific details.


Thanks for your replies @zivy and @dzenanz. I’m exploring registration as a way to “line up” a head CT - they’re often acquired with a patient’s head tilted to the side, etc. I still want the lined-up version to be in the same physical space as the original acquisition so if viewed in PACS, the reference lines would still match up. When I register the original CT image to a template (the Rorden standard head CT) in sitk, it moves that CT image to the same physical space as the template. I want to move it back to the original physical space but keep the newly aligned pixels.

I tried applying the inverse and that didn’t seem to work. It’s entirely probable I’m doing that wrong. It’s probably helpful to include the code I’m using - a large part of it is taken directly from the sitk registration introduction, so it may look familiar. Really appreciate your help:

import numpy as np
import SimpleITK as sitk

fixed_image = sitk.Cast(fixed_image, sitk.sitkFloat32)
moving_image = sitk.Cast(moving_image, sitk.sitkFloat32)

# Use the CenteredTransformInitializer to align the centers of the two volumes
# and set the center of rotation to the center of the fixed image
initial_transform = sitk.CenteredTransformInitializer(fixed_image,
                                                      moving_image,
                                                      sitk.Euler3DTransform(),
                                                      sitk.CenteredTransformInitializerFilter.GEOMETRY)

# ***** Registration
registration_method = sitk.ImageRegistrationMethod()

# Similarity metric settings.
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)

# Optimizer settings.
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
registration_method.SetOptimizerScalesFromPhysicalShift()

# Setup for the multi-resolution framework.            
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

# Don't optimize in-place, we would possibly like to run this cell multiple times.
registration_method.SetInitialTransform(initial_transform, inPlace=False)
final_transform = registration_method.Execute(fixed_image, moving_image)

print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))               

# Resample the moving image onto the fixed image grid
min_value = float(sitk.GetArrayFromImage(moving_image).min())
registered_image = sitk.Resample(moving_image, fixed_image, final_transform, sitk.sitkLinear, min_value, moving_image.GetPixelID())

# Adjust the registered image direction
width, height, depth = registered_image.GetSize()
center = registered_image.TransformIndexToPhysicalPoint((int(np.ceil(width / 2)),
                                                         int(np.ceil(height / 2)),
                                                         int(np.ceil(depth / 2))))
vector_x = moving_image.GetDirection()[0:3]
vector_y = moving_image.GetDirection()[3:6]
vector_z = moving_image.GetDirection()[6:9]
inverse_transform = final_transform.GetInverse()
new_vector_x = inverse_transform.TransformVector(vector_x, center)
new_vector_y = inverse_transform.TransformVector(vector_y, center)
new_vector_z = inverse_transform.TransformVector(vector_z, center)
new_direction = new_vector_x + new_vector_y + new_vector_z
registered_image.SetDirection(new_direction)

# Adjust the registered image origin
new_origin = inverse_transform.TransformPoint(registered_image.GetOrigin())
registered_image.SetOrigin(new_origin)

Wow, I actually got it working. Applied the inverse of final_transform to the registered_image direction, and the final_transform to the registered_image origin.

Appreciate your feedback. I also partly needed a sanity check - it seems like there should be an easier way to move pixels around in sitk while preserving their physical location. Like a “preserveLocation=True” flag. Does anyone know if that exists?

As far as I know, moving and preserving location are opposites. What you might want is resampling. This example does scaling; you might want a rigid transform.

That’s what I was thinking - I can’t quite wrap my head around why applying the (non-inverse) transform to the origin worked with registration.