Dear all,
Before I get to the question, here is the background: I have a registration pipeline that runs Mattes mutual information registration (with OnePlusOneEvolutionary as the optimizer) in MATLAB, and I would like to convert it to Python. A comment in MATLAB's OnePlusOneEvolutionary function suggests that MathWorks implemented it based on ITK, so I assumed the conversion would go smoothly: I could call the corresponding functions from SimpleITK and plug in the parameters I had been using in MATLAB. It turned out not to be that straightforward.

I created a synthetic image pair where a translational offset was applied to img_A to create its counterpart, img_B. Using the same parameters (growth/shrink factor, epsilon, initial radius), MATLAB took around 200 iterations to find the correct local minimum, whereas SimpleITK could take over 700. To make SimpleITK converge at the same speed, I had to increase the growth factor from 1.05 to 1.25. One important caveat: most of MATLAB's registration code is implemented in C/C++, and it is not clear to me how some of the parameters are used internally. For example, pyramidLevels=3 is an argument that gets passed into MATLAB's internal registration function, but I have no way to know what the shrink factor is at each pyramid level.

TL;DR: Mattes mutual information registration is very likely implemented differently in SimpleITK and MATLAB. For SimpleITK I ended up with two sets of parameters: set 1 uses the same parameters as MATLAB; set 2 uses parameters tuned to match MATLAB's performance.
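To make "set 1" vs "set 2" concrete, here is a rough sketch of the two optimizer parameter sets on the SimpleITK side. Only the growth factor differs between them; the shrink-factor expression simply mirrors the code further below (MATLAB does not expose its internal shrink factor), and the remaining values are taken from that code as well:

# Set 1: the OnePlusOneEvolutionary parameters carried over from MATLAB unchanged.
set_1 = dict(growthFactor=1.05, shrinkFactor=1.05 ** 0.25,
             epsilon=1.5e-6, initialRadius=6.25e-3 / 3,
             numberOfIterations=200, seed=42)

# Set 2: identical except for the growth factor, which had to be raised to 1.25
# before SimpleITK converged in roughly the same number of iterations as MATLAB.
set_2 = dict(set_1, growthFactor=1.25, shrinkFactor=1.25 ** 0.25)

# Either set can be splatted into the optimizer setup in the full code below, e.g.:
# registration_method.SetOptimizerAsOnePlusOneEvolutionary(**set_2)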
So here is the problem I encountered:
When a translational shift is the only offset between the images, set 2 seems to work pretty well. However, if there is a rotational shift between the fixed and moving images, SimpleITK does not do well at finding the correct rotation. For example, I created another synthetic image pair where img_A is rotated counterclockwise by 2 degrees about the center of the image to create its counterpart img_B. If I ask MATLAB to register img_B to img_A using Mattes mutual information with OnePlusOneEvolutionary as the optimizer, the affine matrix I get is (note that MATLAB stores the matrix transposed by default):
[[0.999399001473223 0.034664619633346 0]
[-0.034664619633346 0.999399001473223 0]
[21.231931825887902 -21.052581869688499 1.000000000000000]]
whereas the affine matrix I get from SimpleITK is:
[[ 9.99999902e-01 -4.18741788e-08 5.59955927e-01]
[ 5.04991089e-08 1.00000012e+00 -7.09840319e-01]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
It appears that SimpleITK's Mattes mutual information registration is very reluctant to pick up the rotational component, and I have seen this behavior quite consistently on many other images from real-world datasets. I am not sure why this is the case, and I would love to learn how to make it work properly if possible. Thank you!
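To put a number on this, pulling the in-plane rotation angle out of the two matrices above (just a quick check, not part of the pipeline) gives roughly 2 degrees for MATLAB and essentially zero for SimpleITK:

import numpy as np

# Rotation angle implied by the upper-left 2x2 block of each result
# (sign conventions aside, the magnitude is what matters here).
matlab_angle = np.degrees(np.arctan2(0.034664619633346, 0.999399001473223))
sitk_angle = np.degrees(np.arctan2(5.04991089e-08, 1.00000012e+00))
print(matlab_angle)  # ~1.99 degrees: the 2-degree rotation is recovered
print(sitk_angle)    # ~3e-06 degrees: essentially no rotation at all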
This is the code I used:
# ref_img_path and mov_img_path are paths to tiff images; ref is 1-channel uint8, mov is 3-channel uint8
import numpy as np
import tifffile
import SimpleITK as sitk

ref_img = sitk.GetImageFromArray(tifffile.imread(ref_img_path))
mov_img = sitk.GetImageFromArray(tifffile.imread(mov_img_path), isVector=True)

# Keep only channel 0 of the 3-channel moving image and cast it to float32
selectCastFilter = sitk.VectorIndexSelectionCastImageFilter()
selectCastFilter.SetIndex(0)
selectCastFilter.SetOutputPixelType(sitk.sitkFloat32)
mov_img = selectCastFilter.Execute(mov_img)
# The single-channel reference image only needs a cast
ref_img = sitk.Cast(ref_img, sitk.sitkFloat32)
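# Mattes mutual information metric; sampling strategy NONE means dense sampling (all pixels)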
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.NONE)
registration_method.SetInterpolator(sitk.sitkLinear)
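# "Set 2" optimizer parameters: same as in MATLAB except the growth factor, raised from 1.05 to 1.25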
growthFactor=1.25
registration_method.SetOptimizerAsOnePlusOneEvolutionary(growthFactor=growthFactor, shrinkFactor=growthFactor**(0.25), epsilon=1.5e-6, initialRadius=6.25e-3/3, numberOfIterations=200, seed=42)
registration_method.SetOptimizerScalesFromPhysicalShift()
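# 3-level pyramid to mirror MATLAB's pyramidLevels=3; the shrink factors and smoothing
# sigmas per level are my own choice, since MATLAB does not expose its internal values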
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
# initial_transform = sitk.CenteredTransformInitializer(ref_img, mov_img, sitk.AffineTransform(ref_img.GetDimension()))
# registration_method.SetInitialTransform(initial_transform, inPlace=False)
initial_transform = sitk.AffineTransform(ref_img.GetDimension())
registration_method.SetInitialTransform(initial_transform, inPlace=False)
final_transform = registration_method.Execute(ref_img, mov_img)
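# For a 2D AffineTransform, GetParameters() is (a00, a01, a10, a11, tx, ty).
# Rebuild the 3x3 matrix and invert it to get the moving-to-fixed mapping
# (the optimized transform maps fixed-image points to moving-image points).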
tform = np.array(final_transform.GetParameters())
temp_tform = np.reshape(tform[:4], (2,2))
temp_tform = np.concatenate((temp_tform, np.expand_dims(tform[-2:], axis=-1)), axis=-1)
temp_tform = np.concatenate((temp_tform, np.array([[0,0,1]])), axis=0)
inv_tform = np.linalg.inv(temp_tform)
print(inv_tform)
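For completeness, this is roughly how the rotated test pair is generated (a minimal sketch, not my exact script; it assumes img_A is already a 2D single-channel SimpleITK image):

import numpy as np
import SimpleITK as sitk

# Rotate img_A by 2 degrees about the image centre to create img_B.
# The resampling transform maps output (img_B) points back into img_A.
center = img_A.TransformContinuousIndexToPhysicalPoint(
    [(sz - 1) / 2.0 for sz in img_A.GetSize()])
rot = sitk.Euler2DTransform()
rot.SetCenter(center)
rot.SetAngle(np.deg2rad(2.0))  # sign/direction may need flipping depending on the axis convention
img_B = sitk.Resample(img_A, img_A, rot, sitk.sitkLinear, 0.0, img_A.GetPixelID())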
Best,
Zihang