I’m new to SimpleITK and have familiarized myself with the basics of 3D image registration. I’m working on an image registration pipeline to align two 3D images. The images are density maps of protein structures and have complex geometries. The good news, however, is that the images are almost identical.
It cannot be assumed that the images overlap initially. Also, the moving_image may require significant rotation to achieve alignment with the fixed_image. The moving_image will likely not be centered within its volume, and will be relatively far from the origin.
I’m using Python; the images are loaded with SimpleITK and have the same size, spacing, origin, and direction. I can successfully get the image masses to overlap using CenteredTransformInitializer() as the initial transform and SetOptimizerAsGradientDescent() as the optimizer.
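For reference, my setup looks roughly like the sketch below (simplified; the metric and optimizer parameters shown here are placeholders rather than my exact values):

```python
import SimpleITK as sitk

# fixed_image and moving_image are the two sitk.Image volumes described above
# (same size, spacing, origin, and direction); a rigid transform is assumed.
initial_transform = sitk.CenteredTransformInitializer(
    fixed_image,
    moving_image,
    sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.MOMENTS,  # align centers of mass
)

registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsCorrelation()        # placeholder metric choice
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(
    learningRate=1.0, numberOfIterations=100
)
registration_method.SetOptimizerScalesFromPhysicalShift()
registration_method.SetInitialTransform(initial_transform, inPlace=False)

final_transform = registration_method.Execute(fixed_image, moving_image)
```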
However, I’m unable to achieve rotational alignment between the images. I’ve experimented with SetOptimizerAsExhaustive(), but this causes the moving_image to migrate far away from the fixed_image. My hunch is that the moving_image is being rotated about its origin, which is far from where the actual image density resides, so it is “swinging” far out of alignment with even minor rotational changes. Perhaps the image origin (or the rotation center) needs to be aligned with the center of mass of the image, but I’m not sure.
Can someone offer advice on how to approach this problem?
Registration initialization is a critical part of classical registration that is not often discussed in textbooks, because it is either heuristic or context dependent (e.g. implicitly assuming the object to be registered is in the middle of the image volume). Rotational alignment between the images can be hard to achieve if the orientation differences are significant: most algorithms will fail to converge to the correct transformation if a rotation close to 180 degrees remains after initialization. Also, if the structure is close to spherical, the problem becomes ill-posed (there are an infinite number of solutions).
Finally, regarding “I’ve experimented with the SetOptimizerAsExhaustive() but this causes the moving_image to migrate far away from the fixed_image”: the exhaustive optimizer is the one method the user fully controls, because it simply evaluates the similarity metric on a grid in the transformation’s parameter space. If you define the grid so that it does not admit solutions far from the expected result, you will not get such solutions.
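For example, here is a sketch of an exhaustive search restricted to rotations about the initializer’s center, loosely following the SimpleITK registration initialization notebook (the variable names and metric are assumptions, not your code; the Euler3DTransform parameters are ordered angleX, angleY, angleZ, tX, tY, tZ):

```python
import numpy as np
import SimpleITK as sitk

initial_transform = sitk.CenteredTransformInitializer(
    fixed_image,
    moving_image,
    sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.MOMENTS,
)

registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsCorrelation()
registration_method.SetInterpolator(sitk.sitkLinear)
# The grid spans -numberOfSteps*stepLength .. +numberOfSteps*stepLength per
# parameter: here every 45 degrees around each rotation axis and no translation
# search, so candidate solutions cannot drift away from the centered position.
registration_method.SetOptimizerAsExhaustive(
    numberOfSteps=[4, 4, 4, 0, 0, 0], stepLength=np.pi / 4
)
registration_method.SetOptimizerScales([1, 1, 1, 1, 1, 1])
# inPlace=True: after Execute, initial_transform holds the best grid point and
# can be used as the starting point for a subsequent gradient descent stage.
registration_method.SetInitialTransform(initial_transform, inPlace=True)
registration_method.Execute(fixed_image, moving_image)
```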
It is also not clear from your description what type of transformation is being used: rigid, affine, or something else? I suspect rigid, but that is mostly a guess.
If the SimpleITK Jupyter notebooks do not help resolve your issue, please provide a more detailed description of the registration problem and we’ll try our best to help.
If your moving image is far from the origin, you could try setting the center of rotation of the transform that is being optimized, or using a composite transform in which only the second (most recently added) transform is optimized via SetOnlyMostRecentTransformToOptimizeOn().
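A rough sketch of that second option, assuming SimpleITK 2.x, a rigid transform, and the same fixed_image/moving_image names as above (the Otsu-based centroid is just one illustrative way to pick a sensible rotation center):

```python
import SimpleITK as sitk

# Centering step (not optimized): roughly aligns the centers of mass.
centering = sitk.CenteredTransformInitializer(
    fixed_image,
    moving_image,
    sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.MOMENTS,
)

# Rigid step (optimized): put its rotation center on the structure itself,
# here estimated as the centroid of a crude Otsu foreground mask of the
# fixed image, rather than leaving it at the physical origin.
rigid = sitk.Euler3DTransform()
stats = sitk.LabelShapeStatisticsImageFilter()
stats.Execute(sitk.OtsuThreshold(fixed_image, 0, 1))  # foreground label = 1
rigid.SetCenter(stats.GetCentroid(1))

composite = sitk.CompositeTransform(3)   # empty 3D composite (SimpleITK >= 2.0)
composite.AddTransform(centering)
composite.AddTransform(rigid)            # added last, i.e. "most recent"
composite.SetOnlyMostRecentTransformToOptimizeOn()  # only `rigid` is optimized

registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsCorrelation()
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(
    learningRate=1.0, numberOfIterations=200
)
registration_method.SetOptimizerScalesFromPhysicalShift()
registration_method.SetInitialTransform(composite, inPlace=True)
registration_method.Execute(fixed_image, moving_image)
```

After Execute(), `composite` holds both the frozen centering step and the optimized rigid step, so it can be passed directly to sitk.Resample() to map the moving image onto the fixed image grid.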