Need help urgently!! I have a pair of images: a CT image with a voxel size of 0.125×0.125×0.125 and an MRI image with a voxel size of 0.5×0.5×3. Given the coordinates of four pairs of key points, how can I use these four pairs of points to compute the transformation matrix between the two images and register them? Should the key point coordinates be slice indices or ITK world coordinates? The origin and direction of the two images are also different; what should I do about that?
You probably want to use LandmarkBasedTransformInitializer. It takes points in physical space (world coordinates). If you have the indices of the key points, you can convert them with image->TransformIndexToPhysicalPoint(). As long as you transform each index using its corresponding image, the different origins and directions don't matter, because TransformIndexToPhysicalPoint() accounts for them properly.
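As a rough sketch of that workflow in Python (this uses SimpleITK, which wraps the same ITK filter; the file names and point indices below are made-up placeholders, not values from the question):

```python
import SimpleITK as sitk

# Load the two images (file names are placeholders)
ct = sitk.ReadImage("ct.nrrd")
mri = sitk.ReadImage("mri.nrrd")

# Voxel indices of the four corresponding key points (made-up values)
ct_indices = [(120, 130, 140), (200, 130, 140), (120, 210, 140), (120, 130, 220)]
mri_indices = [(30, 32, 10), (50, 32, 10), (30, 52, 10), (30, 32, 18)]

# Convert indices to physical (world) coordinates. This uses each image's own
# origin, spacing, and direction, so the different geometries are handled here.
ct_points = [ct.TransformIndexToPhysicalPoint(idx) for idx in ct_indices]
mri_points = [mri.TransformIndexToPhysicalPoint(idx) for idx in mri_indices]

# LandmarkBasedTransformInitializer expects flat lists: [x1, y1, z1, x2, y2, z2, ...]
fixed_landmarks = [c for p in ct_points for c in p]
moving_landmarks = [c for p in mri_points for c in p]

# Estimate a rigid transform from the landmark pairs (CT as fixed, MRI as moving)
rigid = sitk.LandmarkBasedTransformInitializer(
    sitk.VersorRigid3DTransform(), fixed_landmarks, moving_landmarks)

# Resample the MRI onto the CT grid using the estimated transform
mri_on_ct = sitk.Resample(mri, ct, rigid, sitk.sitkLinear, 0.0, mri.GetPixelID())
print(rigid)
```

With only four point pairs a rigid (or affine) transform is about all you can estimate reliably; the same call also accepts an AffineTransform if scaling/shearing is expected.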
As always, when working with medical images and considering ITK or VTK, you can save a lot of time by prototyping and testing your algorithms in 3D Slicer, which offers a convenient GUI and a built-in Python console.
For example, the Fiducial Registration module in Slicer provides an interactive user interface for ITK's LandmarkBasedTransformInitializer. If something goes wrong in your registration workflow, you can easily figure out where the issue is: are you generating the input coordinates incorrectly, or interpreting the results incorrectly? If you find a discrepancy, you may immediately see that you have inverted signs (RAS vs. LPS coordinate system), the wrong order of coordinates (numpy vs. ITK/VTK convention), mixed-up physical/voxel coordinates, etc.
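As a small illustration of the sign-flip issue mentioned above: Slicer displays and stores point coordinates in RAS, while ITK works in LPS, so a point's first two components must be negated when moving between the two conventions. A minimal sketch (the landmark value is arbitrary):

```python
def ras_to_lps(point):
    """Convert a 3D point from RAS (Slicer) to LPS (ITK/DICOM) by negating x and y."""
    return (-point[0], -point[1], point[2])

# Example: a landmark exported from Slicer in RAS, converted for use with ITK
print(ras_to_lps((12.5, -40.0, 7.0)))  # -> (-12.5, 40.0, 7.0)
```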