Resampled image does not match the points returned by TransformPoint

I am applying a transform matrix to resample an MR image by following this notebook.

I am able to resample the image. However, when I apply the matrix to a point to get the transformed point, and then compare the initial and transformed points on the initial and final images, they correspond to different parts. I am using the inverse transform when transforming the point with the TransformPoint method. Can someone guide me as to whether something is wrong here? Thanks in advance.

Hello @prms,

The transformation obtained by the registration maps points from the fixed image to the moving image (physical points, not indexes), so when you evaluate distances you are working with three points: point_in_fixed_image, corresponding_point_in_moving_image, T(point_in_fixed_image).

You then look at the distance d(T(point_in_fixed_image), corresponding_point_in_moving_image).
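As a rough sketch of that distance check (plain NumPy, with a made-up 4x4 translation matrix and made-up points standing in for final_transform and your manually localized landmarks), note that SimpleITK's TransformPoint performs the same mapping of a physical point:

```python
import numpy as np

# Hypothetical stand-in for final_transform: a pure translation by (1, 2, 3),
# written as a 4x4 homogeneous matrix. TransformPoint(p) computes (A @ [p, 1])[:3].
A = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 3.0],
              [0.0, 0.0, 0.0, 1.0]])

point_in_fixed_image = np.array([10.0, 20.0, 30.0])
corresponding_point_in_moving_image = np.array([11.0, 22.5, 33.0])  # manually localized

# T(point_in_fixed_image): map the fixed-image point into the moving image.
mapped = (A @ np.append(point_in_fixed_image, 1.0))[:3]

# d(T(point_in_fixed_image), corresponding_point_in_moving_image)
error = np.linalg.norm(mapped - corresponding_point_in_moving_image)
print(error)  # 0.5
```

With real data you would replace A (or the matrix multiply) with your transform's TransformPoint call and use landmark pairs you localized in both images.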

The following code provides a GUI that shows you where your points are mapped for a computed transformation final_transform, using the original fixed and moving images. The gui module is part of the notebook distribution. You don't need to set the window-level settings if you don't have specific ones in mind; the defaults will be used.

%matplotlib notebook
import gui
gui.RegistrationPointDataAquisition(fixed_image, moving_image, figure_size=(8,4), 
                                    fixed_window_level=ct_window_level, moving_window_level=mr_window_level);

Hi @zivy, I think my question was not clear. I am applying a transform matrix, learned using least squares, to an MR image. So far, I haven't used registration; I am just transforming the image using the affine (4x4) matrix to see how the transformation works. I am using the affine matrix

array([[ 0.94969925, -0.07489753, -0.09085982,  2.34559412],
       [ 0.0128945 ,  0.97870216, -0.01826539, -1.60140903],
       [ 0.04813273, -0.02432601,  1.06080813, -5.17650667],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])

to transform the MR image, and after resampling with the above matrix I get this resultant image. When I use the inverse transform on the point (50.16, -31.71, 34.84), the result is (47.81, -30.10, 40.01). However, when I compare the first point in the source image with the second point in the transformed image, they seem to be different locations. I was expecting them to point to a similar place across the images, since the same matrix is used to transform one point to the other.

I’ve used the resample function from this notebook.

Hello @prms,

I'm not really sure what you mean by a “learned matrix”. The way resampling works is as follows:

  1. We have an image we want to resample, it occupies a physical region in space.
  2. We define a grid of points in physical space, either explicitly or implicitly by using a reference image, in which case the grid coincides with the image voxel locations (not indexes). The reference image can be another image or the image that we are resampling; in both cases it is just a way to define a grid.
  3. We have a transformation that maps the physical grid points into the original image; we take the intensities from there and place them at the grid points, resulting in our resampled image.

Hopefully this clarifies how resampling works, so that you can obtain what you want.
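As an illustration only (a toy NumPy sketch, not SimpleITK's implementation), the three steps might look like this, assuming unit spacing and zero origin so physical points coincide with indexes, and nearest-neighbor sampling:

```python
import numpy as np

# 1. The image we want to resample.
input_image = np.arange(16.0).reshape(4, 4)

# A stand-in for tx.TransformPoint: shift physical points by (1, 0).
def transform(p):
    return (p[0] + 1, p[1])

output_image = np.zeros_like(input_image)
for i in range(4):                # 2. the output grid (here: the input's own grid)
    for j in range(4):
        x, y = transform((i, j))  # 3. map the grid point INTO the input image
        if 0 <= x < 4 and 0 <= y < 4:
            # sample the intensity there and place it at the grid point
            output_image[i, j] = input_image[int(x), int(y)]

print(output_image[0])  # [4. 5. 6. 7.] -- row 0 of the output holds row 1 of the input
```

The key point the sketch shows: the transform is applied to the output grid points to find where to *read* intensities in the input image, not the other way around.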

Hi @zivy, thank you very much for your response. I would like to re-frame my scenario, since I think I have not been able to convey my query clearly. So far, I have done the following:

  1. I have transformed the MR image using an affine transform. The transform parameters are:
    array([[ 0.94969925, -0.07489753, -0.09085982,  2.34559412],
           [ 0.0128945 ,  0.97870216, -0.01826539, -1.60140903],
           [ 0.04813273, -0.02432601,  1.06080813, -5.17650667],
           [ 0.        ,  0.        ,  0.        ,  1.        ]])
  2. The reference grid was taken as the image itself. I applied the above-mentioned affine transform to this image; after applying the transform/resampling I get this output.
  3. Now I would like to compare the two images visually, side by side. For that, I took a point in the initial image and obtained its corresponding point in the new image by applying the inverse affine transform to the point from the initial image: for (x, y, z) in the initial image, I obtained (x', y', z') using affine.GetInverse().TransformPoint((x, y, z)). I want to compare (x, y, z) in the initial image and (x', y', z') in the new image visually.
  4. The problem is that (x, y, z) in the initial image does not look similar to (x', y', z') in the image after resampling when comparing them visually. My thought is that they should look similar, since they are the locations of the same point before and after the transformation.

I am not able to figure out why those two points (x, y, z) and (x', y', z') look different.
[figure: thesis transformation clarification]

For example: if the square is rotated and translated into the second square, then when visualizing them, I think (x, y) and (x', y') should have similar values. Generalizing this argument to the MR image: if (x, y, z) is the location of the falx cerebri, then (x', y', z') in the resampled image should also point to the falx cerebri.
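To separate the algebra from the visualization, the round trip can be checked numerically with plain NumPy, using the 4x4 matrix quoted above. If the forward transform restores the original point, the matrix algebra itself is consistent, and any remaining visual mismatch must come from elsewhere (e.g. how each viewer interprets coordinates):

```python
import numpy as np

# The 4x4 affine matrix quoted above.
A = np.array([[ 0.94969925, -0.07489753, -0.09085982,  2.34559412],
              [ 0.0128945 ,  0.97870216, -0.01826539, -1.60140903],
              [ 0.04813273, -0.02432601,  1.06080813, -5.17650667],
              [ 0.        ,  0.        ,  0.        ,  1.        ]])

p = np.array([50.16, -31.71, 34.84, 1.0])  # (x, y, z) in homogeneous form

# Inverse transform, the same mapping as affine.GetInverse().TransformPoint(...)
p_prime = np.linalg.inv(A) @ p

# Applying the forward transform must restore the original point.
p_back = A @ p_prime
print(np.allclose(p_back, p))  # True
```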

Am I missing anything here?

Hello @prms,

The transform maps points from image_resampled to the original image. Please see the following code (a Jupyter notebook cell; it relies on the gui module from the notebooks repository):

import SimpleITK as sitk

%matplotlib notebook
import gui

image = sitk.ReadImage('Case1-FLAIR-resize.nii')
tx = sitk.AffineTransform(3)
tx.SetMatrix([0.94969925, -0.07489753, -0.09085982, 0.0128945 ,  0.97870216, -0.01826539, 0.04813273, -0.02432601,  1.06080813])
tx.SetTranslation([2.34559412, -1.60140903, -5.17650667])
image_resampled = sitk.Resample(image1=image, transform = tx)

#tx maps points from image_resampled to image
gui.RegistrationPointDataAquisition(image_resampled, image, figure_size=(8,4));

Thank you very much @zivy. My mistake was due to the difference between the LPS and RAS coordinate conventions. I was using coordinates and a viewer in RAS and so could not compare the points, since SimpleITK uses LPS coordinates. Now I am able to compare them visually, both in the tool I am using and in the one you mentioned above.
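For anyone hitting the same issue: ITK/SimpleITK works in the LPS (Left-Posterior-Superior) convention, while many viewers (e.g. 3D Slicer) report RAS. A minimal sketch of the conversion, which simply negates the first two components:

```python
def flip_ras_lps(p):
    """Convert a point between RAS and LPS; the mapping is its own inverse."""
    x, y, z = p
    return (-x, -y, z)

point_ras = (50.16, -31.71, 34.84)   # as reported by an RAS viewer
point_lps = flip_ras_lps(point_ras)  # what SimpleITK expects
print(point_lps)                     # (-50.16, 31.71, 34.84)
print(flip_ras_lps(point_lps) == point_ras)  # True: round trip restores the point
```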