How to apply composite affine matrix to a volume

I am performing an affine transformation of an image and have an affine matrix learned using PyTorch. I want to apply the learned matrix to an MR image and save the result to disk using SimpleITK. However, I am not able to apply the composite matrix, which is a 4x4 homogeneous transformation matrix. Any help regarding this would be highly appreciated.

Thanks,

Hello @prms

What you need is the AffineTransform’s SetMatrix method. Use the upper-left 3x3 submatrix of your 4x4 matrix (I’m assuming that the last row of your matrix is [0,0,0,1]). Unroll it in row major order. For additional details on working with SimpleITK transforms see this Jupyter notebook. For additional information on resampling, see this notebook.
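A minimal sketch of what that could look like in Python, assuming the learned matrix lives in a NumPy array `mat4x4` (a placeholder name) whose last row is [0, 0, 0, 1] and whose last column holds the translation in physical units:

```python
import numpy as np
import SimpleITK as sitk

# Placeholder: the learned 4x4 homogeneous matrix, last row assumed to be [0, 0, 0, 1].
mat4x4 = np.eye(4)

affine = sitk.AffineTransform(3)
# Upper-left 3x3 submatrix, unrolled in row-major order.
affine.SetMatrix(mat4x4[:3, :3].flatten().tolist())
# The last column is assumed to hold the translation component (physical units).
affine.SetTranslation(mat4x4[:3, 3].tolist())
```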


Thank you @zivy for your help.

I have one more question. Since I already have an affine matrix, how can I extract the center from it? In the example it is hard-coded.

Thanks,

Hello @prms

In the world outside ITK/SimpleITK the center is at the origin (0,0,0). If you don’t explicitly set the center in ITK/SimpleITK it defaults to the origin too. This is not ideal when working with images, as the point (0,0,0) may be far from the center of the image, which is usually the point we want to rotate around. In your case I suspect this doesn’t matter.
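If you do want rotations to act about the image center, a common pattern is to set the transform’s center to the image’s physical center; a hedged sketch (the file name is a placeholder):

```python
import SimpleITK as sitk

image = sitk.ReadImage("mr_volume.nii.gz")  # placeholder file name

affine = sitk.AffineTransform(3)
# Physical coordinates of the geometric center of the image.
center = image.TransformContinuousIndexToPhysicalPoint(
    [(sz - 1) / 2.0 for sz in image.GetSize()]
)
affine.SetCenter(center)
```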

Hi @zivy, I think I was not quite clear in stating my problem. I am trying to register MR and US 3D images using PyTorch. I have a learned affine matrix (treating MR as the moving image and US as the fixed image). After training is complete, I want to apply the learned affine matrix to test MR images and compare the landmark points with the US volume to see the differences in target registration error. I am not quite sure how to get the landmark positions in the warped volume (i.e. find the positions to which the initial landmarks get mapped). Is there any approach in SimpleITK that would help me get the landmark positions after applying the affine matrix?

So far I have done these steps:

  1. Learn the affine matrix using PyTorch
  2. Convert the initial landmark coordinates into voxel coordinates
  3. Apply the learned affine matrix to the initial voxel coordinates to get the corresponding voxel coordinates in the warped image

After this, I am stuck on getting the transformed image using the learned affine matrix and extracting the new coordinates of the landmark points after warping.

Thank you for your help 🙂

You probably want to resample your image. The docs link to quite a few examples.
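For example, a minimal resampling sketch (file names are placeholders; `affine` stands for the transform built from your learned 4x4 matrix):

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("us_volume.nii.gz")   # placeholder: fixed (US) image
moving = sitk.ReadImage("mr_volume.nii.gz")  # placeholder: moving (MR) image

affine = sitk.AffineTransform(3)  # set matrix/translation from your learned 4x4 matrix

# Resample the moving image onto the fixed image grid. The transform is expected
# to map points from the fixed image's physical space to the moving image's space.
warped = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0, moving.GetPixelID())
sitk.WriteImage(warped, "mr_warped.nii.gz")
```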

Hello @prms ,

Now I understand what it is you want.

  1. To quantitatively evaluate registration you don’t need to resample the moving image onto the fixed image grid. Resampling is useful for qualitative evaluation and for creating figures in manuscripts.
  2. The correct way to perform quantitative evaluation is via the Target Registration Error (TRE). This is done by manually or semi-automatically identifying corresponding points in the two images (the physical coordinates of these points, not voxel indexes), mapping the fixed points through the transform, and measuring the distances to their moving counterparts (see the sketch after this list).
  3. Given the corresponding points, analyze the TRE distribution, preferably in space as it is spatially varying. For more details, please see this notebook section titled “Quantitative evaluation”.
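A minimal sketch of that point-mapping step, assuming `affine` is the learned transform expressed in physical space and the landmark lists below are placeholders for your own data:

```python
import numpy as np
import SimpleITK as sitk

affine = sitk.AffineTransform(3)  # placeholder: the learned transform in physical space

# Corresponding landmarks in *physical* coordinates (placeholders).
fixed_points = [(10.0, 20.0, 30.0), (15.0, 25.0, 35.0)]
moving_points = [(12.0, 21.0, 31.0), (16.0, 26.0, 34.0)]

# The resampling transform maps points from the fixed image's physical space
# to the moving image's physical space, so apply it to the fixed landmarks.
mapped = [affine.TransformPoint(p) for p in fixed_points]
tre = [np.linalg.norm(np.array(m) - np.array(q)) for m, q in zip(mapped, moving_points)]
print("mean TRE:", np.mean(tre), "max TRE:", np.max(tre))
```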

Hi @zivy

Thank you very much for your response. I have one more question. Since I am training the neural net to learn the affine matrix, do I need images with the same origin and direction cosines, so that I can apply the learned affine to the landmark points as you’ve suggested?

Hello @prms,

Good question. The answer is it depends.

Relative to what coordinate system is the affine transform? If your network is not aware of the image spacing, origin, and direction-cosine-matrix (i.e. these are not part of the inputs), then you are implicitly learning an affine transform which assumes spacing=[1,1,1], origin=[0,0,0], direction-cosine-matrix=identity for both images.

You will need to compensate/correct for that when mapping points from the fixed image to the moving image.
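A hedged sketch of that correction, assuming the learned matrix maps fixed-image voxel coordinates to moving-image voxel coordinates (verify the direction of your learned mapping first; all names here are placeholders):

```python
import numpy as np
import SimpleITK as sitk

fixed = sitk.ReadImage("us_volume.nii.gz")   # placeholder: fixed (US) image
moving = sitk.ReadImage("mr_volume.nii.gz")  # placeholder: moving (MR) image
mat4x4 = np.eye(4)                           # placeholder: learned voxel-space 4x4 matrix

def map_fixed_point_to_moving(p_fixed):
    """Map a physical point in the fixed image to physical coordinates in the moving image."""
    # Physical point -> continuous voxel index in the fixed image.
    idx_fixed = np.array(fixed.TransformPhysicalPointToContinuousIndex(p_fixed))
    # Apply the learned affine in voxel space.
    idx_moving = mat4x4[:3, :3] @ idx_fixed + mat4x4[:3, 3]
    # Continuous voxel index -> physical point in the moving image.
    return moving.TransformContinuousIndexToPhysicalPoint(idx_moving.tolist())
```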

If, on the other hand, this information is taken into account, then just map the fixed points (physical coordinates in the fixed image) to the moving coordinate system using the affine transformation and compute the distances from the corresponding moving points.


Thank you very much @zivy. I’ll try the approach you suggested and see the results.