Split deformation vector field into two deformation vector fields and apply to image in order


I am trying to split the dvf into two dvfs and apply them in order.

However, the result of deforming with the single dvf is not the same as deforming with the two split dvfs.

I am wondering why this happens and how to fix it.

Here is my code:

    import numpy as np
    import SimpleITK as sitk

    def deform_img(img, dvf):
        # Warp img with the given displacement vector field
        dis_tx = sitk.DisplacementFieldTransform(sitk.Cast(dvf, sitk.sitkVectorFloat64))
        resampler = sitk.ResampleImageFilter()
        resampler.SetReferenceImage(img)
        resampler.SetTransform(dis_tx)
        new_rigid_post = resampler.Execute(img)
        return new_rigid_post

    def test10():
        post = np.zeros((128, 256, 256))
        post[40:90, 40:90, 40:90] = 255
        itk_post = sitk.GetImageFromArray(post)

        pre = np.zeros((128, 256, 256))
        pre[30:80, 20:70, 10:60] = 255
        itk_pre = sitk.GetImageFromArray(pre)

        # Full displacement field
        dvf = np.zeros((128, 256, 256, 3))
        dvf[:, :, :, 0] = 30.2
        dvf[:, :, :, 1] = 20.2
        dvf[:, :, :, 2] = 10.2
        itk_dvf = sitk.GetImageFromArray(dvf, isVector=True)
        new_post = deform_img(itk_post, itk_dvf)
        new_post_img = sitk.GetArrayFromImage(new_post)

        # First part of the split field
        dvf1 = np.zeros((128, 256, 256, 3))
        dvf1[:, :, :, 0] = 20.0
        dvf1[:, :, :, 1] = 15.0
        dvf1[:, :, :, 2] = 5.0
        itk_dvf1 = sitk.GetImageFromArray(dvf1, isVector=True)
        part1_post = deform_img(itk_post, itk_dvf1)

        # Second part (dvf = dvf1 + dvf2)
        dvf2 = np.zeros((128, 256, 256, 3))
        dvf2[:, :, :, 0] = 10.2
        dvf2[:, :, :, 1] = 5.2
        dvf2[:, :, :, 2] = 5.2
        itk_dvf2 = sitk.GetImageFromArray(dvf2, isVector=True)
        part2_post = deform_img(part1_post, itk_dvf2)
        part2_post_img = sitk.GetArrayFromImage(part2_post)

        print(np.array_equal(new_post_img, part2_post_img))

The print(np.array_equal(new_post_img, part2_post_img)) call returns False.

That is correct based on the mathematical properties of displacement fields. I don’t believe this is covered in the ITK software guide.

I believe dvf2 would need to be transformed/resampled by the inverse of dvf1. The math of compositing deformation fields needs further thought.

Hopefully someone else can recommend an appropriate text covering the mathematical foundations of displacement fields. @matt.mccormick @dzenanz @gdevenyi @zivy


Hello @Xinyue,

The difference you are observing is not specific to displacement fields. You are experiencing the discreteness of computation vs. the continuous world. This is an artifact of resampling multiple times, as compared to compositing the transformations and resampling once (which still suffers from the issue but minimizes its effects). To see that the results are equal (equality is always up to a certain epsilon), add these two lines to the end of test10():

  # Show all the locations where the values differ and
  # print the sum of absolute differences
  sitk.Show(((part2_post - new_post) != 0) * 255, "diff")
  print(np.abs(new_post_img - part2_post_img).sum())

You will see that the differences are at the boundary of the box object, due to the resampling, and that these differences are very small (sum of absolute differences is 4.6e-09).

Finally, if you are using deep learning (everybody is), MONAI added what they refer to as lazy resampling, which composes spatial transformations and reduces the resampling artifacts associated with consecutive resampling (which is what the provided code does).


Hi @zivy,

Thank you for the explanation.

I think MONAI would be helpful when people try to resample multiple times and get one output.

I am working on image registration, and my data needs a “rigid registration” between the pre and post images. I have the deformation vector field (dvf in my sample code) that moves the post image to look like the pre image, but I want to split this deformation vector field into two vector fields (dvf1 and dvf2): the first dvf moves all the voxels of the post image the same distance in the same direction (which is like a “rigid registration”), and the second dvf deforms the new post image to my target, so that dvf = dvf1 + dvf2.

I am working on my deep learning model: the old input would be the old post images and the pre images, and the output is the dvf, but the new input would be the new post images and the pre images.

The hard part is that I need the output image after applying dvf1. I am wondering if MONAI could help with this issue, since it reduces the multiple-resampling effect by executing only once.
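One way to sketch that split in numpy: take the mean displacement as the uniform “rigid-like” part dvf1 and the residual as dvf2. The field values below are made up; the fields add exactly as arrays, although, as noted earlier in the thread, applying dvf1 and dvf2 in sequence only approximates applying dvf once:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up dense displacement field, shape (z, y, x, 3)
dvf = rng.normal(loc=(10.0, 5.0, 2.0), scale=0.5, size=(8, 16, 16, 3))

# Uniform part: the mean displacement over all voxels ("rigid-like" translation)
mean_disp = dvf.reshape(-1, 3).mean(axis=0)
dvf1 = np.broadcast_to(mean_disp, dvf.shape).copy()

# Residual deformation
dvf2 = dvf - dvf1

print(np.allclose(dvf1 + dvf2, dvf))  # → True: the split reconstructs the field
```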

Thank you,

Why not compute an actual rigid transformation first?

I am wondering if MONAI could help with this issue, since it reduces the multiple-resampling effect by executing only once.

You can reduce resampling error by using better interpolation techniques, but fundamentally sampling into a new space will always add some computational error.

Hi @gdevenyi ,

Thank you for your reply.

I am trying to use a simple way to move the image slightly, so that the values of my deformation vector field will not be too large and the data distribution will look better.

I don’t know if adding a rigid registration step to my data preprocessing will slow down the whole pipeline, because I want the preprocessing to run in real time.

I was trying to use 3D Slicer to check whether my assumption (dvf = dvf1 + dvf2) is correct, and it seems 3D Slicer has no issue with deforming twice, but I am not sure how I could achieve this in Python.

Thank you,

You can rigidly transform an image without resampling, just by updating the image geometry information (image origin, spacing, axis directions). This is what 3D Slicer does, too, when you harden a rigid transform on an image.

That can be accomplished in ITK via TransformGeometryImageFilter, see here.

Hi @lassoan and @dzenanz ,

Thank you for your reply!

The documentation mentions that it transforms the image “physically” by changing the metadata. May I ask: after GetArrayFromImage(), will this change be kept in the numpy array as well?

Thank you,

As the numpy array throws away the metadata, it will be unchanged by this filter.
