Interpolate image physical points from DVF extracted from mesh points

I have extracted a volumetric mesh from a given NIfTI image, and then applied a deformation to the mesh points to create a deformed version of it.

Now what I want is to apply the same deformation/DVF defined at the mesh points to the reference CT (i.e. the NIfTI image volume from which the mesh was extracted). How can I do that in SimpleITK? Could you please give example code?

I much appreciate your willingness to assist me with this matter.

Hello @isuruwi ,

Assuming the transformation T is a displacement field, that the original mesh is defined in the image coordinate system, and that you obtained the deformed mesh as follows:

p_deformed_mesh = T(p_original_mesh)

We need to:

  1. Define a new image grid which bounds the deformed mesh points. Specifically, define its spacing (possibly the original image spacing),
    origin and direction cosine matrix (likely the identity).
  2. Obtain T^{-1}, the inverse of the displacement field (several options: InvertDisplacementFieldImageFilter, InverseDisplacementFieldImageFilter, IterativeInverseDisplacementFieldImageFilter).
  3. Resample the original image using T^{-1} and the grid defined above.

Notebooks of interest: Transforms, Transforms and Resampling.


Hi Zivy,

I just wrote the code below by referring to your response. However, I am wondering how I can use SimpleITK to compute the transformation from two point sets, namely the reference and deformed mesh points, each with dimensions (964, 3). If feasible, could you elaborate with an example code snippet? What I want is to complete step 2 in the code below.

# Step 1 => Define a new image grid which bounds the deformed mesh points.
reference_image = sitk.ReadImage("./MCR.nii.gz")
x = reference_image.GetSize()[0]
y = reference_image.GetSize()[1]
z = reference_image.GetSize()[2]
new_image_grid = sitk.Image(x, y, z, sitk.sitkVectorFloat32)


# Step 2 => Obtain the inverse of the displacement field.
# the shape of each mesh point array is (964, 3)
# assuming the meshes are read with a library such as meshio
ref_mesh = meshio.read('./MCR.vtk').points
predicted_deformed_mesh = meshio.read('./deformed_MCR.vtk').points

# need to calculate the displacement vector field transformation here based on the reference and deformed mesh points

# Step 3 => Resample the original image using the inverse transform and the grid defined above.
resampled_image = sitk.Resample(reference_image, new_image_grid, transform_inverse, sitk.sitkLinear, 0)

Hello @isuruwi,

In my recommendation I assumed you had already computed the deformation field mapping points from the original mesh (extracted from the image) to the new mesh; apparently that is not the situation.

Below is pseudo-code based on the current understanding:

# Step 1 => Define a new image grid which bounds the deformed mesh points. This may not match the original image. If it does, then
# this step can be skipped and transformation_domain = reference_image.
# Note that if the image is much larger than the mesh you should compute the transformation_domain.

predicted_deformed_mesh = meshio.read('./deformed_MCR.vtk').points  # assuming meshio for mesh I/O

# Get the minimum/maximum coordinates that bound the deformed mesh and compute the physical
# size of this axis aligned bounding box, add some padding.
min_x,max_x,min_y,max_y,min_z,max_z = bounding_box(predicted_deformed_mesh)
phys_size = [pad+max_x-min_x, pad+max_y-min_y, pad+max_z-min_z]

# define the new image grid
new_origin = [min_x-pad/2.0, min_y-pad/2.0, min_z-pad/2.0]
new_spacing = reference_image.GetSpacing() # or we can set any new arbitrary spacing
new_size = [int(round(psz/spc)) for psz, spc in zip(phys_size, new_spacing)]

transformation_domain = sitk.Image(new_size, sitk.sitkUInt8)
transformation_domain.SetOrigin(new_origin)
transformation_domain.SetSpacing(new_spacing)

# Step 2 => Obtain the (approximate) BSpline/FFD transformation that maps the points from the deformed mesh to the original

ref_mesh = meshio.read('./MCR.vtk').points  # assuming meshio for mesh I/O

landmark_transform_initializer_filter = sitk.LandmarkBasedTransformInitializerFilter()
landmark_transform_initializer_filter.SetFixedLandmarks(predicted_deformed_mesh_points_flattened) # points in flattened list [x1,y1,z1,x2,y2,z2...]
landmark_transform_initializer_filter.SetMovingLandmarks(ref_mesh_points_flattened) # points in flattened list [x1,y1,z1,x2,y2,z2...]
landmark_transform_initializer_filter.SetReferenceImage(transformation_domain) # defines the domain over which the BSpline is fit
landmark_transform_initializer_filter.SetBSplineNumberOfControlPoints(num_control_points) # the more control points the better the fit; be careful not to over-fit.

bspline_transform = sitk.BSplineTransform(dimension, spline_order)
bspline_transform = landmark_transform_initializer_filter.Execute(bspline_transform)

# Step 3 => Resample
resampled_image = sitk.Resample(reference_image, transformation_domain, bspline_transform)

One last Python-related comment: use variable unpacking.

# Don't do
x = reference_image.GetSize()[0]
y = reference_image.GetSize()[1]
z = reference_image.GetSize()[2]

# Do
x, y, z = reference_image.GetSize()