Points are not transformed correctly when using BSplineTransform

Hi,

I am looking for a bit of advice (or information on where I am going wrong) with regard to applying a transformation to an image and to coordinate points from GeoJSON.

I have a composite transform consisting of 5 transformations in the following order: Euler2d > Euler2d > Euler2d > Affine > BSpline. The image is transformed using a resampler (we are resampling from 0.163um/px to 0.507um/px with the mentioned composite set as the transform).
The points are transformed using the inverse of the composite transform. I have tried several approaches, but all of them resulted in incorrectly transformed points.
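Roughly, the forward setup looks like this (a simplified sketch assuming SimpleITK 2.x; euler1, euler2, euler3, affine, bspline, fixed_image and moving_image are placeholders for my actual objects):

```python
import SimpleITK as sitk

# Forward composite; note that SimpleITK applies the transform added
# last to a point first, so bspline is the innermost transform here.
composite_tx = sitk.CompositeTransform([euler1, euler2, euler3, affine, bspline])

# Resample the moving image onto the fixed image grid (0.507 um/px).
resampled = sitk.Resample(moving_image, fixed_image, composite_tx,
                          sitk.sitkLinear, 0.0, moving_image.GetPixelID())
```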
Attempt 1:

  • Build a CompositeTransform from the individual inverse transforms in the reverse order, namely BSpline > Affine > Euler2d > Euler2d > Euler2d (see the sketch after this list).
  • Inverses of the linear transforms are simply obtained using the tx.GetInverse() method.
  • The inverse of the BSpline is approximated by inverting a displacement field image.
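In code, attempt 1 looks roughly like this (a sketch, assuming SimpleITK 2.x and the placeholder names from above):

```python
# Approximate the BSpline inverse: convert it to a displacement field
# sampled on the fixed image grid, then invert that field numerically.
disp_filter = sitk.TransformToDisplacementFieldFilter()
disp_filter.SetReferenceImage(fixed_image)
bspline_inverse = sitk.DisplacementFieldTransform(
    sitk.InvertDisplacementField(disp_filter.Execute(bspline)))

# Compose the inverses in reverse order. CompositeTransform applies the
# most recently added transform to a point first, so the order of the
# AddTransform calls matters.
inverse_tx = sitk.CompositeTransform(2)
inverse_tx.AddTransform(bspline_inverse)
inverse_tx.AddTransform(affine.GetInverse())
inverse_tx.AddTransform(euler3.GetInverse())
inverse_tx.AddTransform(euler2.GetInverse())
inverse_tx.AddTransform(euler1.GetInverse())

# Map the GeoJSON points (physical (x, y) tuples) to the fixed image.
points_in_fixed = [inverse_tx.TransformPoint(p) for p in points]
```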

Attempt 2:

  • Build a single transform from an inverted displacement field of the whole composite (see the sketch below).
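Roughly (a sketch; composite_tx is the full forward transform from the first snippet):

```python
# Sample the whole composite transform as a displacement field on the
# fixed image grid, invert the field, and wrap it as a transform.
disp_filter = sitk.TransformToDisplacementFieldFilter()
disp_filter.SetReferenceImage(fixed_image)
inverse_tx = sitk.DisplacementFieldTransform(
    sitk.InvertDisplacementField(disp_filter.Execute(composite_tx)))

points_in_fixed = [inverse_tx.TransformPoint(p) for p in points]
```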

Attempt 3:

  • Iterate through each transform and, using its inverse, apply the TransformPoint method (see the sketch after this list).
  • Inverses of the linear transforms are simply obtained using the tx.GetInverse() method.
  • The inverse of the BSpline is approximated by inverting a displacement field image.
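In code, roughly (a sketch; bspline_inverse is computed as in attempt 1):

```python
# Apply each inverse individually, walking the chain in reverse order.
inverses = [bspline_inverse,
            affine.GetInverse(),
            euler3.GetInverse(),
            euler2.GetInverse(),
            euler1.GetInverse()]

def map_point(p):
    for tx in inverses:
        p = tx.TransformPoint(p)
    return p

points_in_fixed = [map_point(p) for p in points]
```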

I only see this issue occur when using a non-linear transformation.

Here is an overlay of the points before the transformation is applied:

[image: points overlaid on the image before the transformation]

And here it is after:

[image: points overlaid on the image after the transformation]
Hello @lukasz-migas,

I'm not sure whether you are using ITK or SimpleITK; the solution is similar in either case.

For the SimpleITK case, code for this operation is provided in the Transforms notebook section titled “Inverting Composite Transform”. The other sections and the notebook repository in general may be of interest too.

Hi @zivy - thanks for your reply

I am using SimpleITK in Python.

In attempt 1 I am using code that you posted in another question, as I thought the problem was similar. It appears to be essentially the same approach as the one you linked above.

The method from your link produces points in exactly the same position as my current implementation.

Hello @lukasz-migas,

Well, that is interesting.

If I understand the situation correctly, you have a registration process that outputs a transformation T mapping points from the fixed to the moving image. You then used that transformation to resample the moving image (blue) onto the fixed image grid to obtain a resampled_moving image (red). Finally, a set of points on the moving image is mapped to the fixed image coordinate system using T^{-1}?

As you said the mapping works when all the transformations are global, something is likely going on with the bounded transformation. Have you checked that the inversion is close enough? It won't be perfect, as it is not a closed-form operation, and the result will vary with the inversion algorithm. To evaluate the quality of the inversion, take a set of points in the moving image, ^mp, and compute the distance ||T(T^{-1}(^mp)) - ^mp|| for each of them; this distance is spatially varying. If it is small, then the inversion is working as expected. The definition of "small" is context specific: 1mm may be small or large depending on the required accuracy. If the inversion is not sufficiently accurate, possibly try a different inversion algorithm; there are several available.
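In SimpleITK the check could look something like this sketch (composite_tx being T, inverse_tx your approximate inverse, and moving_points a list of physical (x, y) tuples; all three names are placeholders):

```python
import numpy as np

# Round-trip each moving point through the approximate inverse and the
# forward transform; the residual distance measures inversion quality.
errors = [np.linalg.norm(np.subtract(
              composite_tx.TransformPoint(inverse_tx.TransformPoint(p)), p))
          for p in moving_points]
print(f"max error: {max(errors):.4f}, mean error: {np.mean(errors):.4f}")
```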

If this isn't the issue, then it is possibly an issue with how the physical points in the resampled image space are converted to indexes for drawing. This is less likely, as the problem would arise with the global transformations too, but I'm not looking at the code, so this is a best-guess effort.
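For completeness, the conversion I have in mind is along these lines (a sketch; resampled_image and transformed_points are placeholders):

```python
# Convert physical points to continuous pixel indexes for drawing.
indexes = [resampled_image.TransformPhysicalPointToContinuousIndex(p)
           for p in transformed_points]
```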