MeanErrorNorm and MaxErrorNorm in InvertDisplacementFieldImageFilter

I recently started a project in which I need to invert a BSplineTransform. I dug a bit into the different algorithms that ITK provides and had a look at the code. For the InvertDisplacementFieldImageFilter I do not really understand what the two parameters MaxErrorToleranceThreshold and MeanErrorToleranceThreshold are actually supposed to do. Unfortunately, of the three methods that are implemented, this one has the least documentation…
I found out that the “error norm” is calculated by scaling the displacement vector at each position by the inverse of the displacement field spacing and then computing the norm of that vector. The “mean error” is then simply the mean over the whole image, and the “max error” is the maximum. I do not understand why these are called “errors”, as from the description I would have assumed that they give some idea of how well the inverse is actually constructed.
It is especially puzzling to me that these values are set to something \ll 1 in the examples. That would only make sense if the deformation were close to zero, wouldn’t it?

What I also found peculiar is that I cannot reproduce the same values in Python:

import SimpleITK as sitk
import numpy as np

# Reference image defining the geometry of the displacement field
ref = sitk.Image(100, 100, 100, sitk.sitkUInt8)
ref.SetSpacing([0.5, 0.5, 0.5])

# Random BSpline transform: mesh size 4 with order 3 gives a
# (4 + 3)^3 coefficient grid, i.e. 7 * 7 * 7 * 3 parameters
np.random.seed(1234)
tfm = sitk.BSplineTransformInitializer(ref, [4, 4, 4], 3)
tfm.SetParameters(np.random.random((7 * 7 * 7 * 3, )))

disp = sitk.TransformToDisplacementField(
    tfm,
    sitk.sitkVectorFloat64,
    ref.GetSize(),
    ref.GetOrigin(),
    ref.GetSpacing(),
    ref.GetDirection(),
)

u = sitk.GetArrayFromImage(disp)
d_inv = np.divide(1, disp.GetSpacing())
err = np.linalg.norm(u * d_inv, axis=3)
print("Transform MaxNumpy:     ", err.max())
print("Transform MeanNumpy:    ", err.mean())

f_invert = sitk.InvertDisplacementFieldImageFilter()
f_invert.SetMaximumNumberOfIterations(10)
f_invert.SetMaxErrorToleranceThreshold(0.1)
f_invert.SetMeanErrorToleranceThreshold(0.001)
f_invert.SetEnforceBoundaryCondition(True)
u_invert = f_invert.Execute(disp)

print("MaxErrorNorm: ", f_invert.GetMaxErrorNorm())
print("MeanErrorNorm:", f_invert.GetMeanErrorNorm())

u = sitk.GetArrayFromImage(u_invert)
d_inv = np.divide(1, u_invert.GetSpacing())
err = np.linalg.norm(u * d_inv, axis=3)
print("MaxNumpy:     ", err.max())
print("MeanNumpy:    ", err.mean())

Which prints for me:

Transform MaxNumpy:      2.3317511267663256
Transform MeanNumpy:     1.862434889362297
MaxErrorNorm:  2.2723706213738817
MeanErrorNorm: 0.11159393739051904
MaxNumpy:      2.3295392382620137
MeanNumpy:     1.7557829087586647

The maximum error is actually not far off, but the average is off by a factor of around 16… Why is that?

Would it be a good idea for an inversion of a BSplineTransform to set the threshold to some value close to that of the original transformation, assuming that the inverse should have the same norm?

What this filter does is take the displacement field, initialize the inverse displacement with all zeros, then update the inverse estimate by minimizing the error = direct + inverse (which would ideally be 0). And this needs to take spacing into account; otherwise I doubt the author would have put this spacing scaling into the formula.
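For intuition, here is a minimal 1-D numpy sketch of that fixed-point idea (my own toy setup, not the actual ITK implementation): the inverse estimate starts at zero and is repeatedly updated so that the composed residual direct + inverse shrinks toward zero.

```python
import numpy as np

# Toy 1-D setup (assumed): a smooth forward displacement field u
# sampled on a grid with spacing 0.05.
x = np.linspace(0.0, 10.0, 201)
spacing = x[1] - x[0]
u = 0.3 * np.sin(0.5 * x)

# Initialize the inverse estimate with zeros, then iterate
# v_{k+1}(x) = -u(x + v_k(x)); at the fixed point the composition
# u(x + v(x)) + v(x) vanishes, i.e. v inverts u.
v = np.zeros_like(u)
for _ in range(30):
    v = -np.interp(x + v, x, u)

# The "error" is the leftover composition residual, scaled by the
# inverse spacing (voxel units), mirroring the filter's error norm.
residual = np.abs(np.interp(x + v, x, u) + v) / spacing
print(residual.max(), residual.mean())
```

Because the field here is small and smooth, the iteration contracts quickly and the residual drops to machine precision; for large deformations the same scheme can stall or diverge, which matches the advice below about small increments.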

Thanks for the clarification - I think I missed that part in the code… :sweat_smile: I thought it was only calculated on the result.

Okay, but then it totally makes sense that it is an error!

So can I compute the values on my own, knowing the initial displacement u, the final inverted displacement u^\prime, and the field spacing \delta, as \left|\frac{-u+u^\prime}{\delta}\right|? (This, however, gives a very different result…)

I always see relatively high values, but, for example, the test of transforming points back and forth yields very low differences (at least three orders of magnitude lower than the MeanError), so I would like to know where these errors come from (spatially).

Have you tried just summing up the displacement field and its inverse, then visualizing that?
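A quick sketch of that idea with plain numpy arrays (synthetic data here; in practice `u_fwd` and `u_inv` would come from `sitk.GetArrayFromImage` applied to `disp` and `u_invert` from the snippet above):

```python
import numpy as np

# Synthetic stand-ins for the forward and inverse displacement arrays
# (shape: z, y, x, components), assumed for this sketch.
rng = np.random.default_rng(0)
u_fwd = rng.normal(scale=0.1, size=(100, 100, 100, 3))
u_inv = -u_fwd  # a perfect pointwise inverse, for illustration

# Pointwise sum of forward and inverse fields; note this is only an
# approximation of the true residual, which would sample the forward
# field at the inverse-mapped positions rather than at the same voxel.
residual = u_fwd + u_inv
magnitude = np.linalg.norm(residual, axis=-1)  # per-voxel error magnitude
print(magnitude.max())
```

Viewing `magnitude` as an image (e.g. with `sitk.GetImageFromArray`) shows where the inversion error concentrates spatially.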

This probably isn’t the filter you want to use. It’s primarily used to iteratively estimate the inverse during symmetric normalization (SyN) registration optimization. At each iteration, the inverse displacement field is estimated from the current forward displacement field along with the initial guess InverseFieldInitialEstimate, which is the inverse field from the previous iteration. Since during registration the displacement field is updated in small increments (while ensuring invertibility), this filter works well for such iterative cases. It’s not going to work so well for estimating the full inverse field starting from identity.

Something more in line with what you’re probably looking for is this filter, which takes a displacement field (and/or scattered vectors) as input and estimates an inverse (with all the usual caveats that no checking is done to make sure the input field is actually invertible).

Oh okay, interesting! I have seen that class while skimming the docs but haven’t used it so far.
The inverse transformation I got from InvertDisplacementFieldImageFilter does not look too bad (when looking at points transformed back and forth), but the MeanError and MaxError are pretty high.
I also tested the other two options, InverseDisplacementField and IterativeInverseDisplacementField, but as they are not multithreaded they are pretty slow, and they give worse results when keeping the parameters in a range where the computation time is still feasible.

I’ll try to get the DisplacementFieldToBSplineImageFilter working and compare it to the result I got so far, but my C++ is not as fast as my Python, so it may take a while… :smiley:
Same goes for FixedPointInverseDisplacementField: is that something that can be used as well, or does it also serve a different purpose?

No, but that is easy to do! I’ll try it.

Okay, I tried to implement it, but so far I get errors like these:

terminate called after throwing an instance of 'itk::ExceptionObject'
  what():  /usr/include/ITK-5.2/itkBSplineScatteredDataPointSetToImageFilter.hxx:506:
ITK ERROR: BSplineScatteredDataPointSetToImageFilter(0x565115ff7ac0): The reparameterized point component -0.000838476 is outside the corresponding parametric domain of [0, 1).

TBH, the BSplineTransform that I put into that function is certainly not diffeomorphic by the Choi & Lee criterion.

So is it correct that I take a BSplineTransform, convert it into a displacement field using TransformToDisplacementFieldFilter, and then use DisplacementFieldToBSplineImageFilter with EstimateInverseOn to obtain a new displacement field, which is then the estimated inverse?

> So is it correct that I take a BSplineTransform, convert it into a displacement field using TransformToDisplacementFieldFilter, and then use DisplacementFieldToBSplineImageFilter with EstimateInverseOn to obtain a new displacement field, which is then the estimated inverse?

Yes. Also, the error is saying that at least one of your data points exists outside of the domain defining the B-spline object. For example, this could happen if you have a displacement on the image boundary that points to a location outside the image domain. The inverse in this case would not be defined. That’s why we clamp the image boundaries for the SyN image registration classes. This filter checks for, and ignores, such data outside the boundary, but I would assume something similar is happening.
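To see whether that is the cause, one can flag samples whose mapped position leaves the image domain; here is a toy 1-D numpy sketch (my own construction, not the filter's internal check):

```python
import numpy as np

# Toy 1-D domain [0, 10] (assumed setup): samples whose displacement
# points outside the domain have no defined inverse there.
x = np.linspace(0.0, 10.0, 101)
u = np.where(x > 9.0, 0.5, -0.2)  # near the right edge, point outward

mapped = x + u  # where each sample lands under the forward field
outside = (mapped < x[0]) | (mapped > x[-1])
print(np.count_nonzero(outside), "samples map outside the domain")
```

In 3-D the same per-component check on `x + u(x)` against the image bounds identifies the offending boundary voxels, which clamping (as in the SyN classes) would suppress.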

By the way, the functionality is also in ANTsPy (fit_bspline_displacement_field) if you prefer Python.

Hi @ntustison,

The DisplacementFieldToBSplineImageFilter looks like a much more interesting filter than the name suggests, given that it can also compute the inverse. Is there additional documentation somewhere, or a reference to a paper that should be added to the class description?

This filter is currently not available in SimpleITK, so I created a feature request to add it.

Thanks @zivy . Actually, this displacement field filter is a specialization of a much more generally applicable B-spline fitting filter which is used in other classes such as N4. Since this particular B-spline fitting approach is one of the proposed advantages over the original N3 algorithm, it was discussed in detail in the N4 paper, specifically Section 3. This more general filter is also available in ANTsPy (fit_bspline_object_to_scattered_data) and can be used to fit a variety of objects (curves, surfaces) for both open and closed parametric domains. Here are a couple examples of such usage. Given how generally useful they are, I would certainly agree that they would make a great addition to SimpleITK.
