shrinkFactor causing weird artefacts with sitk.DisplacementFieldTransform

Hi everyone. I’m new to SimpleITK. It’s been great so far and I’m hoping to do more with it.

I’m trying to perform a dense deformable registration between two 3D volumes, initialized with an affine registration performed beforehand (see affine_matrix below). However, the moved image I’m getting contains artefacts at the edges of the image in all three dimensions (the first and last planes, as well as the first and last pixels in x and y). I noticed that if the shrink factor I set does not evenly divide the size of one of the three dimensions, the remaining voxels are assigned an identity transformation. For example, if there are 100 planes (i.e. z=100) and I set a shrink factor of 9, then one plane (the first plane) will be exactly the first plane of the moving image. This is true for the other dimensions as well.

Furthermore, if I set varianceForUpdateField to a non-zero number (field_smoothing kwarg in deformableAlign() below), the artefact returns (i.e. the first and last planes are identity, or somewhere between identity and a somewhat accurate displacement vector in the other planes) even when the shrink factor is set to 1.

Here’s how I set up the deformable registration:

def deformableAlign(
    fixed, moving,
    fixed_vox=None,
    moving_vox=None,
    affine_matrix=None,
    ncc_radius=8,
    gradient_smoothing=1.0,
    field_smoothing=0.5,
    shrink_factors=[2,],
    smooth_sigmas=[2,],
    learning_rate=1.0,
    number_of_iterations=250,
    ):
    fixed, moving = numpyToSITK(fixed, moving, fixed_vox, moving_vox) # convert to sitk images, set spacing
    affine = matrixToAffineTransform(affine_matrix)

    irm = getDeformableRegistrationModel(
        fixed_vox,
        learning_rate,
        number_of_iterations,
        shrink_factors,
        smooth_sigmas,
        ncc_radius,
    )

    tdff = sitk.TransformToDisplacementFieldFilter()
    tdff.SetReferenceImage(fixed)

    df = tdff.Execute(affine)

    dft = sitk.DisplacementFieldTransform(df) ## consume an image to construct a displacement field transform
    dft.SetSmoothingGaussianOnUpdate(
        varianceForUpdateField=gradient_smoothing,
        varianceForTotalField=field_smoothing,
    )
    irm.SetInitialTransform(dft, inPlace=True)

    deformation = irm.Execute(
        sitk.Cast(fixed, sitk.sitkFloat32),
        sitk.Cast(moving, sitk.sitkFloat32),
    )

    tdff.SetOutputPixelType(sitk.sitkVectorFloat32)
    return sitk.GetArrayFromImage(tdff.Execute(deformation))

def getDeformableRegistrationModel(
    fixed_vox,
    learning_rate,
    iterations,
    shrink_factors,
    smooth_sigmas,
    ncc_radius,
    ):

    # set up registration object
    ncores = int(os.environ["LSB_DJOB_NUMPROC"])
    sitk.ProcessObject.SetGlobalDefaultNumberOfThreads(2*ncores)
    irm = sitk.ImageRegistrationMethod()
    irm.SetNumberOfThreads(2*ncores)
    irm.SetInterpolator(sitk.sitkLinear)

    # metric
    irm.SetMetricAsANTSNeighborhoodCorrelation(ncc_radius)  ## might take a list
    irm.MetricUseFixedImageGradientFilterOff()

    # optimizer
    max_step = np.min(fixed_vox)
    irm.SetOptimizerAsGradientDescent(
        numberOfIterations=iterations,
        learningRate=learning_rate,
        maximumStepSizeInPhysicalUnits=max_step,
    )
    irm.SetOptimizerScalesFromPhysicalShift()
 
    # pyramid
    irm.SetShrinkFactorsPerLevel(shrinkFactors=shrink_factors)
    irm.SetSmoothingSigmasPerLevel(smoothingSigmas=smooth_sigmas)
    irm.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

    # callback
    def callback(irm):
        level = irm.GetCurrentLevel()
        iteration = irm.GetOptimizerIteration()
        metric = irm.GetMetricValue()
        print("LEVEL: ", level, " ITERATION: ", iteration, " METRIC: ", metric)
    irm.AddCommand(sitk.sitkIterationEvent, lambda: callback(irm))
    return irm

Any help is appreciated! Thanks!

Hello,

Please see the SimpleITK Notebook on transforms, specifically the section on bounded transforms, to understand how points outside the field are transformed.

For the above reason, it is better to preserve the global affine transform and not render it into the displacement field. This can be done by using the affine transform as the “MovingInitialTransform” and then creating a CompositeTransform to combine the two. This is done in the following example: https://simpleitk.readthedocs.io/en/master/link_ImageRegistrationMethodDisplacement1_docs.html

Alternatively you could consider padding the original image ( and the displacement field ) or applying the affine transform by resampling the image before DisplacementFieldTransform registration.

Hi Bradley

Thanks for your quick reply. Very helpful in confirming my suspicion about the bounded transformations and also great suggestions.

A composite transform was something I was trying to avoid. I am new to image registration, but as I understand it, one incurs some error with every transformation and interpolation step. Therefore, I thought the better thing to do was to combine them and only apply the displacement field at the end. Would you agree with that, or do you think an affine+deformable composite transformation is still worth a try?

I think padding the moving image is a great idea. I’m a bit confused about padding the displacement field — I thought the displacement field is in the domain of the fixed image? As a result, should I be padding both the fixed and moving image, or just the fixed?

Also, I would like to understand what happens when the shrink factors are not a factor of the image dimensions, as I described above. Is it true that the remainder of the voxels are considered out-of-domain and consequently set to identity?

Related to the above question, why does setting a shrink factor of 1 (which is a factor of the image dimensions) but a non-zero varianceForUpdateField still cause identity artefacts? What’s going on behind the scenes?

Regardless, I think padding is worth trying. How would you recommend padding the images without introducing artificial structures? For example, I don’t want to introduce an artificial edge by padding the image with a single value, even if the value is representative of the background. I guess some mean background value with some noise is desired. Is there a SimpleITK solution to this, or would I have to do something on my own in numpy?

Would love to hear your thoughts on these. Thanks!

Hello @jingxuan,

To clarify some things about transformation and interpolation chains:

  1. You want to avoid multiple interpolations, as you lose more high-frequency information from the original continuous image signal with each resampling.
  2. A displacement field is a discrete approximation of a continuous mapping. If you convert a global transformation to a displacement field you will lose accuracy. Don’t take my word for it, try it out. Create an affine transformation and convert it into two displacement fields that occupy the same local region in space but differ in spacing. See which of these displacement fields approximates the original affine transformation more accurately.

So, from a theoretical standpoint, affine+displacement-field is recommended over converting the affine to a displacement field and then working just with that.

I don’t think you need to pad any of the images.

Hello @zivy

Thanks for responding and clarifying the clear benefit of performing a composite transform in this case.

I would like to ask you about this final comment you made. If I do it by way of a composite transform, are you saying that at the deformable registration step, I won’t have to worry about the image dimensions not being perfectly divisible by the shrink factors? That this is some weird effect of first initializing the deformable registration with a displacement field derived from an affine transform? That the artefact with the varianceForUpdateField would also go away?

Hello @jingxuan,

No, the use of a composite transformation does not address that issue. Padding the images is likely not a solution in this case.

The deformation field transform you are computing maps points from the fixed image to the moving image. The way you describe the artifact (the first and last slices in the resampled moving image are exactly the same as in the moving image) indicates that the fixed and moving images occupy the same region and that the displacement field either doesn’t overlap that region or is zero in the outer portions of that region. A possible solution is to enlarge the deformation field grid so that it overlaps a larger region.

I would start debugging by checking that the displacement field obtained from the affine transformation is working as expected.

Hi @zivy

Thanks for taking the time to respond.

Currently, the size of the deformable field grid is set to that of my fixed image, using SetReferenceImage(). This seems like the right thing to do. Are you suggesting that I expand the deformation grid to a size bigger than the fixed image?

I checked the displacement field obtained from the affine transform as you suggested. It produces the same result as applying the coordinate transform directly, with no artefact. More specifically, I applied both the displacement field df = tdff.Execute(affine) and the displacement field transform dft = sitk.DisplacementFieldTransform(df) (with resampler.SetTransform(dft)), and both turned out fine.

This tells me that the artefact probably comes from sitk.ImageRegistrationMethod(). Something is wrong with setting the initial transform. Even when I ran the registration with 0 iterations, the artefact was still present.

Any clue what might be going on?

It is coming from the boundary condition in the DisplacementFieldTransformParameterAdaptor used. If you use a composite transform, where the initial displacement field is zero, it should improve.

Hi @zivy and @blowekamp

You guys were right. After trying the composite transform, I stopped seeing the artefact. Thanks for answering all my questions!
