Registration: at image border + noisy images

I am trying to register a vertebrae atlas to a dataset of T1 images.

Using a jupyter notebook GUI we have created relatively good initial transforms to align the template masks to the dataset. I was hoping to be able to robustly refine the registration using SimpleITK.

An example initial positioning is here:
[image]
[image]

So far I am not reliably able to align the reference vertebrae masks to the T1 dataset. I am computing similarity metrics between the fixed T1 images and the moving T1 images corresponding to the reference masks.
While the obtained affine registration is reasonable, the BSpline registration is unreliable and I get weird deformations.

I have tried to use fixed/moving masks, different metrics (mattes, ANTS ncc), cropped image regions, etc. My registration pipeline works well for the skull. I use a moving mask (vertebrae dilated by 15 pixels), and optimize in sequence: 1. similarity, 2. affine, 3. b-spline

Example results with weird deformations are shown here (in yellow):
[image]

Questions:

  • the MR images are quite noisy in the neck region. Should I smooth the images first?
  • the region of interest is close to the image boundary: I have had trouble with this in the past. Should I pad the images? Should the padded region be excluded in the similarity metric (via mask)?

The images are already smoothed as part of registration. A large fraction of registration time is taken up by the smoothing operation.

Probably the most meaningful thing you can do is crop the images before the BSpline step, and perform the registration on the cropped images. It should be both faster and more reliable. As the BSpline domain will be limited to the region of interest/deformation, there should be fewer weird deformations. A good choice for cropping would be the axis-aligned bounding box around your moving mask (vertebrae dilated by 15 pixels), or maybe even without dilation. Another thing to experiment with is the size of the transform domain mesh; try both increasing and decreasing it.


True, I guess if you smooth the images using some adaptive approach in a pre-processing step, you have to disable/reduce smoothing in the registration. I tested a few approaches and was most impressed by total variation: GitHub - InsightSoftwareConsortium/ITKTotalVariation: External Module for Total Variation Algorithms, providing wrap for https://github.com/albarji/proxTV
While removing a lot of the noise in the neck, it still preserves the main edges around the vertebrae and elsewhere.


Regarding cropping:

  • the domain mesh is defined wrt the fixed image: if the corresponding point is outside the moving image, is it automatically discarded (masked), or do I need to explicitly set a mask?
  • how are the bspline grid displacements determined in the masked-out region (are they fixed to zero, or extrapolated)?
  • is the metric sampled over the fixed or the moving image?
  • what is the virtual domain (SetVirtualDomain)?

That is not necessarily true. The anisotropic smoothing reduces noise while preserving high frequencies. The smoothing in the registration is also used to expand the capture radius of the metric by removing high-frequency content from the images. It is a scale-space operator and not just a denoising method, especially when using the multi-scale registration approach.


Hello @dyoll,

Another approach to possibly consider:

Atlas - fixed image.
New image - moving image.

Step 0 - Initial affine transformation.
Step 1 - Register whole images refining the affine transformation from step 0.

You’ve already implemented this (likely with the fixed and moving image roles reversed from what I’m suggesting here).

Step 2 - For each vertebra in the fixed image, crop the image to the vertebra (bounding box of the segmentation you have) and use it as the current fixed image. Initialize the registration using the affine transformation from the previous step after you change its center to the center of the cropped volume while not changing the transformation's effect (see “Modify transform center…” in this notebook). Register each vertebra to the moving image independently. You end up with an affine transformation for each vertebra; invert it and transform the segmentation to the moving image.


Thanks. When I first tried the suggestion by @dzenanz I thought it did not help, but it turned out I was not cropping correctly (confusion between GetBoundingBox and GetRegion).
Now it seems to run faster and more robustly.

Thanks everybody for your suggestions.
