Advice on registering a noisy head CT to a template


I am trying to register a head CT (acquired with a headset) to a head CT template. The presence of the headset seems to make the registration step difficult: it eventually succeeds, but only if I wait long enough.
I am looking for advice on how to make the process more robust and faster, given my current configuration.

Fixed image configuration:
Size: (512, 512, 55)
Spacing: (0.3547742962837219, 0.3547742962837219, 2.438863754272461)
Origin: (92.43216705322266, 83.35346984863281, -12.921518325805664)
Direction: (-1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 1.0)

Moving image configuration:
Size: (512, 512, 55)
Spacing: (0.474609375, 0.474609375, 5.0)
Origin: (-121.5, 14.5, -4.4000244140625)
Direction: (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)

# Initial alignment of the two volumes.
initial_transform = sitk.CenteredTransformInitializer(
    sitk.Cast(fixed_image, moving_image.GetPixelID()),
    moving_image,
    sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Registration.
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0,
                                                  numberOfIterations=200)

# Set the initial moving and optimized transforms.
optimized_transform = sitk.Euler3DTransform()
registration_method.SetMovingInitialTransform(initial_transform)
registration_method.SetInitialTransform(optimized_transform, inPlace=True)

The optimization process looks correct to me, though it seems quite slow for such a simple task.

I noticed that the fixed image is not in the same orientation as the moving image; I guess that having the same orientation would help a bit. I am not sure whether the registration difficulty comes from the parameters or from the differences between the images.
I would really appreciate some guidance :slight_smile:


Many additional features and ways to optimize and improve a registration are demonstrated in the SimpleITK Notebooks:

Please look through the notebooks numbered in the 60s for registration tips.


Thanks for the tutorial! Adding multi-resolution really helped to find a fast and good estimate of the final transform, love it!

I am now quite satisfied with the registration process; it seems accurate.
However, the resampled moving images have very noticeable artifacts, mostly visible on the skull contours.

I guess this is because the fixed image I am using has a higher Z resolution (more than twice as high, actually) than the moving image. What would you advise in such situations? Resampling the fixed image to a lower Z resolution? Any ideas would be appreciated, thanks.


Your moving image has roughly a 1:10 x/y:z spacing ratio. The data itself limits the quality. Using a higher-order interpolator may help somewhat.

The best way to proceed will depend on your particular application, purpose and quantified goals. For example if you wanted to proceed with additional registration then you would not resample, but use the produced transform as the “MovingInitialTransform” for the next phase of registration.

The only goal is to normalize all the CTs to the same head orientation, centering and scaling. It is mostly a pre-processing step before deep learning algorithms, and it also eases visual interpretation for the MD. I would rather keep the original moving image spacing while preserving the ROI obtained from the registration.
Then I would only keep the slices of the moving image that are contained in the fixed image's physical space, without resampling to the fixed image resolution.

Does it make sense with regard to the intended goal?
Thank you


Hello @trypag,

Given your goal (deep learning preprocessing), I am not sure you need to register your data using the standard registration approach. I would first try a simpler route:

  1. Identify the head bounding box in each of the scans (an Otsu filter works reasonably well for this).
  2. Compute the affine transformation between the template bounding box (reference image) and each of the other boxes (the center difference gives the translation, the size differences give the scaling; orientation is assumed to be the same, but can be computed from the axes of the oriented bounding box if needed).
  3. Resample all of the bounding box images onto the reference image.

This was the approach taken in this Jupyter notebook, which dealt with data augmentation. It is possibly also relevant for your work.

Hello @zivy,

That’s right, deep learning is not so sensitive to translation and scaling, so I could go for a simpler approach. I have already tried some parts of the data augmentation notebook and I might go back to it in a few days; it is really great and so underrated by the deep learning community!

However, affine registration does a pretty good job at focusing on what I am mostly interested in: the brain. An Otsu filter would not help me estimate such a fine-grained ROI. Some of my CT scans contain neck and shoulder parts, some do not.
What I would like to achieve first is to get rid of these parts so that my ROI is centered on the brain. Hope this makes sense again :slight_smile:

Hello @trypag,
Given your explanation and data characteristics, the registration approach makes sense. We always try to use the simplest model, but life is complicated…

I am puzzled by this last resampling step …

moving_resampled = sitk.Resample(moving_image_true, fixed_image,
                                 final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelID())

What I would like is the following:
1/ Register the moving image to the fixed image in physical space (DONE)
2/ Use the fixed image ROI as the final ROI
3/ Resample X/Y at the fixed image resolution AND leave the resampled image's Z resolution at its original value

A first step would be to apply the transform without changing the original resolution (only using the fixed image ROI), then move to a solution where the X and Y axes are resampled.

For 2/
I could use .TransformIndexToPhysicalPoint() to find the fixed image extent in the physical domain, then transform these positions to the moving image domain with final_transform. Am I wrong?

For 3/
No idea how to select which axes are resampled and which are not; guidance would be appreciated again, thanks :slight_smile:

Hello @trypag,

This is readily achievable if you understand the SimpleITK/ITK tenet that an image is a physical grid in space. You just need to figure out how to describe various grids that occupy the same physical location. See this notebook for a discussion on resampling.

Below is code that should do what you want (haven’t run it so it may have errors, but the idea is there):

fixed_spacing = fixed_image.GetSpacing()
moving_spacing = moving_image.GetSpacing()
# Spacing: x-y from fixed, z from moving.
new_spacing = fixed_spacing[0:2] + moving_spacing[2:3]
fixed_size = fixed_image.GetSize()
# Compute the new image size using the new spacing so that it occupies the
# same region in space as the fixed image.
new_size = fixed_size[0:2] + (int(0.5 + fixed_spacing[2]/moving_spacing[2]*fixed_size[2]),)

moving_resampled = sitk.Resample(moving_image_true, new_size, final_transform,
                                 sitk.sitkLinear, fixed_image.GetOrigin(),
                                 new_spacing, fixed_image.GetDirection(), 0,
                                 moving_image.GetPixelID())

Hello @zivy,

Thank you so much for your help !
I took some time to read the tutorial again and then your solution. Everything makes sense except here:

new_size = fixed_size[0:2] + (int(0.5 + fixed_spacing[2]/moving_spacing[2]*fixed_size[2]),)

I would rewrite it :

new_size = fixed_size[0:2] + (int(0.5 + (fixed_spacing[2]*fixed_size[2])/moving_spacing[2]),)

Could you explain what the 0.5 stands for?
I understand that (fixed_spacing[2]*fixed_size[2])/moving_spacing[2] gives the new size from the physical extent of the fixed image, but I don't get this 0.5 value.

Thanks !

You’re welcome.

The 0.5 works in combination with the cast: we want the nearest integer to a positive value x, so I did int(0.5 + x). The more Pythonic way is probably round(x).
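(One small nuance, a Python detail rather than anything from the thread: the two are not identical on exact half values, because Python 3's round() uses banker's rounding:)

```python
# int(0.5 + x) always rounds halves up (for positive x), while Python 3's
# round() rounds exact halves to the nearest even integer.
assert int(0.5 + 2.5) == 3
assert round(2.5) == 2
# Away from exact halves the two agree.
assert int(0.5 + 2.4) == round(2.4) == 2
assert int(0.5 + 2.6) == round(2.6) == 3
```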


Alright, got it :slight_smile:
Thanks again, the case is solved !