Image Spacing and BSpline Registration Variability

Hello All -

I have run across something that I don’t quite understand, but I’m hoping someone here can help.

I am trying to register 2 medical images (PET, if that matters) using a BSpline registration. The images are the exact same dimensions (168x168x168), the pixel sizes are the same, and the pixels are isotropic (same size in all 3 dimensions).

I am using Mattes Mutual Information as the metric with 50 histogram bins, the LBFGS2 optimizer and SetOptimizerScalesFromPhysicalShift. I am using a sampling percentage of 2.5% and setting a seed value to reduce run-to-run variability.
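Roughly, my setup looks like the sketch below (simplified, not my exact code; `fixed` and `moving` stand in for my loaded PET volumes, and the seed value here is just a placeholder):

```python
import SimpleITK as sitk

# fixed and moving are assumed to be previously loaded 3-D PET volumes,
# e.g. fixed = sitk.ReadImage("fixed.nii", sitk.sitkFloat32)

R = sitk.ImageRegistrationMethod()
R.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
R.SetMetricSamplingStrategy(R.REGULAR)
R.SetMetricSamplingPercentage(0.025, 1234)  # 2.5% sampling, fixed seed
R.SetOptimizerAsLBFGS2()
R.SetOptimizerScalesFromPhysicalShift()
R.SetInterpolator(sitk.sitkLinear)

# BSpline transform with a [2, 2, 2] control point mesh over the fixed image
tx = sitk.BSplineTransformInitializer(fixed, [2, 2, 2])
R.SetInitialTransform(tx, inPlace=True)

out_tx = R.Execute(fixed, moving)
```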

If I set the image spacing to [1.0, 1.0, 1.0] and run the registration, then re-run the registration with the image spacing set to [4.0, 4.0, 4.0] I get pretty different results. I have also tried spacings of [0.1, 0.1, 0.1] and [10.0, 10.0, 10.0]. Interestingly the 0.1 and 10.0 spacings give results pretty close to the 4.0 spacing, whereas the 1.0 spacing seems to give the worst results.

In addition, if I run the registration multiple times for each spacing, the variability for the 1.0 spacing is much greater than the variability of the 0.1, 4.0 or 10.0 spacing (for the 0.1, 4.0 and 10.0 spacing the run-to-run variability is imperceptible, but this is not true for the 1.0 spacing).

Is the image spacing somehow used in setting the optimizer scales? Is there something else I am missing here? What should the spacing be set to? Since the pixels are isotropic, I thought the actual spacing value really didn’t matter as long as all 3 were the same, but this is obviously not true …

Thanks for any light you may be able to shed on this!

David

Optimizer scales are dependent on voxel spacing.

However, I would expect spacings of 1.0 to yield the same results as spacings of 4.0, 10.0 and 0.1. Maybe a spacing of 1.0 gets special treatment somewhere in the code?

As PET images are usually not that big, a 2.5% sampling percentage seems somewhat low. Does the variability between runs decrease with a higher sampling percentage?

Hello @cdcooke,

If the only thing changed between experiments is the image spacing, then I do expect the resulting BSpline transform to change (subject to image content; assuming we are not dealing with empty images or other non-standard content).

Just to confirm: when you changed the image spacing, you did not change the BSpline control point grid spacing? If you only changed the image spacing, then for each of the registrations the BSpline transform was applied to a different sub-region of the image. Think of the image as one physical object and the BSpline control point grid as another; they must overlap for the BSpline transform to have an effect.

Hope this clarifies things a bit and allows you to investigate further, or to continue the discussion.

Hi @dzenanz, thank you for your comments. I tried setting the sampling percentage to 5% and 10%, and while the variability decreased (from 7.2% to 3% to 1.4% over 20 runs), it still doesn’t approach the really low variability of some of the other element spacings (spacings of 4, 5 and 6 were ~9e-5; 7, 8 and 9 were 0; 10 was 4e-5; 0.1 was 6e-4). I should also mention that I am using a Regular sampling strategy …

And @zivy thank you for your comments. Yes, you are correct, the only thing I changed was the element spacing, and I am working with real images (nothing non standard or empty). I am setting the mesh size for the BSpline control points to [2, 2, 2] and that also did not change from run to run. I’m not quite sure I understood your explanation that “If you only changed the image spacing, then for each of the registrations the BSpline transform was applied to a different sub-region of the image.” Is there a relationship between the BSpline mesh size, the image spacing and how the BSpline transform is calculated? I assumed (perhaps incorrectly) that a mesh size of [2,2,2] meant 8 control points equally spaced within the volume. I have tried other mesh sizes but they did not work very well (although the image spacing was set to 1.0 back when I did all of that).

Thanks for any additional light you may be able to shed!

David

Hi @cdcooke,

A mesh size of [2,2,2] only defines the control point grid size. The default grid spacing is [1,1,1], so if you change the image spacing, the BSpline control grid overlaps different portions of the image, if at all. Print the BSpline transform to see the settings.

Thanks @zivy! This is all getting very interesting :>)

So I was able to print the grid origin and spacing. For a mesh size of [2, 2, 2], an Image Spacing of [1, 1, 1] and an image size of [168, 168, 168], I get a grid origin of [-84.75, -84.75, -84.75] and a grid spacing of [84.125, 84.125, 84.125]. The Transform Domain Physical Dimensions are [168.25, 168.25, 168.25]. This gives 5 control points at [-84.75, -0.625, 83.5, 167.625, 251.75], if I did that properly. Which is interesting because the first two control points and the last control point appear to be outside the image space?

Setting the Image Spacing to [2, 2, 2] predictably multiplies everything by 2. The grid origin is now [-169.5, -169.5, -169.5], the grid spacing is now [168.25, 168.25, 168.25], and the Transform Domain Physical Dimensions are [336.5, 336.5, 336.5]. This gives 5 control points at [-169.5, -1.25, 167, 335.25, 503.5], and given that the physical dimensions are now 336.5, again the first two control points and the last control point appear to be outside the image space?
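For anyone following along, here is the quick control point arithmetic in plain Python (the grid origin and spacing are the values printed from the transform; 5 is the number of control points per axis):

```python
def control_points(grid_origin, grid_spacing, n_points=5):
    """1-D control point coordinates along one axis: start at the
    grid origin and step by the grid spacing."""
    return [grid_origin + k * grid_spacing for k in range(n_points)]

# Image spacing [1, 1, 1]: values printed from the BSpline transform
print(control_points(-84.75, 84.125))
# -> [-84.75, -0.625, 83.5, 167.625, 251.75]

# Image spacing [2, 2, 2]: everything scales by a factor of 2
print(control_points(-169.5, 168.25))
# -> [-169.5, -1.25, 167.0, 335.25, 503.5]
```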

Not that I would want to, but I don’t see any way to change the grid spacing or grid origin? Other than perhaps the “fixed parameters” (it looks like parameters 4-6 are the grid origin and 7-9 are the grid spacing)? Even if I could change them, I’m not sure what I would change them to. I would presume that ITK has chosen the correct grid origin and spacing.

Any additional thoughts?

Thanks!

David

@cdcooke that seems normal and fine. There need to be more control points than you ask for, and the number depends on the order of the BSpline. For the default order 3, there are 3 extra control points (2 at the lower end of the physical domain, and 1 at the upper).

@zivy thought you might be setting these parameters manually, and botching it. It looks like that is not the case.


@dzenanz @zivy Thank you both for your comments. I have done some additional investigations that I will explain, but unfortunately, I am still no closer to understanding why BSpline registration changes simply from changing the Image Spacing.

I believe I have determined that BSpline registration does not appear to use any optimizer scales (hence the behavior I am seeing can’t be due to optimizer scales being set differently for different Image Spacings). I have tried all 3 of SetOptimizerScalesFrom{PhysicalShift,IndexShift,Jacobian} and printed the optimizer scales from the registration as it is happening, and they are always 1. In addition, I tried not using any of these 3 options (i.e., no SetOptimizerScales setting at all), and during the registration there were 0 optimizer scales. For all 4 cases, though there were some differences, the resulting registrations were largely the same. I also tried manually setting the scales, but that resulted in a memory corruption/crash.
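For reference, I printed the scales during the run with an iteration observer along these lines (`R` is assumed to be an already-configured `sitk.ImageRegistrationMethod`):

```python
import SimpleITK as sitk

# R is assumed to be a configured sitk.ImageRegistrationMethod;
# print the iteration number and current optimizer scales on each iteration
R.AddCommand(
    sitk.sitkIterationEvent,
    lambda: print(R.GetOptimizerIteration(), R.GetOptimizerScales()),
)
```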

So, at this point I am at a loss. I cannot explain why the BSpline registration has varying degrees of variability depending solely on the Image Spacing. And I have run out of ideas on how to track this down any further. If anyone has any additional ideas I can try I would love to hear them. I will keep thinking on this as well …

Thanks again for all of your help -

David

Hello @cdcooke,

One last thing to try in order to reduce variability; likely not the real issue here, but worth trying. Please set the number of threads to one via SetGlobalDefaultNumberOfThreads. The multi-threaded implementation introduces some variability due to the order of operations, though it is expected to be minimal.
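In SimpleITK that would be something like:

```python
import SimpleITK as sitk

# Force single-threaded execution globally; applies to filters created
# afterwards, including the registration method
sitk.ProcessObject.SetGlobalDefaultNumberOfThreads(1)
```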

Hi @zivy,

Thanks for the suggestion! I have run several tests now and the results are, well, interesting.

With SetGlobalDefaultNumberOfThreads set to 1, I can confirm that the run-to-run variability for a specific image spacing is completely eliminated, at least for Image Spacings of 1 and 2 (it takes quite a while to run single-threaded, so I only ran these 2). So, for whatever reason, threading does seem to add to the variability, though this variability is also somehow modulated by the Image Spacing.

Though the run-to-run variability is eliminated with the number of threads set to 1, I still get variable results from different image spacings. Using image spacings of 0.1, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10, I get about 9.6% difference in single-threaded mode and 8.8% in multi-threaded mode.

For my specific case, it looks like image spacings of between 4.0 and 10.0 seem to produce the best, most reproducible results; though I would have a very hard time explaining to someone why that is …

Thanks again for the suggestion! I’m happy to experiment more if you think of anything else -

David