Offset changes with each registration

Hi! I tried to register two images, using the following code:

Registration framework setup.

registration_method = sitk.ImageRegistrationMethod()

#Set Interpolator
registration_method.SetInterpolator(sitk.sitkLinear)

#Set Metric as Mutual Information
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)

#Set Optimizer
registration_method.SetOptimizerAsGradientDescent(learningRate=0.1, numberOfIterations=1000, convergenceMinimumValue=1e-6, convergenceWindowSize=10)

#Set initial transform 
registration_method.SetInitialTransform(initial_transform, inPlace=True)

# Setup for the multi-resolution framework
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])

#Show the graph of Mi against iterations
#registration_method.AddCommand(sitk.sitkStartEvent, metric_start_plot)
#registration_method.AddCommand(sitk.sitkEndEvent, metric_end_plot)
#registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, metric_update_multires_iterations) 
#registration_method.AddCommand(sitk.sitkIterationEvent, lambda: metric_plot_values(registration_method))

#Execute the registration
final_transform = registration_method.Execute(fixed_image, moving_image)

#Print results
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))

However, every time I run the code with the exact same two images, the offset (from the final transform) and the metric value change. Is there a reason why they change?

Please look at the seed parameter of SetMetricSamplingPercentage; it defaults to sitkWallClock, which adds some time-based randomness to the samples generated. If it is set to your favorite integer, it should make the process more reproducible.

However, if the registration is not converging to a similar place with a little random sampling, then this is an indication that there is a stability problem with the registration that needs more investigation. Is it converging to a local minimum? (optimization parameter tuning) Are there multiple local minima? (under-sampling, noise, or problematic data)


Does this mean that if I set the seed parameter to a number, for example 1, I should get around the same results?

Here’s what I got from doing the registration callback (found in the SimpleITK notebook SimpleITKTutorialMICCAI2015), using the above code and gradient descent as the optimizer:

But when I run it again, the metric value is much better:

Also, can I ask: shouldn’t the optimizer’s stopping condition be the point where the metric value is the lowest? According to the graph, it didn’t choose the iteration where the metric value was the lowest.

Hello @Yew_Shu_Ning,

Per @blowekamp’s suggestion, if you want the registration’s random sampling to be reproducible you need to set the random seed. Looking at the SetMetricSamplingPercentage method, we see that it has two parameters, percentage and seed. You are required to set the percentage; if you also set the optional seed to a fixed integer, you will get more reproducible results because the random number generator is always seeded the same way.

With respect to the optimizer’s stopping condition, it is stopping after 11 iterations in the lowest level of the pyramid, this is the lowest similarity value for that level. The graph is displaying the similarity metric for all levels of the pyramid. The leftmost portion of the graph is the highest pyramid level, using the smallest image. Then we restart the registration at a lower level in the pyramid, using a larger image and plot the values during that phase. Thus, the lowest similarity value in the graph does not correspond to the best transformation. The lowest value in the rightmost portion of the graph does correspond to the best transformation.

Note that, as the registration is using a TranslationTransform, you don’t need to call SetOptimizerScalesFromPhysicalShift because your parameters are commensurate. If you mix parameter types, e.g. mm and radians, then you need to call this scaling function.

Finally, I suggest looking at the newest online tutorial, as you’re browsing material from 5 years ago.


Thank you so much for the explanation. By specifying the seed value, the offset and metric value no longer change. So, is it correct to assume that the optimal stopping point is the minimal similarity value at the lowest level of the pyramid?

Yes, as long as the optimizer didn’t stop due to early termination (reaching the maximal number of iterations), which in your case it didn’t (it was set to 1000 and stopped after 11).

Thank you! Can I ask whether the offset I got is the same as the translation shift used in the registration? If not, how can I print out the translation shifts?

Yes, the term offset for TranslationTransform refers to the translation optimized during registration.

Thank you for your explanations, setting the seed parameter works fine. Yet one question remains unclear for me:
If I set the MetricSamplingPercentage to 1, why isn’t the registration deterministic then? To my understanding, in this case every point of the image should be included in the calculation of the metric value, which means the points are no longer chosen randomly. What is the reason that this is not the case?


Hello @fred_weiss,

Setting the MetricSamplingPercentage to 1 when using the random sampling strategy will not use every pixel/voxel in the image because the sampling is with replacement.

If you want to use all of the pixels, then set the sampling strategy to NONE.


If I recall correctly, even with the “NONE” strategy each pixel is perturbed by a random amount, so the seed should still be used for reproducibility.