metric value registration

Hi everyone, I would like to ask you a question (maybe a stupid or banal one for some of you). I applied the registration method exactly as illustrated in SimpleITK-Notebooks/60_Registration_Introduction.ipynb at master · InsightSoftwareConsortium/SimpleITK-Notebooks · GitHub, but adapted it to 2D-2D images. I would like to know why, every time I run `registration_method.Execute`, I get different results for the evaluation metric as well as for the final transform matrix values, even though I don’t change anything (neither the images nor the parameter settings). Thank you!

Hello @Enrico_Sarneri,

The question is neither stupid nor banal; it is actually insightful :wink:. The underlying issues are relevant for a broad range of algorithms that use some form of randomness/stochastic computation or involve multi-threading. They are equally relevant for the soup of the day, deep learning (see this paper).

The section on registration reproducibility discusses it and shows how to reduce/eliminate the variability in the context of ITK/SimpleITK.
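A minimal sketch of the two knobs that section relies on, assuming the usual `ImageRegistrationMethod` setup from the introduction notebook (the seed value and sampling percentage below are arbitrary placeholders):

```python
import SimpleITK as sitk

# Single-threaded execution makes the floating point summation order
# deterministic (slower, but removes thread-related run-to-run variability).
sitk.ProcessObject.SetGlobalDefaultNumberOfThreads(1)

registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)

# Random metric sampling is the other source of variability: pass an explicit
# seed instead of the default wall-clock seed so every run draws the same
# sample points.
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01, 42)
```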


Perfect, it makes complete sense! Thank you. Just one more question, somewhat disjointed from that: regarding MattesMutualInformation, how should I interpret the range of values? In my case, for example, the interval on the y-axis is -0.0230 to -0.0210. Are values closer to -0.0210 better, or the contrary? I couldn’t find any documentation about that.

  1. Mutual information is not bounded, so there are no general values that indicate the images are well aligned.
  2. In ITK/SimpleITK all metrics are configured for minimization, so the transformation resulting in a metric value of -0.0230 is better than the one resulting in a metric value of -0.0210 (a tiny code sketch follows below).
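For completeness, here is what that looks like in code, assuming `fixed` and `moving` are your 2D images and `registration_method` is already configured:

```python
final_transform = registration_method.Execute(fixed, moving)

# Metric value at the optimizer's stopping point; since everything is set up
# for minimization, the more negative of two Mattes MI values corresponds to
# the better of the two alignments.
print(registration_method.GetMetricValue())
```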

Coming back to this… In order to obtain a “trustworthy” result for the metric and the transform values, do you think it would be a good approach to repeat the registration process several times and take the mean of the different results, instead of doing a single run and treating it as trustworthy? In that case, what would be a good number of repetitions? 100? 1000?
Thanks

Very good question. Unfortunately, there is no theoretically sound answer to it, so we most often make decisions based on context-specific constraints and empirical results, that is, our model of the world and experimentation.

Things to consider:

  1. Does the variability in the output matter for the application at hand? The variability should not be evaluated as differences in transformation parameter space; use the target registration error (TRE) in physical space. A difference of 0.5 degrees can be significant or insignificant, depending on the maximal distance from the center of rotation and the required accuracy. If a single registration run is always sufficiently accurate there is no need for multiple runs; otherwise it might be worth performing multiple runs (a concrete sketch follows this list).
  2. What are the time constraints? This helps determine an upper bound on the number of repetitions.
  3. Confidence in the initialization. Most often, variable-quality initialization leads to intermittent failures. A reasonable strategy is to run multiple registrations, perturbing the initial transformation parameter values for each one. How much to perturb depends on your confidence in the initialization and is determined experimentally. Bounds on the perturbation are reasoned about as “a change of x units in a parameter leads to a change of y mm in the physical world”; we always evaluate effects in the physical world, not in the transformation’s parameter space.
  4. Taking the mean of the transformation parameters makes sense only if you do not have outliers, i.e. no significant failures. Otherwise, possibly take the median (see paper on multidimensional medians).
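To make points 1, 3 and 4 concrete, here is a minimal sketch, assuming 2D images, a `sitk.Euler2DTransform` initialization and a gradient-descent setup like the one in the introduction notebook; the number of runs, perturbation bound and optimizer settings are all placeholders to be tuned for your data:

```python
import numpy as np
import SimpleITK as sitk

def register_once(fixed, moving, initial_transform):
    """One registration run (Mattes MI, random sampling, gradient descent);
    all settings are placeholders."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.01)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100,
                                      convergenceMinimumValue=1e-6,
                                      convergenceWindowSize=10)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial_transform, inPlace=False)
    return reg.Execute(fixed, moving)

def perturbed_runs(fixed, moving, initial_transform, n_runs=20, max_shift_mm=2.0):
    """Point 3: repeat the registration from slightly perturbed initial
    translations; the perturbation bound is expressed in mm (physical space)."""
    results = []
    for _ in range(n_runs):
        tx = sitk.Euler2DTransform(initial_transform)
        dx, dy = np.random.uniform(-max_shift_mm, max_shift_mm, 2)
        tx.SetTranslation((tx.GetTranslation()[0] + dx,
                           tx.GetTranslation()[1] + dy))
        results.append(register_once(fixed, moving, tx))
    return results

def mapped_point_spread(transforms, fixed, point_indices):
    """Point 1: evaluate variability in physical space (a TRE-like measure).
    Map a few fixed-image points with every resulting transform and report the
    per-point standard deviation of the mapped positions, in mm."""
    pts = [fixed.TransformIndexToPhysicalPoint(idx) for idx in point_indices]
    mapped = np.array([[t.TransformPoint(p) for p in pts] for t in transforms])
    return mapped.std(axis=0)

# Point 4: a robust summary of the runs is the component-wise median of the
# parameter vectors, which is less sensitive to an occasional failed run:
#   params = np.array([t.GetParameters() for t in transforms])
#   median_params = np.median(params, axis=0)
```

The spread reported by `mapped_point_spread` is in the images’ physical units (mm), so it can be compared directly against your required accuracy.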

Ok, thanks. Yes, the fact is that with different runs I often get very different results in, for example, the translation parameters, which leads to not-so-good registration accuracy in mm. I would also like to highlight that the way I measure these shifts is by taking an image intensity profile across the two balls I have to register (one in the DRR, the reference, and one in the EPID image). This is the only way that came to mind, since I have no ground-truth labeled reference points or anything else. So maybe there is a more proper way…
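In case it helps, a very rough sketch of how the intensity-profile idea could report the residual shift in mm rather than pixels; the row index and the variable names `drr` and `epid_registered` are assumptions, and whether the ball is an intensity maximum or minimum depends on your DRR/EPID contrast:

```python
import numpy as np
import SimpleITK as sitk

def ball_center_along_row(image, row_index):
    """Take the intensity profile of one row through the ball, locate the
    extreme intensity, and return that position in physical coordinates (mm).
    Assumes the ball is the brightest structure on that row; use argmin
    instead of argmax if the ball is dark in your images."""
    profile = sitk.GetArrayViewFromImage(image)[row_index, :]
    col_index = int(np.argmax(profile))
    return np.array(image.TransformIndexToPhysicalPoint((col_index, row_index)))

# Hypothetical usage: shift, in mm, between the ball in the reference DRR and
# in the registered EPID image (row 128 is a placeholder).
# shift_mm = np.linalg.norm(ball_center_along_row(drr, 128) -
#                           ball_center_along_row(epid_registered, 128))
```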