Is there a decent way to quantify the "goodness" of a CT to CT scan registration?

That is, if we compute a transform by registering two CT scans and have a region of interest, what is the best quantitative measure of how well that registration performs in that region? Scrolling through slices and checking that they “line up well enough when overlapped” … that can’t be the best or recommended way of declaring it good, can it? But that’s what we are currently doing.

Our goals include:

  • To detect when a registration did not work well, and ask the user to repeat it, this time with a better initial transform.
  • To enable novice users (non-image-analysis experts) to perform our registrations, with confidence that the system can tell them whether each result is good or not.
  • To be able to have many users run the same set of registrations and then compare their results. We can use the transforms that come from each run, but in general we do not have a ground-truth transform to compare against. So unless we declare “the average of all runs” to be the ground truth, we cannot tell whether the results are all bad and merely similar to each other, or all good.

For reference, we are using elastix with ITK 4.12, and our registration uses the following elastix parameters: Registration is “MultiResolutionRegistration”, NumberOfResolutions is 4, Metric is “NormalizedMutualInformation”, and Optimizer is “AdaptiveStochasticGradientDescent”.
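In parameter-file form, the relevant lines look roughly like this (everything else omitted):

```
// Excerpt of the elastix parameter file; only the settings mentioned
// above are shown, all other required settings are omitted here.
(Registration "MultiResolutionRegistration")
(NumberOfResolutions 4)
(Metric "NormalizedMutualInformation")
(Optimizer "AdaptiveStochasticGradientDescent")
```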

Is there a way we can use the final value of the NormalizedMutualInformation metric as a quantifier?

Anybody have a recommendation?

One option: evaluate the metric you used for registration, but restricted to that particular region of interest.
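A minimal sketch of one way to do that yourself, assuming the moving image has already been resampled onto the fixed image’s grid (e.g. by running transformix with the transform elastix produced). The file names and the ROI mask are placeholders, and this is a simple histogram-based NMI (Studholme’s (H(A)+H(B))/H(A,B)); it should behave similarly to elastix’s Parzen-window implementation, though the exact values will differ:

```python
import numpy as np
import SimpleITK as sitk

def nmi(a, b, bins=64):
    """Normalized mutual information, NMI = (H(A) + H(B)) / H(A, B).
    1.0 means the intensities are independent; higher is better."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    # Marginal and joint entropies; skip empty histogram bins.
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy

# Placeholder file names: the fixed image, and the moving image already
# resampled into the fixed image's space (e.g. transformix's result.mha).
fixed = sitk.GetArrayFromImage(sitk.ReadImage("fixed.mha"))
moving = sitk.GetArrayFromImage(sitk.ReadImage("result.mha"))

# Placeholder ROI: a binary mask image on the same grid as the fixed image.
roi = sitk.GetArrayFromImage(sitk.ReadImage("roi_mask.mha")) > 0

print("NMI in ROI:", nmi(fixed[roi], moving[roi]))
```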

You will have to experiment with your data to work out which values of the metric are good, which are mediocre, and which are bad. Those ranges will probably change if you change the anatomy being looked at, scanner settings, etc.
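If it helps, here is a trivial sketch of how you might turn a pile of such ROI metric values into a pass/fail threshold. The numbers are made-up placeholders, and the 5th-percentile rule is just one arbitrary choice to get started:

```python
import numpy as np

def acceptance_threshold(good_values, pct=5):
    """Flag a new registration for repeating if its ROI metric falls
    below the pct-th percentile of values from runs judged good."""
    return np.percentile(good_values, pct)

# Placeholder numbers: ROI NMI values from runs an expert called good.
good_nmi = [1.18, 1.21, 1.19, 1.23, 1.20]
threshold = acceptance_threshold(good_nmi)
print(f"ask the user to repeat if ROI NMI < {threshold:.3f}")
```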

Note that the metric value which elastix spits out is probably computed over the whole image, not just your region of interest. (If I remember correctly, elastix can also take a fixed-image mask via its -fMask option, which restricts the metric to that region during registration.)

Other people might have further suggestions, in particular elastix developers such as @Niels_Dekker and @mstaring.