Template Matching in SimpleITK

I have used template matching in previous code, and was wondering if template matching is available in SimpleITK? Or if there are specific functions that will achieve the same outcome?

Thank you!

Hello @elapins,

The equivalent of template matching in ITK/SimpleITK would be the Exhaustive optimizer, see this notebook for a discussion in the context of initialization. A slightly different use case is shown in this notebook.


I have tried adding this type of exhaustive optimizer, but I am confused about the step size/grid search. No matter what step sizes I use, does this search actually cover the full image? I noticed that when I increased the grid size, the results varied and ended up further away from the initialization points.

Also, is it possible to write a general template matching algorithm that will work for any image, or will the inputs always vary depending on the images being used?

Hello @elapins,

The way the exhaustive optimizer works is by defining a grid in the transformation’s parameter space and using those parameter settings as the “registration result”. The metric is evaluated for each of these potential transformations and the one with the lowest value is returned as the result.

You need to have a clear understanding of your transformation’s parameter space and what are the appropriate bounds in which you are searching.

Consider, for example, a 2D rigid transformation parameterized by a translation and a rotation angle, giving a 3D parameter space. You need to identify the relevant search space, e.g. tx \in [-100, 10], ty \in [50, 700], \theta \in [0, \pi/4]. Then you need to decide on the grid, e.g. sample tx every 0.5mm, sample ty every 5mm, and sample \theta every \pi/8 radians.

You then feed this grid into the optimizer and get the parameters that provide the lowest metric value.
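For a concrete feel for how the bounds and sample spacings above map onto the optimizer's inputs, here is a small sketch in plain Python. The `grid_for` helper is hypothetical; the key point it encodes is that the Exhaustive optimizer samples symmetrically around the initial transform's parameters (at initial + k * stepLength * scale for k in -n..n), so the search interval must be centered on the initial transform. The (θ, tx, ty) parameter order assumes an Euler2DTransform:

```python
import math

# Hypothetical helper: convert a search interval [lo, hi] and a sample
# spacing into the grid center, the numberOfSteps entry n (samples on
# each side of the center), and the per-parameter scale.
def grid_for(lo, hi, spacing):
    center = (lo + hi) / 2.0
    n = int(round((hi - lo) / (2.0 * spacing)))
    return center, n, spacing

# The 2D rigid example above, in Euler2DTransform order: theta, tx, ty.
params = [grid_for(0.0, math.pi / 4, math.pi / 8),  # theta
          grid_for(-100.0, 10.0, 0.5),              # tx
          grid_for(50.0, 700.0, 5.0)]               # ty

centers = [p[0] for p in params]          # set these on the initial transform
number_of_steps = [p[1] for p in params]  # pass to SetOptimizerAsExhaustive
scales = [p[2] for p in params]           # pass to SetOptimizerScales
print(centers, number_of_steps, scales)
```

With stepLength=1.0, the scales then act directly as the per-parameter sample spacings, and the total number of metric evaluations is the product of (2n+1) over all parameters, which grows quickly.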


For the example you just provided, what would the grid and step inputs be? I am confused about how to derive the correct bounds and steps to pass in.

Thank you!!!

The FFTNormalizedCorrelationImageFilter performs template matching, finding the globally optimal solution when translation is the only parameter being solved for.

The maximum of the correlation filter's output then needs to be located and a transform constructed from it. Here is some pseudo code:

    print("FFT Correlation...")
    # Procedural interface to FFTNormalizedCorrelationImageFilter.
    out = sitk.FFTNormalizedCorrelation(image, template)

    print("Detecting peak...")
    out = sitk.SmoothingRecursiveGaussian(out)
    print("\tConnected components and maxima...")
    cc = sitk.ConnectedComponent(sitk.RegionalMaxima(out, fullyConnected=True))
    print("\tLabel statistics...")
    stats = sitk.LabelStatisticsImageFilter()
    stats.Execute(out, cc)
    labels = sorted(stats.GetLabels(), key=lambda l: stats.GetMean(l))

    # The bounding box is [xmin, xmax, ymin, ymax, ...]; taking every other
    # element gives the minimum index of the strongest peak's region.
    peak_idx = [float(v) for v in stats.GetBoundingBox(labels[-1])[::2]]
    peak_pt = out.TransformContinuousIndexToPhysicalPoint(peak_idx)
    peak_value = stats.GetMean(labels[-1])

    center_pt = out.TransformContinuousIndexToPhysicalPoint([p / 2.0 for p in out.GetSize()])

    translation = [c - p for c, p in zip(center_pt, peak_pt)]
    # Pad with zeros up to the image dimension; note the parentheses
    # around the subtraction.
    translation += [0] * (image.GetDimension() - len(translation))

    print("FFT peak correlation of {0} at translation of {1}".format(peak_value, translation))

hi @blowekamp ,

Thank you so much for your help! I am just confused about the following lines of code in your pseudo code:

translation = [c - p for c, p in zip(center_pt, peak_pt)]
translation += [0] * image.GetDimension() - len(translation))

I keep getting an error with the second line “unsupported operand type(s) for -: ‘list’ and ‘int’”

Could you explain to me what these two lines are exactly doing? I am trying to re-write this in c++ as well, so any guidance on this section would be great.

Thank you!

I believe this line is supposed to pad translation with zeros so that its length matches the image's dimension.

What is the [0] supposed to be indexing?

The above was just "pseudo" code. I think it was supposed to be constructing a single-element list containing the value zero, not an index.
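In isolation, the padding line is meant to work like this (note the parentheses around the subtraction; without them Python evaluates `[0] * dim` first and then tries `list - int`, which is exactly the TypeError reported above):

```python
# A 2D translation from the correlation peak (values here are made up),
# padded with zeros up to the dimension of a hypothetical 3D image.
translation = [12.5, -3.0]   # 2D result
dim = 3                      # image.GetDimension() for a 3D image
translation += [0] * (dim - len(translation))
print(translation)           # [12.5, -3.0, 0]
```

In C++ the equivalent would simply be resizing the translation vector to the image dimension, with new elements value-initialized to zero.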

Ok, thanks! Is the translation output with respect to the center of the image, or the top-left corner?