I am trying to compute the TRE of a 3D multimodal rigid medical image registration that uses the JointHistogramMutualInformation metric. The TRE function used with SimpleITK takes lists of corresponding landmark points from the fixed and moving images as input parameters.

The TRE concept is independent of the metric used for registration. To compute TRE you need pairs of corresponding points in the fixed and moving coordinate systems. Note that these points are not used in the registration process itself. You then use the transformation estimated by the registration to map each point from the fixed coordinate system to the moving coordinate system, and compute the distance between the transformed point and its actual corresponding point in the moving coordinate system (that distance is the TRE).
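To make the computation concrete, here is a minimal, library-agnostic sketch of the TRE calculation for a rigid transform written as a rotation matrix `R` and translation `t` (with a SimpleITK transform you would call `transform.TransformPoint(p)` instead of the matrix product; the function name and toy points are illustrative):

```python
import numpy as np

def target_registration_errors(R, t, fixed_points, moving_points):
    # Map each fixed point into the moving coordinate system with the
    # estimated rigid transform (p' = R @ p + t), then measure the
    # Euclidean distance to its known corresponding moving point.
    return [
        np.linalg.norm(R @ p + t - p_m)
        for p, p_m in zip(np.asarray(fixed_points, dtype=float),
                          np.asarray(moving_points, dtype=float))
    ]

# Toy check: the ground-truth mapping is a pure translation by (1, 0, 0).
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
fixed = [[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]]
moving = [[1.0, 0.0, 0.0], [11.0, 5.0, 2.0]]
errors = target_registration_errors(R, t, fixed, moving)
# Both distances are 0 because the estimated transform is exact.
```

Typically one reports summary statistics (mean, max) of these per-landmark distances.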

The easiest way to obtain corresponding points is to identify them manually (e.g. with the RegistrationPointDataAquisition GUI used in this Jupyter notebook).

Thank you @zivy.
I was able to get the coordinates from point_acquisition_interface.get_points(). I hope this is the method you were hinting at.

I have another query.
With point_acquisition_interface.get_points(), we manually select the corresponding points in the two images. However, in 3D images two consecutive slices can look very similar. Is there a way to decide which slice gives the correct correspondence with the fixed image?

For example, in the two figures below, both slice 4 and slice 5 of the moving image look similar to slice 4 of the fixed image.

Slice 4 of Fixed image and Slice 4 of Moving image

Generally speaking, accurately and precisely localizing corresponding points manually is not easy, because anatomical structures are usually smooth and lack unique, well-defined corner points (vessel bifurcations are an exception and are reasonably well defined). This is why fiducials, external markers, are often used.

Having said that, you can use another registration to help with the task of localizing corresponding points, changing it from a fully manual process to a semi-automatic one: a local rigid registration refines the coarse manual localization, and the result is then visually verified. This notebook formulates the task; you will need to implement the solution.

This paper discusses the approach and may also be helpful.

I have gone through the notebook and the paper that you referred to.

My task is to compare the TRE of three intensity-based rigid registration algorithms (JointHistogramMutualInformation, MattesMutualInformation and My_Metric). Is it logical to use the semi-automatic method here? If yes, which of the three intensity-based registrations should I use for refining the manual localization?

Yes, it is logical to use the semi-automated method. Its assumption is that the object remains locally rigid, which is a closer approximation to reality than the assumption that the whole object is rigid.

The reason for using a semi-automated approach is that the human operator can identify cases where the automated localization fails and ignore those.

With respect to which metric to use for the local registration, assuming all metrics are equal in terms of accuracy, the most straightforward approach is to use all three and combine the results. This is an ensemble regression: the final result is the mean point. Don't average the transformations; rigid transformations live in a non-linear space, and averaging them correctly is more complicated than what you need.
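The "mean point" combination amounts to a few lines of numpy. The three mapped locations below are made-up numbers for illustration, standing in for the moving-space position of one fixed landmark as mapped by each of the three estimated transforms:

```python
import numpy as np

# Hypothetical moving-space locations of one fixed landmark, as mapped
# by the three estimated transforms (values invented for illustration).
p_jhmi   = np.array([10.2, 5.1, 2.0])   # JointHistogramMutualInformation
p_mattes = np.array([10.0, 5.0, 2.1])   # MattesMutualInformation
p_mine   = np.array([10.1, 4.9, 1.9])   # My_Metric

# Ensemble estimate: average the mapped points, not the transforms.
refined = np.mean([p_jhmi, p_mattes, p_mine], axis=0)  # [10.1, 5.0, 2.0]
```

Averaging in point space sidesteps the non-linear structure of the rigid transformation group entirely.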