Registering binary mask to x-ray image (2D to 2D) rigid image registration SimpleITK

I have an x-ray image and its corresponding predicted binary mask image (2D). I'm currently using SimpleITK to rigidly register the binary mask to the x-ray image. However, when I apply the final transform to the moving image, i.e. the binary mask image, the mask is not correctly transformed, which makes me suspect the registration is not correct. What could be the reason, or have I missed something in the code?

I attached the code that I used to achieve this task and two sample images, an x-ray and its corresponding predicted binary mask. Here I used mutual information as the similarity metric.

import os
import SimpleITK as sitk

for xray_path in xray_paths:
    xray_name_with_ext = os.path.basename(xray_path)
    pred_binary_mask_path = os.path.join(pred_binary_masks_path, xray_name_with_ext)
    fixed_image = sitk.ReadImage(xray_path, sitk.sitkFloat32)
    moving_image = sitk.ReadImage(pred_binary_mask_path, sitk.sitkFloat32)
    transform = sitk.Euler2DTransform()
    initial_transform = sitk.CenteredTransformInitializer(
        fixed_image, moving_image, transform,
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    registration_method = sitk.ImageRegistrationMethod()
    # Similarity metric settings
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetInterpolator(sitk.sitkLinear)
    registration_method.SetOptimizerAsGradientDescent(
        learningRate=1.0, numberOfIterations=100)
    registration_method.SetInitialTransform(initial_transform, inPlace=False)
    final_transform = registration_method.Execute(fixed_image, moving_image)
    # Always check the reason optimization terminated.
    print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
    print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
    moving_resampled = sitk.Resample(moving_image, fixed_image, final_transform,
                                     sitk.sitkLinear, 0.0, moving_image.GetPixelID())
    # Convert the floating point result back to an integer type before writing.
    castFilter = sitk.CastImageFilter()
    castFilter.SetOutputPixelType(sitk.sitkUInt8)
    moving_resampled = castFilter.Execute(moving_resampled)
    sitk.WriteImage(moving_resampled, os.path.join(output_dir, xray_name_with_ext))

Hello @isuruwi,

Please clarify the context in which you are working. How was the segmentation derived from the original image? If you know the spatial transformations applied during processing, e.g. in a deep learning pipeline, then you need to undo/invert those transformations and resample the segmentation using the inverted transform.

If we are in some other context and the only path forward is via registration:

  1. Try converting the binary segmentation to a distance map and use that as the moving image (the similarity metric will have a smoother terrain, enabling better convergence).
  2. Try a more robust initialization strategy (see this notebook for discussion). Given the specific example image and segmentation’s size, spacing, origin and direction cosine information, the CenteredTransformInitializer is equivalent to the identity transform.

Finally, it does look like the liver is segmented, though there is an offset:

Code for creating the above image (assumes identity transformation, otherwise we would first need to resample the segmentation and then do the overlay):

import SimpleITK as sitk

image = sitk.ReadImage("sample_data/xray_img/1.png")
segmentation = sitk.ReadImage("sample_data/binary_mask/1.png")
contour_overlaid_image = sitk.LabelMapContourOverlay(
    sitk.Cast(segmentation, sitk.sitkLabelUInt8),
    image,
    contourThickness=[4, 4],
    dilationRadius=[3, 3])
sitk.WriteImage(contour_overlaid_image, 'contour_overlaid.png')

Hi Zivy,

The liver segmentation was obtained from a deep learning based approach by passing each in-treatment x-ray image through the network. However, because I don't have ground truth to evaluate against, I tried to rigidly register it to the corresponding in-treatment x-ray image with the above code, to see whether the 2D rigid registration wants to move the predicted binary mask relative to the real kV image. If so, this indicates that there is an error in the location of the predicted binary mask. That is how I intend to evaluate it. Hope you understand the scenario.

Relying on registration for this task is not recommended because it is not guaranteed to converge to the correct result. It is better to directly analyze the segmentation. For example, the expectation is that the contour of the segmentation lies very close to high gradient regions in the original image. Analyzing the gradient magnitudes in a region-of-interest defined by the segmentation is more stable: it directly evaluates the segmentation and is not confounded by registration errors. See code below as a starting point:

import SimpleITK as sitk
import numpy as np

image = sitk.ReadImage("sample_data/xray_img/1.png")
gradient_magnitude = sitk.GradientMagnitude(image)

# assume background value is zero
segmentation = sitk.ReadImage("sample_data/binary_mask/1.png") != 0
half_contour_width_in_mm = 5
segmentation_contour = sitk.Abs(sitk.SignedMaurerDistanceMap(segmentation, squaredDistance=False, useImageSpacing=True)) < half_contour_width_in_mm

# multiply by 255 so we don't need to adjust intensities in Fiji
sitk.Show(segmentation_contour*255, "seg contour")

gradient_magnitude_on_contour = sitk.GetArrayViewFromImage(gradient_magnitude)[sitk.GetArrayViewFromImage(segmentation_contour)==1]

print("Statistics for gradient magnitude along contour")
print(f"std: {np.std(gradient_magnitude_on_contour)}")
quantiles = [0.0, 0.25, 0.5, 0.75, 1.0]
print(f"quantiles {quantiles}: {np.quantile(gradient_magnitude_on_contour,quantiles)}")