3D Registration Problem - Resulting Image Truncated After Resampling

Hi all,

I’ve been struggling with a 3D image registration issue for weeks, and I find myself with more questions than answers. I’m working on registering 3D microCT images of bones to analyze their evolution over time following drug treatments. Unfortunately, my resampled images are consistently “truncated” or cropped at strange angles after registration.

Initially, I believed this behavior was unique to my dataset, as the examples in the notebook seemed to work fine. However, I managed to reproduce the same issue using the notebook tutorial from the SimpleITK repository:

SimpleITK Registration Tutorial

Here’s a brief overview of the problem:

  • Input Images: In my case, I set the alpha value to 1 because I’m primarily interested in the output (resampled) image.
  • Observation: The 2D slice at z = 6 looks fine.

However, at z = 5, a significant portion of the image is mapped out of the domain (shown in black, as the default pixel value is 0).

The truncation occurs mainly at the boundaries of the 3D images. Sometimes it’s one side of the z-axis, sometimes it’s the other.
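
For reference, here is a minimal sketch of the kind of resampling call that produces these images (the file paths and final_transform are placeholders for my actual pipeline, where the transform comes from the registration step):

import SimpleITK as sitk

# placeholders: replace with the actual paths and the registration result
fixed_image = sitk.ReadImage("path_to_fixed_image.nii", sitk.sitkFloat32)
moving_image = sitk.ReadImage("path_to_moving_image.nii", sitk.sitkFloat32)
final_transform = sitk.Euler3DTransform()  # stands in for the transform returned by the registration

# resample the moving image onto the fixed image grid; output voxels that map
# outside the moving image are filled with the default pixel value 0 (black)
moving_resampled = sitk.Resample(
    moving_image,
    fixed_image,        # reference image defines the output size, origin, spacing and direction
    final_transform,
    sitk.sitkLinear,
    0.0,
    moving_image.GetPixelID(),
)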

I’ve reviewed similar issues on GitHub, but none of the suggested solutions resolved my problem.

I would appreciate any insights into why this truncation is happening and how I can resample my moving image to align with my fixed image without losing information in any 2D slice.

I feel quite stuck and haven’t been able to make progress. Any help or guidance would be greatly appreciated.

Thank you!

Hello @Bigfahma,

This is a feature, not a bug. When resampling onto the fixed image grid, some of the moving image information may not be included. If this is not desirable, you need to define a different resampling grid that contains the entire region occupied by both images.

For details on how to do this, please see the toolkit’s Transforms and Resampling notebook, specifically the section titled Resampling after registration. It is also worth skimming all the notebooks in the repository to get an overview of the wide range of available functionality.
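
In a nutshell, the idea in that section is to compute the bounding box of both images in the output coordinate system and use it as the resampling grid. A condensed sketch follows, not the notebook’s exact code; it assumes fixed_image, moving_image and an invertible final_transform that maps points from the fixed to the moving image:

import itertools
import numpy as np
import SimpleITK as sitk

# corners of the fixed image, plus corners of the moving image mapped into the
# fixed image coordinate system via the inverse of the registration transform
points = []
for img, tx in [(fixed_image, sitk.Transform(3, sitk.sitkIdentity)),
                (moving_image, final_transform.GetInverse())]:
    for corner in itertools.product(*[(0, sz - 1) for sz in img.GetSize()]):
        points.append(tx.TransformPoint(img.TransformIndexToPhysicalPoint(corner)))

# axis-aligned bounding box of all the transformed corners
min_c = np.min(points, axis=0)
max_c = np.max(points, axis=0)
spacing = fixed_image.GetSpacing()
size = [int(round((mx - mn) / sp)) + 1 for mn, mx, sp in zip(min_c, max_c, spacing)]

# resample the moving image onto the enlarged grid so nothing is cropped
moving_resampled = sitk.Resample(
    moving_image, size, final_transform, sitk.sitkLinear,
    min_c.tolist(), spacing, np.eye(3).flatten().tolist(), 0.0, moving_image.GetPixelID()
)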


Hi @zivy,

Thank you for your response. I did have a look at the notebook you sent me, and I have already tried something similar to calculate the output size of the resampling grid.

Here’s a snippet of my code, which I took from the toolkit’s Transforms and Resampling notebook and adapted to my 3D images.

import SimpleITK as sitk
import numpy as np
import itertools

def compute_common_bounding_box(images, transforms):
    """
    Compute the axis-aligned bounding box that encompasses all transformed images.

    Parameters:
        images (list): List of SimpleITK image objects.
        transforms (list): List of corresponding transforms for each image.

    Returns:
        tuple: A tuple containing new_origin, new_size, new_spacing, new_pixel_type, and new_direction.
    """
    dim = images[0].GetDimension()
    boundary_points = []

    for image, transform in zip(images, transforms):
        # Consider all corners of the image in 3D
        for boundary_index in itertools.product(*zip([0] * dim, [sz - 1 for sz in image.GetSize()])):
            # Convert index to physical space
            physical_point = image.TransformIndexToPhysicalPoint(boundary_index)
            # Transform the point to the fixed image coordinate system
            transformed_point = transform.GetInverse().TransformPoint(physical_point)
            boundary_points.append(transformed_point)

    # Calculate the bounding box coordinates
    max_coords = np.max(boundary_points, axis=0)
    min_coords = np.min(boundary_points, axis=0)

    new_origin = min_coords
    new_spacing = images[0].GetSpacing()
    new_pixel_type = images[0].GetPixelID()
    new_size = (((max_coords - min_coords) / new_spacing).round().astype(int)).tolist()
    new_direction = np.identity(dim).ravel()

    return new_origin, new_size, new_spacing, new_pixel_type, new_direction


def resample_images_to_common_grid(images, transforms):
    """
    Resample images onto a common grid.

    Parameters:
        images (list): List of SimpleITK image objects.
        transforms (list): List of corresponding transforms for each image.

    Returns:
        list: List of resampled SimpleITK image objects.
    """
    # Compute bounding box for all images
    new_origin, new_size, new_spacing, new_pixel_type, new_direction = compute_common_bounding_box(images, transforms)

    resampled_images = []
    for image, transform in zip(images, transforms):
        resampled_image = sitk.Resample(
            image,
            new_size,
            transform,
            sitk.sitkLinear,
            new_origin,
            new_spacing,
            new_direction,
            0.0,
            new_pixel_type,
        )
        resampled_images.append(resampled_image)
    return resampled_images

# Example usage:
fixed_image = sitk.ReadImage('path_to_fixed_image.nii')
moving_image = sitk.ReadImage('path_to_moving_image.nii')

# Replace with the actual registration result transform
registration_result = sitk.Transform(3, sitk.sitkIdentity)

images = [fixed_image, moving_image]
transforms = [sitk.Transform(3, sitk.sitkIdentity), registration_result]

resampled_images = resample_images_to_common_grid(images, transforms)

I tested this on both the MRI data from 60_Registration_Introduction.ipynb and my own data, but some 2D slices at the boundaries of the moving image are still being mapped out of the domain (and in the fixed image as well).


What am I doing wrong? I compute the common bounding box from both images, derive the new resampling grid’s dimensions from it, and resample both images onto this new grid.

Thanks a lot for your guidance and efforts.

Hello @Bigfahma,

I took a second look and you are not doing anything incorrectly.

This issue arises when viewing a 3D object by slicing it along one axis. The effect is pronounced because of the low sampling rate in that direction (a.k.a. aliasing). If you view the image using a viewer that supports multi-planar views in 3D (axial, sagittal, coronal), e.g. ITK-SNAP, you will see that all is well, no data loss. A simplified example using a cylinder is given below:

import SimpleITK as sitk
import numpy as np

# create z axis aligned cylinder with large spacing in z direction
# non-isotropic images are common
axis_aligned_cylinder_slice = (sitk.GaussianSource(sitk.sitkUInt8, [512,512], [64,64], [256,256] )>60)*255
vol = sitk.JoinSeries([axis_aligned_cylinder_slice]*5)
vol.SetSpacing([0.3,0.3,3])

# rotate cylinder around x axis and center of image
tx = sitk.Euler3DTransform()
tx.SetRotation(np.deg2rad(10), 0, 0)
tx.SetCenter(vol.TransformContinuousIndexToPhysicalPoint([sz/2 for sz in vol.GetSize()]))

# resample onto original image grid
vol_resampled = sitk.Resample(vol,tx)

# view the image using a multi-planar view (set the path to your ITK-SNAP installation)
image_viewer = sitk.ImageViewer()
image_viewer.SetApplication("/Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP")
image_viewer.Execute(vol_resampled)

# resample onto the original image spatial extent, but with a denser grid along the z axis;
# change the order of magnitude of z_spacing to see how it affects the visualization
z_spacing = 0.3
origin = vol.GetOrigin()
new_spacing = [0.3,0.3,z_spacing]
new_size = [int(round(osz * ospc / nspc)) for osz, ospc, nspc in zip(vol.GetSize(), vol.GetSpacing(), new_spacing)]

vol_resampled_dense = sitk.Resample(vol, new_size, tx, sitk.sitkLinear, origin, new_spacing)
image_viewer.Execute(vol_resampled_dense)