What type of transformation aligns the centres of the two volumes?

I am working on the medical image registration task which requires two volumes to be affinely aligned globally, what transformation should I apply to make them aligned?

Hello @mahesh_bhosale,

If the volumes are expected to be aligned via an affine transformation then you should use the AffineTransform. Please take a look at the SimpleITK Jupyter notebook repository. The Python notebooks starting with 6 illustrate various aspects of registration and are a good place to start.

Yes, I looked at them, but I am not sure whether the results I am getting are correct. I am using the code below with the exhaustive optimizer:

#!/usr/bin/env python

"""
This script demonstrates the use of the Exhaustive optimizer in the
ImageRegistrationMethod to estimate a good initial rotation position.

Because gradient-descent-based optimization can get stuck in local
minima, a good initial transform is critical for reasonable
results. Searching a reasonable space on a grid by brute force may be a
reliable way to get a starting location for further optimization.

The initial translation and center of rotation for the transform are
initialized based on the first principal moments of the intensities of
the image. Then, in either 2D or 3D, a Euler transform is used to
exhaustively search a grid of the rotation space at a certain step
size. The resulting transform is a reasonable guess where to start
further registration.
"""

fixed_img = "/Datasets/CHAOS/CHAOS_Train_Sets/Train_Sets/MR/1/T1DUAL/DICOM_anon/InPhase/IMG-1.nii.gz"
moving_img = "/Datasets/CHAOS/CHAOS_Train_Sets/Train_Sets/MR/2/T1DUAL/DICOM_anon/InPhase/IMG-2.nii.gz"
output_transform_file = "/Datasets/CHAOS/CHAOS_Train_Sets/Train_Sets/MR/1/T1DUAL/DICOM_anon/InPhase/transform_1_2.mat"

import SimpleITK as sitk
import sys
import os
from math import pi

def command_iteration(method):
    if method.GetOptimizerIteration() == 0:
        print("Scales: ", method.GetOptimizerScales())
    print(
        f"{method.GetOptimizerIteration():3} "
        + f"= {method.GetMetricValue():7.5f} "
        + f": {method.GetOptimizerPosition()}"
    )


fixed = sitk.ReadImage(fixed_img, sitk.sitkFloat32)

moving = sitk.ReadImage(moving_img, sitk.sitkFloat32)

R = sitk.ImageRegistrationMethod()
R.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)

sample_per_axis = 12
if fixed.GetDimension() == 2:
    tx = sitk.Euler2DTransform()
    # Set the number of samples (radius) in each dimension, with a
    # default step size of 1.0
    R.SetOptimizerAsExhaustive([sample_per_axis // 2, 0, 0])
    # Utilize the scale to set the step size for each dimension
    R.SetOptimizerScales([2.0 * pi / sample_per_axis, 1.0, 1.0])
elif fixed.GetDimension() == 3:
    tx = sitk.Euler3DTransform()
    # Sample the three rotation angles exhaustively; the three
    # translation parameters are not sampled (zero samples).
    R.SetOptimizerAsExhaustive(
        [
            sample_per_axis // 2,
            sample_per_axis // 2,
            sample_per_axis // 4,
            0,
            0,
            0,
        ]
    )
    # Step size per parameter: radians for the angles, unused for the
    # translations.
    R.SetOptimizerScales(
        [
            2.0 * pi / sample_per_axis,
            2.0 * pi / sample_per_axis,
            2.0 * pi / sample_per_axis,
            1.0,
            1.0,
            1.0,
        ]
    )

# Initialize the transform with a translation and the center of rotation from the moments of intensity.
tx = sitk.CenteredTransformInitializer(fixed, moving, tx)

R.SetInitialTransform(tx)
R.SetInterpolator(sitk.sitkLinear)

R.AddCommand(sitk.sitkIterationEvent, lambda: command_iteration(R))

outTx = R.Execute(fixed, moving)

print(f"Optimizer stop condition: {R.GetOptimizerStopConditionDescription()}")
print(f" Iteration: {R.GetOptimizerIteration()}")
print(f" Metric value: {R.GetMetricValue()}")

sitk.WriteTransform(outTx, output_transform_file)

if "SITK_NOSHOW" not in os.environ:
    resampler = sitk.ResampleImageFilter()
    resampler.SetReferenceImage(fixed)
    resampler.SetInterpolator(sitk.sitkLinear)
    resampler.SetTransform(outTx)

    out = resampler.Execute(moving)

    # simg1 = sitk.Cast(sitk.RescaleIntensity(fixed), sitk.sitkUInt8)
    # simg2 = sitk.Cast(sitk.RescaleIntensity(out), sitk.sitkUInt8)
    # cimg = sitk.Compose(simg1, simg2, simg1 // 2.0 + simg2 // 2.0)
    # sitk.Show(cimg, "ImageRegistrationExhaustive Composition")
    sitk.WriteImage(out, "registered_1_2_tr_ex.nii.gz")

Hello @mahesh_bhosale,

The code you posted is using a rigid Euler3DTransform, while you stated that the expected transformation between the two images is an affine transformation. I am not sure what you expect to happen when you are not using the expected type of transformation. You will get a result, it will just not be accurate.

Also, the code is only sampling the parameter space for the rotation angles, not the translations. I am not sure if this is intentional on your part or just how the original code was written. For additional details, see this Jupyter notebook.

The previous code I pasted was from one of the registration examples: Image Registration Method Exhaustive — SimpleITK 2.0rc2 documentation
I will try adapting parts of the code in that script for affine registration: Image Registration Method Displacement 1 — SimpleITK 2.0rc2 documentation