Rotate an image without cropping effect


I’m trying to rotate an image around the center of gravity of the binary mask of the ROI.
To do that, I use the following functions from the Python version of ITK:

itk.CenteredEuler3DTransform to define the parameters of my transform (rotation matrix, center of rotation)

itk.resample_image_filter to generate my rotated output image; I build my interpolator with LinearInterpolateImageFunction on the binary mask.

Everything works fine except for one annoying detail: in some cases the image is cropped, because the new physical position of some pixels falls outside the physical domain of the output image.

For now, I use the input image as the reference image in my resampler. I know that’s wrong, because I should specify the new origin, spacing and direction of the output image myself.

I could of course correct this mistake by computing the new origin, spacing and direction for the resampler, but it feels to me like I’m applying the transform twice. Moreover, the calculation is not direct, since the rotation center is different from the origin. It doesn’t seem natural to me. Is there something I can do?

Transform the corners of the image, and use them to determine the new bounding box. Then use that bounding box to specify the origin and the size for resampling. There are probably some examples of that. Search for them, or maybe wait for more responses.


Follow @dzenanz’s advice. Related SimpleITK code, dealing with multiple images, can be found in this notebook: create_images_in_shared_coordinate_system.

Hey, thank you for your answer.

I tried to adapt the code you sent me to my problem; here is what I have:

for i in range(0, 36):
    rotation = rotation_matrix(10 * i, 0, 180, 'zyx')  # define the rotation matrix

    # A simple call of the function itk.CenteredEuler3DTransform
    transform = CenteredEuler3DTransform(center_rotation, rotation, translation)

    interpolator = itk.LinearInterpolateImageFunction.New(img)

    # transform the far corner of the image
    box = transform.TransformPoint(
        img.TransformIndexToPhysicalPoint(
            [int(s) - 1 for s in img.GetLargestPossibleRegion().GetSize()]))

    output_size = itk.Size[3]()
    output_size[0] = int(np.abs(np.round(box[0] / img.GetSpacing()[0])))
    output_size[1] = int(np.abs(np.round(box[1] / img.GetSpacing()[1])))
    output_size[2] = int(np.abs(np.round(box[2] / img.GetSpacing()[2])))

    output_origin = transform.TransformPoint(img.GetOrigin())

    img_rotated = itk.resample_image_filter(
        img,
        transform=transform,
        interpolator=interpolator,
        size=output_size,
        output_spacing=img.GetSpacing(),
        output_origin=output_origin)
It still gives me poor results with cropped images, even though it seems to me that I understood what you explained. My image is a 3D slice of size (512, 256, 1).

Hello @Lermontov,

SimpleITK code below, trivial to convert to ITK:

import SimpleITK as sitk
import itertools
import numpy as np

file_name = ""

image = sitk.ReadImage(file_name)
interpolator = sitk.sitkLinear

# Create transform: 45 degree rotation around z axis, center of rotation is image center
image_center = image.TransformContinuousIndexToPhysicalPoint(
    [(i - 1) / 2.0 for i in image.GetSize()]
)
transform = sitk.Euler3DTransform(image_center, 0.0, 0.0, 0.785398)

# Arbitrary decision to use original spacing, may need to change to something else (e.g. min spacing)
new_spacing = image.GetSpacing()
# new_spacing = [min(image.GetSpacing())] * image.GetDimension()

# Compute new grid based on transformed bounding box of original image
boundary_points = []
for boundary_index in list(
    itertools.product(*zip([0] * image.GetDimension(), image.GetSize()))
):
    boundary_points.append(
        transform.TransformPoint(
            image.TransformContinuousIndexToPhysicalPoint(boundary_index)
        )
    )
max_coords = np.max(boundary_points, axis=0)
min_coords = np.min(boundary_points, axis=0)
new_origin = min_coords.tolist()
new_size = (((max_coords - min_coords) / new_spacing).round().astype(int)).tolist()

resampled_image = sitk.Resample(
    image, new_size, transform, interpolator, new_origin, new_spacing
)

Hello @zivy @dzenanz

Thanks to your answers I was able to correct my code and make it work, following your advice and indications. However, I noticed that for more complex transformations (combining the z-axis rotation with an x-axis or y-axis rotation) the returned image is simply black. Do you have any idea why?

Hello @Lermontov,

Depending on the transformation and new_spacing, you may end up with a large, mostly empty volume where only a few x-y slices contain information. If you change the transformation to:

transform = sitk.Euler3DTransform(image_center, 0.785398, 0.785398, 0.785398)

You will see what I mean, using the training_001_ct.mha or training_001_mr_T1.mha images.


Hello @zivy thanks for your answer,

Indeed, I noticed this problem. I was able to resolve it by first applying the x rotation and resampling, and then applying the z rotation and resampling again!

Thank you for your help

You should compose the transforms, not apply resampling twice. Resampling multiple times reduces image quality unnecessarily.
