Applying a 4x4 affine transform to a 3D volume

I am trying to apply a 4x4 affine transform to a 3D volume to rotate it 1 degree around the Y-axis. This is a deliberately simple example that I want to prove works before I move on to a more complicated 4x4 transform matrix later.

In the code below, I have a list of file paths to the 10 image slices of the 3D volume (img_paths), and I have a helper module, imageh, that can read and write images as numpy arrays to/from the file system.

import os
import itk
import math
import numpy as np
import imageh

# ==============================================================================
# Read slices and create image volume
# ==============================================================================
img_slices = []
for img_path in img_paths:
    img_slice = imageh.load_image(img_path)
    img_slices.append(img_slice)
img_volume = np.stack(img_slices, axis=0)
itk_img_volume = itk.image_from_array(img_volume)

# ==============================================================================
# Set rotation degrees
# ==============================================================================
degrees = 1

# ==============================================================================
# Create Affine Transform
# ==============================================================================
TransformType = itk.AffineTransform[itk.D, 3]
affine_transform = TransformType.New()

# Convert to radians
rads = degrees * math.pi / 180

# We want to rotate around the Y axis. The first 9 parameters are the
# row-major 3x3 matrix; the last 3 are the translation.
params = itk.OptimizerParameters[itk.D](12)
# Matrix row 1
params[0] = math.cos(rads)
params[1] = 0
params[2] = math.sin(rads)
# Matrix row 2
params[3] = 0
params[4] = 1
params[5] = 0
# Matrix row 3
params[6] = -math.sin(rads)
params[7] = 0
params[8] = math.cos(rads)
# Translation
params[9] = 0
params[10] = 0
params[11] = 0

affine_transform.SetParameters(params)

# ==============================================================================
# Setup the resample filter
# ==============================================================================
Image3d = type(itk_img_volume)
FilterType = itk.ResampleImageFilter[Image3d, Image3d]
resample_filter = FilterType.New()
InterpolatorType = itk.NearestNeighborInterpolateImageFunction[Image3d, itk.D]
interpolator = InterpolatorType.New()
resample_filter.SetInterpolator(interpolator)
resample_filter.SetDefaultPixelValue(0)
resample_filter.SetOutputOrigin(itk_img_volume.GetOrigin())
resample_filter.SetOutputSpacing(itk_img_volume.GetSpacing())
resample_filter.SetSize(itk_img_volume.GetLargestPossibleRegion().GetSize())
resample_filter.SetOutputDirection(itk_img_volume.GetDirection())
resample_filter.SetInput(itk_img_volume)
resample_filter.SetTransform(affine_transform)

# ==============================================================================
# Execute the resample filter and convert result to numpy array
# ==============================================================================
resample_filter.Update()
transformed_itk_img = resample_filter.GetOutput()
transformed_vol = itk.array_from_image(transformed_itk_img)

# ==============================================================================
# Output transformed slices to the file system
# ==============================================================================
for i, img_path in enumerate(img_paths):
    output_path = os.path.join(output_dir, os.path.basename(img_path))
    imageh.save_image(transformed_vol[i, ...], output_path)
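As a sanity check on the matrix itself, independent of ITK, the first nine parameters above are just the row-major Y-rotation matrix, which can be rebuilt and verified in numpy:

```python
import math
import numpy as np

degrees = 1.0
rads = math.radians(degrees)

# Row-major 3x3 rotation about the Y axis, matching params[0..8] above;
# params[9..11] are the translation, left at zero.
rot_y = np.array([
    [ math.cos(rads), 0.0, math.sin(rads)],
    [ 0.0,            1.0, 0.0           ],
    [-math.sin(rads), 0.0, math.cos(rads)],
])

# A proper rotation matrix is orthonormal with determinant 1.
print(np.allclose(rot_y @ rot_y.T, np.eye(3)))  # → True
print(np.isclose(np.linalg.det(rot_y), 1.0))    # → True
```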

When I write the 10 transformed image slices out to the file system, here are my results:

01.tif (910.1 KB)
transformed_01.tif (29.9 KB)

02.tif (910.2 KB)
transformed_02.tif (56.0 KB)

03.tif (910.4 KB)
transformed_03.tif (83.1 KB)

04.tif (910.4 KB)
transformed_04.tif (108.4 KB)

05.tif (909.9 KB)
transformed_05.tif (134.7 KB)

06.tif (910.5 KB)
transformed_06.tif (161.7 KB)

07.tif (910.5 KB)
transformed_07.tif (189.0 KB)

08.tif (910.3 KB)
transformed_08.tif (213.5 KB)

09.tif (909.9 KB)
transformed_09.tif (240.9 KB)

10.tif (910.2 KB)
transformed_10.tif (268.1 KB)

I would expect to still see most of each image, since a 1-degree rotation isn't much at all. And if I go above 1 degree, to maybe 5 degrees, every output image is completely black.

Does anyone know what I’m doing wrong?

By default, the transformation center is the coordinate origin. You probably want to find the physical coordinates of the pixel at the center of your image and pass that to CenteredAffineTransform. Or handle the centering some other way.


I looked at your images now. When visualizing them in Slicer (in.nrrd, transformed.nrrd), it strikes me immediately: rotating around X or Y will quickly bring this thin 3D volume out of its field of view. If you make the slices thicker than 1 (e.g. 10, 20, or 50), or if you rotate around the Z axis, this input will serve debugging purposes better. So far, I don't think there is anything wrong with your code.


You could also consider using AffineTransform::Rotate3D if you are unsure about your math.