Orientation did not change after rotation

Hello, I adapted a method for 3D rotation around a center point from [1]. It constructs a rotation matrix and applies an Euler transform.
However, when I check the direction and origin of the result, they do not change, although in principle the direction should change with rotation.
Is this by design, or is it a bug?

import pandas as pd
import numpy as np
import SimpleITK as sitk

def matrix_from_axis_angle(a):
    """ Compute rotation matrix from axis-angle.
    This is called the exponential map or Rodrigues' formula.
    a : array-like, shape (4,)
        Axis of rotation and rotation angle: (x, y, z, angle)
    R : array-like, shape (3, 3)
        Rotation matrix
    """
    ux, uy, uz, theta = a
    c = np.cos(theta)
    s = np.sin(theta)
    ci = 1.0 - c
    R = np.array([[ci * ux * ux + c,
                   ci * ux * uy - uz * s,
                   ci * ux * uz + uy * s],
                  [ci * uy * ux + uz * s,
                   ci * uy * uy + c,
                   ci * uy * uz - ux * s],
                  [ci * uz * ux - uy * s,
                   ci * uz * uy + ux * s,
                   ci * uz * uz + c]])

    # This is equivalent to
    # R = (np.eye(3) * np.cos(theta) +
    #      (1.0 - np.cos(theta)) * a[:3, np.newaxis].dot(a[np.newaxis, :3]) +
    #      cross_product_matrix(a[:3]) * np.sin(theta))

    return R

def resample(image, transform):
    """
    This function resamples (updates) an image using a specified transform
    :param image: The sitk image we are trying to transform
    :param transform: An sitk transform (ex. resizing, rotation, etc.)
    :return: The transformed sitk image
    """
    reference_image = image
    interpolator = sitk.sitkLinear
    default_value = 0
    return sitk.Resample(image, reference_image, transform,
                         interpolator, default_value)

def get_center(img):
    """
    This function returns the physical center point of a 3d sitk image
    :param img: The sitk image we are trying to find the center of
    :return: The physical center point of the image
    """
    width, height, depth = img.GetSize()
    return img.TransformIndexToPhysicalPoint((int(np.ceil(width/2)),
                                              int(np.ceil(height/2)),
                                              int(np.ceil(depth/2))))

def rotation3d(image, axis, theta):
    """
    This function rotates an image around its center, about the chosen axis, by theta degrees
    :return: The rotated image
    """
    theta = np.deg2rad(theta)
    euler_transform = sitk.Euler3DTransform()
    image_center = get_center(image)
    euler_transform.SetCenter(image_center)

    direction = image.GetDirection()
    if (axis == 3):
        axis_angle = (direction[2], direction[5], direction[8], theta)
    elif (axis == 2):
        axis_angle = (direction[1], direction[4], direction[7], theta)
    elif (axis == 1):
        axis_angle = (direction[0], direction[3], direction[6], theta)
    np_rot_mat = matrix_from_axis_angle(axis_angle)
    euler_transform.SetMatrix([np_rot_mat[0][0], np_rot_mat[0][1], np_rot_mat[0][2],
                               np_rot_mat[1][0], np_rot_mat[1][1], np_rot_mat[1][2],
                               np_rot_mat[2][0], np_rot_mat[2][1], np_rot_mat[2][2]])
    resampled_image = resample(image, euler_transform)
    return resampled_image

image = sitk.ReadImage(imagePath)

origin_a = image.GetOrigin()
direction_a = image.GetDirection()

rotated = rotation3d(image, 2, 30)

origin_b = rotated.GetOrigin()
direction_b = rotated.GetDirection()

print(f"  origin_a {origin_a} origin_b {origin_b} direction_a {direction_a} direction_b {direction_b}")


origin_a (-172.89999389648438, 179.296875, -368.0)
origin_b (-172.89999389648438, 179.296875, -368.0)

direction_a (1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 1.0)
direction_b (1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 1.0)

[1] python - SimpleITK Rotation of volumetric data (e.g. MRI) - Stack Overflow
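As an aside, the Rodrigues construction in `matrix_from_axis_angle` above can be cross-checked against the compact form shown in its comment. This is a minimal numpy sketch; `cross_product_matrix` is spelled out here because the snippet above references it but does not define it:

```python
import numpy as np

def cross_product_matrix(v):
    # Skew-symmetric matrix [v]_x such that [v]_x @ w == np.cross(v, w)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def matrix_from_axis_angle(a):
    # Rodrigues' formula in the compact matrix form:
    # R = cos(t) I + (1 - cos(t)) u u^T + sin(t) [u]_x
    axis = np.asarray(a[:3], dtype=float)
    theta = a[3]
    return (np.eye(3) * np.cos(theta)
            + (1.0 - np.cos(theta)) * np.outer(axis, axis)
            + cross_product_matrix(axis) * np.sin(theta))

# 30 degree rotation around the z axis
R = matrix_from_axis_angle((0.0, 0.0, 1.0, np.deg2rad(30)))

# A proper rotation matrix is orthonormal with determinant +1
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```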

The code above resamples the pixels of the image while using the original image as the reference image for the image geometry/meta-data.

If you want to transform the geometry/meta-data of an image by an affine transform without resampling the pixels, the TransformGeometryImageFilter can be used.


Thanks for the response @blowekamp! I looked into the docs, but I still do not see how to apply the rotation matrix I used for transforming the pixel array to the spatial metadata.

The direction is the first row and column (I do not remember the order) of the direction cosine matrix, so next I should probably recreate the full direction cosine matrix from the direction stored in the sitk Image and somehow compose the original direction cosine matrix with the rotation matrix. I am not a mathematician, so it is not fully clear to me. If you would be willing to point me to a source explaining this transformation, I would be able to understand it better.

Hello @Jakub_Mitura,

What appears to be confusing you is the fundamental concept of an image as a physical object in space, a regular grid with intensity values associated with the grid vertices. The example code below illustrates the difference between (1) rotating the image and then resampling it onto the original image grid (what your original code did) and (2) rotating the image and updating its geometry (origin, direction cosine matrix).

When using option 1 you will see that the image looks different, intensity values that were outside the original grid are lost. When using option 2, the intensity values are all retained, but the grid they are on has changed. Open the results using ITK-SNAP (multiple instances) and see that the linked cursor works for the two rotated images (with resampling and just changing the image geometry).

import pandas as pd
import numpy as np
import SimpleITK as sitk

file_name = "original.nrrd"

image = sitk.ReadImage(file_name)

# 20 degree rotation around the image center
rotation2D = sitk.Euler2DTransform()
rotation2D.SetAngle(np.pi / 9)
rotation2D.SetCenter(
    image.TransformContinuousIndexToPhysicalPoint(
        [(sz - 1) / 2 for sz in image.GetSize()]
    )
)

rotated_and_resampled = sitk.Resample(image, rotation2D)
rotated = sitk.TransformGeometry(image, rotation2D)

sitk.WriteImage(rotated_and_resampled, "rotated_and_resampled_onto_original_image_grid.nrrd")
sitk.WriteImage(rotated, "rotated.nrrd")

Sample image:
original.nrrd (364.8 KB)


Thanks for the response; to be honest, I always thought that rotating an image changes both the pixel array and the spatial metadata at the same time; like, for example, in the case of cropping, where both the origin and the size of the underlying array change. So I was wrong, thanks for explaining!

Just to understand the geometry processing from the direction cosine matrix perspective: by rotating around the center, we just need to compose our chosen rotation matrix with the original direction cosine matrix to get a new one that indicates where in space we really are. Can you tell me what this composition of the direction cosine matrix and the rotation matrix is called mathematically, so I could read more about it?


Hello @Jakub_Mitura,

This is just basic affine transformation composition. The math is clearly described in the class documentation.
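Concretely, for a rotation about a center point the composition is an ordinary matrix product, plus a matching origin update. A minimal numpy sketch (the helper name `rotate_geometry` is chosen here for illustration; `D` is the original 3x3 direction cosine matrix, `o` the original origin, `R` the rotation, `c` the rotation center):

```python
import numpy as np

def rotate_geometry(D, o, R, c):
    """Compose a rotation R about center c with image geometry (D, o).

    Returns the new direction cosine matrix and the new origin.
    """
    D_new = R @ D                        # composition is plain matrix multiplication
    o_new = R @ (np.asarray(o) - c) + c  # the origin orbits the rotation center
    return D_new, o_new

# Example: identity direction, 30 degree rotation around z through the world origin
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
D_new, o_new = rotate_geometry(np.eye(3), (10.0, 0.0, 0.0), R, np.zeros(3))

assert np.allclose(D_new, R)  # rotating an identity direction yields R itself
```

Composing two rotations this way is itself closed: the product of two direction cosine matrices is again orthonormal, which is why the result remains a valid direction cosine matrix.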
