How to use direction and origin information to show correct image orientation in numpy

Dear Community,

I have two SimpleITK images, im1 and im2, with the same physical size and shape. They are displayed with the correct orientation in 3D Slicer, but they have different origins and directions, as follows:

im1_origin = (124.0, 77.23600006103516, 0.0)
im2_origin = (0,0,0)

im1_direction = (-1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 1.0)
im2_direction = (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)

When they are converted to numpy arrays, as you know, their orientations are now different, since they are simply arrays without any geometric metadata.
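For illustration, the conversion looks roughly like this (a small sketch; the file names are placeholders for the two images above):

```python
import SimpleITK as sitk

im1 = sitk.ReadImage("im1.nrrd")  # placeholder paths
im2 = sitk.ReadImage("im2.nrrd")

# GetArrayFromImage only copies the pixel buffer (in z, y, x index order);
# origin and direction are lost, so with the directions above arr1 appears
# flipped along the x and y axes relative to arr2.
arr1 = sitk.GetArrayFromImage(im1)
arr2 = sitk.GetArrayFromImage(im2)
```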

So how can I transform im1 (i.e., flip its axes) to the orientation of im2, using the information above, before converting to numpy arrays, so that both arrays end up in the desired orientation?

Thank you!

You can resample one of the images to have the same geometry (origin, spacing, and axis directions) as the other image. Then the same voxel indices will correspond to the same physical locations in both images, which makes some processing operations trivial (such as adding two images). Note that most image processing operations should not be performed in voxel space but in physical space (taking into account image spacing).
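For example, a minimal sketch of that resampling in SimpleITK (file names are placeholders; im1 and im2 are the images from the question):

```python
import SimpleITK as sitk

im1 = sitk.ReadImage("im1.nrrd")  # placeholder paths
im2 = sitk.ReadImage("im2.nrrd")

# Resample im1 onto im2's grid: the output takes im2's origin, spacing,
# direction, and size. An identity transform is used because the two
# images already overlap in physical space; only their voxel grids differ.
im1_on_im2_grid = sitk.Resample(
    im1,                # image whose intensities are resampled
    im2,                # reference image that defines the output geometry
    sitk.Transform(),   # identity transform
    sitk.sitkLinear,    # use sitk.sitkNearestNeighbor for label maps
    0,                  # default value for voxels that fall outside im1
    im1.GetPixelID(),
)

# Identical voxel indices now refer to identical physical points,
# so the numpy arrays line up as well.
arr1 = sitk.GetArrayFromImage(im1_on_im2_grid)
arr2 = sitk.GetArrayFromImage(im2)
```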


What Andras says is very true.

However, for the simple case of changing the sign and order of the image's axes, the OrientImageFilter could be used. This filter just uses a composite of the FlipImageFilter and the PermuteAxesImageFilter, while maintaining the same physical spacing of the images.

I’m in the process of wrapping this filter for SimpleITK now: Adding wrapping for OrientImageFilter.h by blowekamp · Pull Request #798 · SimpleITK/SimpleITK · GitHub
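Once that wrapping is available (in later SimpleITK releases it is exposed as DICOMOrient / DICOMOrientImageFilter; check your version), reorienting im1 to match im2's identity direction could look like this sketch. An identity direction matrix in ITK's LPS physical space corresponds to the "LPS" orientation code; the file name is a placeholder:

```python
import SimpleITK as sitk

im1 = sitk.ReadImage("im1.nrrd")  # placeholder path

# Reorder/flip the voxel buffer so the direction matrix becomes identity
# ("LPS" in this filter's labeling). Origin and direction metadata are
# updated accordingly, so every voxel keeps its physical location.
im1_reoriented = sitk.DICOMOrient(im1, "LPS")

print(im1_reoriented.GetDirection())
# expected: (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)

# The numpy array now has the same axis orientation as im2's array.
arr1 = sitk.GetArrayFromImage(im1_reoriented)
```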
