I’m trying to rotate an image around the center of gravity of the binary mask of the ROI.
To do that, I use the following functions in the Python version of ITK:
itk.CenteredEuler3DTransform to define the parameters of my transform (rotation matrix, center of rotation);
itk.resample_image_filter to generate my rotated output image. I use LinearInterpolateImageFunction on my binary mask to generate my interpolator.
Everything works fine except for one annoying detail: in some cases, the image is cropped because the new physical position of some pixels falls outside the physical domain of the new image.
For now, I use the input image as the reference in my resampler. I know this is wrong, because I should specify a new origin, spacing, and direction for the output image myself.
I could of course correct this mistake by computing the new origin, spacing, and direction for the resampler, but it feels to me like I’m applying the transform twice. Moreover, the calculation is not direct, since the rotation center is different from the origin. It doesn’t seem natural to me. Is there something I can do?
Transform the corners of the image, and use them to determine the new bounding box. Then use that to specify the origin and the size for resampling. There are probably some examples of that; search for them, or maybe wait for more responses.
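A minimal sketch of that bounding-box idea, using plain NumPy for the geometry rather than ITK calls (the function and argument names here are illustrative, and an identity direction matrix is assumed):

```python
import numpy as np

def rotated_bounding_box(origin, spacing, size, rotation, center):
    """Rotate all 8 corners of the image's physical extent about 'center'
    and return the origin and size of the axis-aligned bounding box."""
    origin = np.asarray(origin, dtype=float)
    spacing = np.asarray(spacing, dtype=float)
    center = np.asarray(center, dtype=float)
    # physical coordinates of the 8 corners (identity direction assumed)
    corners = np.array([
        origin + spacing * np.array([ix, iy, iz], dtype=float)
        for ix in (0, size[0] - 1)
        for iy in (0, size[1] - 1)
        for iz in (0, size[2] - 1)
    ])
    # rotate the corners about the rotation center
    rotated = (corners - center) @ np.asarray(rotation, dtype=float).T + center
    new_origin = rotated.min(axis=0)
    extent = rotated.max(axis=0) - new_origin
    new_size = np.round(extent / spacing).astype(int) + 1
    return new_origin, new_size
```

Using all eight corners (rather than just two opposite ones) matters: under rotation, the axis-aligned bounding box is generally determined by different corners on different axes.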
Follow @dzenanz’s advice. Related SimpleITK code dealing with multiple images can be found in this notebook: create_images_in_shared_coordinate_system.
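The idea in that notebook, sketched with plain NumPy (names here are illustrative, not the notebook's own): collect the physical corner points of all the images involved, then define one common grid that covers their union:

```python
import numpy as np

def shared_grid(corner_sets, spacing):
    """Given the physical corner points of several images (one array of
    points per image) and a common spacing, return the origin and size of
    a single grid that covers all of them."""
    all_pts = np.vstack(corner_sets)
    origin = all_pts.min(axis=0)                 # lowest corner of the union
    extent = all_pts.max(axis=0) - origin        # physical span of the union
    size = np.round(extent / np.asarray(spacing, dtype=float)).astype(int) + 1
    return origin, size
```

Resampling every image onto this shared grid puts them all in one coordinate system, which is the same mechanism needed to keep a rotated image from being cropped.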
I tried to adapt the code you sent me to my problem; here is what I have:

```python
for i in range(0, 36):
    rotation = rotation_matrix(10 * i, 0, 180, 'zyx')  # define the rotation matrix
    # a simple call of itk.CenteredEuler3DTransform
    transform = CenteredEuler3DTransform(center_rotation, rotation, translation)
    interpolator = itk.LinearInterpolateImageFunction.New(img)
    box = transform.TransformPoint(img.TransformIndexToPhysicalPoint(
        (itk.size(img)[0] - 1,
         itk.size(img)[1] - 1,
         itk.size(img)[2] - 1))) - transform.TransformPoint(img.GetOrigin())
    output_size = itk.Size[3]()
    output_size[0] = int(np.abs(np.round(box[0] / img.GetSpacing()[0])))
    output_size[1] = int(np.abs(np.round(box[1] / img.GetSpacing()[1])))
    output_size[2] = int(np.abs(np.round(box[2] / img.GetSpacing()[2])))
    output_origin = transform.TransformPoint(img.GetOrigin())
    img_rotated = itk.resample_image_filter(
        img,
        transform=transform,
        interpolator=interpolator,
        output_origin=output_origin,
        size=output_size,
        output_spacing=img.GetSpacing(),
        output_direction=img.GetDirection(),
    )
```
It still gives me poor results with cropped images, though it seems to me that I understood what you explained. My image is a 3D slice of size (512, 256, 1).
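One detail worth flagging in the loop above: ITK's ResampleImageFilter applies its transform to map output points back into the input image, so the output region should be computed with the inverse transform, and from all eight corners rather than just the origin and the far corner. A sketch of that geometry in NumPy (a pure rotation about a center is assumed, so the inverse transform is just the matrix inverse; names are illustrative):

```python
import numpy as np

def output_bounds(input_corners, rotation, center):
    """Map input-image corner points through the INVERSE of the resampling
    transform. ResampleImageFilter maps output points to input points, so
    the output extent that covers the input comes from the inverse mapping
    of the input corners."""
    center = np.asarray(center, dtype=float)
    inv = np.linalg.inv(np.asarray(rotation, dtype=float))
    mapped = (np.asarray(input_corners, dtype=float) - center) @ inv.T + center
    return mapped.min(axis=0), mapped.max(axis=0)
```

The minimum of the mapped corners gives the output origin, and the per-axis span divided by the spacing gives the output size.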
Thanks to your answer I was able to correct my code and make it work, following your advice and indications. However, I noticed that for more complex transformations (combining a z-axis rotation with an x-axis or y-axis rotation) the returned image is simply black. Do you have any idea why?
Depending on the transformation and new_spacing, you may end up with a large, mostly empty volume where only a few x-y slices contain information. If you change the transformation to: