Generate Rotational Maximum Intensity Projections

I am very new to ITK, and am working with SimpleITK in Python.
I have 3D PET scans. From each 3D volume, I have generated two MIPs at 90 degrees to each other, sagittal and coronal. This was easily achieved using the sitk.MaximumProjection function.

How can I generate a Maximum Intensity Projection (MIP) at every 10 degrees of rotation, so that each PET scan gives me 36 MIPs?

Hello @divya_chou,

ITK/SimpleITK do not have this functionality (general volume rendering). Possibly take a look at VTK, which is also available in Python. This very old posting shows how to do it using VTK (C++). The toolkit has changed since then, but this should be a reasonable starting point. For VTK questions, see the VTK discourse.

Thanks for pointing in the direction of VTK.

However, I'm wondering: since ITK offers functionality to generate coronal and sagittal MIPs (using the sitk.MaximumProjection function), and it offers 3D transforms via Euler3DTransform, is it not feasible to generate rotational MIPs using ITK?

You ought to be able to rotate the image, resample it to a new orientation and then generate MIP images of the rotated image to get what you want.

@dchen's suggestion solves the problem by moving the image; my original thinking involved moving the camera. If you only want to use SimpleITK, I'd go with @dchen's solution.

Could you point me to the exact functions to be used in this case?

Hello @divya_chou,

So the two relevant elements are the rigid transformation class, and the resample filter.

In this case the most appropriate transformation is VersorRigid3DTransform. You should set the rotation center to the center of your volume, versor_tx.SetCenter(center_of_volume), and set the rotation using the axis-angle representation, versor_tx.SetRotation(axis, angle).

You then use the ResampleImageFilter to resample the original volume with the transform. You will need to define the resampling grid.

Finally, you apply the MIP to the resampled volume.
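
A minimal sketch of that pipeline (the file name, rotation axis and angle, and the choice to reuse the input image's grid are illustrative assumptions, not prescriptions):

import math
import SimpleITK as sitk

# Load the volume (file name is a placeholder).
volume = sitk.ReadImage('pet_volume.mha')

# Rigid transform: rotate about the volume's physical center, using the
# axis-angle representation.
center_of_volume = volume.TransformContinuousIndexToPhysicalPoint(
    [(sz - 1) / 2.0 for sz in volume.GetSize()])
versor_tx = sitk.VersorRigid3DTransform()
versor_tx.SetCenter(center_of_volume)
versor_tx.SetRotation([0.0, 0.0, 1.0], math.radians(10.0))  # 10 degrees about z

# Resample the volume with the transform, here simply onto the original
# image's grid (a larger grid is needed if cropping must be avoided).
resampled = sitk.Resample(volume, volume, versor_tx, sitk.sitkLinear, 0.0)

# Maximum intensity projection of the resampled volume along axis 0.
mip = sitk.MaximumProjection(resampled, 0)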


Thanks @zivy.
I followed your answer and have come up with the following code so far, and I have a few follow-up questions:

np_vol = pet_volume #shape  is 180 X 512 X 512
sitk_vol = sitk.GetImageFromArray(pet_volume)

center_of_volume =  [np_vol.shape[0]/2 , np_vol.shape[1]/2 , np_vol.shape[2]/2] # centre =90,256,256

axis = [0,0,1]
degrees = 20
radians = np.pi * degrees / 180
versor_tx = sitk.VersorRigid3DTransform()
versor_tx.SetCenter(center_of_volume)
versor_tx.SetRotation(axis, radians)
  1. Am I right in calculating the center_of_volume as the midpoint of each dimension's number of pixels?

  2. How can I define the resampling grid?

  3. How do I use the ResampleImageFilter?

Many thanks for your guidance.


Hello @divya_chou,

There are several issues with your code, primarily because you are thinking of an image as an array of pixels and not as a spatial object. This is a fundamental concept in ITK/SimpleITK.

  1. The sitk_vol is constructed from a numpy array and is missing all of the spatial information (origin, spacing, direction cosine matrix). Please see this Jupyter notebook section titled "Conversion between numpy and SimpleITK".
  2. The center of the volume is the midpoint index-wise, but you need to convert it to physical coordinates using the image's TransformContinuousIndexToPhysicalPoint (a short sketch of points 1 and 2 follows this list).
  3. Defining the resampling grid and working with transforms is described in this Jupyter notebook.
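
A minimal sketch of points 1 and 2 (the array shape, spacing, origin, and direction below are placeholder values, not taken from the original data):

import SimpleITK as sitk
import numpy as np

np_vol = np.zeros((180, 512, 512), dtype=np.float32)  # placeholder PET array, numpy order is (z, y, x)
sitk_vol = sitk.GetImageFromArray(np_vol)

# 1. Attach the spatial information that the numpy array does not carry.
sitk_vol.SetSpacing([5.0, 5.0, 5.0])
sitk_vol.SetOrigin([-337.88, -510.95, -768.0])
sitk_vol.SetDirection([1, 0, 0, 0, 1, 0, 0, 0, 1])

# 2. Center of the volume: midpoint in index space (SimpleITK order is (x, y, z)),
#    converted to physical coordinates by the image itself.
center_index = [(sz - 1) / 2.0 for sz in sitk_vol.GetSize()]
center_of_volume = sitk_vol.TransformContinuousIndexToPhysicalPoint(center_index)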

Hello @zivy.
I've incorporated your inputs and my code looks like this. Right now I've hard-coded many values as I'm trying to get the code to work for one PET scan. The sources of the values are mentioned as comments.

np_vol = pet_resampled[0]
sitk_vol = sitk.GetImageFromArray(pet_resampled[1])
type(sitk_vol)

#Adding components of spatial objects
sitk_vol.SetDirection([1,0,0,0,1,0,0,0,1]) #Identity Matrix
sitk_vol.SetSpacing([5,5,5]) # Since pixel spacing is 5 x 5 and the slice thickness is 5mm
sitk_vol.SetOrigin([-337.88238150935,-510.94804353575,-768]) # Values from Image Patient Position (IPP) from DICOM

#Function to convert the centre from index space to millimeters (physical space).
#Created this as I was unable to get TransformContinuousIndexToPhysicalPoint to work.
def idx_mm(orig, idx, spacing):
    mm_x = idx[0]*spacing[0] + orig[0]
    mm_y = idx[1]*spacing[1] + orig[1]
    mm_z = idx[2]*spacing[2] + orig[2]
    cent = [mm_x, mm_y, mm_z]
    return cent

center_of_volume_idx = np.array([np_vol.shape[0]/2,np_vol.shape[1]/2,np_vol.shape[2]/2])
centre_of_vol_mm = idx_mm([-337.88238150935,-510.94804353575,-768],center_of_volume_idx,[5,5,5]) # Output = [47.11761849064999, -170.94804353575, -428.0]
axis = [0,1,0]
degrees = 20
radians = np.pi * degrees / 180
versor_tx = sitk.VersorRigid3DTransform()
versor_tx.SetCenter(centre_of_vol_mm)
versor_tx.SetRotation(axis, radians)

#Defining the grid
grid = sitk.GridImageSource()
grid.SetOutputPixelType(sitk.sitkUInt16)
grid.SetSize([180, 512, 512]) #same as np_vol.shape
grid.SetSigma([0.5, 0.5,0.5]) # Used value from code example. What does this mean?
grid.SetGridSpacing([5.0, 5.0,5.0]) #same as Spacing of sitk_vol
grid.SetOrigin([-337.88238150935,-510.94804353575,-768]) #Value of IPP
grid.SetSpacing([5.0, 5.0,5.0]) # Used value from code example. Unclear about difference from GridSpacing

#Performing rotation
rotated_sitk = sitk.Resample(grid, sitk_vol, versor_tx ,sitk.sitkCosineWindowedSinc, 0,sitk.sitkUInt16)
rotated_ndarr= sitk.GetArrayFromImage(rotated_sitk)
plt.imshow(rotated_ndarr[:,:,101]) # Plotting a random slice from rotated object shows Blank
plt.imshow(np_vol[:,:,101]) # Whereas Plotting a random slice from original nd array of the volume showed values of the PET scan

I have the following questions:

  1. Are the sources of information I identified for the hard-coded values correct?
    E.g., while defining the sitk volume as a spatial object:
    Origin = IPP
    Spacing = Pixel_spacing_X, Pixel_spacing_Y, Slice_Thickness
    Direction cosine matrix = (1,0,0) for x, (0,1,0) for y and (0,0,1) for z

  2. Could you explain what the parameters used while defining the Grid mean? The documentation is not very clear about this.

  3. While performing the rotation, which interpolator should one use?

  4. Finally, on plotting the rotated object I get a blank output. This is documented as one of the common errors and is attributed to setting the wrong grid, but the details of the parameters expected by the grid and what they mean are fuzzy to me.

Thanks for humoring the plethora of questions. Coming from a Python background, I'm finding it difficult to understand the Doxygen documentation, and am relying on the Jupyter examples, which contain very few examples of 3D rotation.

Hello @divya_chou,
Below is a short script that does what you want. I think this is as far as I'll go; you should be able to modify it for your needs:

import SimpleITK as sitk
import numpy as np

file_name = 'training_001_ct.mha'
image = sitk.ReadImage(file_name)

projection = {'sum': sitk.SumProjection,
              'mean':  sitk.MeanProjection,
              'std': sitk.StandardDeviationProjection,
              'min': sitk.MinimumProjection,
              'max': sitk.MaximumProjection}
ptype = 'mean'
paxis = 0

rotation_axis = [0,0,1]
rotation_angles = np.linspace(0.0, 2*np.pi, int(360.0/20.0), endpoint=False) # 20 degree increments; endpoint excluded so 0 and 2*pi are not duplicated
rotation_center = image.TransformContinuousIndexToPhysicalPoint([(index-1)/2.0 for index in image.GetSize()])
rotation_transform = sitk.VersorRigid3DTransform()
rotation_transform.SetCenter(rotation_center)


#Compute bounding box of rotating volume and the resampling grid structure

image_indexes = list(zip([0,0,0], [sz-1 for sz in image.GetSize()]))
image_bounds = []
for i in image_indexes[0]:
    for j in image_indexes[1]:
        for k in image_indexes[2]:
            image_bounds.append(image.TransformIndexToPhysicalPoint([i,j,k]))

all_points = []
for angle in rotation_angles:
    rotation_transform.SetRotation(rotation_axis, angle)    
    all_points.extend([rotation_transform.TransformPoint(pnt) for pnt in image_bounds])
all_points = np.array(all_points)
min_bounds = all_points.min(0)
max_bounds = all_points.max(0)
#resampling grid will be isotropic so no matter which direction we project to
#the images we save will always be isotropic (required for image formats that 
#assume isotropy - jpg,png,tiff...)
new_spc = [np.min(image.GetSpacing())]*3
new_sz = [int(sz/spc + 0.5) for spc,sz in zip(new_spc, max_bounds-min_bounds)]


proj_images = []
for angle in rotation_angles:
    rotation_transform.SetRotation(rotation_axis, angle) 
    resampled_image = sitk.Resample(image1=image,
                                    size=new_sz,
                                    transform=rotation_transform,
                                    interpolator=sitk.sitkLinear,
                                    outputOrigin=min_bounds,
                                    outputSpacing=new_spc,
                                    outputDirection = [1,0,0,0,1,0,0,0,1],
                                    defaultPixelValue = -1000, #HU unit for air in CT, possibly set to 0 in other cases
                                    outputPixelType = image.GetPixelID())
    proj_image = projection[ptype](resampled_image, paxis)
    extract_size = list(proj_image.GetSize())
    extract_size[paxis]=0
    proj_images.append(sitk.Extract(proj_image, extract_size))

# Stack all images into a faux-volume for display
sitk.Show(sitk.JoinSeries(proj_images))
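
If the projections should also be written to disk (not covered in the answer above), a possible follow-up, assuming 8-bit PNG output is acceptable and the file-name pattern is arbitrary:

for i, proj in enumerate(proj_images):
    # Rescale to [0, 255] and cast to 8-bit so standard 2D formats can be written.
    png_image = sitk.Cast(sitk.RescaleIntensity(proj, 0, 255), sitk.sitkUInt8)
    sitk.WriteImage(png_image, 'mip_{0:02d}.png'.format(i))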
                                

Hello @zivy
I was able to achieve the objective by using a different overload of the Resample function.

resampled_image = sitk.Resample(sitk_vol, output_size, versor_tx, sitk.sitkLinear, output_origin, output_spacing, output_direction)

In this case, I did not have to define the grid. Rather, I just defined the extreme points of the volume to be transformed. I found the details for this under the section "Defining the Resampling Grid" of this Jupyter notebook.
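
For reference, here is one way the output grid variables used in that call could be computed, following the bounding-box idea from the script above (this is a sketch, not part of the original post; it assumes sitk_vol and versor_tx from the earlier snippets):

import numpy as np

# Corner indexes of the volume, converted to physical points.
extreme_indexes = list(zip([0, 0, 0], [sz - 1 for sz in sitk_vol.GetSize()]))
extreme_points = [sitk_vol.TransformIndexToPhysicalPoint([i, j, k])
                  for i in extreme_indexes[0]
                  for j in extreme_indexes[1]
                  for k in extreme_indexes[2]]

# Bounding box of the transformed corners, so the rotated volume is not cropped.
transformed_points = np.array([versor_tx.TransformPoint(p) for p in extreme_points])
min_bounds = transformed_points.min(0)
max_bounds = transformed_points.max(0)

output_spacing = sitk_vol.GetSpacing()
output_direction = [1, 0, 0, 0, 1, 0, 0, 0, 1]
output_origin = min_bounds.tolist()
output_size = [int((mx - mn) / spc + 0.5)
               for mx, mn, spc in zip(max_bounds, min_bounds, output_spacing)]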

Many thanks for pointing me in the right direction.
