3D Resampling

Hi everyone,

I am working with a dataset that contains studies acquired in either the axial or the sagittal plane. I am loading the images with the ImageSeriesReader. To unify the dataset, I would like to resample them to – let’s assume – isotropic voxels and reorient them so that the pixel matrix of every image can be read as if it had been acquired in the axial plane.

I had no problem resampling to the new spacing, but I struggle with the reorientation. First of all, a very common approach found online seems to be to simply transpose the matrix of a sagittal image, e.g. numpy.transpose(matrix_in_sagittal, (1, 2, 0)). This transposes the axes and allows an axial-like reading of the matrix. Slicing this transposed matrix would give you [axial, coronal, sagittal] axes, but I am pretty sure they are still misoriented. Also, if I understand correctly, the proper way of achieving this would be to resample the image to the desired direction.
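To make the transpose concern concrete, here is a minimal sketch (the array shape is made up for illustration, not taken from your data): transposing only permutes the axes of the pixel matrix, it does not flip any axis and does not touch the direction-cosine metadata, so the result can still be mirrored relative to a true axial acquisition.

```python
import numpy as np

# Hypothetical sagittal volume, indexed [sagittal_slice, row, col]
# (the shape is made up for illustration).
sagittal = np.arange(3 * 4 * 5).reshape(3, 4, 5)

# Permute the axes so the first index becomes the "axial" one.
axial_like = np.transpose(sagittal, (1, 2, 0))

# Axes are reordered, but no axis was flipped and no orientation
# metadata was updated.
print(axial_like.shape)  # (4, 5, 3)
```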

After going through some very useful tips in your repo and notebooks, I think I have an initial code working:

import math

import SimpleITK as sitk

new_spacing = [1.0, 1.0, 1.0]
new_direction = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
interpolator = sitk.sitkLinear

extreme_points = [series_volume.TransformIndexToPhysicalPoint((0,0,0)), 
                  series_volume.TransformIndexToPhysicalPoint((series_volume.GetWidth(),0,0)),
                  series_volume.TransformIndexToPhysicalPoint((series_volume.GetWidth(),series_volume.GetHeight(),0)),
                  series_volume.TransformIndexToPhysicalPoint((0,series_volume.GetHeight(),0)),
                  series_volume.TransformIndexToPhysicalPoint((0,0,series_volume.GetDepth())), 
                  series_volume.TransformIndexToPhysicalPoint((series_volume.GetWidth(),0,series_volume.GetDepth())),
                  series_volume.TransformIndexToPhysicalPoint((series_volume.GetWidth(),series_volume.GetHeight(),series_volume.GetDepth())),
                  series_volume.TransformIndexToPhysicalPoint((0,series_volume.GetHeight(),series_volume.GetDepth()))]
affine = sitk.Euler3DTransform(series_volume.TransformContinuousIndexToPhysicalPoint([sz/2 for sz in series_volume.GetSize()]))
inv_affine = affine.GetInverse()  # doesn't seem to affect the process?
extreme_points_transformed = [inv_affine.TransformPoint(pnt) for pnt in extreme_points]
min_x = min(extreme_points_transformed, key=lambda p: p[0])[0]
min_y = min(extreme_points_transformed, key=lambda p: p[1])[1]
min_z = min(extreme_points_transformed, key=lambda p: p[2])[2]
max_x = max(extreme_points_transformed, key=lambda p: p[0])[0]
max_y = max(extreme_points_transformed, key=lambda p: p[1])[1]
max_z = max(extreme_points_transformed, key=lambda p: p[2])[2]

new_origin = [min_x, min_y, min_z]
new_size = [math.ceil((max_x - min_x) / new_spacing[0]), math.ceil((max_y - min_y) / new_spacing[1]), math.ceil((max_z - min_z) / new_spacing[2])]

minimum_value = sitk.GetArrayFromImage(series_volume).min()
resampled = sitk.Resample(series_volume, new_size, sitk.Transform(), interpolator, new_origin, new_spacing, new_direction, int(minimum_value), series_volume.GetPixelID())

The output looks good on a few examples, but I am afraid I might be missing something. I am also still confused about spacings. For instance, if I decided to resample images to anisotropic spacing, would the code above still work, given that the x,y,z order is different in sagittal and axial acquisitions?

I would really appreciate your thoughts and comment on the code above.

Thank you for all the resources you provide – they are extremely helpful, although I am still very hesitant about image orientation / coordinates and resampling.

Hello @jwitos,

I think you can obtain what you want with less code by using the PermuteAxesImageFilter (procedural interface PermuteAxes) and then calling the make_isotropic function from this Jupyter notebook.

Let us know if this doesn’t simplify things and we’ll take a second look at your code.


I would recommend the DICOMOrientImageFilter over the permute approach. With this filter you can set the DesiredCoordinateOrientation to “LPS” and it will figure out the required permutation and perform flipping as needed to approximate the desired orientation.


Thank you @blowekamp and @zivy. I was indeed able to simplify the workflow by running a DICOMOrientImageFilter first:

orientation_filter = sitk.DICOMOrientImageFilter()
orientation_filter.SetDesiredCoordinateOrientation("LPS")
reoriented = orientation_filter.Execute(series_volume)

and Resample second to get new spacing:

new_spacing = [0.8, 0.8, 1.1]  # anisotropic example
interpolator = sitk.sitkLinear
new_size = [int(round(osz * ospc / nspc)) for osz, ospc, nspc in zip(reoriented.GetSize(), reoriented.GetSpacing(), new_spacing)]
minimum_value = sitk.GetArrayFromImage(reoriented).min()

resampled = sitk.Resample(reoriented,
                          new_size,
                          sitk.Transform(),
                          interpolator,
                          reoriented.GetOrigin(),
                          new_spacing,
                          reoriented.GetDirection(),
                          int(minimum_value),
                          reoriented.GetPixelID())

Would you agree those two are reasonable steps? Also, is sitk.DICOMOrientImageFilter().GetOrientationFromDirectionCosines(series_volume.GetDirection()) the most appropriate way to get the current image orientation?

Hello @jwitos,

Yes, the two steps make sense, and yes, that is the way to get the current image orientation.

One last thing: for more concise code, you can use the DICOMOrientImageFilter's procedural interface instead of the object-oriented one.

reoriented = sitk.DICOMOrient(series_volume, "LPS")