Hi everyone,
I am working with a dataset that contains studies acquired in either the axial or the sagittal plane, which I am loading with the ImageSeriesReader. To unify the dataset, I would like to resample the images to, say, isotropic voxels and reorient them so that the pixel matrix of every image can be read as if it had been acquired in the axial plane.
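For context, the loading itself is straightforward; roughly like this (the directory path is just a placeholder):

import SimpleITK as sitk

# Read the first DICOM series found in the given directory.
reader = sitk.ImageSeriesReader()
file_names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames("/path/to/dicom/series")
reader.SetFileNames(file_names)
series_volume = reader.Execute()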
I had no problem with resampling to the new spacing, but I am struggling with the reorientation. First of all, a very common approach found online seems to be to simply transpose the array of a sagittal image, e.g. numpy.transpose(matrix_in_sagittal, (1, 2, 0)). This transposes the axes and allows for an axial-like reading of the matrix. Slicing the transposed matrix would give you [axial, coronal, sagittal] axes, but I am pretty sure they can still be misoriented, since a transpose only permutes axes and never flips them. Also, if I understand correctly, the canonical way of handling this would be to resample the image to the desired direction.
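To illustrate what I mean, here is a minimal sketch of the two approaches, assuming a sagittal volume already loaded as sagittal_volume (a name I just made up). As far as I understand, sitk.DICOMOrient only permutes and flips the voxel grid, so truly oblique volumes would still need resampling:

import numpy
import SimpleITK as sitk

# Pure transposition: permutes the axes but cannot reverse any of them,
# so one or more anatomical directions may end up flipped.
matrix_in_sagittal = sitk.GetArrayFromImage(sagittal_volume)  # indexed [z, y, x]
transposed = numpy.transpose(matrix_in_sagittal, (1, 2, 0))

# SimpleITK's DICOMOrient permutes *and* flips the grid so that the
# result matches the requested anatomical orientation (here LPS).
reoriented = sitk.DICOMOrient(sagittal_volume, "LPS")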
After going through some very useful tips in your repo and notebooks, I think I have an initial version of the code working:
import math
import SimpleITK as sitk

new_spacing = [1.0, 1.0, 1.0]
new_direction = [1.0, 0.0, 0.0,
                 0.0, 1.0, 0.0,
                 0.0, 0.0, 1.0]  # identity direction, i.e. an axis-aligned grid
interpolator = sitk.sitkLinear

# Physical coordinates of the eight corners of the image grid.
extreme_points = [series_volume.TransformIndexToPhysicalPoint((i, j, k))
                  for i in (0, series_volume.GetWidth())
                  for j in (0, series_volume.GetHeight())
                  for k in (0, series_volume.GetDepth())]

# No rotation angles are set, so this is an identity transform centered
# on the volume -- which is why inverting it doesn't seem to affect the process?
affine = sitk.Euler3DTransform(
    series_volume.TransformContinuousIndexToPhysicalPoint(
        [sz / 2 for sz in series_volume.GetSize()]))
inv_affine = affine.GetInverse()
extreme_points_transformed = [inv_affine.TransformPoint(pnt) for pnt in extreme_points]

# Axis-aligned bounding box of the (transformed) corner points.
min_x = min(extreme_points_transformed, key=lambda p: p[0])[0]
min_y = min(extreme_points_transformed, key=lambda p: p[1])[1]
min_z = min(extreme_points_transformed, key=lambda p: p[2])[2]
max_x = max(extreme_points_transformed, key=lambda p: p[0])[0]
max_y = max(extreme_points_transformed, key=lambda p: p[1])[1]
max_z = max(extreme_points_transformed, key=lambda p: p[2])[2]

new_origin = [min_x, min_y, min_z]
new_size = [math.ceil((max_x - min_x) / new_spacing[0]),
            math.ceil((max_y - min_y) / new_spacing[1]),
            math.ceil((max_z - min_z) / new_spacing[2])]

# Pad regions outside the original volume with its smallest intensity.
minimum_value = sitk.GetArrayViewFromImage(series_volume).min()

resampled = sitk.Resample(series_volume, new_size, sitk.Transform(),
                          interpolator, new_origin, new_spacing,
                          new_direction, float(minimum_value),
                          series_volume.GetPixelID())
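As a sanity check, I print the metadata of the result; the direction should now be the identity:

print(resampled.GetSize())
print(resampled.GetSpacing())
print(resampled.GetDirection())
print(resampled.GetOrigin())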
The output looks good on a few examples, but I am afraid I might be missing something. I am also still confused about spacing. For instance, if I decided to resample the images to an anisotropic spacing, would the code above still work, given that the order of the x, y, z axes differs between sagittal and axial acquisitions?
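For concreteness, this is the kind of case I mean; my understanding (which I would like confirmed) is that, because the bounding box and new_direction are defined in physical space, each component of new_spacing maps to a fixed patient axis regardless of the acquisition plane:

# Hypothetical anisotropic target: 0.5 mm in-plane, 3.0 mm between axial
# slices. With new_direction set to the identity, index axis 2 should
# always correspond to the patient's superior-inferior axis, for both
# axial and sagittal inputs, so the same code should apply unchanged.
new_spacing = [0.5, 0.5, 3.0]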
I would really appreciate your thoughts and comments on the code above.
Thank you for all the resources you provide; they are extremely helpful, although I still feel very hesitant when it comes to image orientation/coordinates and resampling.