[SimpleITK, Python, Registration]
I have a dataset of CT and CBCT scans. The CBCTs don’t have a limited FOV, but they do cover a smaller portion of the body than the CTs, i.e. they have fewer slices.
What I want to do is to perform a rigid registration of a CBCT to a CT to see which part of the body in CT is also found in CBCT, and then get rid of the slices in CT that are outside of the “scope” of CBCT.
I don’t need the registered image as output at all, but only the original CBCT as well as the original CT without the slices that are out of the “scope” of CBCT.
Hope my explanation is clear enough. If anyone wonders why I’d like to do anything like this – I’m trying to constrain the data that’s being seen by a generative adversarial network in order to prevent any bias when learning how to translate a CBCT to CT.
Since I would like it to be efficient, I thought I could ask here for some advice. Not very experienced with SimpleITK as I mostly used it as an interface for getting data in and out of my deep learning pipelines.
For each of the eight corner points of the CBCT volume, call final_transform.TransformPoint. Note that before transforming them you need to convert the eight points from CBCT index space to physical space: use the CBCT_image.TransformIndexToPhysicalPoint method on all combinations of [0,0,0] and CBCT.GetSize()-1.
Then get the indices into the CT image for the eight transformed points using CT_image.TransformPhysicalPointToContinuousIndex or CT_image.TransformPhysicalPointToIndex, and you now know which CT slices to keep and which to throw away.
Hi @zivy, this is perfect, works flawlessly! Thank you!
Here’s the code:
import SimpleITK as sitk
from numpy import mean
from itertools import product

def limit_CT_to_scope_of_CBCT(CT, CBCT):
    registration_transform = get_registration_transform(fixed_image=CBCT,
                                                        moving_image=CT)
    # Start and end index positions of the CBCT volume
    start_position = [0, 0, 0]
    end_position = [point - 1 for point in CBCT.GetSize()]
    # Get all eight corner points of the CBCT volume
    corners = list(product(*zip(start_position, end_position)))
    # Convert the corner points from index to physical space
    physical_corners = [CBCT.TransformIndexToPhysicalPoint(corner)
                        for corner in corners]
    # Find where the scope of the CBCT is located in the CT by applying
    # the transform obtained from registering the CT to the CBCT
    transformed_corners = [registration_transform.TransformPoint(corner)
                           for corner in physical_corners]
    # Convert the corner points from physical space to index space in the CT
    final_corners = [CT.TransformPhysicalPointToIndex(corner)
                     for corner in transformed_corners]
    # Get the z-axis (slice) index of each corner and sort them.
    # The first four elements are the slice indices of the bottom of the
    # volume, the last four are the slice indices of the top.
    z_corners = sorted(xyz[2] for xyz in final_corners)
    # The registered volume can be tilted with respect to the z-axis, so its
    # corners might not lie in the same slice. Averaging them is one way to
    # decide at which bottom and top slice the CT should be truncated.
    start_slice = int(round(mean(z_corners[:4])))
    end_slice = int(round(mean(z_corners[4:])))
    # Clamp to the valid slice range of the CT, in case the CBCT extends
    # beyond the CT after registration
    start_slice = max(0, start_slice)
    end_slice = min(CT.GetDepth(), end_slice)
    return CT[:, :, start_slice:end_slice]