There is a certain region in these images (MRI) whose boundaries end in one view but can be easily observed in another (e.g., the posterior limit in the coronal plane can be extended in the sagittal plane). In Slicer I have segmented said region in both planes, exported the segmentations to NIfTI files, and would now like to combine them into one complete “object” for further processing. There is the “copy/move” function (CopySegmentFromSegmentation) in the Segmentations module in Slicer, but our real data isn’t in .seg.nrrd format, and upon exporting the resulting segmentation to a binary label map I get two separate overlapping labels, whereas I am aiming for a single connected label. The “collapse labelmap layers” button produces many holes and strange results.
I wanted to know what the best way is to achieve a seamless combined label built from two or more separate labels of the same region, taken from different images of the same patient acquired in different planes.
I can provide info about what I’ve tried and why I couldn’t get it to work if needed. Otherwise I am just curious to know how others would approach it.
Thanks for the response.
I didn’t choose the Slicer forum because I don’t plan to use Slicer in my solution, nor do I use it to acquire the data. It was just an example of what I am trying to do and what the data looks like.
I have two NIfTI files and I wanted to know if I can use ITK to do something similar, independent of Slicer.
In fact I have looked into the Binary AND filter, but even though around 80 percent of the two segmentations overlap, the point is to keep the overlapping region plus the extremities contributed by each individual image (a union rather than an intersection). The main problem I face is that even if I find a way to combine the images, they don’t have the same orientation, and I don’t know how to make sure they respect the same reference space when combining.
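For what it’s worth, this is roughly how I have been checking whether the two exported files actually share the same voxel grid (a minimal SimpleITK sketch; the file names are just placeholders):

import SimpleITK as sitk

seg_coronal = sitk.ReadImage("segmentation_coronal.nii.gz")    # placeholder file name
seg_sagittal = sitk.ReadImage("segmentation_sagittal.nii.gz")  # placeholder file name

# two images occupy the same voxel grid only if all four of these match
for name, a, b in [("size", seg_coronal.GetSize(), seg_sagittal.GetSize()),
                   ("spacing", seg_coronal.GetSpacing(), seg_sagittal.GetSpacing()),
                   ("origin", seg_coronal.GetOrigin(), seg_sagittal.GetOrigin()),
                   ("direction", seg_coronal.GetDirection(), seg_sagittal.GetDirection())]:
    print(f"{name}: {'match' if a == b else 'differ'}")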
I have been trying itkImageMaskSpatialObject, prompted by this use-case excerpt in chapter 5.1 of the book:
Results of segmentations are best stored in physical/world coordinates so that they can be combined and compared with other segmentations from other images taken at other resolutions. Segmentation results from hand drawn contours, pixel labelings, or model-to-image registrations are treated consistently.
@zivy seems to be the author of that notebook, so he should be able to answer your question. But it sounds like you are looking in the right direction.
You can easily combine the images if they are all in the correct physical location. You just need to create a blank image that covers the entire region of interest, resample all segmentations to match the geometry of this common image, and then add them all into the blank image.
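In SimpleITK, building such a blank reference image could look roughly like this (a sketch; the helper name and the 1 mm isotropic spacing are arbitrary choices, and the corner loop assumes 3D images):

import math
import SimpleITK as sitk

def common_reference_grid(images, spacing=(1.0, 1.0, 1.0)):
    # collect the physical positions of the corner voxels of every input image
    points = []
    for img in images:
        size = img.GetSize()
        for i in (0, size[0] - 1):
            for j in (0, size[1] - 1):
                for k in (0, size[2] - 1):
                    points.append(img.TransformIndexToPhysicalPoint((i, j, k)))
    # axis-aligned bounding box of all corners, in physical coordinates
    min_pt = [min(p[d] for p in points) for d in range(3)]
    max_pt = [max(p[d] for p in points) for d in range(3)]
    size = [int(math.ceil((max_pt[d] - min_pt[d]) / spacing[d])) + 1 for d in range(3)]
    # blank image (all zeros, identity direction) covering that bounding box
    reference = sitk.Image(size, sitk.sitkUInt8)
    reference.SetOrigin(min_pt)
    reference.SetSpacing(spacing)
    return reference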
Please follow @lassoan’s guidance. For code see the Transforms and Resampling notebook, section titled “Resampling after registration”; the third code cell in that section does something similar to what you want. The primary difference is that in your case the “registration” transformation would be the identity, so the code can be simplified.
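In code, the simplified resampling step could look roughly like this (again a sketch; the reference can be the blank image from above or simply one of the original grids, and the file names are placeholders):

import SimpleITK as sitk

segmentations = [sitk.ReadImage("segmentation_coronal.nii.gz"),   # placeholder file names
                 sitk.ReadImage("segmentation_sagittal.nii.gz")]
reference = segmentations[0]  # or a blank image covering both extents

identity = sitk.Transform()  # the segmentations are already aligned in physical space

# nearest-neighbor interpolation keeps the label values intact (no averaging at edges)
resampled = [sitk.Resample(seg, reference, identity,
                           sitk.sitkNearestNeighbor, 0, seg.GetPixelID())
             for seg in segmentations]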
Thank you for the advice and the code. I now have all the segments in the correct physical location. Of course the segments still exist individually. Is there a way I can combine them into one fluid segment, such that the new image only contains one segment consisting of all regions of both original segments?
Not sure what you mean by “fluid segment”, hopefully the code below does what you want (either step 1 or step 2):
import SimpleITK as sitk
# three 2D segmentation images, labels are non-zero values, background is zero, no overlaps between segmentations
segmentations = [sitk.Image([10,10], sitk.sitkUInt8),
                 sitk.Image([10,10], sitk.sitkUInt8),
                 sitk.Image([10,10], sitk.sitkUInt8)]
segmentations[0][0:3,0:3] = 1
segmentations[1][3:6,3:6] = 2
segmentations[2][7:9,7:9] = 3
# step 1: assuming the segmentations don't overlap, just add the images and you don't lose information
combined_segmentations = sitk.NaryAdd(segmentations)
# "look" at the combined segmentation image
print(f'combined segmentations:\n{sitk.GetArrayViewFromImage(combined_segmentations)}')
# step 2: possibly change all labels to one
single_label_segmentation = combined_segmentations != 0
# "look" at the combined segmentation, single value, image
print(f'combined segmentations, single label:\n{sitk.GetArrayViewFromImage(single_label_segmentation)}')
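If, after merging, the single label still shows small gaps where the two acquisitions meet, one option (not part of the code above, just a suggestion) is a small morphological closing before writing the result out:

# write the single-label result to a NIfTI file (file name is a placeholder)
sitk.WriteImage(single_label_segmentation, "combined_segmentation.nii.gz")

# optional: close small gaps between the two source segmentations
# (radius is per dimension, so two values for this 2D example)
closed = sitk.BinaryMorphologicalClosing(single_label_segmentation, [2, 2])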