Combining overlapping segmentations of the same region from different images.

Hello everyone,

There is a certain region in these (MRI) images whose boundary ends in one view but can be easily observed in another (e.g., the posterior limit in the coronal plane can be extended in the sagittal plane). In Slicer I have segmented said region in both planes, exported the segmentations to NIfTI files, and would now like to combine them into one complete “object” for further processing. The Segmentations module in Slicer has a Copy/Move function (CopySegmentFromSegmentation), but our real data isn’t in .seg.nrrd format, and upon exporting the resulting segmentation to a binary labelmap I get two separate overlapping labels, whereas I am aiming for a single connected label. The “Collapse labelmap layers” button produces many holes and strange results.

I wanted to know the best way to achieve a seamless combined label made of 2+ separate labels of the same region, from different images of the same patient taken in different planes.

I can provide info about what I’ve tried and why I couldn’t get it to work if needed. Otherwise I am just curious to know how others would approach it.

Thank you!

The Segment Editor has a Logical operators effect, which includes “Union”. This is probably what you want.

Though, this question was probably more appropriate for the Slicer forum.

Thanks for the response.
I didn’t choose the Slicer forum because I don’t plan to use Slicer in my solution, nor do I use it to acquire the data. It was just an example of what I am trying to do, as well as what the data looks like.

I have two NIfTI files and I wanted to know if I can use ITK to do something similar, independently of Slicer.

In any case, thanks again; I will continue to try.

Maybe then the Or filter?
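
For two binary masks that already share the same grid, SimpleITK’s Or filter gives a voxel-wise union. A minimal sketch with toy images (the image names and regions here are made up for illustration):

import SimpleITK as sitk

# Two binary masks on the same grid (toy stand-ins for the real segmentations).
a = sitk.Image([10, 10], sitk.sitkUInt8)
b = sitk.Image([10, 10], sitk.sitkUInt8)
a[0:4, 0:4] = 1
b[2:6, 2:6] = 1

# Voxel-wise union: 1 wherever either mask is 1.
union = sitk.Or(a, b)

Note this only makes sense once both masks live on the same grid; with different orientations, a resampling step has to come first.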

In fact I have looked into the binary AND filter, but even though around 80 percent of the images overlap, the point is to include the overlapping space plus the extremities provided by each individual image, so an intersection alone won’t do. The main problem I face is that even if I find a way to combine the images, they don’t have the same orientation, and I don’t know how to make sure they respect the same reference space when combined.

I have been trying with itkImageMaskSpatialObject thanks to this use case excerpt in chapter 5.1 of the book:

Results of segmentations are best stored in physical/world coordinates so that they can be combined and compared with other segmentations from other images taken at other resolutions. Segmentation results from hand drawn contours, pixel labelings, or model-to-image registrations are treated consistently.

There is also this function create_images_in_shared_coordinate_system that I would like to try. Do you think I am looking in the right area?

@zivy seems to be the author of that notebook, so he should be able to answer your question. But it sounds like you are looking in the right direction.

You can easily combine the images if they are all in the correct physical location. You just need to create a blank image that covers the entire region of interest, resample all segmentations to match the geometry of this common image, and then add them all to the blank image.
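
A minimal SimpleITK sketch of this recipe (toy 2D images standing in for the real volumes; it assumes axis-aligned grids with identity direction matrices, which real MRI may not have, so the direction may need handling too):

import SimpleITK as sitk
import numpy as np

# Two toy segmentations with different grids but a shared physical space.
seg_a = sitk.Image([10, 10], sitk.sitkUInt8)
seg_a[0:5, 0:5] = 1
seg_b = sitk.Image([20, 20], sitk.sitkUInt8)
seg_b.SetSpacing([0.5, 0.5])
seg_b[10:20, 10:20] = 1

# 1. Build a blank reference image covering both physical extents.
spacing = np.minimum(seg_a.GetSpacing(), seg_b.GetSpacing())
mins, maxs = [], []
for img in (seg_a, seg_b):
    corner0 = img.TransformIndexToPhysicalPoint((0, 0))
    corner1 = img.TransformIndexToPhysicalPoint(tuple(s - 1 for s in img.GetSize()))
    mins.append(np.minimum(corner0, corner1))
    maxs.append(np.maximum(corner0, corner1))
origin = np.min(mins, axis=0)
size = np.ceil((np.max(maxs, axis=0) - origin) / spacing).astype(int) + 1
reference = sitk.Image([int(s) for s in size], sitk.sitkUInt8)
reference.SetSpacing([float(v) for v in spacing])
reference.SetOrigin([float(v) for v in origin])

# 2. Resample each segmentation onto the reference grid and combine.
combined = reference
for seg in (seg_a, seg_b):
    # Nearest neighbor keeps labels binary; identity transform because
    # both images already live in the same physical space.
    resampled = sitk.Resample(seg, reference, sitk.Transform(),
                              sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)
    combined = sitk.Or(combined, resampled)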


Hello @ramirez,

Please follow @lassoan’s guidance. For code, see the Transforms and Resampling notebook, section titled “Resampling after registration”; the third code cell in that section does something similar to what you want. The primary difference is that in your case the “registration” transformation would be the identity, so the code can be simplified.
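
A hedged sketch of that simplification, with toy images in place of the real data (the fixed image stands in for the common reference grid):

import SimpleITK as sitk

fixed = sitk.Image([8, 8], sitk.sitkUInt8)     # target/reference grid
moving = sitk.Image([16, 16], sitk.sitkUInt8)  # segmentation on another grid
moving.SetSpacing([0.5, 0.5])
moving[0:8, 0:8] = 1

# With no registration needed the transform is the identity; nearest
# neighbor interpolation keeps the label values intact.
aligned = sitk.Resample(moving, fixed, sitk.Transform(),
                        sitk.sitkNearestNeighbor, 0, moving.GetPixelID())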


Thank you for the advice and the code. I now have all the segments in the correct physical location. Of course, the segments still exist individually. Is there a way I can combine them into one fluid segment, such that the new image contains only one segment consisting of all regions of both original segments?

Thanks again for the help so far.

Hello @ramirez,

Not sure what you mean by “fluid segment”, hopefully the code below does what you want (either step 1 or step 2):

import SimpleITK as sitk

# three 2D segmentation images; labels are non-zero values, background is zero, no overlaps between segmentations
segmentations = [sitk.Image([10,10], sitk.sitkUInt8),
                 sitk.Image([10,10], sitk.sitkUInt8),
                 sitk.Image([10,10], sitk.sitkUInt8)]
segmentations[0][0:3,0:3] = 1
segmentations[1][3:6,3:6] = 2
segmentations[2][7:9,7:9] = 3

# step 1: assuming segmentations don't overlap, just add the images and you don't lose information
combined_segmentations = sitk.NaryAdd(segmentations)

# "look" at the combined segmentation image
print(f'combined segmentations:\n{sitk.GetArrayViewFromImage(combined_segmentations)}')

# step 2: possibly change all labels to one
single_label_segmentation = combined_segmentations != 0

# "look" at the combined segmentation, single value, image 
print(f'combined segmentations, single label:\n{sitk.GetArrayViewFromImage(single_label_segmentation)}')
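
One caveat worth flagging (an added note, since the original poster’s segmentations reportedly overlap by about 80 percent): with overlapping inputs, NaryAdd sums the labels in the overlap (e.g., 1 + 2 = 3). For binary masks of the same region, a voxel-wise maximum keeps the result binary:

import SimpleITK as sitk

# Toy overlapping binary segmentations of the same region, same grid.
seg1 = sitk.Image([10, 10], sitk.sitkUInt8)
seg2 = sitk.Image([10, 10], sitk.sitkUInt8)
seg1[0:6, 0:6] = 1
seg2[4:10, 4:10] = 1

# NaryAdd would give 2 in the overlap; a voxel-wise maximum stays binary.
combined = sitk.NaryMaximum([seg1, seg2])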

Step 1 in the code snippet is what I was doing that resulted in separate labels. Step 2 is exactly the idea I was looking for.

Thank you all for your help; it is very much appreciated.

Sabino
