Simple region-growing segmentation of the small intestine gives terrible results, using many seed points and 3D T2 MRI data

I am currently trying to apply region growing to 3D T2 MRI scans which I have been given, to segment the end of the small intestine (i.e. the terminal ileum). I am using several features from SimpleITK. The problem is that the output of my region growing is very inaccurate, and after a lot of tinkering I am unsure what to do.

I have been given:

  • some T2 MRI 3D scans
  • coordinates of the terminal ileum to use for these scans
  • example/model segmentations for these scans
  • several scans without any terminal ileum coordinates or example/model segmentations

I understand that, for region growing, you must provide seed points within the area of interest. I use the coordinates of the terminal ileum, which were given to me, as seed points; there are typically around 1200 of them in total for one case. I am currently using only scans acquired in the axial orientation.

I am using SimpleITK's ConnectedThreshold to try to accomplish this region growing task (please see the bottom of this question for my code). For the upper and lower thresholds of ConnectedThreshold, I looked at what the general intensities were (within the region of interest) using ITK-SNAP. Below, I have provided one example case:

The original T2 MRI scan looks like this (NIfTI file format, i.e. .nii.gz):

original T2 MRI scan

The provided model segmentation example for this case looks like this (NIfTI file format); this represents the model result which I am attempting to reproduce via my region growing:

model region growing segmentation

(Please note the segmentation is shown as an overlay on the original image.)

The centreline coordinates (and thus, the seed points) for this case are as follows:

[(228, 210, 14), (228, 211, 14), (228, 212, 14), (228, 213, 14), (229, 214, 14), (230, 215, 14), (231, 216, 14), (231, 217, 14), (231, 218, 14), (231, 219, 14), (231, 220, 14), (231, 221, 14), (231, 222, 14)...

(These points were originally provided in an XML file, with directions too; e.g. one point would be: <point x="228" y="210" z="14" xd="189.999995470047" yd="174.99999582767487" zd="61.59996795654297"/>. I am not using the directions; should I be using them somehow?)
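
For completeness, this is roughly how I read the points out of the XML. This is a minimal sketch that assumes the file is simply a flat list of point elements; the file name is a placeholder:

    import xml.etree.ElementTree as ET

    def load_seed_points(xml_path: str) -> list:
        """Parse (x, y, z) integer seed points from the centreline XML file."""
        tree = ET.parse(xml_path)
        seed_points = []
        for point in tree.getroot().iter("point"):
            # The xd/yd/zd attributes (directions) are ignored here
            seed_points.append((int(point.get("x")),
                                int(point.get("y")),
                                int(point.get("z"))))
        return seed_points

    seed_points = load_seed_points("centreline.xml")  # placeholder path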

And finally, my (unfortunately terrible) region growing output is as follows (overlaid on the original image):

region growing output

As you can see, it is very different from the model segmentation seen above. I feel that the main problems are:

  • Not understanding how to intuitively find good values for the lower and upper thresholds that SimpleITK's ConnectedThreshold requires. At the moment, I just increase the lower threshold and decrease the upper threshold from some initial values until the area of the segmentation is not too big. I got these initial threshold values by eyeballing the area of interest in ITK-SNAP.
  • The algorithm having no notion of the region it is supposed to stay within.
  • The colon being quite a difficult organ to work with, since it is quite complex in shape.
  • Motion artifacts from the person moving while the scan was being done (rare).

I am unsure what to do, as I have tried quite a few things. I feel that the data is somewhat difficult to work with, but I do believe it is possible to reach the model example result somehow. Could anyone point me in a direction they feel may be worth exploring, or towards any tools that could help?

Any help is much appreciated.


My code for region growing is as follows, with a few preprocessing steps beforehand:

    import random
    import sys

    import numpy as np
    import SimpleITK as sitk

    import gui  # from the SimpleITK-Notebooks repository


    def generate_region_growing_segmentation(img: sitk.Image, seed_points: list):
        """Generates a segmentation using region growing

        Args:
            img (sitk.Image): NIfTI input image file
            seed_points (list): List of (x, y, z) integer seed points

        Returns:
            sitk.Image: region growing segmentation as sitk.Image
        """
        point_acquisition_interface = gui.PointDataAquisition(img)
        point_acquisition_interface.set_point_indexes(seed_points)
        initial_seed_point_indexes = point_acquisition_interface.get_point_indexes()
        print("[1/6] Received seed points ...")

        # N4 bias field correction
        print("[2/6] Running N4 bias field correction ...")
        img_float32 = cast_image_to_float32(img)  # helper, equivalent to sitk.Cast(img, sitk.sitkFloat32)
        corrected_img = sitk.N4BiasFieldCorrection(img_float32)
        # NOTE: the following line was overwriting (i.e. discarding) the N4
        # result; keep it commented out unless you want to bypass the correction
        # corrected_img = img_float32

        # Denoising
        print("[3/6] Running denoising ...")
        denoised_img = sitk.CurvatureFlow(image1=corrected_img,
                                          timeStep=0.1,  # 0.125
                                          numberOfIterations=5)

        # Connected threshold region growing - ACTUAL REGION GROWING, SEE NEXT CODE SNIPPET
        print("[4/6] Running connected threshold region growing ...")
        seg_explicit_thresholds = region_growing(denoised_img, initial_seed_point_indexes)

        # Voting binary hole filling - CURRENTLY UNUSED
        print("[5/6] Running voting binary hole filling ...")
        imgWhiteMatterNoHoles = sitk.VotingBinaryIterativeHoleFilling(
            image1=seg_explicit_thresholds,
            radius=[2] * 3,
            maximumNumberOfIterations=10,
            majorityThreshold=1,
            backgroundValue=0.0,
            foregroundValue=1.0
        )

        # Binary morphological closing
        print("[6/6] Running binary morphological closing ...")
        vectorRadius = (1, 1, 1)
        kernel = sitk.sitkBall
        seg_explicit_thresholds_clean = sitk.BinaryMorphologicalClosing(
            seg_explicit_thresholds, vectorRadius, kernel
        )

        return seg_explicit_thresholds_clean
    def region_growing(img, seed_points, lower=80, upper=340, modality='axial'):
        # Maximum number of pixels allowed in the segmentation, calculated by
        # averaging the example segmentations
        modality_maximum_pixel_map = {'axial': 12000}
        max_num_pixels_in_seg = modality_maximum_pixel_map[modality]

        num_of_pixels = sys.maxsize
        seg_explicit_thresholds = None

        while num_of_pixels > max_num_pixels_in_seg and upper > 0 and lower < 210:
            # Randomly tighten either the lower or the upper threshold until
            # the segmentation no longer contains too many pixels
            if random.randint(0, 1) == 1:
                lower += 0.05
            else:
                upper -= 0.05

            # SimpleITK ConnectedThreshold
            seg_explicit_thresholds = sitk.ConnectedThreshold(
                img, seedList=seed_points, lower=lower, upper=upper
            )

            num_of_pixels = np.count_nonzero(
                sitk.GetArrayFromImage(seg_explicit_thresholds) == 1
            )
            print(f"num_of_pixels: {num_of_pixels} | upper: {upper} | lower: {lower}")

        return seg_explicit_thresholds

The reason I randomly increase the lower threshold and decrease the upper threshold (by 0.05 per step) is that I have seen, through trial and error, that the narrower the threshold gap, the smaller the generated area. The average area (number of 1s in the image) of a 'model' segmentation is, from my calculations, around 12000. However, I understand that this approach is quite unintuitive; how else could I find the best thresholds to use? The segmented region is very incorrect, even though I have provided so many seed points. I have also tried truncating the seed points, but this hasn't helped. Please note the following:

  • The seed points get 'less accurate' the further they are from the start (i.e. nearer the end of the array).
  • I need to do this for several images, not just one, and the intensity values in the region of interest vary from image to image. I am unsure how to handle this; I feel I will need some adaptive region-growing algorithm (one idea is sketched below).
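
One idea I have considered (but not validated) is to derive the thresholds from the intensity statistics at the seed points themselves, rather than adjusting them at random. A minimal sketch, reusing names from my code above; the multiplier k is a hypothetical tuning parameter (this is also the idea behind ConfidenceConnected, as far as I understand):

    import numpy as np
    import SimpleITK as sitk

    def estimate_thresholds(img: sitk.Image, seed_points: list, k: float = 2.5):
        """Estimate thresholds as mean +/- k*stddev of the seed-point intensities."""
        arr = sitk.GetArrayFromImage(img)  # numpy indexes as [z, y, x]
        intensities = np.array([arr[z, y, x] for (x, y, z) in seed_points],
                               dtype=np.float64)
        return (float(intensities.mean() - k * intensities.std()),
                float(intensities.mean() + k * intensities.std()))

    lower, upper = estimate_thresholds(denoised_img, initial_seed_point_indexes)
    seg = sitk.ConnectedThreshold(denoised_img, seedList=initial_seed_point_indexes,
                                  lower=lower, upper=upper)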

The end of the console output is as follows:

...
num_of_pixels: 12251 | upper: 225.1999999999739 | lower: 192.45000000001193
num_of_pixels: 12237 | upper: 225.1999999999739 | lower: 192.50000000001194
num_of_pixels: 12172 | upper: 225.1499999999739 | lower: 192.50000000001194
num_of_pixels: 12145 | upper: 225.09999999997387 | lower: 192.50000000001194
num_of_pixels: 9860 | upper: 225.04999999997386 | lower: 192.50000000001194
[5/6] Running voting binary hole filling ...
[6/6] Running binary morphological closing ...

Did you look at other region growing filters? For example, ConfidenceConnected.

Do your seed points form a centerline of the intestine? If so you could use that: limit the segmentation to stay within e.g. 1cm of the centerline.

I also feel that TubeTK could be helpful here. @Stephen_Aylward might have more concrete ideas.


Hello @aliabidi,

Using the ConnectedThreshold method will work in very constrained settings (rarely on clinical images), primarily because the thresholds need to be set directly by the user. Possibly ConfidenceConnected will do better, as there the user sets the thresholds indirectly and relies more on the data. This Jupyter notebook illustrates how to work with the region growing filters.
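
For example, a minimal invocation might look like this (the parameter values are just starting points to tune, and I am reusing the denoised image and seed list from your code):

    import SimpleITK as sitk

    # Thresholds are derived from the mean and standard deviation of the
    # intensities around the seeds, so only the multiplier and iteration
    # count need tuning
    seg = sitk.ConfidenceConnected(
        denoised_img,
        seedList=initial_seed_point_indexes,
        numberOfIterations=2,   # statistics are re-estimated from the grown region
        multiplier=2.0,         # accept mean +/- multiplier * stddev
        initialNeighborhoodRadius=1,
        replaceValue=1,
    )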

Some observations with respect to the input:

  1. There appear to be motion artifacts in the region of interest (coronal view, bottom right): the bumps/indentations on the person's side.
  2. Using all 1200 centerline points as seeds does not reflect a viable scenario, as no human operator will provide this amount of input. Use a more realistic scenario, even if the results are sub-optimal. If the segmentation were great with 1200 seed points, that would not reflect actual performance when a human operator provides only a few seed points (fewer than 20).

Before going down the coding path, I highly recommend exploring the segmentation algorithms available in 3D Slicer. That will allow you to cover many more options very quickly, without the need to implement something specific in code.


Thank you for the reply! I wanted to ask: would having lots of seed points cause the segmentation to be less accurate? Since this is a 3D image, the points span several slices. In terms of the goal I am trying to achieve, would it matter that a human operator could not provide this many seed points, given that I already have the points?

Hi Dženan, thanks for the ideas! How could I limit the segmentation to stay within 1cm of the centerline? Does SimpleITK provide any feature for this?

If you can rasterize the centerline as a one-pixel-wide line, you could use a signed distance map filter (e.g. SignedMaurerDistanceMap) to compute distances to it in the image. You could take that distance into account somehow, for example by thresholding it at 10.0 mm away.
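
A sketch of what that could look like, assuming you already have the centerline as a binary image (centerline_img, a hypothetical name) with the same geometry as the scan, and the segmentation from your code:

    import SimpleITK as sitk

    # Distances to the centerline in physical units (mm), since
    # useImageSpacing is enabled
    dist = sitk.SignedMaurerDistanceMap(centerline_img,
                                        insideIsPositive=False,
                                        squaredDistance=False,
                                        useImageSpacing=True)
    near_centerline = dist < 10.0  # binary mask: within 10 mm of the centerline
    # Keep only the part of the region-growing result that lies near the centerline
    constrained_seg = sitk.Mask(seg_explicit_thresholds, near_centerline)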


But I don't know if rasterization can be done using SimpleITK.


I fully agree. The best approach is to first explore which methods work well: use manual methods (paint, draw, threshold, import models, etc.) to define some inputs, and use semi-automatic methods (Grow from seeds, Fast marching, Watershed, etc.) to improve the results. Once you have a good semi-automatic workflow, you can automate the trivial manual steps. You can go even further and use the segmentations to train a neural network, which may be able to provide segmentations fully automatically.

Regarding your specific segmentation task: there is no such thing as a perfect segmentation, so it would be very important to know the clinical purpose of the segmentation, which details you most need to extract accurately, and which errors are tolerable.

First of all, if you just need visualization then you don't need to segment the image at all; you can use volume rendering (raycasting), because the contrast agent already has a somewhat distinctive intensity range. It is much easier to set up a beautiful volume rendering (you only need to tune 1-2 threshold values) than to create an accurate binary segmentation. Setting up an endoscopic visualization (in which the lumen of the intestines is transparent so that you can fly through it) is fairly straightforward by editing the volume rendering transfer functions. You can also mask out irrelevant regions before volume rendering with a very rough approximate segmentation to reduce clutter (e.g., threshold, keep selected island, grow by a small margin, use to mask volume).

If you need a segmentation: since the images are contrasted, thresholding may get you usable results. The challenge is that if you use too narrow an intensity range then the region will have lots of holes inside, but if you use a wider intensity range then the segmentation may leak through the thin walls of the intestines.

In 3D Slicer's Segment Editor module, you can use the Threshold effect to highlight the contrast-filled region in the image (make the intensity range large enough to completely fill the contrasted region) and then choose Use for masking (anything outside will be excluded from the segmentation). This mask may include various image regions, so use the Paint effect to paint seeds inside the intestines, and paint with a different segment in all other regions that you are not interested in. Then use the Grow from seeds method to grow complete segments from the seeds. After Grow from seeds is initialized, you can paint further scribbles to fine-tune the results.

The main challenge is to prevent the intestine wall from being added to the intestine lumen segment (because that would result in holes that connect two different sections). You may be able to fill the holes by adding more seeds inside the walls or by changing the Editable intensity range. You may also try to preprocess the image using anisotropic filtering to make the contrast filling more uniform, without blurring the boundary between the lumen and the wall of the intestines. There are many other tools and options, but without knowing the clinical task it would be too much to discuss them all.
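
If you want to prototype that kind of anisotropic filtering outside Slicer, a sketch using SimpleITK's gradient anisotropic diffusion filter might look like this (parameter values are illustrative only):

    import SimpleITK as sitk

    # Edge-preserving smoothing: makes the contrast filling more uniform
    # while keeping the lumen/wall boundary sharp
    img_float = sitk.Cast(img, sitk.sitkFloat32)
    smoothed = sitk.GradientAnisotropicDiffusion(img_float,
                                                 timeStep=0.0625,  # stability limit for 3D
                                                 conductanceParameter=2.0,
                                                 numberOfIterations=5)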

If you are willing to invest time into creating many segmentations and are interested in "AI"-based segmentation tools, you can use MONAILabel and its 3D Slicer plugin to segment data sets and train a model.


A pixel can easily be set by img[seed_idx] = 1.
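
So rasterizing the centerline could be a simple loop; a sketch, reusing names from the code in the question:

    import SimpleITK as sitk

    # Create an empty binary image with the same geometry as the scan and
    # set each centerline index to 1
    centerline_img = sitk.Image(img.GetSize(), sitk.sitkUInt8)
    centerline_img.CopyInformation(img)          # copy origin, spacing, direction
    for seed_idx in initial_seed_point_indexes:  # (x, y, z) integer indices
        centerline_img[seed_idx] = 1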
