Saving LabelOverlay Image as DICOM series

Hi!

I have a DICOM series, and with the segmentations I create a LabelOverlay image. When I write it out as a DICOM series, I modify the following data tags in the created image to get an RGB image:

  image_slice.SetMetaData("0028|0002", str(3)) #Samples per Pixel Attribute
  image_slice.SetMetaData("0028|0004", "RGB")

The LabelOverlay is saved as a DICOM series, but Slicer is not able to load it; it gives the following error: Could not load: prediction as a Scalar Volume.
When I check one pixel of the LabelOverlay image it is in fact not a scalar, but (-1024, -1024, -1024). I thought setting the Samples per Pixel attribute to 3 would fix this.
What tag should I modify to be able to load the LabelOverlay DICOM series into Slicer?

This question is better asked on the Slicer forum. I suspect that if you want the image to be interpreted as RGB, the values should be in the [0,255] range; -1024 is suggestive of a CT intensity image.
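
For example, a minimal sketch of bringing the CT into 8-bit range before building the RGB overlay (the input path is hypothetical):

import SimpleITK as sitk

ct = sitk.ReadImage("ct_volume.nii.gz")  # hypothetical input
# Map the CT intensities (e.g. [-1024, 3071]) into [0, 255] and cast to 8-bit,
# the per-channel range expected for an RGB image.
ct_uint8 = sitk.Cast(sitk.RescaleIntensity(ct, outputMinimum=0, outputMaximum=255), sitk.sitkUInt8)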

Is your result a valid DICOM image?

Check it with dciodvfy.
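
For example, a quick sketch that runs it over every written slice (assuming dciodvfy from dicom3tools is installed and on the PATH; output_dicom_dir is a placeholder for the directory the series was written to):

import glob
import subprocess

output_dicom_dir = "output_dicom_dir"  # placeholder for the written series directory
for f in sorted(glob.glob(output_dicom_dir + "/*.dcm")):
    # dciodvfy prints warnings and errors about each file's DICOM attributes
    subprocess.run(["dciodvfy", f])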

Hello @Franciska,

If you are working with Slicer and want to visualize your segmentation overlaid onto the anatomical data (MR, CT etc.) you don’t have to save the segmentation as DICOM. If this is not a requirement, then read on, otherwise follow @dzenanz’s and @dclunie’s advice.

Slicer supports seg.nrrd, a nrrd file with specific metadata which Slicer auto-magically recognizes as a segmentation overlay. The following code may be of use:

import numpy as np
import SimpleITK as sitk

def update_label_image_seg_nrrd_dict(sitk_label_image, label_to_name_dict=None, background_value=0):
    # Color lookup table (LUT) defined in itkLabelToRGBFunctor which claims that these
    # are "a good selection of distinct colors for plotting and overlays".
    # Also tried the color lookup tables used by fsleyes (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSLeyes),
    # mgh-cma-freesurfer.lut etc., larger LUT, but the colors were not appropriate, too many similar colors
    # so visually hard to distinguish between different labels even if the colors are not exactly the same.
    # The downside to the ITK LUT is that it only has 20 entries, afterwards the colors are "reused"
    # for a different label (label0 and label21 will have the same color). If this is an issue and you
    # really need a unique color for each label then define a larger color LUT.
    lut = ["0.5450980392156862 0.5450980392156862 0.0",
    "1.0 0.24313725490196078 0.5882352941176471",
     "0.5450980392156862 0.2980392156862745 0.2235294117647059",
     "0.0 0.5254901960784314 0.5450980392156862",
     "0.803921568627451 0.40784313725490196 0.2235294117647059",
     "0.7490196078431373 0.24313725490196078 1.0",
     "0.0 0.5450980392156862 0.27058823529411763",
     "0.7803921568627451 0.08235294117647059 0.5215686274509804",
     "0.803921568627451 0.21568627450980393 0.0",
     "0.12549019607843137 0.6980392156862745 0.6666666666666666",
     "0.41568627450980394 0.35294117647058826 0.803921568627451",
     "1.0 0.0784313725490196 0.5764705882352941",
     "0.27058823529411763 0.5450980392156862 0.4549019607843137",
     "0.2823529411764706 0.4627450980392157 1.0",
     "0.803921568627451 0.30980392156862746 0.2235294117647059",
     "0.0 0.0 0.803921568627451",
     "0.5450980392156862 0.13333333333333333 0.3215686274509804",
     "0.5450980392156862 0.0 0.5450980392156862",
     "0.9333333333333333 0.5098039215686274 0.9333333333333333",
     "0.5450980392156862 0.0 0.0"]

    labels = set(np.unique(sitk.GetArrayViewFromImage(sitk_label_image)))
    labels.discard(background_value)
    # Each label has specific settings.
    for i, label in enumerate(labels):
        # If we provide this parameter we expect the dictionary to contain names for
        # all labels, otherwise we get an exception (be consistent).
        if label_to_name_dict:
            sitk_label_image.SetMetaData(f"Segment{i}_Name", label_to_name_dict[label])
        sitk_label_image.SetMetaData(f"Segment{i}_LabelValue", str(label))
        sitk_label_image.SetMetaData(f"Segment{i}_ID", str(i))
        sitk_label_image.SetMetaData(f"Segment{i}_ColorAutoGenerated", "0")
        sitk_label_image.SetMetaData(f"Segment{i}_Color", lut[i%len(lut)])
        sitk_label_image.SetMetaData(f"Segment{i}_Layer", "0")
        sitk_label_image.SetMetaData(f"Segment{i}_NameAutoGenerated", "0")
        sitk_label_image.SetMetaData(f"Segment{i}_Tags", "Segmentation category and type - 3D Slicer General Anatomy list~SCT^85756007^Tissue~SCT^85756007^Tissue~^^~Anatomic codes - DICOM master list~^^~^^|")


segmentation_label_image = X
label_to_name_dict = {1:"right lung", 5:"left lung"}
update_label_image_seg_nrrd_dict(segmentation_label_image, label_to_name_dict)

sitk.WriteImage(
    segmentation_label_image,
    "segmentation_visualization.seg.nrrd",
    useCompression=True,
    compressor="gzip",
)

It gives the same error when the values are in the [0,255] range. Thanks for the suggestion, I also asked this on the Slicer forum.

Here is the code. The slices, loaded in one-by-one, all look good, but I still can’t load them as one 3D volume.
If I save it as nii.gz it looks good. Unfortunately, it is a requirement to save it as a DICOM series.
(I also tried switching the modality to “OT”, but it didn’t make a difference.)

import SimpleITK as sitk
import os
import time

class SaveOutlinedPrediction():
    def __init__(self,ctpath,predictionpath,savepath):
        self.ctpath=ctpath
        self.predictionpath=predictionpath
        self.savepath=savepath
        self.originalct=self.readDicomSerie()
        self.resampleToOriginalFilter=self.resampleToOriginal(self.originalct)


    def __call__(self):
        predictionct=sitk.ReadImage(self.predictionpath)
        rgboriginal = sitk.Cast(sitk.RescaleIntensity(self.originalct), sitk.sitkUInt8)
     
        red = [255, 0, 0]
        green = [0, 255, 0]
        blue = [0, 0, 255]
        
        resampledpred=self.resampleToOriginal(predictionct)
        labelpred=sitk.Cast(resampledpred, sitk.sitkLabelUInt8)

        contour_overlaid_image = sitk.LabelMapContourOverlay(
            labelpred,
            rgboriginal,
            opacity=1,
            contourThickness=[4, 4, 4],
            dilationRadius=[3, 3, 3],
            colormap=red + green + blue,
        )

        self.writeDicom(contour_overlaid_image)
        #self.writeNii(contour_overlaid_image)


  
    def readDicomSerie(self):
        reader = sitk.ImageSeriesReader()
        dicom_names = reader.GetGDCMSeriesFileNames(self.ctpath)
        reader.SetFileNames(dicom_names)
        reader.MetaDataDictionaryArrayUpdateOn()
        reader.LoadPrivateTagsOn()
        image = reader.Execute()
        return image



    def resampleToOriginal(self,imageToResample):
        origdirection=self.originalct.GetDirection()
        origorigin=self.originalct.GetOrigin()
        resample = sitk.ResampleImageFilter()
        resample.SetInterpolator(sitk.sitkNearestNeighbor)
        resample.SetOutputDirection(origdirection)
        resample.SetOutputOrigin(origorigin)
        resample.SetOutputSpacing(self.originalct.GetSpacing())
        resample.SetSize(self.originalct.GetSize())


        imageToResample.SetOrigin(origorigin)
        imageToResample.SetDirection(origdirection)

        resampled=resample.Execute(imageToResample)
        return resampled


    
    def writeNii(self,outlinedImage):
        sitk.WriteImage(outlinedImage, self.savepath, useCompression = True,compressionLevel = 1)


    def writeDicom(self,outlinedImage):
        if not os.path.exists(self.savepath):
            os.makedirs(self.savepath)

        # Load original series for meta
        series_IDs = sitk.ImageSeriesReader.GetGDCMSeriesIDs(self.ctpath)
        series_file_names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames(self.ctpath, series_IDs[0])
        self.series_reader = sitk.ImageSeriesReader()
        self.series_reader.SetFileNames(series_file_names)
        self.series_reader.MetaDataDictionaryArrayUpdateOn()
        self.series_reader.LoadPrivateTagsOn()
        self.series_reader.Execute()


        report_writer = sitk.ImageFileWriter()
        report_writer.KeepOriginalImageUIDOn()


        tags_to_copy = [
            "0010|0010",  # Patient Name
            "0010|0020",  # Patient ID    def writeDICOMFromMeta(self):
            "0020|0011",  # Series Number
            "0010|0030",  # Patient Birth Date
            "0020|000d",  # Study Instance UID, for machine consumption
            "0020|0010",  # Study ID, for human consumption
            "0008|0020",  # Study Date
            "0008|0030",  # Study Time
            "0008|0050",  # Accession Number
            "0008|1030",  # Study Description
        ]

        modification_time = time.strftime("%H%M%S")
        modification_date = time.strftime("%Y%m%d")
        direction = outlinedImage.GetDirection()
        series_tag_values = [
            (k, self.series_reader.GetMetaData(0, k))
            for k in tags_to_copy
            if self.series_reader.HasMetaDataKey(0, k)
        ] + [
            ("0008|0031", modification_time),  # Series Time
            ("0008|0021", modification_date),  # Series Date
            ("0008|0008", "DERIVED\\SECONDARY"),  # Image Type
            (
                "0020|000e",  # Series Instance UID
                "1.2.826.0.1.3680043.2.1125."
                + modification_date
                + ".1"
                + modification_time,
            ),
            (
                "0020|0037",  # Image Orientation (Patient)
                "\\".join(
                    map(
                        str,
                        (
                            direction[0],
                            direction[3],
                            direction[6],
                            direction[1],
                            direction[4],
                            direction[7],
                        ),
                    )
                ),
            ),
            (
                "0008|103e",  # Series Description
                self.series_reader.GetMetaData(0, "0008|103e") + "_prediction",
            ),
        ]


        for i in range(outlinedImage.GetDepth()):
            image_slice = outlinedImage[:, :, i]

            for tag, value in series_tag_values:
                image_slice.SetMetaData(tag, value)

            image_slice.SetMetaData("0008|0012", time.strftime("%Y%m%d"))  # Instance Creation Date
            image_slice.SetMetaData("0008|0013", time.strftime("%H%M%S"))  # Instance Creation Time
            image_slice.SetMetaData("0028|0004", "RGB")   # Photometric Interpretation
            image_slice.SetMetaData("0028|0002", str(3))  # Samples per Pixel
            image_slice.SetMetaData("0028|0100", str(8))  # Bits Allocated
            image_slice.SetMetaData("0028|0101", str(8))  # Bits Stored
            image_slice.SetMetaData("0028|0102", str(7))  # High Bit
            image_slice.SetMetaData("0028|0103", str(0))  # Pixel Representation (unsigned)
            image_slice.SetMetaData("0020|0013", str(i))  # Instance Number
            image_slice.SetMetaData("0008|0060", "CT")    # Modality
            # Image Position (Patient)
            image_slice.SetMetaData(
                "0020|0032",
                "\\".join(
                    map(str, outlinedImage.TransformIndexToPhysicalPoint((0, 0, i)))
                ),
            )

            report_writer.SetFileName(os.path.join(self.savepath, str(i) + ".dcm"))
            report_writer.Execute(image_slice)




if __name__ == "__main__":

    ctpath=r'/dicomfile/original/SE000002'
    predpath=r'/dicomfile/lungsegm/SE000002.nii.gz'
    savepath=r'/dicomfile/outline/SE000002'

    SaveOutlinedPrediction(ctpath,predpath,savepath)()

Hello @Franciska,

Nothing jumps out as being incorrect. Can you share the output?

One comment: you don’t need to read the whole original series to get the “metadata”. Reading just the metadata from the first image should suffice:

file_reader = sitk.ImageFileReader()
# Use the first image in the first series in the directory
file_reader.SetFileName(sitk.ImageSeriesReader.GetGDCMSeriesFileNames(self.ctpath)[0])
file_reader.ReadImageInformation()

#print the metadata just to show that it's there
for k in file_reader.GetMetaDataKeys():
  print(file_reader.GetMetaData(k))

Thanks for the suggestion @zivy , I fixed that now.
I uploaded example files here

Hello @Franciska,

Looking at the color volume image (segmentation contours overlaid onto the CT, saved as an RGB volume in DICOM format):

  1. Running @dclunie’s dciodvfy on the DICOM files obviously reports errors.
  2. If I try to read the color DICOM images as a volume using SimpleITK, things look as expected: GDCM identifies them as a single series, and the color contours and CT make sense when viewed using Fiji (sitk.Show); see the read-back sketch after this list.
  3. Slicer does not identify the set of slices as belonging to the same series, but can load each of them independently and display them with the color overlay.
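
A minimal read-back sketch of that check (using the savepath directory from the code above; sitk.Show assumes an external viewer such as Fiji is configured):

import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
# GDCM groups the written slices into a single series and returns the file names in order
reader.SetFileNames(sitk.ImageSeriesReader.GetGDCMSeriesFileNames("/dicomfile/outline/SE000002"))
rgb_volume = reader.Execute()
sitk.Show(rgb_volume, "contour overlay")  # opens the configured external viewer (e.g. Fiji)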

At this point I am not sure whether this is a Slicer issue, or whether the aggregation of the slices into a volume is indeed incorrect because of a problem with the DICOM tags (I don’t think so).

@lassoan, @dclunie are likely to have a better idea of what is causing the issue for Slicer.

@lassoan, can Slicer accept a labelmap in DICOM format? Based on the documentation it looks like it cannot, but possibly there is a way to do it?


Unfortunately Orthanc can’t load it either.
Another weird thing: when I drag the directory into Slicer and it asks me to select a reader, if I select Any Data instead of Load directory into DICOM database, it loads properly.

In Slicer we have not implemented loading of 3D color images from DICOM because: (1) such images are not used clinically, and (2) they are not very useful in practice, as practically none of the commonly used 3D image processing, quantification, and visualization methods are suitable for RGB images, due to the low bit depth (just 8 bits per channel) and the 3 channels (it is not clear how the color information should be utilized). Color images in DICOM are generally used for storing screen captures, which are always 2D or 2D+t, never 3D; and these images are just viewed in 2D, not processed or further analyzed.

The appropriate DICOM information object type for storing your “label overlay” (i.e., segmentation) is the DICOM Segmentation object. You can conveniently convert any ITK-supported image format to/from a DICOM Segmentation object using dcmqi.
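
A rough sketch of that conversion (assuming dcmqi’s itkimage2segimage converter is installed and on the PATH; the segmentation file, metadata JSON, and output name below are placeholders, and the flag names should be checked against the dcmqi documentation):

import subprocess

# Convert a label image (e.g. the segmentation saved as nrrd/nii.gz) plus the original
# CT series directory from the code above into a DICOM Segmentation object.
# The segmentation, metadata JSON, and output file names are placeholders.
subprocess.run([
    "itkimage2segimage",
    "--inputImageList", "segmentation.nrrd",
    "--inputDICOMDirectory", "/dicomfile/original/SE000002",
    "--inputMetadata", "segmentation_metadata.json",
    "--outputDICOM", "segmentation.SEG.dcm",
])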
