Convert CT mask to PET mask

Hello

Apologies if my question is outside the scope of this forum, but here goes:

I am in a situation where I have to convert a mask created on a CT image into a mask that applies to a PET image, e.g. from 480x480x600 to 256x256x156 voxels, with the ultimate goal of taking SUV “readings/measurements” of the region within the PET ROI.
Normally people have used an old proprietary program for this transformation, but in a project I am trying to do the same in Python. Until now I have simply been using pydicom and numpy for all my manipulations, as the images and ROIs come as DICOM files and an RTSTRUCT, respectively.

My initial solution has been to:

  • Find the contour points for the CT mask.
  • Transform the CT pixel coordinates to CT mm coordinates.
  • Find the “nearest/corresponding” mm coordinates in the PET volume using Euclidean distance.
  • Convert the PET mm coordinates to PET pixels.
  • Fill the resulting PET contour points (a simplified sketch follows below).
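In code, the pipeline above looks roughly like this (simplified; it assumes axis-aligned images, in which case the Euclidean nearest neighbour on a regular grid reduces to rounding to the nearest index; all variable names are placeholders):

```python
import numpy as np

def pixel_to_mm(indices, origin, spacing):
    """CT voxel indices (N, 3) -> patient mm coordinates (N, 3)."""
    return origin + indices * spacing

def mm_to_pixel(points_mm, origin, spacing):
    """Patient mm coordinates -> nearest PET voxel indices. On an
    axis-aligned regular grid, rounding to the nearest index is the
    Euclidean nearest-neighbour lookup."""
    return np.round((points_mm - origin) / spacing).astype(int)

# origin/spacing are (3,) arrays taken from ImagePositionPatient,
# PixelSpacing and the slice spacing of the respective series.
# ct_contour_idx: (N, 3) array of CT contour voxel indices.
#
#   points_mm = pixel_to_mm(ct_contour_idx, ct_origin, ct_spacing)
#   pet_contour_idx = mm_to_pixel(points_mm, pet_origin, pet_spacing)
#
# ...then fill the resulting PET contour to obtain the mask.
```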

With this method I am able to reproduce the max, mean and std values to within +/- 10%, but I believe there must be a smarter, more correct and robust way.

I have tried to deduce how the proprietary software works, and I believe it tries to conserve the volume: it sometimes makes strange cut-offs in the masks, but always ends up with a near-identical volume to the CT mask.
I have thought about resampling the images with SimpleITK, but fear that it might distort the true values.

So I will be very grateful for any input or suggestions you might have, especially if I can use SimpleITK to solve this.

Any help is much appreciated.

Regards
Jakob


ITK and SimpleITK can be used to solve this. There are two choices for computing statistics: LabelStatisticsImageFilter and LabelIntensityStatisticsImageFilter.

I am not sure whether you can supply the feature image (PET) and the label image (from CT) at different resolutions. If you can, you will not need to resample the mask.

Resampling just picks the closest voxel in mm coordinates, as you described, and does not try to preserve volume.
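For reference, nearest-neighbor mask resampling in SimpleITK looks roughly like this (a sketch; `ct_mask` and `pet_image` are assumed to be `sitk.Image` objects carrying the correct origin/spacing/direction):

```python
import SimpleITK as sitk

# Resample the CT-grid mask onto the PET grid. Nearest-neighbor
# interpolation keeps the mask binary; it picks the closest voxel
# in physical (mm) space, with no volume-preservation guarantee.
pet_mask = sitk.Resample(
    ct_mask,                   # image to resample (label image on CT grid)
    pet_image,                 # reference image defining the output grid
    sitk.Transform(),          # identity: same patient coordinate frame
    sitk.sitkNearestNeighbor,  # required for labels (no label mixing)
    0,                         # default (background) value
    ct_mask.GetPixelID(),
)
```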


Hello @Jakob_Riber_Rasmusse,

Usually PET voxel sizes are larger than the CT ones. For more accurate computations (closer to the underlying continuous signals), resampling the original PET to a higher resolution is preferable to resampling the high-frequency mask to a lower resolution.

I would recommend: (1) resample the PET image onto the CT grid (you can try the various interpolators, not just the default linear one); (2) follow @dzenanz’s advice and use the LabelStatisticsImageFilter on the resampled PET and the CT mask.
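A minimal sketch of that two-step pipeline, assuming `pet_image` and `ct_mask` are already loaded as `sitk.Image` objects with correct physical metadata, and assuming the ROI has label value 1:

```python
import SimpleITK as sitk

# (1) Resample the PET image onto the CT grid (upsampling).
# Linear is the default; other interpolators (e.g. sitk.sitkBSpline)
# are worth trying, as noted above.
pet_on_ct = sitk.Resample(
    pet_image,
    ct_mask,            # reference image: defines output size/spacing/origin
    sitk.Transform(),   # identity, assuming a shared frame of reference
    sitk.sitkLinear,
    0.0,
    pet_image.GetPixelID(),
)

# (2) Compute statistics of the resampled PET inside the CT mask.
stats = sitk.LabelStatisticsImageFilter()
stats.Execute(pet_on_ct, ct_mask)

label = 1  # assumed label value of the ROI in the mask
print(stats.GetMaximum(label), stats.GetMean(label), stats.GetSigma(label))
```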


Hello @zivy and @dzenanz

Thank you for answering! I have looked into the functions you mention, and they do look like what I need for extracting the statistics.
My primary concern is with the transformation between PET and CT coordinates.

I have actually thought about doing the resampling, Zivy, but I have been concerned that the extracted values will be biased, especially the max value. Is this something you have experience with?

And do you know whether it is standard practice in the PET/CT world to upsample like this and read SUV values? If you know of any articles describing and discussing this, they would be of great interest to me, as I have been unable to find any myself.

/Jakob

Hello @Jakob_Riber_Rasmusse,

It really has been a long while since I worked with PET/CT, so my knowledge is a bit dated (circa 2007). A key assumption of these systems is that the transformation between the PET and CT coordinate systems is known with high accuracy, as the CT image is used for attenuation correction of the PET data. Errors in this transformation result in errors in the attenuation correction and, consequently, in all quantities you derive from the PET image.

This is the reason why I feel comfortable upsampling the PET to the CT spacing and using the CT mask on the upsampled PET.

Bottom line: as your goal is to replicate the behavior of a black box to within a certain percentage error, I’d suggest that you use multiple PET/CT images, obtain the output from the black box, and compare it to the suggested approach. As the level of effort is low, a short program using SimpleITK, you’ll get a definitive answer relatively easily.
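Something as simple as the following would do (entirely schematic; `compute_stats` and `reference_stats` are placeholders for your SimpleITK pipeline and the black box output):

```python
# Schematic comparison: run both pipelines over several studies and
# report the percentage differences (all names are placeholders).
for study_id, ref in reference_stats.items():       # black box output
    mean, maximum, sigma = compute_stats(study_id)  # SimpleITK pipeline
    print(study_id,
          f"mean: {100 * (mean - ref['mean']) / ref['mean']:+.1f}%",
          f"max: {100 * (maximum - ref['max']) / ref['max']:+.1f}%")
```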

Hey @zivy

Thank you for the reply, your reasoning makes good sense. Some initial testing with upsampling of the PET volume also shows promising results, which is great.

Will try to get back with some final results.

/Jakob

Hi @Jakob_Riber_Rasmusse

This may be too late, but for the benefit of others looking to do the same: there may be easier ways to do this with SimpleITK, but this is what I do.

Assuming the CT and PET images have the same FrameOfReferenceUID (FOR), as they likely would for any given study, I would create a SimpleITK Image representation of the CT mask, such that the resulting “label image” contains the same image attributes as the CT image itself.

Then resample the CT label image to the PET image’s grid. If all goes well, the resampled CT label image will yield the desired PET label image, and a simple conversion to a NumPy array will enable further processing.
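In code, that workflow might look like this (a sketch; `ct_image` is the SimpleITK CT volume and `mask_array` is a binary mask rasterized from the RTSTRUCT, both assumptions on my part):

```python
import SimpleITK as sitk

# Build a label image from the numpy mask (shape must be (z, y, x),
# matching the CT volume) and stamp it with the CT geometry
# (origin, spacing, direction), so it lives on the CT grid.
ct_label = sitk.GetImageFromArray(mask_array.astype("uint8"))
ct_label.CopyInformation(ct_image)

# Resample the CT label image onto the PET grid; the shared
# FrameOfReferenceUID means an identity transform suffices.
pet_label = sitk.Resample(
    ct_label, pet_image, sitk.Transform(),
    sitk.sitkNearestNeighbor, 0, ct_label.GetPixelID(),
)

# Back to numpy for further processing.
pet_label_array = sitk.GetArrayFromImage(pet_label)
```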

If the CT and PET images do not share the same FOR, register the CT image to the PET image. A LandmarkBasedTransformInitializer using fiducials in both images may be required if a geometry- or moments-based initialization of the registration doesn’t yield good results. Once you have a good registration result, resample the CT label image to the PET image grid using the final transform from the registration.
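As a rough illustration of that second path, here is a sketch using SimpleITK’s registration framework (the metric, optimizer, and parameter values are illustrative defaults, not tuned recommendations; `ct_image`, `pet_image`, and `ct_label` are assumed to exist as above):

```python
import SimpleITK as sitk

# Rigid registration of CT (moving) onto PET (fixed), geometry-based
# initialization; swap in LandmarkBasedTransformInitializer if needed.
initial = sitk.CenteredTransformInitializer(
    pet_image, ct_image, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY,
)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200,
)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

final_transform = reg.Execute(sitk.Cast(pet_image, sitk.sitkFloat32),
                              sitk.Cast(ct_image, sitk.sitkFloat32))

# Resample the CT label image onto the PET grid using the final
# transform (which maps PET/fixed points into CT/moving space).
pet_label = sitk.Resample(
    ct_label, pet_image, final_transform,
    sitk.sitkNearestNeighbor, 0, ct_label.GetPixelID(),
)
```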

Regards,
Cristiano