Resample to same Spacing and Size and Align to same Origin



I have a set of 4 DICOM CT volumes which I am reading with the SimpleITK ImageSeriesReader. Two of the images are CTs of the patient before and after surgery. The other two are binary segmentation masks segmented on those two CT images. Each segmentation is an ROI of its source CT.

All four images have different size, spacing, origin and direction. I have tried applying this GitHub gist to resample my images to size 512x512x512 and spacing 1x1x1. However, it seems to work only for the source CT images, not for the segmentation masks: the segmented structure always ends up in the center of the CT image instead of at the correct location.

This is my “raw” image with its corresponding segmentation.

And this is the result which is obviously incorrect.

This is my code:
import collections

import numpy as np
import SimpleITK as sitk

def resize_resample_images(images):
    # Define namedtuple to store the resampled images
    tuple_resized_imgs = collections.namedtuple('tuple_resized_imgs',
                                                ['img_plan', 'img_validation', 'ablation_mask', 'tumor_mask'])
    # %% Create reference image with zero origin, identity direction cosine matrix and isotropic spacing
    dimension = images.img_plan.GetDimension()
    reference_direction = np.identity(dimension).flatten()
    reference_size_x = 512
    reference_origin = np.zeros(dimension)
    data = [images.img_plan, images.img_validation, images.ablation_mask, images.tumor_mask]

    reference_physical_size = np.zeros(dimension)
    for img in data:
        reference_physical_size[:] = [(sz - 1) * spc if sz * spc > mx else mx for sz, spc, mx in
                                      zip(img.GetSize(), img.GetSpacing(), reference_physical_size)]
    reference_spacing = [reference_physical_size[0] / (reference_size_x - 1)] * dimension
    reference_size = [int(phys_sz / spc + 1) for phys_sz, spc in zip(reference_physical_size, reference_spacing)]

    reference_image = sitk.Image(reference_size, images.img_plan.GetPixelIDValue())
    reference_image.SetOrigin(reference_origin)
    reference_image.SetSpacing(reference_spacing)
    reference_image.SetDirection(reference_direction)
    reference_center = np.array(
        reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize()) / 2.0))

    data_resized = []
    for idx, img in enumerate(data):
        # %% Set transformation mapping from the reference space to the image space
        transform = sitk.AffineTransform(dimension)  # use an affine transform with 3 dimensions
        transform.SetMatrix(img.GetDirection())  # set the direction cosine matrix
        # TODO: set the center correctly
        # transform.SetCenter(reference_center)
        transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)  # set the translation
        # Modify the transformation to align the centers of the original and reference image instead of their origins
        centering_transform = sitk.TranslationTransform(dimension)
        img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize()) / 2.0))
        centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
        centered_transform = sitk.Transform(transform)
        centered_transform.AddTransform(centering_transform)

        # %% Set all output image parameters (origin, spacing, direction, size) via the reference image
        resampler = sitk.ResampleImageFilter()
        resampler.SetReferenceImage(reference_image)
        resampler.SetTransform(centered_transform)
        resampler.SetDefaultPixelValue(img.GetPixelIDValue())
        if idx == 0 or idx == 1:
            resampler.SetInterpolator(sitk.sitkLinear)  # CT intensity images
        elif idx == 2 or idx == 3:
            resampler.SetInterpolator(sitk.sitkNearestNeighbor)  # binary segmentation masks
        resampled_img = resampler.Execute(img)
        data_resized.append(resampled_img)

    resized_imgs = tuple_resized_imgs(img_plan=data_resized[0],
                                      img_validation=data_resized[1],
                                      ablation_mask=data_resized[2],
                                      tumor_mask=data_resized[3])
    return resized_imgs

(Ziv Yaniv) #2

I believe this Jupyter notebook does what you are looking for (sample multiple intensity images of various sizes onto the same sized image and do the same with segmentations).

Let me know if this doesn’t address your needs and I’ll take a closer look at your code.


Hi, thanks for your reply. I did follow that notebook; I assume the “Create reference domain” part is the one that interests me.

I assume something is incorrect with my transformation, because the segmentation mask is always placed in the center. Possibly my centering transform, but I can’t tell exactly what.


So I still get the incorrect placement of the segmentation.
I am providing the dimensions of the two images displayed in the pictures, in case it helps:
mask direction (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)
mask origin (-119.89099884033203, 29.718700408935547, -1002.7899780273438)
mask spacing (0.892, 0.892, 0.79998779296875)
mask size (17, 16, 16)
image direction (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)
image origin (-204.63099670410156, -228.06930541992188, -1170.0)
image spacing (0.892, 0.892, 0.800048828125)
image size (512, 512, 381)
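For what it’s worth, these numbers alone let you check that the mask’s physical extent lies inside the image’s extent, i.e. that the origins are mutually consistent. A quick sanity check with numpy (the coordinates below are rounded copies of the metadata above):

```python
import numpy as np

def physical_bounds(origin, spacing, size):
    """Axis-aligned physical bounding corners of a volume
    (identity direction assumed, as in the metadata above)."""
    origin = np.asarray(origin, dtype=float)
    extent = origin + (np.asarray(size) - 1) * np.asarray(spacing)
    return origin, extent

mask_lo, mask_hi = physical_bounds(
    (-119.891, 29.7187, -1002.79), (0.892, 0.892, 0.79998779296875), (17, 16, 16))
img_lo, img_hi = physical_bounds(
    (-204.631, -228.0693, -1170.0), (0.892, 0.892, 0.800048828125), (512, 512, 381))

# The mask's box must sit inside the image's box
inside = bool(np.all(mask_lo >= img_lo) and np.all(mask_hi <= img_hi))
print(inside)  # True for the values above
```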

(Ziv Yaniv) #5

I expected the size of the segmentation mask to be the same as the image. That is what the original code in the notebook assumed. Does the original segmentation have a different size/origin than the image? If yes, then the centering of the volumes is probably the culprit (try removing that).

Also, the condition for selecting the interpolator is strange; when idx==1 it will always be nearest neighbor, so I’m not sure why it also appears in the ‘if’ statement:

    if idx==0 or idx==1:
    elif idx==1 or idx==3:


Yes, the segmentation mask has a different size, origin and spacing.
I also assumed that the centering of the volumes was the culprit, so I removed it. However, I obtained only grey images after removing the centering. Maybe I am outside the physical space of the reference image.

As a solution to the different size, I was thinking of using SimpleITK.Paste(*args) to paste the mask into a volume the same size as its source. But I couldn’t find out whether it preserves the spacing and the other parameters.

Also, thank you, you correctly pointed out that the condition is wrong.
I am storing the images and their segmentation masks in a tuple, so indices 0 and 1 hold the original CT images, to which I want to apply linear interpolation. For the binary segmentation masks stored at indices 2 and 3 I want to use the nearest neighbor interpolator.

(Ziv Yaniv) #7

Use Paste if the original image and segmentation have the same spacing (I assume the segmentation was obtained on the original image so this is likely).

If this isn’t the case then use resample with identity transformation, zero as the default pixel value, and nearest neighbor interpolation (assuming here that the origin of the original segmentation places it in the correct location with respect to the original image):

new_segmentation = sitk.Resample(original_segmentation,
                                 original_image.GetSize(),
                                 sitk.Transform(),  # identity transformation
                                 sitk.sitkNearestNeighbor,
                                 original_image.GetOrigin(),
                                 original_image.GetSpacing(),
                                 original_image.GetDirection(),
                                 0,  # default pixel value
                                 original_segmentation.GetPixelID())


Thank you.
Some of the segmentation files have the same spacing as their original image, some do not. So I used PASTE if the spacing was similar and RESAMPLE if the spacing differed. Then I applied the Transformations and Resampled again to move them in the same space and have similar spacing for all the images.

It seems everything is working (judging visually), however I get some grey slices towards the end of the CT stack (counting in the z-direction starting at index 0). Is that correct? I assume it’s because some of the CTs have fewer slices than the reference size they were resampled onto. I am sorry if this is something obvious, I am just starting with medical image processing.

(Ziv Yaniv) #9

This approach sounds reasonable to me (I may be biased as I pointed you in this direction).

You are doing well for someone who has just started with medical imaging, hopefully the following will move you further along:

The main difference between medical images and other types of images is that they are spatial objects. That is, the CT image occupies a specific region in space with a known size (think of it as a box in 3D space).

This brings us to your question with respect to grey slices. The reference_image represents a box of certain dimensions (size*spacing), the largest size of expected patients, so that everyone will fit into this box without distorting their physical size. When you have a smaller patient they fit in the box with space to spare (grey slices). As a sanity check you can measure structures in the original CT and in your resampled CT and see that their size remained the same (fewer/more pixels but larger/smaller spacing between them).

As you are working with CT: during the initial code development it is good that you get these grey slices, showing you where the original data existed and where you were looking outside of the original (grey). When you are happy with the code you should switch to a more appropriate value, in the case of CT, a value of -1000 (Hounsfield value for air).

Looking at the code, you have:

    resampler.SetDefaultPixelValue(img.GetPixelIDValue())

This is incorrect usage of the SetDefaultPixelValue method. GetPixelIDValue refers to the type of the pixel, which unfortunately is a number and is thus accepted by the method without complaint. If you don’t set it, the default is zero; any arbitrary value you use is fine (including whatever GetPixelIDValue returned).