Resampling .nii to isotropic spacing + fixed size

Hi, I am trying to resample a set of 3D volumes to train a deep learning network.
I would like to resample them all to a specific size (say 64x64x64), but the catch is that, for ease (not a necessity) of computing a loss function, I would also like to keep the voxel spacing isotropic and fixed (say [1,1,1] or [0.5,0.5,0.5]).
Is this logically correct, or is it essential to keep the spacing as (original physical size)/(new image size)?

I think it is possible with interpolation and some size trickery. I wrote a function that generates images with the required spacing but a flexible image size. See if you can play with it and adapt it to what you want. Also, do share an update if it works for your purpose.

import SimpleITK as sitk

def make_isotropic(image, interpolator=sitk.sitkLinear, spacing=None):
    '''
    Many file formats (e.g. jpg, png,...) expect the pixels to be isotropic, i.e. the same
    spacing for all axes. Saving non-isotropic data in these formats will result in
    distorted images. This function makes an image isotropic via resampling, if needed.
    Args:
        image (SimpleITK.Image): Input image.
        interpolator: By default the function uses a linear interpolator. For
                      label images one should use the sitkNearestNeighbor interpolator
                      so as not to introduce non-existent labels.
        spacing (float): Desired spacing. If None is given, use the smallest spacing from
                         the original image.
    Returns:
        SimpleITK.Image with isotropic spacing which occupies the same region in space as
        the input image.
    '''
    original_spacing = image.GetSpacing()
    # Image is already isotropic, just return a copy.
    if all(spc == original_spacing[0] for spc in original_spacing):
        return sitk.Image(image)
    # Make the image isotropic via resampling: the physical extent is kept, so the
    # number of voxels along each axis changes with the new spacing.
    original_size = image.GetSize()
    if spacing is None:
        spacing = min(original_spacing)
    new_spacing = [spacing] * image.GetDimension()
    new_size = [int(round(osz * ospc / spacing)) for osz, ospc in zip(original_size, original_spacing)]
    return sitk.Resample(image, new_size, sitk.Transform(), interpolator,
                         image.GetOrigin(), new_spacing, image.GetDirection(),
                         0,  # default pixel value
                         image.GetPixelID())
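
For example, a minimal way to try it on one of your volumes (the file name below is just a placeholder, and for label maps you would pass sitk.sitkNearestNeighbor as the interpolator):

img = sitk.ReadImage('my_volume.nii')  # placeholder path
print('original spacing:', img.GetSpacing(), 'size:', img.GetSize())

iso = make_isotropic(img, interpolator=sitk.sitkLinear)
print('isotropic spacing:', iso.GetSpacing(), 'size:', iso.GetSize())
sitk.WriteImage(iso, 'my_volume_iso.nii')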

Well, the coding as such is not the issue; I use the function below to get what I need.
I just wanted to know if what I am doing is logically correct.
The physical size of the resampled image is different from that of the original one when I use the method below, as both the spacing and the size are being changed.

Is this change in physical size during resampling OK?

import numpy as np
import SimpleITK as sitk

def reSample_fixedSize_fixedSpacing(img_sitk, interpolation, new_size, new_spacing):

    # img_sitk      : SimpleITK image (3D)
    # interpolation : e.g. sitk.sitkLinear
    # new_size      : e.g. 64
    # new_spacing   : e.g. [1, 1, 1]

    dimension = img_sitk.GetDimension()
    reference_size = [new_size, new_size, new_size]
    # Physical extent of the original image (not used further here, since both the
    # size and the spacing of the reference grid are fixed by the arguments).
    reference_physical_size = np.zeros(img_sitk.GetDimension())
    reference_physical_size[:] = [(sz - 1) * spc if sz * spc > mx else mx
                                  for sz, spc, mx in zip(img_sitk.GetSize(), img_sitk.GetSpacing(), reference_physical_size)]
    reference_direction = img_sitk.GetDirection()
    reference_origin = img_sitk.GetOrigin()

    # Reference image: the fixed-size, fixed-spacing grid onto which the original image is resampled.
    reference_image = sitk.Image(reference_size, img_sitk.GetPixelIDValue())
    reference_image.SetOrigin(reference_origin)
    reference_image.SetSpacing(new_spacing)
    reference_image.SetDirection(reference_direction)

    reference_center = np.array(reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize()) / 2.0))

    # Transform mapping from the reference image to the original image (direction + origin offset).
    transform = sitk.AffineTransform(dimension)
    transform.SetMatrix(img_sitk.GetDirection())
    transform.SetTranslation(np.array(img_sitk.GetOrigin()) - reference_origin)

    # Additional translation so that the center of the reference grid maps to the center of the original image.
    centering_transform = sitk.TranslationTransform(dimension)
    img_center = np.array(img_sitk.TransformContinuousIndexToPhysicalPoint(np.array(img_sitk.GetSize()) / 2.0))
    centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
    centered_transform = sitk.CompositeTransform([centering_transform, transform])

    new_img = sitk.Resample(img_sitk, reference_image, centered_transform, interpolation, 0.0)
    return new_img
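
For reference, this is how the change in physical size shows up when comparing size times spacing before and after resampling (the file name is just a placeholder):

img = sitk.ReadImage('my_volume.nii')  # placeholder path
res = reSample_fixedSize_fixedSpacing(img, sitk.sitkLinear, 64, [1.0, 1.0, 1.0])

orig_extent = np.array(img.GetSize()) * np.array(img.GetSpacing())
new_extent = np.array(res.GetSize()) * np.array(res.GetSpacing())
print('original physical extent (mm):', orig_extent)
print('resampled physical extent (mm):', new_extent)  # always 64 mm per axis, regardless of the original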

Hello @Meliz,

Generally speaking, what you are doing is correct.

The challenge is that you need to use “correct” values for new_size and new_spacing to obtain useful results. When resampling, you are placing a physical grid (reference_image) onto the original image and obtaining the intensities at those locations. To place the grid you use the transform. Note that the transform maps the center of the grid to the center of the original image. This implicitly assumes that the object of interest is close to the center of the original image.

Depending on the choice of new_size and new_spacing you may get useful results or completely useless ones, based on the physical size of the grid defined by these settings relative to the imaged object. Try the following to get a better feel for it:

file_name = 'training_001_ct.mha'
image = sitk.ReadImage(file_name)
new_size = 64

new_spacing = [0.5]*3
sitk.Show(reSample_fixedSize_fixedSpacing(image, sitk.sitkLinear, new_size, new_spacing), f'useless for {file_name}')

new_spacing = [4.0]*3
sitk.Show(reSample_fixedSize_fixedSpacing(image, sitk.sitkLinear, new_size, new_spacing), f'useful for {file_name}')