Motion modeling in-between frames via deformation fields

Hello,

I would like to ask for some guidance on how deformation fields acquired from groupwise (or pairwise) registration could be used to model (interpolate/simulate) the motion in between frames.

The data I have is a 4D-CT scan with 10 breathing phases, where for each breathing phase I have a full chest CT (3D volume) and a full ROI (3D tumor mask). The shape is 10 (phases) x 128 (height) x 128 (width) x 128 (depth) as a 4D numpy array. I also have chest amplitudes during respiratory motion from a surface-scanner signal at each time point of the full (continuous) acquisition.

What I would like to achieve is to go from 10 breathing phases to, let's say, 90, to cover a full breathing cycle. That is, I would like to interpolate 8 phases between original phases 1 and 2, then another 8 between phases 2 and 3, and so on, to get motion that is as smooth as possible. In addition, if possible, I would like to guide the interpolation with the amplitudes that I have.

Basically, simulating/modeling the full respiratory motion during the 4D-CT acquisition.
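One simple way to realize that amplitude guidance (a sketch under the assumption that the scanner amplitude varies roughly monotonically between two consecutive phases; the numbers below are made up) is to normalize each amplitude sample between the amplitudes of the two surrounding phases and use the result as the interpolation fraction for scaling the deformation field:

```python
import numpy as np

def amplitude_fractions(samples, amp_start, amp_end):
    """Map surface-scanner amplitude samples recorded between two phases
    to interpolation fractions in [0, 1] (0 = start phase, 1 = end phase)."""
    samples = np.asarray(samples, dtype=np.float64)
    span = amp_end - amp_start
    if np.isclose(span, 0.0):
        # No amplitude change between the phases: fall back to uniform spacing.
        return np.linspace(0.0, 1.0, len(samples))
    return np.clip((samples - amp_start) / span, 0.0, 1.0)

# Hypothetical samples recorded between a phase at amplitude 2.0 and one at 6.0:
fracs = amplitude_fractions([2.0, 3.0, 4.5, 6.0], amp_start=2.0, amp_end=6.0)
# Each fraction would then scale the deformation field for that in-between frame.
```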

Now, focusing only on the ROIs (tumor):

What I have tried out so far:

  1. I have applied groupwise registration to align all of the images to a common reference frame.
import itk
import numpy as np
import SimpleITK as sitk
from tqdm import tqdm

# Groupwise registration: the full 4D image serves as both fixed and moving.
parameter_object = itk.ParameterObject.New()
groupwise_parameter_map = parameter_object.GetDefaultParameterMap('groupwise')
print("Original Metric:", groupwise_parameter_map['Metric'])
print("Original FinalGridSpacingInPhysicalUnits:", groupwise_parameter_map['FinalGridSpacingInPhysicalUnits'])
groupwise_parameter_map['Metric'] = ['AdvancedMattesMutualInformation']
groupwise_parameter_map['UseCyclicTransform'] = ['true']
groupwise_parameter_map['FinalGridSpacingInPhysicalUnits'] = ['64.0', '64.0', '64.0']
groupwise_parameter_map['Transform'] = ['AffineTransform']
groupwise_parameter_map['RegularizationType'] = ['Gaussian']
groupwise_parameter_map['RegularizationKernels'] = ['5']
parameter_object.AddParameterMap(groupwise_parameter_map)

elastix_object = itk.ElastixRegistrationMethod.New(image_itk_4D_tumor_extracted, image_itk_4D_tumor_extracted)
elastix_object.SetParameterObject(parameter_object)
elastix_object.SetLogToConsole(True)

elastix_object.UpdateLargestPossibleRegion()

result_image = elastix_object.GetOutput()
result_transform_parameters_tumor = elastix_object.GetTransformParameterObject()
numpy_array_tumor = itk.array_from_image(result_image)
  2. I have validated the results visually and they are somewhat satisfying; the motion is quite okay and cyclical (as it should be).

  3. Then I moved on to the interpolation, where I first tried interpolating between the two most distinct phases in terms of motion (largest distance difference):

fixed_image=itk.image_from_array(numpy_array_tumor[4])
moving_image=itk.image_from_array(numpy_array_tumor[0])

parameter_object = itk.ParameterObject.New()
parameter_map_rigid = parameter_object.GetDefaultParameterMap('affine')
parameter_map_rigid['Metric'] = ['PCAMetric2']

parameter_map_rigid['FinalGridSpacingInPhysicalUnits'] = ['4.0','4.0','4.0']
parameter_map_rigid['Transform'] = ['EulerTransform']

parameter_map_rigid['RegularizationType'] = ['Gaussian']
parameter_map_rigid['RegularizationKernels'] = ['2']

parameter_object.AddParameterMap(parameter_map_rigid)

result_image, result_transform_parameters = itk.elastix_registration_method(
    fixed_image, moving_image,
    parameter_object=parameter_object,
    log_to_console=True)

deformation_field = itk.transformix_deformation_field(moving_image, result_transform_parameters)

deformation_field = itk.array_from_image(deformation_field)

image_itk_tumor = sitk.GetImageFromArray(numpy_array_tumor[4])
interpolated_images = []
num_steps = 8
for step in tqdm(range(1, num_steps + 1)):
    fraction = step / num_steps

    scaled_deformation_field = deformation_field * fraction
    scaled_deformation_field = sitk.GetImageFromArray(scaled_deformation_field.astype(np.float64), isVector=True)

    interpolator = sitk.ResampleImageFilter()
    interpolator.SetReferenceImage(image_itk_tumor)
    interpolator.SetInterpolator(sitk.sitkBSpline)
    interpolator.SetTransform(sitk.DisplacementFieldTransform(scaled_deformation_field))
    interpolated_image = interpolator.Execute(image_itk_tumor)
    interpolated_images.append(sitk.GetArrayFromImage(interpolated_image))

interpolated_images = np.stack(interpolated_images, axis=0)
  4. After validating the results visually, it seems that the motion is quite off:
    a) It starts, for some reason, in between phase 0 and phase 4, not from phase 0 as it should.
    b) It ends well above phase 4, not at phase 4.

My questions would be:

  1. Am I going in the right direction, given my goal?
  2. What could I do differently to make the motion more accurate?

Sorry for the broad question, just looking for some guidance.

Attaching a GIF for reference of what I am working with at the moment. This is the original data, with no registration and no interpolation applied.
(GIF attachment: original_ct_data_final)

Hello @Darius_Grazulis,

There appears to be a bug in the configuration of the resampling code: the reference image and the resampled image are both from phase 4. The reference image should be the fixed_image from the registration, while the image given to the Execute method should be the moving_image, which in your code is the image from phase 0.

Also, if you are using a ResampleImageFilter, create and configure it outside the loop; inside the loop, only set the updated transform and call Execute. The current code creates and configures a new object on every iteration.


Thank you so much for the answer @zivy. Indeed, there was a bug, and the results are now a bit better. However, some pairs of phases still show very strange motion. For example, this is the code now:

fixed_image=itk.image_from_array(numpy_array_tumor[4])
moving_image=itk.image_from_array(numpy_array_tumor[3])
visualization_utils.visualize(numpy_array_tumor[4],'uno.jpg')
visualization_utils.visualize(numpy_array_tumor[3],'unodos.jpg')

parameter_object = itk.ParameterObject.New()
parameter_map_rigid = parameter_object.GetDefaultParameterMap('rigid')

parameter_map_rigid['Metric'] = ['AdvancedMattesMutualInformation']
parameter_map_rigid['FinalGridSpacingInPhysicalUnits'] = ['64.0','64.0','64.0']
parameter_map_rigid['Transform'] = ['EulerTransform']

parameter_map_rigid['RegularizationType'] = ['Gaussian']
parameter_map_rigid['RegularizationKernels'] = ['2']

parameter_object.AddParameterMap(parameter_map_rigid)

result_image, result_transform_parameters = itk.elastix_registration_method(
    fixed_image, moving_image,
    parameter_object=parameter_object,
    log_to_console=True)

deformation_field = itk.transformix_deformation_field(moving_image, result_transform_parameters)

deformation_field = itk.array_from_image(deformation_field)


image_itk_tumor_fixed = sitk.GetImageFromArray(numpy_array_tumor[4])
image_itk_tumor_moving = sitk.GetImageFromArray(numpy_array_tumor[3])

interpolated_images = []
num_steps = 14
interpolator = sitk.ResampleImageFilter()
interpolator.SetReferenceImage(image_itk_tumor_fixed)
interpolator.SetInterpolator(sitk.sitkBSpline)

for step in tqdm(range(1, num_steps + 1)):
    fraction = step / num_steps

    if step == 1:
        zero_deformation_field = np.zeros_like(deformation_field)
        scaled_deformation_field = sitk.GetImageFromArray(zero_deformation_field.astype(np.float64), isVector=True)
    else:
        scaled_deformation_field = deformation_field * fraction
        scaled_deformation_field = sitk.GetImageFromArray(scaled_deformation_field.astype(np.float64), isVector=True)


    # new_origin = image_itk_tumor_fixed.GetOrigin()
    # scaled_deformation_field.SetOrigin(new_origin)

    # scaled_deformation_field.SetSpacing(image_itk_tumor_fixed.GetSpacing())
    # scaled_deformation_field.SetDirection(image_itk_tumor_fixed.GetDirection())

    interpolator.SetTransform(sitk.DisplacementFieldTransform(scaled_deformation_field))
    interpolated_image = interpolator.Execute(image_itk_tumor_moving)
    interpolated_images.append(sitk.GetArrayFromImage(interpolated_image))
    

interpolated_images = np.stack(interpolated_images,axis=0)
visualization_utils.create_gif_for_one(interpolated_images)

This is the result between these phases (attached GIF), and this is how they actually look (attached images). As you can see, they are almost identical, yet there is some strange movement. Maybe you could help with that?
(GIF attachment: ttttooooodooooo_final)


Okay, it seems the issue is now with the registration itself; the EulerTransform produces this type of result…

Adding the full motion modeling code, in case someone needs something similar in the future. The movement is still a bit side-to-side in some cases, but it's the best result yet:

# Use only the first 100 scanner samples for this example
signal_scanner = signal_scanner.iloc[:100]

current_phase = int(signal_scanner['breathing_phase'][0])  # initialize to the first phase in the data
amplitudes = []
num_phases = new_deformation_images_tumor.shape[0]  # 10 breathing phases
interpolated_images = []

for el in tqdm(range(len(signal_scanner['amplitude'])), desc="INTERPOLATION PROGRESS"):
    current_phase_in_loop = int(signal_scanner['breathing_phase'][el])
    
    # Check if current iteration is the last element or if the phase has changed
    is_last_element = el == len(signal_scanner['amplitude']) -1
    if current_phase == current_phase_in_loop and not is_last_element:
        amplitudes.append(signal_scanner['amplitude'][el])
    else:
        # Calculate next index safely, ensuring it wraps correctly
        if is_last_element:
            amplitudes.append(signal_scanner['amplitude'][el])
        next_index = (current_phase + 1) % num_phases  # wrap around at the last phase
        
        print(f"Current phase: {current_phase}, Next index: {next_index}")
        fixed_image=itk.image_from_array(numpy_array_tumor[next_index])
        moving_image=itk.image_from_array(numpy_array_tumor[current_phase])

        parameter_object = itk.ParameterObject.New()
        parameter_map_rigid = parameter_object.GetDefaultParameterMap('rigid')

        parameter_map_rigid['Metric'] = ['AdvancedMattesMutualInformation']
        parameter_map_rigid['FinalGridSpacingInPhysicalUnits'] = ['64.0','64.0','64.0']
        parameter_map_rigid['Transform'] = ['AffineTransform']

        parameter_map_rigid['RegularizationType'] = ['Gaussian']
        parameter_map_rigid['RegularizationKernels'] = ['2']

        parameter_object.AddParameterMap(parameter_map_rigid)

        result_image, result_transform_parameters = itk.elastix_registration_method(
            fixed_image, moving_image,
            parameter_object=parameter_object,
            log_to_console=True)

        deformation_field = itk.transformix_deformation_field(moving_image, result_transform_parameters)

        deformation_field = itk.array_from_image(deformation_field)

        fixed_image_sitk = sitk.GetImageFromArray(numpy_array_tumor[next_index])
        moving_image_sitk = sitk.GetImageFromArray(numpy_array_tumor[current_phase])

        interpolator = sitk.ResampleImageFilter()
        interpolator.SetReferenceImage(fixed_image_sitk)
        interpolator.SetInterpolator(sitk.sitkLinear)

        for step in tqdm(range(0, len(amplitudes))):
            # fraction runs from 0 (current phase) towards 1 (next phase);
            # at step 0 the scaled field is zero, so no special case is needed
            fraction = step / len(amplitudes)
            scaled_deformation_field = deformation_field * fraction
            scaled_deformation_field = sitk.GetImageFromArray(scaled_deformation_field.astype(np.float64), isVector=True)

            # new_origin = fixed_image_sitk.GetOrigin()
            # scaled_deformation_field.SetOrigin(new_origin)

            # scaled_deformation_field.SetSpacing(fixed_image_sitk.GetSpacing())
            # scaled_deformation_field.SetDirection(fixed_image_sitk.GetDirection())

            # Apply the scaled deformation field to the fixed image
            interpolator.SetTransform(sitk.DisplacementFieldTransform(scaled_deformation_field))
            interpolated_image = interpolator.Execute(moving_image_sitk)
            # if step == len(amplitudes) - 1:
            #     interpolated_images.append(numpy_array_tumor[next_index])
            # else:
            interpolated_images.append(sitk.GetArrayFromImage(interpolated_image))
            
        
        # interpolated_images.append(numpy_array_tumor[current_phase])
        
        #Reset for next cycle
        amplitudes = [signal_scanner['amplitude'][el]]  # Start next list with current amplitude
        current_phase = current_phase_in_loop
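One detail about the interpolation loop above: `fraction = step / len(amplitudes)` starts at 0 but never reaches 1.0, so each segment stops just short of the next phase. A small numpy sketch of a fraction schedule that starts exactly on the current phase and lands exactly on the next one (when chaining segments, drop each segment's first frame except the very first, to avoid duplicating the shared phase):

```python
import numpy as np

def segment_fractions(n_frames):
    """Fractions for one inter-phase segment: the first frame is the current
    phase (fraction 0.0) and the last frame is exactly the next phase (1.0)."""
    return np.linspace(0.0, 1.0, n_frames)

fracs = segment_fractions(5)  # evenly spaced from 0.0 to 1.0 inclusive
```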

(GIF attachment: ttttooooodooooo_final)


Hi @Darius_Grazulis ,

Obviously, if you like your solution you can ignore the following, but since I recently dealt with this issue of continuously interpolating transforms along a longitudinal trajectory, I figured I'd weigh in.

First, a caveat (and I hesitate to promote a separate package on this forum): the complete solution is currently only available in our Python ANTsPy and R ANTsR packages. But all the heavy lifting is done by two filters that we put into ITK, so I imagine someone could easily port this to, e.g., SimpleITK to have a purely ITK-sanctioned solution.

My scenario is somewhat similar to yours: I have a longitudinal trajectory of seven mouse brain template images spanning from the early embryonic stage to the later post-natal stage, and the temporal spacing between them is non-linear. The deformation between the two endpoints is such that going the route of a groupwise template just wouldn't work. So instead, I generate a 4-D velocity flow model (represented by a 4-D ITK image) from which I can generate a displacement field transform between any two time points within the temporal span. The input consists of seven corresponding point sets (represented as numpy arrays), the time point within the trajectory for each point set, and a relative weighting for each point (e.g., I weight edge points more than others).
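For readers unfamiliar with the velocity-flow idea: a time-varying velocity field is integrated over a chosen time interval to yield the displacement field between those two time points. A deliberately tiny 1-D numpy toy of that integration step (plain Euler stepping; this illustrates the underlying math, not the ANTsPy/ITK API):

```python
import numpy as np

def integrate_velocity(velocity, t0, t1, n_steps=100):
    """Euler-integrate a time-varying 1-D velocity function v(x, t) from t0
    to t1, returning the displacement of each starting position."""
    dt = (t1 - t0) / n_steps
    x0 = np.linspace(0.0, 1.0, 11)  # starting positions
    x = x0.copy()
    for i in range(n_steps):
        x = x + velocity(x, t0 + i * dt) * dt  # follow the flow forward in time
    return x - x0  # displacement sampled at the starting positions

# Toy velocity field: drift proportional to position, constant in time.
# Analytically, the displacement is (e^0.5 - 1) * x0; Euler gets close.
disp = integrate_velocity(lambda x, t: 0.5 * x, 0.0, 1.0)
```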

Here’s a fully contained yet very simple example of the use of this tool in both ANTsPy and ANTsR, where I build a velocity flow model going from a rectangle to a square to a circle, represented by 8 points. Again, it’s a toy example, but one can run it very quickly and get the idea. A more complicated example, using the mouse-brain scenario described above, is found here. It takes a little longer to run, but all the steps are there, including both the image data and the code that uses pairwise registration between developmental stages to generate the corresponding point sets. Also included are examples of how one can use the derived velocity transform to generate displacement field transforms for transforming data between any two time points.

Nick


Thank you so much for sharing this. I will definitely try it out later on and will let you know how it goes.

Indeed, the current solution is not sufficient for me (although at first it seemed like it was), because in some cases the motion of the tumor gets really weird.

As an additional question: would you happen to know whether an external signal could be used to guide the interpolation in your provided example? I have a surface-scanner signal of chest amplitude, which might improve the results.

Best,
Darius

Not off the top of my head; I'd have to look more closely at what you actually have.

But, for now, I would suggest taking a look at the example and trying it out on your data to see if it works for you. If so, then you can swing back around and ask if there’s a way to incorporate the additional information into your scenario. I’d be happy to look more closely then but perhaps you’ll arrive at the answer yourself.