Is there a way to combine two registered images together preserving all the information from non-overlapping parts of both images?

Hi folks,
Suppose I've got two 3D images, and after running registration on them I am able to recover a 3D transform that includes a shift and a rotation. Combining these two images will give me the union of two parallelepipeds slightly shifted and rotated relative to one another. Is there a way to get the whole union, NOT just one of these parallelepipeds? In other words, I want to create some kind of map from two separate pieces which have some overlap.

I am using Python and I've tried sitk.ResampleImageFilter(), but it just returns the moving image resampled into the coordinate frame of the fixed image (which is not what I want).

It seems like a basic thing to me, so maybe anyone has done it before? :slightly_smiling_face:

Hello Pavel,

Please look at the “Defining Resampling Grid” section of the SimpleITK Notebook “Transforms and Resampling” for an example of how to compute the bounds used for the resampling filter.
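For reference, the core idea of that notebook section can be sketched in plain NumPy: map the corners of the moving image through the recovered transform and take the union of both extents as the bounds of the output grid. This is only a sketch in voxel space with a made-up 4x4 homogeneous transform (the notebook itself works with physical points via the transform's `TransformPoint`); the name `union_bounds` is mine:

```python
import numpy as np

def union_bounds(fixed_size, moving_size, M):
    """Voxel-space bounds of the union of a fixed volume and a moving
    volume mapped through a 4x4 homogeneous transform M."""
    def corners(size):
        sx, sy, sz = size
        return np.array([[x, y, z]
                         for x in (0.0, sx)
                         for y in (0.0, sy)
                         for z in (0.0, sz)])

    # map the moving image's 8 corners into the fixed image's frame
    moving_h = np.hstack([corners(moving_size), np.ones((8, 1))])
    moving_mapped = (M @ moving_h.T).T[:, :3]
    all_pts = np.vstack([corners(fixed_size), moving_mapped])
    return all_pts.min(axis=0), all_pts.max(axis=0)

# Example: moving image shifted 5 voxels along x relative to the fixed one
M = np.eye(4)
M[0, 3] = 5.0
lo, hi = union_bounds((10, 10, 10), (10, 10, 10), M)
# the union grid must span x in [0, 15] to keep both non-overlapping ends
```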

After a bit of consideration, here is what I’ve come up with:
the function `combine_two_images` below does the trick…

Also, you need to apply the obtained transform to the coordinates of the moving image before passing them in.

from math import cos, sin
import numpy as np
from scipy.interpolate import griddata  # used in combine_two_images

def matrix_from_euler(x, y, z):
    """Rotation matrix from Euler angles (in radians)."""
    return np.array([
        [cos(y)*cos(z), -cos(y)*sin(z), sin(y)],
        [cos(x)*sin(z) + cos(z)*sin(x)*sin(y), cos(x)*cos(z) - sin(x)*sin(y)*sin(z), -cos(y)*sin(x)],
        [sin(x)*sin(z) - cos(x)*cos(z)*sin(y), cos(z)*sin(x) + cos(x)*sin(y)*sin(z),  cos(x)*cos(y)],
    ])

def euler_angles_to_matrix(params):
    """Build a 4x4 homogeneous matrix from transform parameters
    (angle_x, angle_y, angle_z, shift_x, shift_y, shift_z)."""
    angles = params[:3]
    shifts = params[3:]
    rotation = matrix_from_euler(*angles)
    M = np.zeros((4, 4))
    M[:3, :3] = rotation
    # adding translation
    M[:3, 3] = shifts
    M[3, 3] = 1
    return M

def apply_transform(points, M):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    points = np.hstack([points, np.ones((points.shape[0], 1))])
    points_transformed = np.dot(M, points.T).T
    return points_transformed[:, :3]

def combine_two_images(img1, img2, coords1, coords2, interp_interval):
    #combine the two coordinate arrays into one, so that each point
    #corresponds to an entry in the combined values array
    coords_comb = np.vstack([coords1, coords2])
    #combine the two images into one flat array of values
    values_comb = np.hstack([img1.ravel(), img2.ravel()])
    #get min and max vals for x, y, and z across the two images
    x_min, x_max = np.min(coords_comb[:, 0]), np.max(coords_comb[:, 0])
    y_min, y_max = np.min(coords_comb[:, 1]), np.max(coords_comb[:, 1])
    z_min, z_max = np.min(coords_comb[:, 2]), np.max(coords_comb[:, 2])
    n_x_points = int((x_max - x_min) / interp_interval)
    n_y_points = int((y_max - y_min) / interp_interval)
    n_z_points = int((z_max - z_min) / interp_interval)
    #a complex step count makes np.mgrid behave like linspace
    grid_x, grid_y, grid_z = np.mgrid[x_min:x_max:n_x_points * 1j,
                                      y_min:y_max:n_y_points * 1j,
                                      z_min:z_max:n_z_points * 1j]
    grid = griddata(coords_comb, values_comb, (grid_x, grid_y, grid_z), method='nearest')
    return grid
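For what it's worth, here is a minimal end-to-end sketch of how the pieces fit together: two tiny synthetic volumes, a made-up translation standing in for the recovered registration transform, and a nearest-neighbour fusion over the union of both extents. All names and values here are purely illustrative:

```python
import numpy as np
from scipy.interpolate import griddata

# Two tiny synthetic volumes on integer grids. img2 lives in its own
# coordinate frame; the "registration result" below (a translation of
# 2 voxels along x) maps its coordinates into img1's frame before fusing.
xs, ys, zs = np.mgrid[0:4, 0:3, 0:3]
coords1 = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3).astype(float)
img1 = np.ones(len(coords1))
img2 = 2.0 * np.ones(len(coords1))

M = np.eye(4)
M[0, 3] = 2.0                      # stand-in for the recovered transform
coords2_h = np.hstack([coords1, np.ones((len(coords1), 1))])
coords2 = (M @ coords2_h.T).T[:, :3]

# Fuse on a grid spanning the union of both extents (x in [0, 5],
# wider than either image alone).
coords = np.vstack([coords1, coords2])
values = np.hstack([img1, img2])
gx, gy, gz = np.mgrid[0:5:6j, 0:2:3j, 0:2:3j]
fused = griddata(coords, values, (gx, gy, gz), method='nearest')
# fused[0] is filled only from img1 and fused[5] only from img2,
# so both non-overlapping ends survive in the fused volume.
```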