Registration and segmentation from a variety of FOV and orientations

After changing both of the values you suggested.

When determining convergence, are we looking for a metric value above some threshold (e.g. >100), or is it more about the shape of the curve leveling off?

That looks strange. When registration is working correctly, the plot most often looks like an exponential decay curve (decreasing and then leveling off).

Okay, I’ve done some digging today. I’m not confident that the initialization that loops through the orientations (the Metric Evaluate cell in the notebook) is finding the correct orientation.

When I run the code you supplied earlier followed by the initialization code, it finds the best orientation to be (pi, 0, 0), when it should be (0, 0, 1.5 * pi). I beefed up the initialization code to test every orientation, even redundant ones.

# rotate atlas image to debug registration
fixed_center = fixed_image.TransformContinuousIndexToPhysicalPoint(
    [(sz - 1) / 2 for sz in fixed_image.GetSize()]
)
angle_x = 0
angle_y = 0
angle_z = np.pi / 2.0
translation = [0, 0, 0]
tx = sitk.Euler3DTransform(fixed_center, angle_x, angle_y, angle_z, translation)
moving_image = sitk.Resample(fixed_image, tx)

import itertools
transform_orientations = [[0,np.pi/2, np.pi, 1.5*np.pi],
                          [0,np.pi/2, np.pi, 1.5*np.pi],
                          [0,np.pi/2, np.pi, 1.5*np.pi]]
all_orientations = list(itertools.product(*transform_orientations))

from multiprocessing.pool import ThreadPool
from functools import partial

# This function evaluates the metric value in a thread safe manner
def evaluate_metric(current_rotation, tx, f_image, m_image):
    registration_method = sitk.ImageRegistrationMethod()
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
    registration_method.SetMetricSamplingPercentage(0.01)
    registration_method.SetInterpolator(sitk.sitkLinear)
    # Copy the transform and apply the candidate rotation before evaluating.
    current_transform = sitk.Euler3DTransform(tx)
    current_transform.SetRotation(*current_rotation)
    registration_method.SetInitialTransform(current_transform)
    res = registration_method.MetricEvaluate(f_image, m_image)
    return res

p = ThreadPool(len(all_orientations) + 1)
orientations_list = [(0, 0, 0)] + all_orientations
all_metric_values = p.map(
    partial(evaluate_metric, tx=initial_transform,
            f_image=fixed_image, m_image=moving_image),
    orientations_list)
best_orientation = orientations_list[np.argmin(all_metric_values)]
print("best orientation is: " + str(best_orientation))

@zivy I appreciate all your help and patience! In addition to the problem above, I’m also struggling with implementing manual initialization with images in different orientations. Precisely identifying corresponding structures between axial and coronal views can be difficult.

My intuition is to use DICOMOrientImageFilter() after ROI cropping, to at least get the user image into the same view as the atlas image for the RegistrationPointDataAquisition() GUI. I’m not sure this will give accurate results, though, mostly because I don’t really understand what DICOMOrientImageFilter() is doing. This line from the documentation seems counter-intuitive: "The physical location of all pixels in the image remains the same, but the meta-data and the ordering of the stored pixels may change."

Hello @Mitchellb16,

No worries, the coordinate systems are confusing. I suggest reading this explanation of the world (x, y, z), patient (LPS), and image (i, j, k) coordinate systems. The DICOMOrient filter changes the ordering of the pixel array (i, j, k) and the image direction matrix (x, y, z) so that the changes "cancel out": the pixels remain in the same physical location even though their indexing has changed.

Your approach sounds reasonable. One thing to keep in mind: displays assume isotropic spacing, so you may need to resample the image to isotropic spacing if the spacing in the viewing plane isn’t isotropic (with CT: axial is usually isotropic, sagittal and coronal usually are not).


Update: initialization was our culprit!

For those with similar issues with manual initialization (for head CTs at least), I found using five points to be very helpful: two at the ear canals, one at the top of the head, and two at the most anterior points of the zygomatic bones (eye sockets). Now on to seeing if I can automate finding these points.


Hello @Mitchellb16,

Initialization is most often the culprit. Congrats, and welcome to the wonderful world of optimization.
