How does Image Registration (Image Alignment) work at the pixel level?

I know the basic flow of image registration/alignment, but what happens at the pixel level when two images are registered? The pixels of the moving image that match the fixed image after transformation are presumably kept intact, but what happens to the pixels that are not matched? Are they averaged, or is something else done with them?

And how is the correct transformation estimated? That is, how do I know whether to apply translation, scaling, rotation, etc., and by how much (e.g., how many degrees of rotation, what translation values)?

Also, in the initial step, how are similar pixel values identified and matched?

Welcome to the community, Akshay!

Your question is so general that the best answer is to read the Registration chapter of the software guide. The complete guide is also available as a PDF.


Thank you for your response, but I want to know what exactly happens when two images are registered. Are they combined into one image, or do they remain separate? What happens to the pixels that are mapped and to those that aren't: are they kept intact, averaged, or something else? I read the document but didn't get a clear idea. I even asked this question to David T. Chen, an author of the paper "The Design of SimpleITK", and he told me to post it here on this forum. Kindly help me, as it's very important for me to get the marks and pass my project viva.

For the benefit of others, here is part of my response to his query:

The ITK registration algorithms merely produce a transformation that converts from the moving image’s space to the fixed image’s space. It does not actually resample one image into the other nor does it combine the two images in any way.

It is up to the user to decide how to use the transformation. Typically you would resample the moving image using the transformation to produce an image in the same space as the fixed image.
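To make the two steps above concrete, here is a minimal sketch in plain Python. It is a toy illustration under assumed conditions (a tiny 4x4 image, an integer translation that we pretend the registration step has already estimated), not ITK's actual implementation. It shows that the transform alone does nothing to the images; resampling is a separate step that iterates over the output (fixed-space) grid and pulls values from the moving image:

```python
# Toy sketch (hypothetical, not ITK code): registration yields only a
# transform; resampling into the fixed image's space is a separate step.

# A 4x4 "moving" image whose bright 9-valued square is offset by (1, 1).
moving = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]

# Pretend registration estimated this translation (in ITK an optimizer
# would find it; here we simply assume the result).
tx, ty = 1, 1

def resample(moving, tx, ty, size=4, default=0):
    """Resample by inverse mapping: for every pixel of the OUTPUT
    (fixed-space) grid, look up the corresponding moving-image pixel.
    Pixels that map outside the moving image get a default value, so
    the output has no holes; existing pixels are copied, not averaged."""
    out = []
    for y in range(size):
        row = []
        for x in range(size):
            mx, my = x + tx, y + ty  # apply the transform to the pixel index
            if 0 <= mx < size and 0 <= my < size:
                row.append(moving[my][mx])
            else:
                row.append(default)  # unmapped region: filled with default
            # note: 'moving' itself is never modified
        out.append(row)
    return out

resampled = resample(moving, tx, ty)
```

After resampling, `resampled` is a new image aligned to the fixed grid, while `moving` is untouched, which matches the point above: the registration output is the transform, and any resampled or combined image is produced by the user afterwards.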


There is no magic in how similar pixel values are identified and matched. This thread might help. Here is a direct link to the documentation of CenteredTransformInitializer.

Registration "only" produces a transform; it does not change the input images. If you want to "combine" them, you need to write the code for that yourself. Here is an example of how you could use a transform once you have it.
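As a small illustration of the "write the code for it" part: once the moving image has been resampled onto the fixed image's grid, any per-pixel combination rule is just ordinary user code. This hedged sketch (tiny invented 2x2 images, simple averaging as one arbitrary choice) shows that combining produces a third image while leaving both inputs intact:

```python
# Hypothetical sketch: "combining" registered images is user code, not
# something registration does. After resampling, both images share one
# pixel grid, so any per-pixel rule works; here, an integer average.

fixed     = [[10, 10], [10, 10]]
resampled = [[ 0, 20], [ 0, 20]]  # moving image, already resampled

blended = [
    [(f + m) // 2 for f, m in zip(frow, mrow)]
    for frow, mrow in zip(fixed, resampled)
]
# 'blended' is a brand-new image; neither input image is modified.
```

Averaging is only one option; overlays, checkerboards, or difference images are equally valid, which is exactly why the framework leaves this step to the user.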


Thank you @dzenanz and @dchen for your help; I can now connect the pieces and understand them clearly. So, is the output of the registration process the transformed moving image? Also, in the linked section 3.3, "Features of the Registration Framework" (after Figure 3.8), it is mentioned that "the resampling would result in an image with holes and redundant or overlapping pixel values". How can this issue of holes and redundant or overlapping pixels be overcome?