I’ve only recently started working with image registration, and I don’t yet understand the step-by-step process of how it works. Let’s say I have two grayscale images: in the first image, a circle (1) and a square (2) are on the left side, and in the second image, the circle and square are on the right. I use an affine transformation and it gets the job done, such that both images end up aligned.
Moving:

```
0 0 0 0 0
1 1 0 0 0
2 2 0 0 0
```

Fixed:

```
0 0 0 2 2
0 0 0 0 1
0 0 0 1 0
```
My question is:
How did the registration distinguish between the circle and the square and move each to its desired position? Or can anyone point me in the right direction for where I should learn this?
To understand the classical approaches to registration, see this paper, this book chapter, or this book. This is what is implemented in ITK/SimpleITK.
Two things to note: first, registration is a very broad subject and many people have spent years doing research on it, so you may want to limit your reading to the methods relevant to your use case; second, current registration research has shifted to deep learning approaches, which are quite different from the classical ones (see this paper).
Thanks for those useful references! I recognize that understanding the classical approaches may take too much time, so I was hoping to get some insight into the code implementation of the transformation. E.g., given the two arrays, do you use a built-in function to separate the two objects and then run a separate registration task on each? Or is this even the way to do it?
You really need to read a bit of the literature before trying to use registration. Based on your question you are missing the fundamentals, so some reading is required on your part.
If you are looking for a quick understanding of the ITK/SimpleITK registration framework implementation, read the ITK book chapter on registration (ch. 3) and possibly the SimpleITK registration overview.
Thanks again for the links. My current rough understanding of registration is that (provided certain conditions are met) there exists a matrix (which you solve for with lots of math and physics) that you can apply to the moving image to obtain the fixed image. With this understanding, I’m confused about how it handles, say, a heart and breathing lungs (how does the heart not just enlarge itself to fit the lungs of the fixed image?). Does the matrix really contain all that information? I will definitely keep reading the literature.
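To make that intuition concrete: an affine transform is one global map, x → Ax + t, applied to every coordinate in the image, so any two objects it touches move by exactly the same rule. A toy numpy sketch, using the nonzero pixels from the arrays at the start of the thread as (row, col) points:

```python
import numpy as np

# One global affine rule for all coordinates: x -> A @ x + t
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])          # identity linear part (no rotation/scale)
t = np.array([0.0, 3.0])            # shift every point 3 columns to the right

circle = np.array([[1.0, 0.0], [1.0, 1.0]])  # (row, col) of the "1" pixels
square = np.array([[2.0, 0.0], [2.0, 1.0]])  # (row, col) of the "2" pixels

moved_circle = circle @ A.T + t
moved_square = square @ A.T + t
# Both shapes receive the identical displacement (0, 3); a single affine
# matrix cannot send the circle and the square to independent destinations.
print(moved_circle.tolist())  # [[1.0, 3.0], [1.0, 4.0]]
print(moved_square.tolist())  # [[2.0, 3.0], [2.0, 4.0]]
```

Moving objects independently requires a transform with more degrees of freedom than affine (e.g. a deformable/B-spline transform), which is exactly where the heart-vs-lungs concern arises.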
Hi. You really need to read the fine manuals (RTFM) and, possibly, some related literature. Your answer is there. The question you are asking is like coming to a linear algebra class and asking the teacher: “so you have all these numbers arranged together and can do all this math with them, but how do you solve a problem with that?”. Disclaimer: I’m not part of ITK, just a regular user who wishes you luck and success.
Thanks for bumping the post. I feel as though you misunderstood my question with your analogy. In my case, I’ve understood that there’s a way to map out the intensity of the voxels and from there minimize one of the metrics or similarity measures (like the Mattes MI used here, or even SSD) to arrive at a decent transformation.
What I do not get, though, is how (if at all) the registration takes into account that, in large deformations of tissue, bone near soft tissue shouldn’t look funny, or that the heart won’t deform as much as the lungs. If the registration is agnostic to what is bone and what is soft tissue, and only takes in the entire image’s positions and intensities (or a sampling of them), then I thought maybe the solution was to separate them first using something in ITK (a bit naive, but as I’m still a beginner, that’s what I thought). I’ve decided to keep on reading the fine manuals in search of more clarity.
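As a toy illustration of that “minimize a similarity measure” view, here is a sketch that brute-forces integer translations of the small moving array from the start of the thread and keeps the one with the lowest SSD. Note that `np.roll` wraps around, which a real registration framework would not do; the only point is that the optimizer sees a metric value, not objects:

```python
import numpy as np

# The two arrays from the original post.
moving = np.array([[0, 0, 0, 0, 0],
                   [1, 1, 0, 0, 0],
                   [2, 2, 0, 0, 0]], dtype=float)
fixed = np.array([[0, 0, 0, 2, 2],
                  [0, 0, 0, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)

# Exhaustive search over all cyclic integer shifts, keeping the one that
# minimizes the sum-of-squared-differences (SSD) metric.
best_shift, best_ssd = None, np.inf
for dr in range(moving.shape[0]):
    for dc in range(moving.shape[1]):
        candidate = np.roll(moving, shift=(dr, dc), axis=(0, 1))
        ssd = np.sum((candidate - fixed) ** 2)
        if ssd < best_ssd:
            best_shift, best_ssd = (dr, dc), ssd

print(best_shift, best_ssd)  # prints (1, 3) 2.0
```

The best SSD is not zero: no single translation makes these two images identical, and the search has no mechanism to move the circle and the square separately. A real framework replaces the brute-force loop with a gradient-based optimizer over continuous transform parameters.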
Registration can employ regularization techniques such as “bending energy” and “volume preservation”. But to treat bone differently from lungs, we would need to know which parts of the image are bone, which are lungs, etc. That is called image segmentation, and it is in itself a generally unsolved problem.
@Niels_Dekker can registration regularization techniques be applied selectively in elastix for some tissues, if image segmentations are provided?
Sorry for the late reply. Yes, that’s what I meant! I found a way to match my datasets through sheer tuning of other parameters, but it does sound useful to be able to select segmentations, especially for those areas where matching is of utmost importance or where particular behavior should be imposed.