I am new to ITK, and have been exploring the different registration algorithms available. (Kudos on the breadth and depth of examples provided, BTW.) Most of the illustrations I have seen deal with biomedical images, of course, where essentially the entire image content is available in the two images being registered, with the two images differing in various rigid/deformable transformations of the coordinate system.
Unlike these, the types of images I am dealing with may be viewed as variable rotations/crops of a much larger image. As a result, the two images in a pair may only share a limited amount of content, over a partial “overlap” region of the two image fields. I am interested in registering such pairs so that the shared-content overlap regions are brought into coincidence; while starting with rigid transformations, I would ultimately like to bring in deformations as well. Obviously, the feasibility of any such registration diminishes as the extent of overlap between the two images decreases.
What hasn’t been clear to me so far is how robust the various available registration algorithms are in the face of limited content overlap between the two images. I would guess that registration algorithms which rely on explicit displacement fields (e.g. the various demons algorithms) could in principle still work, but the initial displacement fields might have to be quite large in order to ensure convergence, and would therefore have to be bootstrapped somehow.
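To make the “bootstrapping” idea a bit more concrete, here is a rough sketch of what I have in mind, written with SimpleITK in Python and assuming a rigid estimate were already somehow in hand (the parameter values are placeholders, not anything I have validated):

```python
import SimpleITK as sitk

def demons_from_rigid(fixed, moving, rigid_transform, iterations=100):
    """Bootstrap a demons run from a rigid estimate (sketch only)."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    # Convert the rigid transform into a dense displacement field defined
    # on the fixed-image grid, and use it as the demons starting point.
    initial_field = sitk.TransformToDisplacementField(
        rigid_transform,
        sitk.sitkVectorFloat64,
        fixed.GetSize(),
        fixed.GetOrigin(),
        fixed.GetSpacing(),
        fixed.GetDirection())
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetSmoothDisplacementField(True)
    demons.SetStandardDeviations(2.0)
    # Returns a displacement field refined from the rigid initialization.
    return demons.Execute(fixed, moving, initial_field)
```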
I apologize that the question is somewhat imprecise. I am interested in any nudges to my intuition, or in any information about prior experience in this domain. Naturally, trial-and-error is always an option, but given the large number of available algorithms, some initial hints would of course be welcome. Many thanks in advance.
In ITKMontage (GitHub, paper) we deal with partly overlapping tiles. We make our problem tractable by relying on initialization based on approximate tile locations, which are available from the microscope’s motors. Perhaps you could use a similar trick? Otherwise your problem is a lot harder.
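As a sketch of the kind of initialization I mean (using SimpleITK in Python rather than ITKMontage itself, with the approximate offset standing in for whatever your acquisition metadata provides), it boils down to seeding the registration with a translation so the optimizer only has to refine a small residual:

```python
import SimpleITK as sitk

def register_with_initial_offset(fixed, moving, approx_offset):
    """Seed a registration with an approximate physical-space offset (sketch)."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    initial = sitk.TranslationTransform(fixed.GetDimension(), approx_offset)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(fixed, moving)
```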
Thanks for the helpful reply, and for the reference, which I will study carefully. My tiles (to adopt your nomenclature) can have relative rotations between them, whereas yours appear to have parallel edges. I have been able to use a variant of a phase correlation method to estimate relative rotations (and the corresponding translations), although the range of applicability of the method depends quite a bit on the overlap, and relative deformations between the images tend to throw the method off a bit (perhaps unsurprising for a phase-based method). I guess what I am seeking is some confidence that, given a reasonable starting configuration, these methods won’t “break” when the two images have incomplete overlap. I suspect your work will provide some insight into that. I notice that you use a phase-correlation method yourself, and I am perhaps hoping that another algorithm might prove more resistant to deformations.
To clarify, strictly speaking my “tiles” are not precise extracts from the larger image; they may have individual deformations applied to them as well. In the absence of deformations, it is quite likely that a phase-correlation method would suffice overall.
My main point is to crop the images to the estimated overlap (with some safety margin). This will make registration both faster and more reliable, no matter which algorithm you use. For microscopy tiles, the phase correlation method (PCM) was appropriate because there were no deformations or rotations; for your use case you will definitely want something different. The code there might provide a little bit of inspiration, though I doubt you will be able to reuse much.
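Roughly what I mean by cropping with a safety margin, as a SimpleITK sketch (the overlap index/size would come from whatever initialization you have; the margin is arbitrary):

```python
import SimpleITK as sitk

def crop_to_overlap(image, overlap_index, overlap_size, margin=32):
    """Crop to an estimated overlap region, expanded by a safety margin (sketch)."""
    dim = image.GetDimension()
    index = [max(0, overlap_index[d] - margin) for d in range(dim)]
    size = [min(image.GetSize()[d] - index[d], overlap_size[d] + 2 * margin)
            for d in range(dim)]
    roi = sitk.RegionOfInterestImageFilter()
    roi.SetIndex(index)
    roi.SetSize(size)
    return roi.Execute(image)
```

Since RegionOfInterestImageFilter adjusts the output origin so that physical space is preserved, a transform estimated on the crops should carry over directly to the full images.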
Thanks again for engaging with this. It turns out that there are adaptations of the phase-correlation method which can be used to estimate relative rotations between images. The key is to perform the FFTs in a polar setting (r, theta) rather than a Cartesian one; under the right conditions, the relative angle between the two images pops out as a peak in the phase correlation along the theta axis in Fourier space. This does not work for all images, and its applicability is of course limited as the overlap between the images decreases. But when it does work, one just needs to rotate the reference image by the deduced angle and perform a conventional phase-correlation analysis to determine the corresponding translation.
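In case it helps anyone else, here is the core of that rotation estimate stripped down to a NumPy/scikit-image sketch (I am using skimage’s warp_polar and phase_cross_correlation here for brevity; the radius and any spectral pre-filtering are things I am still tuning):

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from skimage.transform import warp_polar

def estimate_rotation(fixed, moving, radius=None):
    """Estimate the relative rotation (degrees) between two equal-shaped images (sketch)."""
    # Use the shifted FFT magnitudes so the estimate is insensitive to translation.
    f_mag = np.abs(np.fft.fftshift(np.fft.fft2(fixed)))
    m_mag = np.abs(np.fft.fftshift(np.fft.fft2(moving)))
    radius = radius or min(fixed.shape) // 2
    # Resample the spectra onto a (theta, r) grid: rotation becomes a shift along theta.
    f_polar = warp_polar(f_mag, radius=radius)
    m_polar = warp_polar(m_mag, radius=radius)
    # The phase-correlation peak along the theta axis gives the angle
    # (ambiguous up to 180 degrees, since magnitude spectra are symmetric).
    shift, error, _ = phase_cross_correlation(f_polar, m_polar)
    return shift[0]  # warp_polar samples theta in degrees by default
```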
Your point about cropping the images to their respective overlap regions makes a lot of sense. Even if you don’t know the overlaps a priori, one can envision an approach in which each image is divided into blocks, and pairwise registration is attempted between every block of one image and every block of the other. If two blocks happen to cover the same part of the overlap region, the “quality” of their registration, as measured by the peak value of the normalized phase correlation, should in principle be higher. As long as there is any overlap at all (otherwise the exercise is pointless), this exhaustive pairwise registration of sub-blocks should highlight the overlap region at least a little.
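A toy version of that block-scoring idea, in plain NumPy (the block size and the eventual score threshold are placeholders; nothing here is tuned):

```python
import numpy as np
from itertools import product

def phase_corr_peak(a, b, eps=1e-8):
    """Peak of the normalized phase correlation between two equal-sized blocks."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps))
    return np.abs(corr).max()

def score_block_pairs(img1, img2, block=128):
    """Score every block of img1 against every block of img2.

    High-scoring pairs are candidates for lying inside the overlap region.
    """
    def blocks(img):
        rows = range(0, img.shape[0] - block + 1, block)
        cols = range(0, img.shape[1] - block + 1, block)
        return [((i, j), img[i:i + block, j:j + block])
                for i, j in product(rows, cols)]

    scores = {}
    for (p1, b1), (p2, b2) in product(blocks(img1), blocks(img2)):
        scores[(p1, p2)] = phase_corr_peak(b1, b2)
    return scores  # e.g. sorted(scores.items(), key=lambda kv: -kv[1])[:10]
```

The cost is obviously quadratic in the number of blocks, so the block size (and perhaps some downsampling) matters.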
This is very much in the concept phase, although there are preliminary indications that it helps in test cases. Thanks again…