Hey there!
I am a young researcher-to-be, currently working on my bachelor’s thesis. The problem that brings me here boils down to point set registration of one xy and one xyz image, with only little knowledge about the rough relative orientation of those images. I have already tried to understand the relevant ITK docs and software guide chapters and read some related open topics. I also tried some other Python libraries like cv2 as well as some ImageJ plug-ins. Since I am very new to this area, I would be thankful if you could help me out. For starters, it would be sufficient to get an assessment from you experts of whether this task seems feasible with ITK, or whether you have an idea for an alternative approach.
My data:
I have a 2D in vivo image of a murine kidney. The kidney is pressed against a window for imaging purposes, which leads to a deformation of the organ. Due to the nature of intravital imaging, only the uppermost structures are visible and the image quality is limited.
The organ is then removed and stained with antibodies. The removal leads to a relaxation of the squeezed tissue, and the subsequent staining procedure causes shrinkage. Using the same microscope (hybrid confocal/multiphoton), I acquire a 3D image of the top tissue layers. The orientation along the z-axis is somewhat known, while the orientation in the xy dimensions can only be guessed. To my understanding, there are no unique structures that could serve as landmarks and as a basis for coarse manual alignment; this is partly a result of the different staining in the corresponding images. Aligning the overall volumes also does not seem possible. There are, however, sphere-like structures (for those interested: glomeruli) in both images that, in my mind, should be alignable.
The task:
Find the corresponding spheres in both images and use the found matches to compute a transformation and subsequent alignment. The exact transformation and alignment is not a must-have; “simple” matches would already be sufficient.
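To make the “matches → transformation” part concrete: my assumption is that, once corresponding spheres are found, the alignment itself reduces to a least-squares rigid fit (Kabsch/Procrustes) over the matched centroids. Here is a minimal NumPy sketch of that step (2D for simplicity; all names are hypothetical and this is not something I have validated on my real data):

```python
import numpy as np

def rigid_fit_2d(fixed_pts, moving_pts):
    """Least-squares rigid (rotation + translation) fit, Kabsch/Procrustes style.

    fixed_pts, moving_pts: (N, 2) arrays of already matched coordinates.
    Returns R (2x2) and t (2,) such that fixed ≈ moving @ R.T + t.
    """
    fixed_c, moving_c = fixed_pts.mean(axis=0), moving_pts.mean(axis=0)
    H = (moving_pts - moving_c).T @ (fixed_pts - fixed_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = fixed_c - R @ moving_c
    return R, t

# toy usage: recover a known rotation + translation from matched points
rng = np.random.default_rng(0)
moving = rng.random((20, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
fixed = moving @ R_true.T + np.array([5.0, -2.0])
R_est, t_est = rigid_fit_2d(fixed, moving)
print(np.allclose(R_est, R_true), np.allclose(t_est, [5.0, -2.0]))
```

The part I am actually stuck on is producing those matches in the first place.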
My previous attempts:
I adjusted the scaling of these sphere-like structures, extracted their coordinates, and stored them in a NumPy array / as an ITK point set (see the plots (SVGs) of example 1 in the zip file).
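For reference, this is roughly how I build the ITK point set from the NumPy coordinates (minimal sketch; glom_coords is a placeholder for my (N, 3) array of glomerulus centroids, already scaled to physical units, and I am assuming SetPoint accepts a plain Python sequence, which it appears to do in ITK’s Python wrapping; for the 2D image I do the same with dimension 2):

```python
import itk
import numpy as np

# placeholder: (N, 3) array of glomerulus centroids in physical units
glom_coords = np.random.rand(50, 3) * 100.0

PointSetType = itk.PointSet[itk.F, 3]
point_set = PointSetType.New()

for i, (x, y, z) in enumerate(glom_coords):
    # SetPoint seems to accept a plain sequence and convert it to an itk.Point
    point_set.SetPoint(i, [float(x), float(y), float(z)])

print(point_set.GetNumberOfPoints())
```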
Based on the point set registration tutorial, I ran tests with artificial 2D data (a random point set plus the same point set rotated), which did not succeed. I played with the available parameters and tried to wrap my head around the docs, but coming from Python, the mostly C++ docs are quite overwhelming.
A further attempt using an ICP algorithm based on scipy and sklearn’s KNN failed because the two point sets contain different numbers of points. The ICP, in this case, minimized the distance to all points instead of looking only for the best matches.
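To illustrate, this is a simplified sketch of the kind of ICP loop I mean, using sklearn’s nearest-neighbour search; the trimming step that keeps only the closest fraction of correspondences is my own guess at how the unequal point counts might be handled, and none of this is validated on my real data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def rigid_from_matches(src, dst):
    """Least-squares rotation + translation mapping src points onto dst points (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

def trimmed_icp(moving, fixed, n_iter=50, keep_fraction=0.7):
    """ICP that keeps only the closest fraction of correspondences in each iteration,
    so that unmatched points in either set have less influence."""
    nn = NearestNeighbors(n_neighbors=1).fit(fixed)
    current = moving.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        dist, idx = nn.kneighbors(current)
        dist, idx = dist.ravel(), idx.ravel()
        keep = np.argsort(dist)[: int(keep_fraction * len(current))]
        R, t = rigid_from_matches(current[keep], fixed[idx[keep]])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total  # transform mapping the original moving points towards fixed

# toy data: random 2D points and a rotated, partially overlapping copy
# (ICP is a local method, so large rotations may still need a coarse initial guess)
rng = np.random.default_rng(1)
fixed = rng.random((60, 2)) * 100.0
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moving = fixed[:45] @ R_true.T + np.array([10.0, -5.0])
R_est, t_est = trimmed_icp(moving, fixed)
print(R_est, t_est)
```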
Feature-based registration methods (SIFT, ORB, FAST, etc.) failed, I think, because the image quality, orientation, and staining of the samples vary too much.
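For completeness, the feature-based attempts looked roughly like this (sketch with OpenCV’s ORB; file names are placeholders). The matches this produced were not usable on my data, presumably for the reasons above:

```python
import cv2

# placeholder file names for the 2D in vivo image and one slice of the 3D stack
img1 = cv2.imread("invivo_2d.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("exvivo_slice.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# brute-force Hamming matcher with cross-check, then keep the best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
cv2.imwrite("orb_matches.png", vis)
```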
Other attempts included 3D Slicer, Fiji plug-ins (e.g. Fijiyama), SimpleITK, …
My questions:
Do you think that finding corresponding spheres with this kind of data is possible in ITK, or would you point me to, e.g., point cloud libraries?
Based on this data, do you think a registration method other than point set registration should be considered?
If this is possible in ITK, what would be my next steps? (Learn C++?)
I have now spent four weeks relentlessly trying to find a (coarse) alignment method for my kind of data. Before I “waste” any more time trying, I would be truly glad to find a working strategy for this problem and, after all, to get one working example.
That is it for now! Thanks for reading up to this point. I hope I was able to outline my problem sufficiently. If you need any more details to answer my rather open questions about this quite specific problem, please shoot.
Thanks a lot,
Johannes