I’m currently working on a project, but I’m stuck.
I have two images, both projected from a sphere onto a square image: one is a grayscale image from a spherical camera, and the other is an intensity image of the same scene rendered from lidar points. I want to calculate the mutual information between the two projected images and adjust the rotation and translation of the camera to maximize it.
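To make the goal concrete, this is roughly the metric I want to maximize, as a histogram-based sketch in NumPy (`a` and `b` stand for the two projected images; the bin count is just a guess):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two grayscale
    images a and b (2D NumPy arrays of equal size)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()           # joint probability p(x, y)
    px = pxy.sum(axis=1)                # marginal p(x)
    py = pxy.sum(axis=0)                # marginal p(y)
    nz = pxy > 0                        # skip empty bins to avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```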
I’ve found several examples that calculate such transformation parameters, but they always work on 2D images, e.g. https://itk.org/Wiki/ITK/Examples/Registration/ImageRegistrationMethodAffine
I need the 3D transformation matrix in order to transform the lidar point cloud so that it exactly matches the spherical image.
Thanks for the reply. If I understand this article correctly, it uses 2D images as the fixed image and a 3D image as the moving image. Is there a way to use only the projections, i.e. a 2D image for both the fixed and the moving image, so that a transformation matrix can be determined to rotate the 3D cloud and produce a new projection?
Which image you consider moving and which fixed is only a convention; the inverse matrix gives you the other convention.
Since a 2D projection can’t be turned back into a 3D image, the 3D image must be turned into a 2D projection in order to evaluate the registration metric. That has to happen somewhere, so it is only a matter of where it happens.
But if you don’t have access to the 3D point cloud, and only to its 2D projection, then you might need a non-standard setup.
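If you do have the point cloud, one non-standard setup is to move the whole loop outside the registration framework: treat "rotate/translate the cloud, re-project it, compute MI against the camera image" as a black-box cost function and hand it to a generic optimizer. A rough sketch (untested; it assumes an equirectangular sphere-to-square projection, reuses the `mutual_information` helper from your post, and `points`, `intensities` and `camera_img` stand for your data):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def project_equirectangular(points, intensities, width, height):
    """Splat 3D points (N x 3) into an equirectangular intensity image."""
    r = np.linalg.norm(points, axis=1)
    lon = np.arctan2(points[:, 1], points[:, 0])   # longitude in [-pi, pi]
    lat = np.arcsin(points[:, 2] / r)              # latitude in [-pi/2, pi/2]
    u = ((lon + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - lat) / np.pi * (height - 1)).astype(int)
    img = np.zeros((height, width))
    img[v, u] = intensities                        # last point wins per pixel
    return img

def negative_mi(pose, points, intensities, camera_img):
    """pose = [rx, ry, rz, tx, ty, tz] (rotation vector + translation)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    moved = points @ R.T + pose[3:]
    h, w = camera_img.shape
    lidar_img = project_equirectangular(moved, intensities, w, h)
    return -mutual_information(camera_img, lidar_img)

# Nearest-pixel splatting makes the metric non-smooth, so use a
# derivative-free optimizer rather than gradient descent.
result = minimize(negative_mi, x0=np.zeros(6),
                  args=(points, intensities, camera_img),
                  method="Nelder-Mead")
```

Once the optimizer converges, `result.x` holds the six pose parameters; `Rotation.from_rotvec(result.x[:3]).as_matrix()` together with `result.x[3:]` gives you the 3D rotation matrix and translation you were asking for.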