While the distortion may be negligible near the centre of the image, the projection deformation increases as we move away from the centre, making the relative size and position of features in the image inaccurate without correction.
What is the method to transform the image from the sensor coordinates into the original object plane/coordinates? Is this itk::AzimuthElevationToCartesianTransform, or is there no transform for this case in ITK?
The way you have drawn it, a similarity transform (rigid + isotropic scaling) should suffice. Maybe there is some distortion component which you are not accounting for?
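If it helps, here is a minimal sketch of what a 2D similarity transform looks like in SimpleITK; the scale, angle, and translation values are made-up placeholders:

```python
import SimpleITK as sitk

# A 2D similarity transform: rotation + isotropic scaling + translation.
tx = sitk.Similarity2DTransform()
tx.SetScale(1.25)              # placeholder magnification
tx.SetAngle(0.0)               # rotation in radians
tx.SetTranslation((5.0, -3.0)) # placeholder offset in physical units

# Map an object-plane point to the image plane.
print(tx.TransformPoint((10.0, 20.0)))  # -> (17.5, 22.0)
```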
I agree, it is a simple linear projection transform; we don’t usually call this distortion. Radial distortion, S-distortion, etc. were indeed problems for fluoroscopy systems that used an image intensifier. Current fluoro systems use digital flat-panel detectors that do not suffer from these issues.
The transformation from the object plane to the image plane is a simple uniform scaling. You can convert 3D coordinates to coordinates on the image plane by multiplying by a homogeneous transformation matrix.
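As an illustration, here is a minimal sketch of that projection with a homogeneous matrix; the source-to-image and source-to-object distances are assumed values, not taken from any particular system:

```python
import numpy as np

# Hypothetical C-arm geometry (values chosen for illustration only):
SID = 1000.0  # source-to-image-plane distance, mm
SOD = 800.0   # source-to-object-plane distance, mm

# Perspective projection as a homogeneous matrix: a point on the object
# plane at z = SOD maps to the image plane with uniform magnification
# SID / SOD.
P = np.array([
    [SID, 0.0, 0.0, 0.0],
    [0.0, SID, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

p_object = np.array([10.0, 20.0, SOD, 1.0])  # homogeneous 3D point
p_image_h = P @ p_object
p_image = p_image_h[:2] / p_image_h[2]       # de-homogenize
print(p_image)  # [12.5 25.0], i.e. scaled uniformly by SID/SOD = 1.25
```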
Yes, that diagram doesn’t show the full picture. However, I have prepared a couple of phantom scans, which demonstrate the radial distortion. And yes, Andras, I am dealing with an image intensifier type of scanner.
The phantom I’ve put together is just a bunch of 10 mm ball bearings, spaced at 2 cm, sitting on radial lines embedded in a sheet of 10 mm acrylic.
This image is the phantom sitting directly on the image intensifier:
And the following image is the same phantom, 20 cm away from the intensifier:
You may observe that the further away from the image centre we are, the more those balls start to resemble grapes, on their way towards the ladyfinger variety.
So, what sort of transform would fit here? It doesn’t seem that translation or anything isotropic would do, would it?
Image intensifier distortion correction in fluoro systems has an extensive literature, so I won’t go into details here. Just a few specific suggestions:
- use a uniform grid pattern: the point arrangement in your image above is highly non-uniform (there are many points near the center and huge gaps elsewhere), which would make it difficult to compute a smooth and accurate displacement field
- you can use an ITK BSpline or thin-plate spline transform to get a smooth displacement field from a modest number of points; you can then compute the distortion-corrected image with the warp image filter (see the sketch after this list)
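Here is a minimal sketch of that pipeline in SimpleITK, assuming you have already detected the bead centres; the file name and landmark coordinates below are placeholders. In C++ ITK, the corresponding pieces would be itk::BSplineTransform or itk::ThinPlateSplineKernelTransform together with itk::ResampleImageFilter / itk::WarpImageFilter.

```python
import SimpleITK as sitk

# Load the distorted fluoro image (file name is a placeholder).
distorted = sitk.ReadImage("phantom_fluoro.png", sitk.sitkFloat32)

# Corresponding landmarks as flat [x0, y0, x1, y1, ...] lists:
# fixed = ideal grid positions, moving = bead centres detected in the
# distorted image. The values here are made-up placeholders.
fixed_pts  = [100.0, 100.0, 400.0, 100.0, 100.0, 400.0, 400.0, 400.0]
moving_pts = [ 98.7, 101.2, 403.5,  99.1, 101.4, 404.0, 405.2, 406.3]

# Fit a smooth BSpline transform to the landmark correspondences.
bspline = sitk.BSplineTransformInitializer(distorted, [4, 4])
tx = sitk.LandmarkBasedTransformInitializer(
    bspline, fixed_pts, moving_pts, [], distorted, 4)

# Warp the distorted image through the fitted transform.
corrected = sitk.Resample(distorted, distorted, tx, sitk.sitkLinear, 0.0)
sitk.WriteImage(sitk.Cast(corrected, sitk.sitkUInt8), "corrected.png")
```

In practice you would use many more landmark pairs (one per bead on the uniform grid) so the fitted displacement field is well constrained across the whole field of view.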
This subject brings back long-lost memories; 22-23 years ago we worked on this: Livyatan et al., “Robust Automatic C-Arm Calibration for Fluoroscopy-Based Navigation: A Practical Approach”, MICCAI 2002. The challenge there was detecting the pattern under occlusions, as we were using an online calibration approach. Camera calibration is computed for each image because the C-arm deforms due to gravity, so each orientation has different camera parameters.
The section on creating a radial distortion in this Jupyter notebook may be of interest too. You are dealing with the inverse problem, but simulating radial distortion may help you with debugging.
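For instance, here is a minimal sketch of the usual polynomial radial distortion model for simulating the effect on a point grid; the coefficients k1 and k2 are arbitrary values chosen for illustration:

```python
import numpy as np

def radial_distort(points, center, k1=1e-6, k2=1e-12):
    """Apply a polynomial radial distortion model to 2D points.

    points: (N, 2) array of pixel coordinates.
    center: (2,) distortion centre. k1, k2 are assumed coefficients.
    Model: r_distorted = r * (1 + k1*r^2 + k2*r^4).
    """
    v = points - center
    r2 = np.sum(v**2, axis=1, keepdims=True)
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return center + v * scale

# Example: distort beads on a regular grid and inspect the displacement.
grid = np.stack(np.meshgrid(np.arange(0, 512, 64),
                            np.arange(0, 512, 64)), -1).reshape(-1, 2).astype(float)
distorted = radial_distort(grid, center=np.array([256.0, 256.0]))
print(np.abs(distorted - grid).max())  # largest displacement, in pixels
```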