Hi,
I am doing image registration using the GradientDescentLineSearch optimizer, the multi-scale framework, and the MattesMutualInformation metric.
The algorithm converges and gives “okayish” results. However, if I hand-tune the resulting transformation, I get even better results and a smaller metric value (= better coregistration).
I have already played around with the learning rate, but the optimizer always gets stuck in that local minimum. Do you have any suggestions for this case?
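For reference, here is a minimal sketch of the kind of setup described above, assuming SimpleITK in Python; the file names, transform type, and parameter values are placeholders, not the poster's actual settings:

```python
import SimpleITK as sitk

# Placeholder inputs; the poster's images and transform type may differ.
fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

# Rough initial alignment, here for a rigid 3D transform.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescentLineSearch(
    learningRate=1.0, numberOfIterations=200,
    convergenceMinimumValue=1e-6, convergenceWindowSize=10)
reg.SetOptimizerScalesFromPhysicalShift()
# Multi-scale pyramid, coarse to fine.
reg.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
reg.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])
reg.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
reg.SetInitialTransform(initial, inPlace=False)

T1 = reg.Execute(fixed, moving)  # the converged transformation
print("final metric value:", reg.GetMetricValue())
```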
As the optimization is of a non-convex function, the optimizer ends in a local minimum. There is no way to guarantee convergence to a different, yet nearby, minimum. Heuristic approaches are possible, but there is always a chance of drifting far from the correct solution to a lower function value. One possible heuristic (a code sketch follows the list):
T1: the transformation your current optimization converged to.
T2a…T2z: explore the parameter space using the exhaustive optimizer near the T1 values.
T3, either:
a. Select T2k, the best value from all of T2a…T2z [a conservative approach which doesn’t allow significant drift from T1].
b. Run the optimizer multiple times, starting from each of T2a…T2z, and select the best result.
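A sketch of this heuristic, continuing the SimpleITK snippet above (it assumes `reg`, `fixed`, `moving`, and `T1` as defined there). It evaluates the metric on an explicit grid with `MetricEvaluate` rather than via `SetOptimizerAsExhaustive`, so that every candidate is retained for option (b); the offsets and the number of restarts are illustrative assumptions:

```python
import itertools
import SimpleITK as sitk

p1 = list(T1.GetParameters())

# T2a...T2z: evaluate the metric on a small grid around the T1 parameters.
# Offsets assume a 6-parameter rigid transform: 3 rotations (radians),
# 3 translations (mm). Adjust to your transform and expected drift.
offsets = [(-0.1, 0.0, 0.1)] * 3 + [(-5.0, 0.0, 5.0)] * 3
candidates = []
for delta in itertools.product(*offsets):
    t = sitk.Transform(T1)  # deep copy of T1
    t.SetParameters([p + d for p, d in zip(p1, delta)])
    reg.SetInitialTransform(t, inPlace=False)
    candidates.append((reg.MetricEvaluate(fixed, moving), t))
candidates.sort(key=lambda c: c[0])  # smaller metric value = better

# Option (a) would simply keep candidates[0][1] without re-optimizing.
# Option (b): restart the full optimization from the best few candidates
# and keep the overall best result; T1 is kept unless it is beaten.
reg.SetInitialTransform(T1, inPlace=False)
best_metric, best_transform = reg.MetricEvaluate(fixed, moving), T1
for _, t in candidates[:5]:  # number of restarts is an arbitrary choice
    reg.SetInitialTransform(t, inPlace=False)
    result = reg.Execute(fixed, moving)
    if reg.GetMetricValue() < best_metric:
        best_metric, best_transform = reg.GetMetricValue(), result
```

Note that the grid grows exponentially with the number of transform parameters (3^6 = 729 evaluations here), so keep the per-parameter offsets coarse or restrict them to the parameters you expect to be off.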
I would recommend trying the ITK-based Elastix and ANTs registration toolkits. While I always had to do parameter tuning when I used BRAINSTools (which is a thin wrapper over ITK’s registration framework), both Elastix and ANTs generally work well without any tuning, using their default registration parameter sets. If you want to quickly compare the performance of these toolkits using a convenient GUI, you can use 3D Slicer with the SlicerElastix and SlicerANTs extensions (BRAINSTools is bundled with the application).
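If you prefer to stay in Python rather than go through the Slicer GUI, here is a minimal sketch of running Elastix with one of its default parameter sets, assuming the itk-elastix package (`pip install itk-elastix`); the file names and the choice of the "rigid" preset are placeholder assumptions:

```python
import itk

fixed = itk.imread("fixed.nii.gz", itk.F)
moving = itk.imread("moving.nii.gz", itk.F)

# Default parameter map shipped with Elastix; "rigid" is one of the
# presets (others include "affine" and "bspline").
params = itk.ParameterObject.New()
params.AddParameterMap(params.GetDefaultParameterMap("rigid"))

registered, transform_params = itk.elastix_registration_method(
    fixed, moving, parameter_object=params)
itk.imwrite(registered, "registered.nii.gz")
```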