I am looking for a way to incorporate the results of image segmentation into an image registration framework and would love some feedback or advice on several possible avenues to pursue.
Similar to @dyoll’s question (Metrics for registration of labelmaps - Algorithms - ITK), consider a machine learning algorithm such as a neural network that produces a dense, multi-category segmentation of an image. Multi-category segmentation is typically evaluated during ML training with a categorical cross-entropy loss, but I do not think anything similar exists in ITK.
I am considering implementing my own categorical cross-entropy image-to-image metric, but a few major aspects have me hung up. First, the inputs to the metric would be asymmetric: in my case the fixed image would be the result of segmentation (a collection of probability maps) while the moving image would be a label map. Unfortunately, I think this means the same metric could not be used with the fixed and moving image types swapped. Second, I am unsure how to handle the gradient computations required to make this a v4 metric, but I would be willing to try if someone could provide some guidance. My goal is to make this a v4 metric so that image-based and segmentation-based metrics can be combined in a single registration through ObjectToObjectMultiMetricv4.
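To make the asymmetry concrete, here is a minimal numeric sketch (plain Python, not ITK API; all names are hypothetical) of what the metric value and its per-voxel derivative would look like. The fixed input is a per-voxel probability vector, the moving input is an integer label, and the per-voxel gradient shown is only the derivative with respect to the probabilities; a real v4 metric would still need to chain this with the moving-image gradient and the transform Jacobian.

```python
import math

EPS = 1e-12  # guard against log(0)

def categorical_cross_entropy(fixed_probs, moving_labels):
    """fixed_probs: one probability vector per voxel (one entry per category).
    moving_labels: one integer category label per voxel.
    Returns the mean negative log-likelihood over voxels."""
    total = 0.0
    for probs, label in zip(fixed_probs, moving_labels):
        total += -math.log(max(probs[label], EPS))
    return total / len(moving_labels)

def cross_entropy_gradient(probs, label):
    """Per-voxel derivative of -log(p[label]) w.r.t. each probability:
    -1/p at the labeled category, 0 elsewhere."""
    g = [0.0] * len(probs)
    g[label] = -1.0 / max(probs[label], EPS)
    return g

# Tiny example: 3 voxels, 2 categories
fixed_probs = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
moving_labels = [0, 1, 1]
value = categorical_cross_entropy(fixed_probs, moving_labels)
```

Note how the two inputs are structurally different types, which is why I do not think swapping fixed and moving would be meaningful here.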
A second approach I am considering is to create an ObjectToObjectMultiMetricv4 specialization that sets up a metric for each category. The implementation would handle parcellating the label map into separate binary maps and configuring component MeanSquaresImageToImageMetricv4 instances, each matching a category's binary map against its probability map. Although this probably would not scale to large category sets, it might suffice for a small number of categories.
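The arithmetic of this second approach can be sketched as follows (again plain Python, not ITK; data layout and helper names are hypothetical). The label map is split into one binary map per category, and the combined value mimics a sum of per-category mean-squares components, since ObjectToObjectMultiMetricv4 combines its components as a weighted sum.

```python
def parcellate(label_map, num_categories):
    """One binary map per category: 1.0 where the label matches, else 0.0."""
    return [[1.0 if lbl == c else 0.0 for lbl in label_map]
            for c in range(num_categories)]

def mean_squares(a, b):
    """Mean squared difference between two same-sized maps."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def multi_metric_value(prob_maps, label_map):
    """Sum of per-category mean-squares values (unit weights assumed)."""
    binary_maps = parcellate(label_map, len(prob_maps))
    return sum(mean_squares(p, b) for p, b in zip(prob_maps, binary_maps))

# 4-voxel example with 2 categories
prob_maps = [[0.9, 0.2, 0.8, 0.1],   # probability of category 0
             [0.1, 0.8, 0.2, 0.9]]   # probability of category 1
label_map = [0, 1, 0, 1]
value = multi_metric_value(prob_maps, label_map)
```

One design question I see with this approach is that the per-category components are redundant when the probabilities sum to one, which is part of why I suspect it will not scale well.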
Does anyone have advice on which of these approaches is more promising or more likely to work?