I have a question about the similarity measures. There is a "JointHistogramMutualInformation" metric. Does it compute the plain mutual information value, rather than a derived quantity (such as the Mattes mutual information value, a maximum/normalized mutual information value, etc.)? Any help would be appreciated.
This class computes the mutual information using the method described in P. Thevenaz and M. Unser, "Optimization of Mutual Information for Multiresolution Image Registration", IEEE Transactions on Image Processing, 9(12), 2000.
Thank you for your answer.
I have a doubt about this "JointHistogramMutualInformation": when I evaluate it on two identical images, the result is -0.8, but I think it should be -1. Do you know the reason?
Mutual Information is in the range [0, +inf). In ITK/SimpleITK we negate it because our optimizers are set to minimize. Various normalized mutual information versions have been developed, but they are not implemented in SimpleITK.
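If it helps to reproduce what you are seeing, here is a minimal sketch (assuming Python/SimpleITK and a placeholder grayscale image file name) of evaluating the metric for an image against an identical copy of itself:

```python
import SimpleITK as sitk

# Read a grayscale image and make an identical copy ("fixed.png" is a placeholder).
fixed = sitk.ReadImage("fixed.png", sitk.sitkFloat32)
moving = sitk.Image(fixed)

registration = sitk.ImageRegistrationMethod()
registration.SetMetricAsJointHistogramMutualInformation()

# An identity-like transform so the two images are compared as-is.
registration.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))

# Prints the negated mutual information: some negative number, not necessarily -1.
print(registration.MetricEvaluate(fixed, moving))
```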
Yes, I understand that mutual information is greater than or equal to 0, and that SimpleITK negates it so that a smaller value is better. I read the paper you mentioned; it contains a normalization factor, so I thought the metric would be normalized. But that still does not resolve my problem: the value for two identical images is -0.8, not -1. Do you have any opinion on this? Thank you in advance.
The SetMetricAsJointHistogramMutualInformation metric does not compute Normalized Mutual Information; it is plain (negated) Mutual Information, so its value can be anywhere in (-inf, 0]. There is no reason for it to be -1 even when the images are perfectly aligned.
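As a side note on why -1 is not the expected value: the discrete mutual information is

I(A, B) = \sum_{a,b} p(a,b) \log \frac{p(a,b)}{p(a)\,p(b)}

and for an image compared with itself this reduces (up to the binning and smoothing used by the estimator) to I(A, A) = H(A), the entropy of the image, which depends on the intensity distribution. So the negated metric for two identical images is roughly -H(A), not -1. A normalized variant such as NMI = (H(A) + H(B)) / H(A, B) would be bounded, but that is not what this class computes.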
I get it. Since it is not normalized, even two identical images will not give a value of -1. By the way, I wonder if you have a formula for the Mattes mutual information. I could not find it online. If you have it, I hope you can provide it; if not, it doesn't matter. Thank you for your help.
Is this the Mattes mutual information formula? It looks very similar to the formula for "JointHistogramMutualInformation". By the way, is the only difference between mutual information and Mattes mutual information a negative sign?
The differences are not in the formula; they are in the implementation, specifically in how the probability density functions are estimated: Parzen windowing (Mattes) vs. a discrete joint histogram (JointHistogram).
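For example, in SimpleITK you choose between the two estimators when configuring the registration (sketch only; the keyword values shown are the defaults as I recall them):

```python
import SimpleITK as sitk

registration = sitk.ImageRegistrationMethod()

# Mattes: the joint PDF is estimated with Parzen windowing (B-spline kernels).
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)

# JointHistogram: the joint PDF is estimated from a smoothed discrete joint histogram.
# registration.SetMetricAsJointHistogramMutualInformation(
#     numberOfHistogramBins=20, varianceForJointPDFSmoothing=1.5)
```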
If you are interested in the theory underlying the algorithms in ITK/SimpleITK, you will have to study on your own or take a course on medical image analysis.
This forum is not the venue for learning the theory behind the algorithms; it is intended to help you use the toolkits.