Alpha-Mutual Information for multimodal images with different numbers of feature channels

Hi,

There have been some studies on high-dimensional image registration using \alpha-MI (Staring et al., 2009). An implementation of alpha-Mutual Information is available in elastix: http://elastix.isi.uu.nl/doxygen/classitk_1_1KNNGraphAlphaMutualInformationImageToImageMetric.html. However, is it possible to employ \alpha-MI to register multimodal images that have different numbers of feature components? Is it a limitation of \alpha-MI that it requires both images to have the same number of channels? If so, could a new metric be developed for such a case?

Best regards,
Stas

@mstaring and @Niels_Dekker can probably respond the best.

Dear Stas,

I had to check the equations in the paper; see http://elastix.isi.uu.nl/marius/publications/2009_j_TMIb.php

It seems the number of features does not need to be the same for the two images at all. The joint entropy is computed from a simple concatenation of the two feature vectors, which therefore do not need to be of the same size. But please check the equations in some detail.
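
For reference, a sketch of the underlying definition (the notation p and q is mine, not copied verbatim from the paper): the Renyi-based \alpha-mutual information is

I_\alpha(X; Y) = \frac{1}{\alpha - 1} \log \int p_{X,Y}(x, y)^{\alpha} \, \bigl( p_X(x) \, p_Y(y) \bigr)^{1 - \alpha} \, dx \, dy,

where x \in \mathbb{R}^p collects the fixed-image features and y \in \mathbb{R}^q the moving-image features. The joint density is defined on the concatenated space \mathbb{R}^{p+q}, and nothing in this definition (or in the kNN-graph estimator of it) forces p = q.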

Did you try elastix with differently sized feature vectors? Did you run into a problem? The class elastix\Components\Metrics\KNNGraphAlphaMutualInformation\itkKNNGraphAlphaMutualInformationImageToImageMetric.h seems quite general, just from scrolling through it.
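
To make the point concrete, here is a small, self-contained toy sketch in Python (NumPy/SciPy/scikit-learn, which is not what elastix uses internally). It estimates mutual information from kNN distances between two feature sets of different dimension. For simplicity it uses the plain Shannon-MI route via the Kozachenko-Leonenko entropy estimator rather than the \alpha-MI graph estimator from the paper, purely to show that a kNN-based estimator does not care whether the two feature vectors have the same length:

```python
# Toy illustration only, not the elastix implementation: a kNN-based estimate
# of mutual information between two feature sets with a *different* number of
# channels (p = 3 vs q = 5). It uses the classical Kozachenko-Leonenko entropy
# estimator and Shannon MI = H(X) + H(Y) - H(X, Y); the alpha-MI graph
# estimator of Staring et al. (2009) is built from the same kNN distances.
import numpy as np
from scipy.special import digamma, gammaln
from sklearn.neighbors import NearestNeighbors


def knn_entropy(x, k=5):
    """One common form of the Kozachenko-Leonenko differential entropy estimate."""
    n, d = x.shape
    nn = NearestNeighbors(n_neighbors=k + 1).fit(x)   # +1: each point is its own neighbour
    dist, _ = nn.kneighbors(x)
    eps = dist[:, -1]                                  # distance to the k-th real neighbour
    log_unit_ball = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps))


def knn_mutual_information(feat_fixed, feat_moving, k=5):
    """MI between feature vectors that need not have the same number of channels."""
    joint = np.hstack([feat_fixed, feat_moving])       # simple concatenation: p + q columns
    return (knn_entropy(feat_fixed, k)
            + knn_entropy(feat_moving, k)
            - knn_entropy(joint, k))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    feat_fixed = rng.normal(size=(n, 3))                       # 3 features per fixed-image sample
    feat_moving = np.hstack([0.8 * feat_fixed[:, :2]           # 5 features per moving-image sample,
                             + 0.5 * rng.normal(size=(n, 2)),  # partly dependent on the fixed ones
                             rng.normal(size=(n, 3))])
    print("kNN MI estimate:", knn_mutual_information(feat_fixed, feat_moving))
```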

Best, Marius

Dear Marius,

It looks promising! I will check the equations and try elastix. Thank you very much for the clarification.

Currently, I’m happy to use SimpleITK, which lets me track the cost function and gives me direct access to the image data, features that are apparently lacking in the Python wrapper for Elastix.
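
For completeness, this is roughly the SimpleITK pattern I mean by tracking the cost function (a minimal sketch; the file names, transform and optimizer settings below are placeholders, not my actual registration setup):

```python
# Minimal sketch of tracking the cost function in SimpleITK.
# File names, transform type and optimizer settings are placeholders.
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)    # hypothetical 3D inputs
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

# Print the metric value at every iteration, i.e. track the cost function.
reg.AddCommand(
    sitk.sitkIterationEvent,
    lambda: print(reg.GetOptimizerIteration(), reg.GetMetricValue()))

final_transform = reg.Execute(fixed, moving)
```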

Best regards,
Stas