I’m evaluating the predictions of a lung lobe segmentation model using the HausdorffDistanceImageFilter.
So I’m passing the ground truth and the prediction (both the same size, containing 6 labels) into the filter.
The results make sense in that worse models yield a higher distance and better models a smaller one. But I’m puzzled about the unit of the results and about how multiple labels change them.
In the documentation, I only find the explanation for sets.
Additionally, what’s the difference between GetHausdorffDistance and GetAverageHausdorffDistance?
Can you also explain how this works with multiple labels? Is each label considered as a set, the Hausdorff distance then calculated per label/set for the given images, and finally the maximum of those returned?
From the documentation, I understood that all non-zero pixels are considered to belong to the same label. So if you want per-label measures, you would use a threshold filter to extract the appropriate label before measuring the distance.
I would be interested to understand the GetAverageHausdorffDistance function in more detail. Is it calculating the average distance over all points (pixels/voxels) in both masks directly? Or is it first calculating separate (directed) average distances from each of the two masks to the other, and then, in a second step, averaging across these two directed averages? The code suggests the second procedure, but I am not a C++ programmer and am not sure. And does GetAverageHausdorffDistance also work on the full point clouds (all pixels/voxels), as GetHausdorffDistance does, or does it work on the surface pixels/voxels instead?
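To make the two readings concrete, here is a small pure-Python sketch of the two interpretations I am asking about. This is not ITK's actual code; the point sets and helper names are made up, and the two definitions only coincide when both sets have the same number of points:

```python
def nearest_dist(p, pts):
    """Euclidean distance from point p to its nearest neighbour in pts."""
    return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 for q in pts)

def directed_average(src, dst):
    """Mean, over all points of src, of the distance to the closest point of dst."""
    return sum(nearest_dist(p, dst) for p in src) / len(src)

def avg_pooled(a, b):
    """Interpretation 1: pool all point-to-nearest distances, average once."""
    dists = [nearest_dist(p, b) for p in a] + [nearest_dist(q, a) for q in b]
    return sum(dists) / len(dists)

def avg_of_directed(a, b):
    """Interpretation 2: average the two directed averages."""
    return (directed_average(a, b) + directed_average(b, a)) / 2

# Unequal set sizes make the difference visible.
A = [(0, 0), (4, 0)]
B = [(0, 1), (0, 2), (0, 3), (4, 1)]
print(avg_pooled(A, B))       # (2 + 7) / 6  = 1.5
print(avg_of_directed(A, B))  # (1 + 1.75) / 2 = 1.375
```

So the distinction matters in practice whenever the two masks contain different numbers of foreground voxels, which is the usual case for segmentations.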