CurvatureAnisotropicDiffusion produces different results depending on volume size

Hi, in an attempt to preview what CurvatureAnisotropicDiffusion will produce with less processing time, I tried running it on a cropped volume to serve as a sample. However, the final result on the complete volume (with the same parameters) is a lot stronger and produces a blurry image.
Is there any way to predict the target parameters (conductance and time step) from those used on a smaller volume, based on their size ratio? Or is there any way to preview what a filter will produce?
The only way I could produce a sufficiently close preview was by setting the cropping ROI to cover the full axial plane and a thick (40+ voxel) vertical axis, which still takes a lot of time to compute.
Thanks for any input.

Here is what a cropped volume and the complete one look like with the same conductance/iterations/time step:

Hi @Amine,

It is worth investigating whether the cropping operation is preserving image pixel spacing and whether the curvature anisotropic diffusion filter is also using image spacing.


Hi, thanks for your answer.
The cropping operation preserves pixel spacing, and the spacing is not isotropic.
As for the filter itself, it seems to be affected by differences in axis size: if the x and y axes are the same size in the cropped and full volumes, the XY plane shows no blur difference; conversely, if the z axis is small, the XZ and YZ planes do show a blur difference.

I will try to investigate the use of pixel spacing in the filter.

To predict 3D curvature at a voxel position, you need to know the neighbors of that voxel, so it is expected that you cannot crop tightly to the region of interest: you need to add some padding (crop less). If you perform multiple iterations, you may need neighbors of neighbors too, which requires even thicker padding.

The more iterations you run, the more padding is needed. A 20-voxel padding on each side does not seem excessive, so what you experience may be normal.
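The padding argument can be checked numerically. Below is a minimal 2D Perona-Malik diffusion in NumPy (an illustrative stand-in for ITK's CurvatureAnisotropicDiffusion, not its actual implementation). Since each explicit iteration only reaches one voxel further, padding the ROI by `n_iter` voxels makes the cropped preview reproduce the full-volume result exactly inside the ROI:

```python
import numpy as np

def perona_malik(u, n_iter, k, dt=0.15):
    # Minimal explicit 2D Perona-Malik diffusion -- an illustrative
    # stand-in for ITK's CurvatureAnisotropicDiffusion, not the real thing.
    u = u.astype(np.float64)
    for _ in range(n_iter):
        p = np.pad(u, 1, mode='edge')           # replicate border voxels
        dn = p[:-2, 1:-1] - u                   # differences to 4 neighbors
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        g = lambda d: np.exp(-(d / k) ** 2)     # conductance function
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(0)
full = rng.normal(0, 10, (64, 64)) + 100 * (np.arange(64)[:, None] > 32)

n_iter = 10
full_out = perona_malik(full, n_iter, k=15)

# Preview: crop the 20x20 ROI plus n_iter voxels of padding on every side.
pad = n_iter
crop = full[20 - pad:40 + pad, 20 - pad:40 + pad]
crop_out = perona_malik(crop, n_iter, k=15)

# The interior of the padded preview matches the full-volume result exactly,
# because boundary artifacts propagate only one voxel per iteration.
print(np.allclose(full_out[20:40, 20:40], crop_out[pad:-pad, pad:-pad]))  # True
```

With fewer padding voxels than iterations, the boundary-replication artifact reaches into the ROI, which is consistent with the blur differences you observed on thinly cropped axes.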


There are a couple of parameters that, by default, are estimated from the image's properties and gradients; those are worth investigating.


@blowekamp Thanks, I will try to see if I can get the parameters estimated on the cropped volume and transfer them to the filter for the full volume; this looks promising.

@lassoan Indeed, I use 60 iterations, so that is a drawback. I understand that more padding means a better result, but does that mean the behavior is otherwise unpredictable? Even with a full slice and fair thickness, the difference between the cropped preview and the full volume is still significant.

Wow, that number of iterations is excessive. Over this many iterations the image may become very uniform, and due to the limited number of gray levels in an integer-type image, gradient estimation can become very inaccurate. Using a floating-point voxel type could help preserve information during this processing.
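The rounding effect is easy to reproduce. The sketch below uses plain repeated neighbor averaging in NumPy (a simplified isotropic stand-in, not the ITK filter) and rounds back to integers after every iteration, as effectively happens when a diffusion filter runs on an integer-typed volume; the integer pipeline collapses to a handful of gray levels while the float pipeline preserves them:

```python
import numpy as np

def smooth(u, n_iter, integer=False):
    # Repeated 4-neighbor averaging; with integer=True the result is
    # rounded back after every iteration, mimicking an int-typed volume.
    # (Simplified isotropic stand-in, not CurvatureAnisotropicDiffusion.)
    u = u.astype(np.float64)
    for _ in range(n_iter):
        p = np.pad(u, 1, mode='edge')
        u = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2])
        if integer:
            u = np.round(u)
    return u

rng = np.random.default_rng(1)
# Noisy image with a gentle intensity gradient across the columns.
img = rng.normal(0, 3, (64, 64)) + np.linspace(0, 20, 64)

out_float = smooth(img, 60)
out_int = smooth(img, 60, integer=True)

# The float pipeline retains far more distinct gray levels than the
# integer pipeline, whose gradients become coarse staircases.
print(len(np.unique(out_float)), len(np.unique(out_int)))
```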

Why do you need 60 iterations? What are you trying to achieve?

After experimenting with a lot of combinations, I found that using a very high iteration count with a very low (0.1) conductance produced excellent and controllable (via conductance) uniformization of intensity across structures, allowing easier separation of large structures that can tolerate millimetric loss in detail. This can be interesting for some structures; as you said, it induces limited gradient, and that is the goal. I did not have problems with the scalar range, though. I made a demonstration previewing many 60-iteration effects with varying conductance (link). As you can see, there is a "pastel" effect, which is bad for nuanced structures but will help quickly isolate larger ones (liver parenchyma, tumors…).

This effect is commonly referred to as "posterization", and it is a side effect of the limited gray levels in integer-type images. It is not good, as it violates the assumption that voxels can represent the details of the image with sufficient accuracy (which may have caused the filter behavior that surprised you).

The image should not be filtered to death, even if in certain circumstances that happens to lead to visually appealing results. Instead, you can use anisotropic filtering as a pre-processing step (e.g., to reduce noise) and then use various segmentation methods to delineate anatomical structures.


I actually use various filters, such as the ones you mentioned, to isolate other structures.
This one just proved more effective for (less frequent) tasks involving very large structures where detail does not matter: the intensities become so narrow that it is almost like converting the volume to a label map with same-intensity fields, so thresholding and region growing become very easy. It would be unusable without the preview technique, since the results can be very unpredictable.

Yes, it is exactly that.

Unpredictability is one reason not to misuse this filter for segmentation. Segmentation filters should provide better quality and more predictable results.

For example, using the "Grow from seeds" effect in Slicer, you should be able to segment large structures very robustly, and, very importantly, you have control over reassigning regions between segments if by default they were clustered incorrectly.


For better behavior, use the ITKAnisotropicDiffusionLBR filters instead.

Another related option is the ITKTotalVariation filters.
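For intuition about what total-variation filtering does, here is a tiny 1-D sketch in NumPy: gradient descent on a smoothed TV model (a toy illustration only; the actual ITKTotalVariation module uses different, more efficient solvers). It flattens noise-induced variation while preserving large jumps:

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, n_iter=1000, step=0.05, eps=0.5):
    # Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps^2):
    # a smoothed 1-D total-variation model. Toy sketch only -- not the
    # solver used by the ITKTotalVariation remote module.
    x = y.astype(np.float64).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps * eps)   # smoothed sign of the gradient
        div = np.zeros_like(x)
        div[:-1] += w                        # divergence of the TV term
        div[1:] -= w
        x -= step * ((x - y) - lam * div)
    return x

# Piecewise-constant signal with noise: TV denoising keeps the edges.
rng = np.random.default_rng(2)
y = np.repeat([0.0, 10.0, 3.0], 40) + rng.normal(0, 1, 120)
x = tv_denoise_1d(y)

tv = lambda v: np.abs(np.diff(v)).sum()
print(tv(x) < tv(y))  # total variation drops while the jumps survive
```

Unlike heavy anisotropic diffusion, the data-fidelity term keeps the result close to the input, so the flattening is controlled by a single weight rather than an iteration count.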
