I’m having trouble reproducing exactly the same results with N4BiasFieldCorrection on Windows vs. Linux. Stranger still, it works for some cases but not for others. I’ve already tried many things and checked previous posts about multithreading and related topics, but now I’m running out of ideas. All the details are below:
The code is available at https://github.com/rcorredorj/ITK_Sandbox.git. I’m compiling with ITK v4.11.0. The repo also contains two sample cases:
- 01 : Produces exactly the same results on Windows and Linux
- 02 : Does not produce exactly the same results on Windows and Linux. After subtracting the two resulting [float] images, the maximum difference is 0.00268555. That sounds negligible, but it might affect subsequent processing steps.
The machines I’m using to test are:
- Windows Machine: 4 Cores, 8 Logical processors. Windows 7
- Linux Machine: 48 Cores, 2 Threads per core. Ubuntu 18.04.1 LTS
An example of the command line used to run these tests:
N4BiasFieldCorrectionApp.exe -i inputImage.nii.gz -m mask.nii.gz -o out_linux.nii --num-threads 8
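For reference, the `--num-threads` flag maps onto ITK’s global thread settings. A simplified sketch of the idea (names and structure here are illustrative, not copied verbatim from the repo):

```cpp
// Sketch only: pin ITK 4.x to a fixed thread count before building filters.
#include "itkImage.h"
#include "itkMultiThreader.h"
#include "itkN4BiasFieldCorrectionImageFilter.h"

void SetupN4(int numThreads)
{
  // Cap the global pool first, so every internal filter that
  // N4BiasFieldCorrectionImageFilter creates inherits this limit.
  itk::MultiThreader::SetGlobalMaximumNumberOfThreads(numThreads);
  itk::MultiThreader::SetGlobalDefaultNumberOfThreads(numThreads);

  typedef itk::Image<float, 3>         ImageType;
  typedef itk::Image<unsigned char, 3> MaskType;
  typedef itk::N4BiasFieldCorrectionImageFilter<ImageType, MaskType, ImageType> N4Type;

  N4Type::Pointer n4 = N4Type::New();
  n4->SetNumberOfThreads(numThreads); // also pin the filter itself
  // ... SetInput / SetMaskImage / Update as in the repository code ...
}
```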
I added this last parameter to my executable because I noticed that the only way to obtain the same results on machines with different core counts was to pin the number of threads to a specific value.
When it is not set in the code, the results vary with the number of cores in the machine, i.e. two machines with different core counts (e.g. two Windows machines with 4 and 8 cores) give different results. With float images, the differences usually appear in the 3rd or 4th decimal.
Fixing the thread count this way reproduces the same results for some cases (e.g. case 01), but strangely not for others (e.g. case 02), even though the input images and the compiled code are identical.
Do you have any idea what could be causing the different results in certain cases?
Thanks in advance!!