The current way to get the default number of threads (if no environment variables are set) is to look at the hardware:
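Roughly something like this (illustrative only, not ITK’s actual code):

```cpp
#include <iostream>
#include <thread>

int main()
{
  // A purely hardware-based default: report the number of hardware threads.
  // Inside a container or a batch-scheduler slot this still reflects the
  // physical host, not what was actually allocated to the process.
  std::cout << std::thread::hardware_concurrency() << " hardware threads\n";
  return 0;
}
```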
This has not been a good default when ITK is used in a Docker container (where the default becomes the number of cores on the physical system, not what is allocated to the container), or in certain distributed clustering environments where only a certain number of cores/processors are allocated to the task. The latter case has been supported for schedulers that set the “NSLOTS” environment variable.
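As a sketch of that NSLOTS-style behavior (the function name and fallback policy here are mine, not ITK’s):

```cpp
#include <cstdlib>
#include <iostream>
#include <string>
#include <thread>

// Sketch: honour a scheduler-provided NSLOTS value when set, otherwise fall
// back to the hardware count.
unsigned int DefaultNumberOfThreads()
{
  if (const char * nslots = std::getenv("NSLOTS"))
  {
    try
    {
      const int n = std::stoi(nslots);
      if (n > 0)
      {
        return static_cast<unsigned int>(n);
      }
    }
    catch (...)
    {
      // Malformed value: ignore it and fall through to the hardware count.
    }
  }
  const unsigned int hw = std::thread::hardware_concurrency();
  return hw > 0 ? hw : 1;
}

int main()
{
  std::cout << "default threads: " << DefaultNumberOfThreads() << "\n";
  return 0;
}
```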
Looking into this issue from Python, I found the following:
I am fine with this change, as long as it works across multiple platforms. A Windows equivalent might be GetProcessAffinityMask(). Also possibly relevant is GetSystemInfo(). This discussion might be relevant too, if someone is looking into implementing it.
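A rough, untested sketch of what the GetProcessAffinityMask() approach could look like on Windows:

```cpp
#include <windows.h>
#include <iostream>

int main()
{
  // Count the CPUs this process is actually allowed to run on, rather than
  // the number present in the machine.
  DWORD_PTR processMask = 0;
  DWORD_PTR systemMask = 0;
  if (!GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask))
  {
    std::cerr << "query failed; fall back to a hardware count\n";
    return 1;
  }
  unsigned int allowed = 0;
  for (; processMask != 0; processMask >>= 1)
  {
    allowed += static_cast<unsigned int>(processMask & 1u);
  }
  std::cout << allowed << " processors in the process affinity mask\n";
  return 0;
}
```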
The np in pthread_getaffinity_np means ‘non-portable’, and indeed this function does not seem to exist on the BSDs, including macOS.
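For reference, a minimal Linux-only sketch of that call (it relies on glibc’s GNU extensions; compile with -pthread):

```cpp
#include <pthread.h>
#include <sched.h>
#include <iostream>

int main()
{
  cpu_set_t cpuSet;
  CPU_ZERO(&cpuSet);
  // Ask how many CPUs the current thread is allowed to run on.
  if (pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuSet) != 0)
  {
    std::cerr << "affinity query not available\n";
    return 1;
  }
  std::cout << CPU_COUNT(&cpuSet) << " CPUs in the affinity mask\n";
  return 0;
}
```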
But anyway, thread affinity seems orthogonal to what you’re talking about, which is the number of threads available for use by ITK.
(Really the whole question is somewhat ill-posed, because you don’t know what other processes on the system are doing, or how many threads they need. That’s one nice thing about GCD on macOS: you get shared, global thread pools. But I digress…)
This is currently used in some cases. Again, it queries the hardware, not what the OS has allocated to the process.
I don’t think threads are shared between processes. The size of the thread pool in each process may be determined by the OS, based on what other tasks it is currently (and dynamically) doing.
Yes, the OS functions are not portable.
I would likely defer that to someone more invested in, or better able to test on, Windows than myself.
Well, this is getting off-topic for ITK, but: GCD’s model is less about directly using threads and more about work pools. You put items in a work pool and the OS decides how/when to schedule them.
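For example, a small libdispatch sketch (macOS, plain C API, illustrative only): you enqueue work items and the OS schedules them on its own pool of threads.

```cpp
#include <dispatch/dispatch.h>
#include <cstdio>

// Each work item is just a function pointer plus a context value.
static void DoWorkItem(void * context)
{
  const long index = reinterpret_cast<long>(context);
  std::printf("work item %ld\n", index);
}

int main()
{
  // A shared, OS-managed concurrent queue: we never create threads ourselves.
  dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
  dispatch_group_t group = dispatch_group_create();
  for (long i = 0; i < 8; ++i)
  {
    dispatch_group_async_f(group, queue, reinterpret_cast<void *>(i), DoWorkItem);
  }
  // Wait for all submitted items; the OS decided how many threads ran them.
  dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
  return 0;
}
```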
Trying to answer the question of ‘how many threads are available for ITK’ is still impossible; at best you can guess. The answer changes over time: CPU cores can be powered off and on, processes come and go, and other processes may have lots or little work to do (which your process doesn’t know about), etc. Only the OS knows all these things.
But does it? My understanding of those affinity APIs is that they are more about getting two threads to run on the same CPU, so that if they are working on the same data they can share L1 cache and the like.
They don’t seem related to answering “how many threads can I use?”.