Why does my worker memory limit not increase?

I am a Dask newbie trying out the dask.distributed library, and I would like to create a Client whose workers each have a 4 GB memory limit. After some Googling around, I wrote the following code in a Jupyter notebook:

from dask.distributed import Client, LocalCluster

# 5 in-process workers, 4 threads each, with a 4 GB memory limit per worker
local_cluster = LocalCluster(n_workers=5, processes=False, memory_limit='4GB', threads_per_worker=4)
client = Client(local_cluster)
client  # display the cluster summary in the notebook

When I run this code, I get the following output:

Client
   Scheduler: inproc://172.17.0.2/3319/1
   Dashboard: http://172.17.0.2:8787/status
Cluster
   Workers: 5
   Cores: 20
   Memory: 10.43 GB

Since I set the per-worker memory limit to 4 GB and I have 5 workers, I would expect the total cluster memory to be 20 GB (5 × 4 GB). In fact, no matter what value I pass for memory_limit, the total still shows 10.43 GB. What am I doing wrong?
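For reference, here is a minimal sketch of how I have been checking what each worker actually got, using `Client.scheduler_info()` (the cluster size and memory_limit value below are just placeholders, not my real settings):

```python
from dask.distributed import Client, LocalCluster

# Small in-process cluster for illustration; '1GB' is a placeholder limit
cluster = LocalCluster(n_workers=2, processes=False,
                       memory_limit='1GB', threads_per_worker=1)
client = Client(cluster)

# scheduler_info() reports, per worker, the memory limit it was given (in bytes)
for addr, info in client.scheduler_info()["workers"].items():
    print(addr, info["memory_limit"])

client.close()
cluster.close()
```

The per-worker values printed here always sum to the same 10.43 GB total shown in the Client summary, regardless of what I request.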

This sounds like a question for the Dask community.

Oh yeah, sorry! I’m working on a project that involves both Dask and ITK, and I accidentally posted the question on the wrong board.
