ITK in production

Hi everyone

I would like to start a discussion concerning the deployment of an ITK pipeline in production.

How do you deploy your ITK pipeline to scale, parallelize, and automate jobs/tasks?

I am doing segmentation using a pipeline composed of ITK-based filters, homemade NumPy filters, scikit-image, and OpenCV. I started using prefect.io for the execution, monitoring, and scheduling of jobs, and I would like to know which dataflow engine you use for monitoring, scheduling, and automation.
@matt.mccormick
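
For context, here is roughly the shape of one job as I run it today. This is a simplified sketch: the threshold step and the file paths are placeholders for the real pipeline, and the decorators assume Prefect 2-style `flow`/`task`.

```python
import itk
from prefect import flow, task


@task
def segment(in_path: str, out_path: str, lower: int = 100, upper: int = 255) -> None:
    # Read the input volume with ITK
    image = itk.imread(in_path)
    # Simple threshold-based segmentation, standing in for the real filter chain
    mask = itk.binary_threshold_image_filter(
        image,
        lower_threshold=lower,
        upper_threshold=upper,
        inside_value=1,
        outside_value=0,
    )
    itk.imwrite(mask, out_path)


@flow
def segmentation_flow(in_path: str, out_path: str) -> None:
    segment(in_path, out_path)


if __name__ == "__main__":
    segmentation_flow("input.nrrd", "mask.nrrd")
```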

ITK will not be a limiting factor when you choose your pipeline framework, because ITK can be used in a wide variety of environments.

You can narrow down the choice by looking at tools that are developed specifically for your application area. For example, for implementing medical image computing pipelines, you could have a look at the Joint Imaging Platform or the ecosystem building up around MONAI and Clara.

@Stephan_Hahn1 I have used Dask for scaling, and cloud tools like Coiled. It looks like Coiled also has Prefect support. I have not used Prefect, but I have heard good things. What has your experience been with Prefect for imaging tasks?
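
To make the Dask part concrete, here is a minimal sketch of block-wise filtering on a chunked array. The array shape, chunking, and the Gaussian step are just illustrative:

```python
import dask.array as da
from skimage.filters import gaussian

# A large volume, chunked so that each block fits comfortably in memory
volume = da.random.random((512, 512, 512), chunks=(128, 128, 128))

# Apply the filter per block; the overlap ("halo") keeps chunk borders consistent
smoothed = volume.map_overlap(gaussian, depth=8, boundary="reflect", sigma=2.0)

# Nothing has run yet; compute() executes the graph in parallel
result = smoothed.compute()
print(result.shape)
```

The same graph can be handed to a distributed scheduler without changing the code, which is where Coiled comes in for scaling out to a cluster.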

@lassoan Thanks for sharing. I did not know about JIP. Are you a member of the JIP? As for MONAI and Clara, I have played with them a little in the past. I think they are mostly for ML, and that is clearly my second step: use machine learning to reproduce the segmentation results from an ITK pipeline.

@matt.mccormick Prefect is very easy to get started with. So far I have only tested one whole pipeline as a single task. I don't know, for example, whether it interfaces well with chunked Dask arrays, and I don't know whether there is any benefit to executing each ITK filter as its own Prefect task. But for a small ITK pipeline that can be packaged as an AWS Lambda function, Prefect can be useful for orchestrating and scaling the runs.
Once I have done more tests, I will update the discussion.
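
For anyone following along, this is the shape of what I plan to test next: fanning independent cases out to a Dask cluster from a Prefect flow. A sketch only; the names assume Prefect 2.x with the prefect-dask integration, and `process_case` is a stand-in for the real pipeline.

```python
from prefect import flow, task
from prefect_dask import DaskTaskRunner


@task
def process_case(path: str) -> str:
    # Stand-in for the real ITK/NumPy/scikit-image segmentation pipeline
    print(f"processing {path}")
    return path.replace(".nrrd", "_mask.nrrd")


# Each submitted task becomes a Dask future, so independent cases run in parallel
@flow(task_runner=DaskTaskRunner(cluster_kwargs={"n_workers": 4}))
def batch_segmentation(paths: list[str]) -> None:
    for path in paths:
        process_case.submit(path)


if __name__ == "__main__":
    batch_segmentation(["case1.nrrd", "case2.nrrd"])
```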


I haven’t used JIP; I just know the group at DKFZ that develops it.

Machine learning is the main focus of interest everywhere nowadays; that’s where all the funding is directed. You can leverage this and run “classic” processing on infrastructure that is advertised for “AI” (machine learning systems usually include some classic pre/postprocessing steps anyway).