Hi,
I’ve searched but couldn’t find any answer matching what I’m after.
I’ve created a suite of Docker containers (Dash web apps, back-end processing services, Clara framework workflows, JupyterLab servers) which all use source builds of ITK that combine some remote modules with some of my own custom-written external modules. Some of the containers also require Python wrapping. The configuration is a bit non-standard: I make use of some GPU-enabled filters and Intel TBB, and I need to Python-wrap a few more types and dimensions than the default.
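For context, the configure/build step in my builder image looks roughly like the sketch below. The paths, the `Module_MyExternalModule` name and the exact set of wrapping switches are placeholders (and I’ve left the TBB and remote-module options out), but it shows the kind of non-default configuration I mean:

```dockerfile
# Sketch of the builder-stage ITK configure/build/install step.
# /src/ITK, /build/ITK, /opt/itk and Module_MyExternalModule are illustrative;
# assumes a reasonably recent CMake (>= 3.15 for "cmake --install").
RUN cmake -S /src/ITK -B /build/ITK \
      -DCMAKE_BUILD_TYPE=Release \
      -DBUILD_SHARED_LIBS=ON \
      -DITK_WRAP_PYTHON=ON \
      -DITK_USE_GPU=ON \
      -DITK_WRAP_IMAGE_DIMS="2;3;4" \
      -DITK_WRAP_double=ON \
      -DITK_WRAP_unsigned_short=ON \
      -DModule_MyExternalModule=ON \
 && cmake --build /build/ITK -j"$(nproc)" \
 && cmake --install /build/ITK --prefix /opt/itk
```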
Currently, I build and install everything fine in my “builder” container and then deploy that build to the other containers by copying the required files (headers, libs, help files, etc.) via Dockerfile commands.
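Roughly, the current deployment into a downstream container looks like this (image names, base image and paths are made up for illustration):

```dockerfile
# Current approach (sketch): copy the installed tree out of the builder image.
# "my-itk-builder", python:3.8-slim and the /opt/itk prefix are illustrative.
FROM my-itk-builder:latest AS builder

FROM python:3.8-slim
COPY --from=builder /opt/itk/ /usr/local/
# ...plus the wrapped Python files copied onto PYTHONPATH by hand,
# which is the part that pip knows nothing about.
ENV LD_LIBRARY_PATH=/usr/local/lib
```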
However, I would much prefer to build an internal-use “wheel” (I only need a single flavor) in my builder container and then deploy it to all of the containers that need a Python-enabled version of my full ITK environment. That way I would have a “properly” installed ITK package that pip knows about, one that doesn’t get clobbered by the public package when other packages that also depend on ITK are installed.
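In other words, I’d like the downstream Dockerfiles to reduce to something like the sketch below (again, the wheel location and whether the requirement name stays “itk” depend on how the wheel ends up being built):

```dockerfile
# Desired approach (sketch): the builder image produces a wheel under /wheels,
# and downstream containers just pip-install it. Paths/names are illustrative.
FROM my-itk-builder:latest AS builder

FROM python:3.8-slim
COPY --from=builder /wheels/ /tmp/wheels/
# Installing it under the "itk" name is what (I hope) stops pip from pulling
# the public package in later when another dependency asks for itk.
RUN pip install --no-index --find-links=/tmp/wheels itk \
 && rm -rf /tmp/wheels
```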
I’ve spent a couple of days fiddling around with ITKPythonPackage, cloning the repo into my builder container and trying to configure it to do what I want, but I’m not getting there.
Firstly, is it possible to get ITKPythonPackage to do what I want? I’m assuming it’s pretty much the analog of what you are doing for the nightly builds.
If so, can someone give me some pointers as to what I need to set, configure and execute to build a wheel and bundle the libs, etc.?
FYI, I’ve tried using the superbuild option and setting ITK_SOURCE_DIR and ITK_BINARY_DIR to my successfully built source and build trees, but I’m not sure what to do next or, to be honest, what it is actually trying to do.
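For what it’s worth, this is roughly the shape of what I attempted inside the builder image. I’m not at all sure the invocation is right (that’s really my question), and the paths are illustrative:

```dockerfile
# Rough shape of my ITKPythonPackage attempt (quite possibly wrong).
# /src/ITK and /build/ITK are my existing ITK source and build trees.
RUN git clone https://github.com/InsightSoftwareConsortium/ITKPythonPackage.git /ipp \
 && cd /ipp \
 && python setup.py bdist_wheel -- \
      -DITK_SOURCE_DIR:PATH=/src/ITK \
      -DITK_BINARY_DIR:PATH=/build/ITK
```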
Any help achieving this would be much appreciated. If there is another way, I’d be interested in that too!
Regards,
Darren Thompson