Advice on a workflow for copying a region-of-interest across different DICOM series

Hello everyone,

I’m working on a medical image processing workflow in Python using SimpleITK and could benefit from the experience of others.

The goal: To copy a segment from a “Source” DICOM SEG to a “Target” DICOM SEG, preserving the location of the region-of-interest (ROI) in 3D space whilst accounting for the two series’ different voxel spacings.

I came up with a workflow and am working my way through it. I’m happy to share the steps if anyone is interested in providing specific feedback, but I’ve left them out for now to keep this question as concise as possible.

I have unfortunately made a few mistakes along the way (trying to solve the wrong problems) that cost me dearly in time. To avoid this going forward, I wanted to reach out to the community so I don’t end up “re-inventing the wheel” or going off-piste.

My question: Can anyone provide references (e.g. journal publications) to any established methods/workflows for copying and manipulating ROIs?

You could look into dcmqi, as well as Slicer ProjectWeek pages 1, 2.

The workflow could be simple: read the SEG into physical space, then map into the other image using ITK’s TransformPhysicalPointToIndex().

If you run your code in 3D Slicer’s Python environment, this probably requires 10-15 lines of code in total (import segmentations from DICOM, copy/move segments between segmentations, export to DICOM). DICOM references, terminologies, UIDs, etc. are all properly taken care of. If you have any questions, you can ask on the Slicer forum.

There are many other useful features in this Python virtual environment; for example, you can get your data directly from a DICOMweb server, process it, and push the new segmentation back to the server. You can do all of this from the GUI, the Python console, scripts, or Jupyter notebooks (Slicer provides a Python 3 Jupyter kernel that gives access to all data loaded into the scene, so you can jump between the application GUI for debugging, 3D visualization, etc., while working on your notebook).


Thanks to both @dzenanz and @lassoan for your helpful suggestions.

I briefly explored dcmqi and 3D Slicer previously, and although I can see that they are very powerful tools that would get me the result I’m looking for, my Python code will eventually have to be converted to JavaScript so it can be integrated into a wider project.

So although I’m not 100% certain that the dcmqi + Slicer route is the “wrong” approach for me, I suspect that it is. Please feel free to disagree. :slight_smile:

If you agree that sticking with Python + SimpleITK is the better approach in the long run, and if you would be happy to comment on my proposed workflow, I can share it here.

The dcmqi + Slicer route is basically the same as Python + SimpleITK, since Slicer = Python + SimpleITK + a number of additional libraries (many of them optional). These are all desktop/server-side technologies that work together nicely. The real barrier is between the web browser and the desktop/server (and between JavaScript and Python/C++).

We collaborate closely with web platform developer groups in medical image computing (OHIF, dcm.js, Kheops, etc.), so we are well aware of the current status, challenges, and trends. Web platforms started from scratch, and within a few years they can now do basic (2D-oriented) image viewing, annotations, and volume rendering (a feature set equivalent to desktop platforms of about 10-15 years ago). A rethinking/retooling is now in progress to break into true 3D processing (e.g., moving to 3D libraries such as itk.js and vtk.js). With this comes a convergence of web and desktop platforms, especially in the lower-level libraries. We’ll probably end up with all basic features (3D-oriented image viewing, annotation, and processing) in web platforms within a couple of years, while desktop/server will remain for high-end, specialty applications and large/complex data processing.

Since your problem is already solved on the desktop/server side, one option is to leverage that (e.g., using 3D Slicer features via web services). Several web applications take this approach today.

Another option is to wait a few years for web platforms to mature and have stronger 3D support (adopt vtk.js, have segment editing infrastructure, etc.). You can speed this up by contributing to these web platforms and pushing development of specific features that you need (e.g., segmentation data management).

A third option is to develop a solution that only works for your project, without aiming for general usability, stability, sustainability, etc. Although this is not a long-term solution and does not move the field of medical image computing forward, you may fulfil your project’s needs much faster than by developing (or waiting for) general solutions, so it may be a valid business strategy, too.


Thanks very much @lassoan for your insights. You’ve given me much to think about.