itk.js readImageDICOMFileSeries: problem reading a big series

Hi everyone,

I am trying itk.js with this example (ITK Read DICOM).
I have a Node.js server and launch a simple webpage to open a DICOM series.
When I try to load a big DICOM series (more than 700 images), the loader does not work.
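
For reference, this is roughly how the series is loaded from the file input (a minimal sketch; the exact signature of readImageDICOMFileSeries depends on the itk.js version, and older releases take a web worker as the first argument):

    // Minimal sketch: load a DICOM series from an <input type="file" multiple>.
    // Older itk.js releases take a web worker (or null) as the first argument
    // and resolve to { image, webWorker }; check the docs for your version.
    import readImageDICOMFileSeries from 'itk/readImageDICOMFileSeries'

    const fileInput = document.querySelector('input[type="file"]')
    fileInput.addEventListener('change', (event) => {
      const files = Array.from(event.target.files)
      readImageDICOMFileSeries(null, files)
        .then(({ image, webWorker }) => {
          webWorker.terminate()
          console.log('Series loaded, size:', image.size)
        })
        .catch((error) => console.error(error))
    })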

In the Chrome console, I get this error:

"Error: Could not read file: /work/11.dcm
    at readImageEmscriptenFSDICOMFileSeries (http://localhost:8081/itk/WebWorkers/ImageIO.worker.js:59:356)
    at readDICOMImageSeries (http://localhost:8081/itk/WebWorkers/ImageIO.worker.js:32:3478)
    at http://localhost:8081/itk/WebWorkers/ImageIO.worker.js:32:3827
    at o (http://localhost:8081/itk/WebWorkers/ImageIO.worker.js:2:2754)
    at http://localhost:8081/itk/WebWorkers/ImageIO.worker.js:2:3342
    at http://localhost:8081/itk/WebWorkers/ImageIO.worker.js:2:3404"

I tried updating itk to version 10.2.2, but nothing changed. I also tried to open other DICOM files and got the same error. I can open these DICOM files using Python or C++ with no problem at all.

I also tried the readImageLocalDICOMFileSeries function, but I get another error:

TypeError: fs.readdirSync is not a function
    at eval (/myapp/node_modules/itk/readImageLocalDICOMFileSeries.js:22:21)
    at new Promise (<anonymous>)
    at readImageLocalDICOMFileSeries (/myapp/node_modules/itk/readImageLocalDICOMFileSeries.js:18:11)
    at HTMLInputElement.eval (/myapp/src/index.js:260:78)
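
For context, this is roughly what I tried; since the function relies on Node's fs module internally, I suspect it is only meant to run in a Node process, not in code bundled for the browser (a minimal sketch, assuming it takes the path of a directory containing the series):

    // Node-only sketch: readImageLocalDICOMFileSeries uses the fs module, so it
    // cannot work inside a browser bundle. The directory argument and the shape
    // of the resolved value are assumptions; check the itk.js docs.
    const readImageLocalDICOMFileSeries = require('itk/readImageLocalDICOMFileSeries')

    readImageLocalDICOMFileSeries('/path/to/dicom/series')
      .then((image) => {
        console.log('Series loaded, size:', image.size)
      })
      .catch((error) => console.error(error))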

Is it possible to load these “big” DICOM series?
Thank you

Hi Romain,

Are the 700 files from the same volume?

Thanks,
Matt

Yes, they are from the same series.

You may be running out of memory. Performance and memory optimizations are underway for itk.js and the DICOM series reader. I will follow up when they are released.

OK, great, thank you. Do you think it would work if I load the series image by image and send each slice to the server? The problem with this solution is that on the server side I must sort all the slices before reconstructing the volume.
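
For the sorting step, I imagine something along these lines on the server (a sketch; it assumes the per-slice ImagePositionPatient and ImageOrientationPatient tags have already been parsed into plain arrays):

    // Sketch: order slices along the scan axis before stacking them into a volume.
    // Assumes each slice object carries its parsed DICOM tags:
    //   position    = ImagePositionPatient    (0020,0032), [x, y, z]
    //   orientation = ImageOrientationPatient (0020,0037), [rx, ry, rz, cx, cy, cz]
    function sortSlices (slices) {
      // Slice normal = cross product of the row and column direction cosines.
      const [rx, ry, rz, cx, cy, cz] = slices[0].orientation
      const normal = [
        ry * cz - rz * cy,
        rz * cx - rx * cz,
        rx * cy - ry * cx
      ]

      // Sort by the projection of each slice position onto the normal.
      const distance = (slice) =>
        slice.position[0] * normal[0] +
        slice.position[1] * normal[1] +
        slice.position[2] * normal[2]

      return slices.slice().sort((a, b) => distance(a) - distance(b))
    }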

Possibly. This approach is what I am working towards in the next revision.

I am continuing to investigate this error and found that the problem also occurs with just two images of my DICOM series. I can reproduce the error with the itk-vtk-viewer example. Is it possible that the pixel data encoding is at fault?
I can send you some slices of this series for debugging purposes.

If you load only one slice in the itk-vtk-viewer example, does it load correctly?

No, I have the same problem with only one slice (I tested different slices from this series with the same error).

Which scanner generated these images and what is the modality?

I guess sharing a few slices would be easier.

If they are not already anonymized, see whether anonymization preserves the problem-causing “quality”.

It’s a CT scan generated by a GE scanner. I am attaching one anonymized slice: slice_ano.dcm (201.9 KB)

Maybe the problem is that it has 32-bit pixel intensities? That is not needed, since the pixel intensities only range from -3024 to 1308. Just converting the pixel type to short would save half the memory, even if that is not what breaks the loading.
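
As an illustration of the memory saving, once the image is loaded in JavaScript the cast is just a typed array conversion (a sketch; the exact componentType value comes from itk/IntTypes and may differ between itk.js versions):

    // Sketch: cast an itk.js image from 32-bit to 16-bit integers in memory.
    // The intensities -3024..1308 fit in a signed 16-bit integer, so this halves
    // the size of the pixel buffer. The componentType value is an assumption;
    // check itk/IntTypes for your itk.js version.
    import IntTypes from 'itk/IntTypes'

    function castToInt16 (image) {
      image.data = Int16Array.from(image.data)
      image.imageType.componentType = IntTypes.Int16
      return image
    }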


I changed the pixel type to short, but unfortunately nothing changed.

I continued to investigate this problem and found something weird in the itk.js code, in the itkDICOMImageSeriesReaderJSBinding.cxx file:

Line 487: 
case itk::CommonEnums::IOComponent::LONGLONG:
    {
    typedef itk::VectorImage< signed long, ImageDimension > ImageType;

If the IOComponent is LONGLONG, why is the ImageType set to signed long instead of long long?

I think you found a bug. PR is here. @matt.mccormick

This is intentional since JavaScript/WebAssembly/Emscripten does not have full 64-bit integer support.

Hi @Athius,

itk.js 12.2.0 was recently released with improved DICOM support. Please give it a try. Note that the arguments to readImageDICOMFileSeries changed recently. Your slice_ano.dcm reads fine now.
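
The updated call looks roughly like this (a sketch; check the current documentation for the exact signature and return values):

    // Sketch of the newer readImageDICOMFileSeries call: the web worker argument
    // was dropped and a worker pool is returned alongside the image. The exact
    // return shape is an assumption; see the itk.js docs for your version.
    import readImageDICOMFileSeries from 'itk/readImageDICOMFileSeries'

    const fileInput = document.querySelector('input[type="file"]')
    const files = Array.from(fileInput.files)
    readImageDICOMFileSeries(files)
      .then(({ image, webWorkerPool }) => {
        console.log('Series loaded, size:', image.size)
      })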

Thanks,
Matt

Hi Matt,
Sorry for the late answer. The reading works fine now! Thank you so much.

For the speed issue, we read the full series client side, then split the image data buffer into multiple chunks, compress each chunk, base64-encode it, and send each chunk (with the chunk size, the pixel type, the orientation matrix, the origin, and the spacing) to the server over a WebSocket to do some processing (segmentation and other “heavy” operations).
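
Roughly, the client side of that pipeline looks like the sketch below (simplified; it assumes pako for deflate compression, and the message format and field names are only illustrative):

    // Sketch: send an itk.js image to the server in compressed, base64-encoded
    // chunks over a WebSocket. Assumes pako for deflate compression; the message
    // format is only illustrative.
    import pako from 'pako'

    function base64FromBytes (bytes) {
      let binary = ''
      for (let i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i])
      }
      return btoa(binary)
    }

    function sendImageInChunks (socket, image, chunkSize = 1 << 20) {
      const bytes = new Uint8Array(image.data.buffer)
      const totalChunks = Math.ceil(bytes.length / chunkSize)

      // Metadata sent once so the server can rebuild and orient the volume.
      socket.send(JSON.stringify({
        type: 'header',
        totalChunks,
        chunkSize,
        componentType: image.imageType.componentType,
        size: image.size,
        spacing: image.spacing,
        origin: image.origin,
        direction: image.direction // orientation matrix; shape depends on the itk.js version
      }))

      for (let i = 0; i < totalChunks; i++) {
        const chunk = bytes.subarray(i * chunkSize, (i + 1) * chunkSize)
        const compressed = pako.deflate(chunk)
        socket.send(JSON.stringify({
          type: 'chunk',
          index: i,
          data: base64FromBytes(compressed)
        }))
      }
    }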
