SimpleITK incorrectly identifies DICOM pixel type

I’m a developer on a software package that has started using SimpleITK to read in various image types. We have an explicit list of pixel types that we support, and if, after reading an image using SimpleITK, we find that the image is of a pixel type that we do not support, we throw an error.

A user is attempting to read a DICOM image set with 16-bit unsigned integer pixels, but our software is throwing an error, telling them that we don’t support 32-bit signed integer images.

I stripped the problem down to a very small bit of code found here in this GitHub gist:

When this program is given the directory containing DICOM images, it simply prints out the pixel type. This behaves as expected for all of the DICOM image sets that I have, except for the image set provided for me by this user. For that set, it says that the image’s pixel type is 32-bit signed integer, when in reality it’s 16-bit unsigned integer.

I have verified that the image data the user provided is in fact 16-bit. ImageJ identifies it as 16-bit. The DICOM headers contain the following:

(0028,0100) Bits Allocated: 16
(0028,0101) Bits Stored: 16
(0028,0102) High Bit: 15

And given the number of pixels and the files’ sizes, the files could not possibly hold 32-bit data.

With the user’s permission, I’ve anonymized the data and included a few of the slices here. (1.2 MB)

I’ve confirmed that this issue occurs with the entire DICOM set, with these few slices that I’ve included, and with a single slice.

I’m using SimpleITK 2.2, and I’m compiling on Ubuntu 22.04 with g++ 11.4.0.

Thank you for any help you can provide!

Michael Herron


Rescale slope and intercept are applied; see the ComputeBestFit function in GDCM’s gdcmRescaler.cxx.

For this image:
pf is PixelFormat::UINT16 (with Bits Allocated 16, Bits Stored 16),
slope is 1,
intercept is -1024

  const double pfmin = slope >= 0. ? (double)pf.GetMin() : (double)pf.GetMax();
  const double pfmax = slope >= 0. ? (double)pf.GetMax() : (double)pf.GetMin();
  const double min = slope * pfmin + intercept;
  const double max = slope * pfmax + intercept;
which gives:
pfmin = 0
pfmax = 65535
min = 1 * 0 + (-1024) = -1024
max = 1 * 65535 + (-1024) = 64511

That range, [-1024, 64511], fits in neither int16_t nor uint16_t, which is why the output type is promoted to signed 32-bit integer (int32_t).

GDCM is a “third-party” library, BTW.


Hello @mherron,

Just to add to @mihail.isakov’s answer, an accessible discussion of how to interpret the pixel data in DICOM is available here; it is part of a very well-written DICOM tutorial.


@zivy Thanks for the info! I’ll check that out.

Thank you for the explanation. This makes sense now.

The actual data in this image set only spans a range of roughly 21,000 values, so after the rescale it could in theory be represented as a 16-bit signed integer. It’s a bit disappointing that GDCM defaults to 32-bit in this case, but there’s not much that I can do about that.

I looked for a way to read this image using SimpleITK without having it apply the rescale, but I couldn’t find one. Do you happen to know if there is a way to read the pixel data directly, without applying the slope and intercept?

Thank you,

Michael Herron

Hello @mherron,

Unfortunately, there is no way to circumvent the application of slope and intercept; it is part of the image reading and is not exposed to the user. This is intentional.

Most ITK/SimpleITK users are concerned with image analysis and do not want to deal with all the intricacies of reading DICOM data from file and applying the relevant transformations to get the actual pixel values (this is not trivial).


@zivy Thanks again for the prompt response.

I figured that was probably the case, and I was hesitant to try an approach that would circumvent the calibration function anyway, for the same reasons that you described. I just wanted to be aware of any options that I had available to me.

Thank you again!