Get Image Metadata of the whole DICOM series [SimpleITK]


Is there a way to adapt the example at Read Image Meta-Data Dictionary and Print so that it shows image metadata of the whole DICOM series?

Specifically, with the above example it is possible to read a single DICOM slice and get its image metadata, which, of course, won’t reflect the image metadata of the whole series. For example, GetSize() would show (512, 512, 1), whereas I would like it to show the size of the whole series, e.g. (512, 512, 120).

I tried the following, but ImageSeriesReader does not have ReadImageInformation(), so I can only get the image metadata by running Execute(), at the cost of loading all of the series’ images.

import SimpleITK as sitk

path = ...  # path to DICOM folder
reader = sitk.ImageSeriesReader()
dicom_paths = reader.GetGDCMSeriesFileNames(path)
reader.ReadImageInformation()  # AttributeError: 'ImageSeriesReader' object has no attribute 'ReadImageInformation'

I don’t think there is a faster convenient way, unless you want to redo part of the logic from ImageSeriesReader. If you really want to, take the first and last slices from dicom_paths, read their positions, and that will allow you to deduce the spacing.

len(dicom_paths) is the size along the 3rd dimension - maybe that is all you need?
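The suggestion above boils down to simple arithmetic; here is a hedged sketch of just that part (in practice the two z-positions would come from reading the first and last files with `sitk.ImageFileReader`, calling `ReadImageInformation()` and then `GetOrigin()`; the function name is made up for illustration):

```python
def deduce_z_spacing(first_z, last_z, num_slices):
    """Deduce slice spacing from the z-positions of the first and last slices.

    There are (num_slices - 1) gaps between num_slices slices, so the
    average spacing is the total extent divided by the number of gaps.
    """
    if num_slices < 2:
        raise ValueError("Need at least two slices to deduce spacing.")
    return abs(last_z - first_z) / (num_slices - 1)

# 120 slices whose first and last z-positions are 119 mm apart
# are spaced 1 mm apart on average.
print(deduce_z_spacing(0.0, 119.0, 120))  # → 1.0
```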


In short, your options are: A. report the number of instances that belong to the series; or B. analyze every file in the series (as ITK’s image series reader does, for example) and see what you can make out of them.


As far as DICOM is concerned, each slice is a completely independent image (unless your data is in the rarely-used DICOM enhanced multi-frame format) and it is up to the application to interpret them, for example to reconstruct a 3D volume. Note that reconstructing a volume is extremely difficult: frames can be arbitrarily distributed in time and space (you need to figure out how to group them); you have to manage non-orthogonal axes, varying slice spacing, non-parallel slices, overlaps, and vendor-specific implementation differences; and there may be multiple valid interpretations of the same set of frames, with the appropriate choice depending on the clinical purpose.

ITK’s image series reader can read the most commonly used image variants (3D volume with parallel slices, orthogonal axes, uniform image spacing and extents across all slices), and for these it can tell you the “size of the whole series”. But there is no way around it: you need to run this rather costly analysis.
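To make one of these requirements concrete, here is a minimal, hypothetical check for uniform slice spacing, given the slice z-positions that a frame sorter would have extracted from each file (the function name and tolerance are illustrative assumptions, not ITK API):

```python
def has_uniform_slice_spacing(slice_z_positions, tol=1e-3):
    """Return True if consecutive slice positions are evenly spaced."""
    zs = sorted(slice_z_positions)
    gaps = [b - a for a, b in zip(zs, zs[1:])]
    return all(abs(g - gaps[0]) <= tol for g in gaps)

print(has_uniform_slice_spacing([0.0, 2.5, 5.0, 7.5]))  # → True
print(has_uniform_slice_spacing([0.0, 2.5, 6.0, 7.5]))  # → False
```

A real frame sorter has to run this kind of analysis over every file, which is why getting the “whole series” metadata cannot be free.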

If you want to implement a series reader that can deal with less commonly used image types as well (4D volumes, non-uniform slice spacing, etc.) then you need more sophisticated frame sorters, see for example what is implemented in dcm2niix or 3D Slicer.


Thank you @dzenanz @lassoan for suggestions and explanations!
I’m trying to write a small library for preprocessing datasets for deep learning, so I need a way to fetch image metadata conveniently for any format, including DICOM.
It would have been interesting to dive into the ITK source code, but since I’m not proficient with C++, I went for this workaround:

from pathlib import Path
import SimpleITK as sitk

def read_metadata_sitk(path):
    if Path(path).is_dir():  # assume it is a DICOM series
        reader = DicomSeriesMetadataReader()
        reader.SetFolderName(path)
    else:
        reader = sitk.ImageFileReader()
        reader.SetFileName(str(path))
    reader.ReadImageInformation()
    return reader

class DicomSeriesMetadataReader(sitk.ImageFileReader):
    """Workaround for `sitk.ImageFileReader` to read image metadata of a DICOM series.

    All attributes are fetched from the first slice, except for Size and Spacing:
    - Size is obtained by replacing the z-axis value with the number of DICOM files in the series.
    - Spacing is calculated by dividing the distance between the first and last slices
      by the number of slice gaps (number of slices minus one).

    This class does not allow reading of PixelData; `sitk.ImageSeriesReader` should be used instead.
    """

    def SetFolderName(self, dicom_folder):
        self.dicom_paths = sitk.ImageSeriesReader.GetGDCMSeriesFileNames(dicom_folder)
        self.num_slices = len(self.dicom_paths)

    def ReadImageInformation(self):
        # Read the metadata of the first slice, then correct Size and Spacing
        # so that they describe the whole volume.
        self.SetFileName(self.dicom_paths[0])
        super().ReadImageInformation()
        self._set_volume_size()
        self._set_volume_spacing()

    def GetSpacing(self):
        return tuple(self.spacing)

    def GetSize(self):
        return tuple(self.size)

    def Execute(self):
        raise AttributeError("Please use `sitk.ImageSeriesReader` for reading DICOM images. "
                             "`DicomSeriesMetadataReader` is used only for reading image metadata.")

    def _set_volume_size(self):
        self.size = list(super().GetSize())
        self.size[-1] = self.num_slices

    def _set_volume_spacing(self):
        first_slice_pos = super().GetOrigin()
        last_slice = read_metadata_sitk(self.dicom_paths[-1])
        last_slice_pos = last_slice.GetOrigin()

        z_distance = abs(first_slice_pos[-1] - last_slice_pos[-1])
        z_spacing = z_distance / (self.num_slices - 1)

        self.spacing = list(super().GetSpacing())
        self.spacing[-1] = z_spacing

It provides the metadata from the first slice that should be equal across slices (like PixelIDValue), while also giving accurate Size and Spacing for the volume. I imagine it probably isn’t perfect; if you have any concerns, please let me know. Thank you!

This should work well with mainstream DICOM files that ITK’s frame sorter can handle (all necessary files are in the same folder, the image meets all requirements for slice orientation, spacing, and size, there is a single time point, etc.).

There are so many libraries for preprocessing medical images for deep learning. If you start a new library from scratch, it means you will spend 95% of your time developing, testing, optimizing, maintaining, and documenting features that are already out there, available for free to everyone, and you will have very little time left for working on your new, unique, valuable ideas. If you want your work to make a difference then you need people to find and use your library. To achieve this you need to be a near-super-human (there are a few), because you would also need to spend time advertising your tool on various forums, supporting users, etc. It would be so much better and easier for everyone if you could join one of the ongoing efforts in this field. For example, check out TorchIO and MONAI.


Completely agree @lassoan, both have been very useful in my projects. “Data preprocessing” might be a poor choice of words for what I’m trying to do; “dataset preparation” would probably do it more justice.

So, instead of replicating what’s done in torchio and monai, I want to go one step before and develop something that will reduce the boilerplate when getting a dataset into a structure and format suitable for your project.

Few use-cases I have in mind:

  • Bringing the dataset to a common spacing. This requires fetching the distribution of spacings to decide which spacing to resample the dataset to (my question above was related to this), as well as the resampling itself.
  • Converting the images to a more convenient format. It can be much simpler (and, I believe, quicker) to use e.g. NRRD instead of DICOM in your DL projects, so I usually convert DICOM datasets to NRRD before using them. Providing reliable functions that convert from a source to a target format, without having to rewrite them for each project, would save a lot of time.
  • Metadata insight. For example, it has happened to me several times that I would do 3D patch-based learning, but there would be one volume in the dataset whose size is actually smaller than the defined patch size, which would break the training. Having a simple way to check the distribution of sizes in the dataset would help with that. As another example, you might want to dump just the part of the metadata that you’d like to use in the project, e.g. metadata needed to know which rescanned CT and CBCT were acquired at the same fraction.
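The “spacing frequencies” idea in the first use-case can be sketched with plain Python (`spacing_distribution` is a hypothetical helper; the spacing tuples would come from whatever metadata reader is used for each image):

```python
from collections import Counter

def spacing_distribution(spacings, decimals=2):
    """Count how often each (x, y, z) spacing occurs across a dataset.

    Rounding avoids splitting near-identical spacings into separate bins.
    """
    return Counter(tuple(round(s, decimals) for s in spacing) for spacing in spacings)

dataset_spacings = [(0.98, 0.98, 3.0), (1.0, 1.0, 3.0), (1.0, 1.0, 3.0)]
# The most common spacing is a reasonable resampling target.
target = spacing_distribution(dataset_spacings).most_common(1)[0][0]
print(target)  # → (1.0, 1.0, 3.0)
```

The same counting trick works for the size-distribution check in the third use-case.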

I still have to think through the best way to do it and which other use-cases are interesting. The goal is not to have something end-to-end, as real-life datasets can vary a lot and will still need some manual work, but rather to alleviate some of that work with these “block” functionalities that can be used in your dataset-preparation flow as needed.

I haven’t found anything that tries to deal with “dataset preparation” in a somewhat systematic way, so please do tell me if I’m just unaware of it.
Right now I’m focusing on developing something that my team and I will be able to rely on, but, of course, if we develop it well and people find it useful, it would be great to see it being used by others too.


Hello @ibro45,

Based on your interest you may want to take a look at this Jupyter notebook and associated script.


This is great @zivy! Thanks a lot :smiley: