I’ve written a script that iterates over a set of images, analyses each image with different parameter sets, and saves the output to file. When running the code, the output saved to file is inconsistent, and sometimes looks like it has been truncated at a certain slice height.
The images I am working with are 3D NIfTI files on the order of 2 GB each. I have been running the script on my local computer (32 GB RAM) while doing other, sometimes memory-intensive, work on the same machine (for example, image visualization). Regardless of what ends up in the output, the script runs with no errors. A simplified code block is shown below, using dummy functions in place of the actual analysis. Setting a breakpoint on the `analysed_img = im_analyse(img, param)` line shows that the content of the analysis is always correct at that point. The analysis of these images is highly memory intensive.
```python
import SimpleITK as sitk

for im_path in im_paths:
    img = sitk.ReadImage(im_path)
    for param in parameters:
        out_path = out_path_generator(im_path, param)
        analysed_img = im_analyse(img, param)  # content of <analysed_img> always fine at this line
        sitk.WriteImage(analysed_img, out_path)
```
I have run the code remotely on a high-performance computing node, and there it works and writes the data out correctly. I am wondering whether this behavior is a known issue, and what I could do to prevent it when running with limited memory availability (that is, if memory availability is in fact the problem). If it is not a known issue, I’d love to get some ideas on ways to assess this problem.
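One way I am considering to assess this is to verify each output file immediately after writing it, by streaming the file back and comparing a checksum, so a silently truncated write is caught at the point it happens. The sketch below is a generic, hypothetical version that works on plain bytes rather than on my actual pipeline (`im_analyse`, `out_path_generator`, and `sitk.WriteImage` are specific to my setup); the same read-back idea could wrap the `sitk.WriteImage` call.

```python
import hashlib
from pathlib import Path


def file_checksum(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so the check itself stays memory-cheap."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def write_and_verify(data: bytes, out_path) -> bool:
    """Write data to out_path, then re-read the file and confirm it
    round-trips intact (returns False if the file on disk differs)."""
    Path(out_path).write_bytes(data)
    return file_checksum(out_path) == hashlib.sha256(data).hexdigest()
```

In the real loop, the equivalent check would be to re-read the freshly written image and compare it (or at least its size/extent) against `analysed_img` before moving on to the next parameter set, logging any mismatch together with the current memory usage.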