How can I keep the association between input image coordinate (z-axis) and segmentation mask?

Hello,

I am working with 3D CT images, each consisting of a series of 2D slices. I resized them to (128 x 128 x 128).
My segmentation masks are provided as XML files and contain segmentations for only 3 or 4 slices (where the tumor is visible). I am trying to construct a segmentation volume of the same size as the input image (128 x 128 x 128). My question is: how can I keep the same z position in my segmentation volume as in the input image?

I create the numpy mask array from the XML file as follows:

import numpy as np
import cv2
from bs4 import BeautifulSoup

Bs_data = BeautifulSoup(data, "lxml")

# Find all <x> and <y> tags holding the contour coordinates
b_unique_X = Bs_data.find_all('x')
b_unique_Y = Bs_data.find_all('y')

# Extract the X, Y coordinates of the contour from the xml file
X_coord = []
Y_coord = []
for x_tag, y_tag in zip(b_unique_X, b_unique_Y):
    X_coord.append(x_tag.get('value'))
    Y_coord.append(y_tag.get('value'))

# Stack into an (N, 2) array and convert to integer pixel coordinates
final = np.column_stack((X_coord, Y_coord))
pixelCoords = final.astype(float).astype(np.int32)

# Rasterize the contour polygon into a 512 x 512 slice mask
# (cv2.fillPoly expects int32 points)
arr = np.zeros((512, 512), dtype=np.uint8)
cv2.fillPoly(arr, pts=[pixelCoords], color=255)
mask1 = arr.astype(int)

I can get the slice number from the xml file as follows.

description_tag = Bs_data.description

# Get the 'value' attribute, which holds the slice number
attribute = description_tag['value']

My input images are DICOM files and are named based on the slice number as well (e.g., here the slice number is 215):
(1.2.826.0.1.3680043.2.133.1.3.49.1.124.27456-3-215-1fx0wj3.dcm)

You have all the necessary information. Use TransformPhysicalPointToIndex() and TransformIndexToPhysicalPoint() to go between index space (pixel indices) and physical point coordinates. Read more about it in the ITK Software Guide, section 4.1.4 "Defining Origin and Spacing".
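For example (a minimal sketch with SimpleITK; the file name is just a placeholder), the round trip between index space and physical space looks like this:

image = sitk.ReadImage("ct_volume.nrrd")   # placeholder file name

# index (pixel coordinates) -> physical point (mm), using origin/spacing/direction
point = image.TransformIndexToPhysicalPoint((100, 120, 215))

# physical point (mm) -> index in this image's grid
index = image.TransformPhysicalPointToIndex(point)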

I am not sure I understand. Could you please explain it a little bit more?

You seem to be discarding Z (the slice index) when reading points from the XML file. Without it, it is impossible to place those contours in 3D space.

In the better case, the point coordinates are already in physical 3D space, and you only need to call index = image128.TransformPhysicalPointToIndex(point3), where point3 is (X_coord[i], Y_coord[i], Z_coord[i]). Then use index in a call to cv2.fillPoly.

In the worse case, the coordinates are in the index space of the corresponding image, and we need to convert them to physical space first. I assume those contours refer to slices in the original image grid, not your new 128^3 grid. If so, you need to keep the original grid information and use it to go from the original index space to physical space, via point3 = original.TransformIndexToPhysicalPoint([X_coord[i], Y_coord[i], Z_coord[i]]) or something similar.
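To make the worse case concrete, here is a minimal sketch using SimpleITK. The variable names (original, image128, pixelCoords, slice_number) follow the code above, the file names are placeholders, and the contour points are assumed to be integer indices in the original grid:

import numpy as np
import cv2
import SimpleITK as sitk

# placeholders: the original CT grid and the resampled 128^3 volume
original = sitk.ReadImage("original_ct.nrrd")
image128 = sitk.ReadImage("ct_128.nrrd")

size = image128.GetSize()                       # (x, y, z)
mask128 = np.zeros((size[2], size[1], size[0]), dtype=np.uint8)  # numpy order (z, y, x)

resampled_poly = []
for x, y in pixelCoords:                        # contour points of one slice
    # original index -> physical point (mm)
    point3 = original.TransformIndexToPhysicalPoint([int(x), int(y), int(slice_number)])
    # physical point -> index in the 128^3 grid
    ix, iy, iz = image128.TransformPhysicalPointToIndex(point3)
    resampled_poly.append([ix, iy])

# every point of this contour lands on the same resampled slice iz,
# so the polygon can be filled on that slice of the mask volume
cv2.fillPoly(mask128[iz], pts=[np.array(resampled_poly, dtype=np.int32)], color=255)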


I don’t have a slice index in my xml file. I have a separate xml file for each slice.

Then get the slice index from the file name.
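For example, with the naming pattern shown earlier, where the slice number is the second-to-last hyphen-separated field of the file name, something like this should work:

import os

filename = "1.2.826.0.1.3680043.2.133.1.3.49.1.124.27456-3-215-1fx0wj3.dcm"

# drop the extension, then take the second-to-last '-'-separated field
stem = os.path.splitext(os.path.basename(filename))[0]
slice_number = int(stem.split('-')[-2])   # -> 215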

@dzenanz Thank you so much! This worked.


If you share your code, or the main part of your code, it might be useful to someone in the future.