Hello all,
I am trying to better understand how ITK handles slice sorting.
It seems like the foundation is calculating a normal vector in:
Common/ExtendedGDCMSerieHelper.cxx
cosines = ImageHelper::GetDirectionCosinesValue( **it );
// You only have to do this once for all slices in the volume. Next,
// for each slice, calculate the distance along the slice normal
// using the IPP ("Image Position Patient") tag.
// ("dist" is initialized to zero before reading the first slice) :
normal[0] = cosines[1]*cosines[5] - cosines[2]*cosines[4];
normal[1] = cosines[2]*cosines[3] - cosines[0]*cosines[5];
normal[2] = cosines[0]*cosines[4] - cosines[1]*cosines[3];
Is there documentation in the DICOM standard or elsewhere on how this works? I have seen many people ask similar questions about sorting, so I would love to understand this solution, as it seems robust to all orientations, including obliques.
Thanks for any insights!
normal[0] = cosines[1]*cosines[5] - cosines[2]*cosines[4];
normal[1] = cosines[2]*cosines[3] - cosines[0]*cosines[5];
normal[2] = cosines[0]*cosines[4] - cosines[1]*cosines[3];
It is a cross product, calculated from the six values in “Image Orientation (Patient)” (0020,0037).
See DICOM PS3.3, Section C.7.6.2.1.1 (Image Position and Image Orientation).
Your code seems to be from ITK-SNAP, but that part with the plane’s normal is the same everywhere.
Thanks. That’s really helpful @mihail.isakov.
To confirm my understanding: the normal is a vector in x, y, z expressing the direction along which the imaging plane changes between slices? For an axial stack this would point purely in the z direction?
Some Google searching gave me the answer, from the original author I believe!
https://itk.org/pipermail/insight-users/2003-September/004762.html
The steps I take when reconstructing a volume are these: First,
calculate the slice normal from IOP:
normal[0] = cosines[1]*cosines[5] - cosines[2]*cosines[4];
normal[1] = cosines[2]*cosines[3] - cosines[0]*cosines[5];
normal[2] = cosines[0]*cosines[4] - cosines[1]*cosines[3];
You only have to do this once for all slices in the volume. Next, for
each slice, calculate the distance along the slice normal using the IPP
tag ("dist" is initialized to zero before reading the first slice) :
for (int i = 0; i < 3; ++i) dist += normal[i]*ipp[i];
Then I order the slices according to the value "dist". Finally, once
I've read in all the slices, I calculate the z-spacing as the difference
between the "dist" values for the first two slices.
Yes, but it is a good idea not to forget the prerequisites: all slices in the series must share the same “Image Orientation (Patient)”, be equidistant, and have no gantry tilt. That is not always true. Handling non-uniform series is a rather advanced topic, though; different applications handle them in different ways (or don’t handle them at all).
@mihail.isakov
Thank you! Yes my objective is to have a continuous volume of the images if the IOP information is there. Small non-uniformity is not a major issue for my purposes.