Hi Simon,

The DRR projection code that I have implemented works for projection angle zero, but if I project the same point cloud at a different angle, say 90, the projected points are not aligned with the correct locations in the corresponding DRR image.

To test this algorithm, I first created a volumetric liver mesh and voxelized it as a .mhd file, then generated DRR images with the SiddonJacobs ray-tracing algorithm for different gantry rotations. I then attempt to project the points of the liver mesh onto each DRR image by passing the corresponding projection angle to the algorithm.

**I suspect the issue is with the m_Direction matrix (it is initially assigned an identity matrix), but I do not know how to change it according to the projection angle. If that is the case, can anyone help me resolve this issue, please?**
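If m_Direction does need to depend on the angle, the obvious candidate would be a rotation about the z-axis using the same sign convention as m_GantryRotTransform in the code below. A minimal sketch of what I mean (purely a guess on my part, not a verified fix; the function name is mine):

```python
import math
import numpy as np

def direction_for_angle(projection_angle_deg):
    """Hypothetical angle-dependent direction matrix: a rotation about the
    z-axis by the negated gantry angle, matching the sign convention of
    m_GantryRotTransform. This is only a guess, not a verified fix."""
    a = math.radians(-projection_angle_deg)
    return np.array([[math.cos(a), -math.sin(a), 0.0],
                     [math.sin(a),  math.cos(a), 0.0],
                     [0.0,          0.0,         1.0]])
```

For angle 0 this reduces to the identity, so it would at least be consistent with the case that already works.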

This code works only when the projection angle is zero.

As you can see in the attached GIF animation, the point cloud (see red dots) rotates in the wrong direction compared to the DRR images. How can I fix it?

Another example with a volumetric ellipsoid.

```
def getDRRProjectionCoords(self, points=None, m_ProjectionAngle=0.,
                           m_FocalPointToIsocenterDistance=1000., m_Threshold=0.,
                           source_to_detector_distance=1536., im_sx=1., im_sy=1.,
                           dx=1024, dy=1024):
    # Assumes `import math`, `import numpy as np` and `import itk` at module level.
    # Use the image center as the central axis position.
    # refer https://github.com/InsightSoftwareConsortium/ITKTwoProjectionRegistration/blob/master/test/GetDRRSiddonJacobsRayTracing.cxx#L506-L510
    o2Dx = (dx - 1.) / 2.
    o2Dy = (dy - 1.) / 2.
    # constant for converting degrees into radians
    dtr = (math.atan(1.0) * 4.0) / 180.0
    Dimension = 3
    PointType = itk.Point[itk.D, Dimension]
    VectorType = itk.Vector[itk.D, Dimension]
    MatrixType = itk.Matrix[itk.D, Dimension, Dimension]
    TransformType = itk.Euler3DTransform[itk.D]
    # Compute the origin (in mm) of the 2D image
    m_Origin = PointType()
    m_Origin[0] = -im_sx * o2Dx
    m_Origin[1] = -im_sy * o2Dy
    m_Origin[2] = m_FocalPointToIsocenterDistance - source_to_detector_distance
    m_Spacing = VectorType()
    m_Spacing[0] = im_sx  # pixel spacing along X of the 2D DRR image [mm]
    m_Spacing[1] = im_sy  # pixel spacing along Y of the 2D DRR image [mm]
    m_Spacing[2] = 1.0    # slice thickness of the 2D DRR image [mm]
    # Inverse transform to rotate the vertices into the DRR detector coordinate system.
    # The user can set an arbitrary transform for the volume; if not explicitly set, it is the identity.
    # Here I do not set the isocenter location and assume it is (0, 0, 0) by default, but the DRR
    # algorithm we used to generate the DRR images sets the center of the 3D image volume (CT)
    # as the isocenter:
    # https://github.com/InsightSoftwareConsortium/ITKTwoProjectionRegistration/blob/master/test/GetDRRSiddonJacobsRayTracing.cxx#L452-L454
    m_Transform = TransformType.New()
    m_Transform.SetComputeZYX(True)
    m_Transform.SetIdentity()
    m_InverseTransform = TransformType.New()
    m_InverseTransform.SetComputeZYX(True)
    m_ComposedTransform = TransformType.New()
    m_ComposedTransform.SetComputeZYX(True)
    m_GantryRotTransform = TransformType.New()
    m_GantryRotTransform.SetComputeZYX(True)
    m_GantryRotTransform.SetIdentity()
    m_CamShiftTransform = TransformType.New()
    m_CamShiftTransform.SetComputeZYX(True)
    m_CamShiftTransform.SetIdentity()
    m_CamRotTransform = TransformType.New()
    m_CamRotTransform.SetComputeZYX(True)
    m_CamRotTransform.SetIdentity()
    m_CamRotTransform.SetRotation(dtr * (-90.0), 0.0, 0.0)
    m_ComposedTransform.SetIdentity()
    m_ComposedTransform.Compose(m_Transform, False)
    # Isocenter set to (0, 0, 0); see the note above about the isocenter used during DRR generation.
    isocenter = m_Transform.GetCenter()
    # An Euler 3D transform is used to rotate the volume to simulate the rotation of the linac
    # gantry. The rotation is about the z-axis. After the transform, an AP projection geometry
    # (projecting towards the positive y direction) is established.
    m_GantryRotTransform.SetRotation(0.0, 0.0, dtr * (-m_ProjectionAngle))
    m_GantryRotTransform.SetCenter(isocenter)
    m_ComposedTransform.Compose(m_GantryRotTransform, False)
    # An Euler 3D transform is used to shift the camera to the detection plane.
    camtranslation = VectorType()
    camtranslation[0] = -isocenter[0]
    camtranslation[1] = -isocenter[1]
    camtranslation[2] = source_to_detector_distance - m_FocalPointToIsocenterDistance - isocenter[2]
    m_CamShiftTransform.SetTranslation(camtranslation)
    m_ComposedTransform.Compose(m_CamShiftTransform, False)
    # An Euler 3D transform is used to establish the standard negative z-axis projection geometry.
    # (By default, the camera is situated at the origin, points down the negative z-axis, and has
    # an up-vector of (0, 1, 0).)
    m_ComposedTransform.Compose(m_CamRotTransform, False)
    # The overall inverse transform is computed; it will be used by the interpolation procedure.
    m_ComposedTransform.GetInverse(m_InverseTransform)
    m_Direction = MatrixType()
    m_Direction.SetIdentity()
    # Compute the helper matrices used to transform index coordinates to
    # physical-point coordinates and back.
    scale = itk.GetMatrixFromArray(np.diag(m_Spacing))
    m_IndexToPhysicalPoint = m_Direction * scale
    m_PhysicalPointToIndex = m_IndexToPhysicalPoint.GetInverse()
    # The rest is iterated over all the points.
    points_np = points.cpu().detach().numpy()
    point_list = points_np.tolist()
    # Hold the pixel coordinates as separate numpy arrays for the x and y coordinates.
    u = np.array([])
    v = np.array([])
    for point in point_list:
        # Transform the point into the detector coordinate system.
        drrPixelWorld = m_InverseTransform.TransformPoint(point)
        # 2D coordinates on the detector plane (in physical units, not pixels):
        # multiply the x and y coordinates by the ratio of the source-to-detector distance
        # over the point's distance from the source along the projection axis.
        ratio = source_to_detector_distance / (source_to_detector_distance + drrPixelWorld[2])
        drrPixelWorld[0] = drrPixelWorld[0] * ratio
        drrPixelWorld[1] = drrPixelWorld[1] * ratio
        # Place the point on the detector plane (same z as m_Origin[2]; -536 mm with the defaults).
        drrPixelWorld[2] = m_FocalPointToIsocenterDistance - source_to_detector_distance
        index = VectorType()
        for i in range(Dimension):
            acc = itk.NumericTraits[itk.D].ZeroValue()
            for j in range(Dimension):
                acc += m_PhysicalPointToIndex(i, j) * (drrPixelWorld[j] - m_Origin[j])
            index[i] = acc
        u = np.append(u, index[0])
        v = np.append(v, index[1])
    return u, v
```
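For anyone who wants to poke at the geometry without installing ITK, here is a plain-NumPy sketch of how I understand the transform chain above. All names are mine; it assumes the isocenter is at (0, 0, 0), and it models the composed transform as CamRot after CamShift after GantryRot, so the inverse applies the inverse camera rotation first, then the inverse shift, then the inverse gantry rotation, followed by the perspective divide and the physical-to-index conversion:

```python
import numpy as np

def project_to_drr(points, m_ProjectionAngle=0.0,
                   m_FocalPointToIsocenterDistance=1000.0,
                   source_to_detector_distance=1536.0,
                   im_sx=1.0, im_sy=1.0, dx=1024, dy=1024):
    """NumPy sketch (my own naming) of the transform chain in the posted code,
    assuming the isocenter is at (0, 0, 0)."""
    pts = np.atleast_2d(np.asarray(points, dtype=float))
    sdd = source_to_detector_distance
    sfd = m_FocalPointToIsocenterDistance
    # inverse of the camera rotation Rx(-90 deg): (x, y, z) -> (x, -z, y)
    q = np.stack([pts[:, 0], -pts[:, 2], pts[:, 1]], axis=1)
    # inverse of the camera shift (0, 0, sdd - sfd)
    q[:, 2] -= sdd - sfd
    # inverse of the gantry rotation Rz(-angle) is Rz(+angle)
    a = np.deg2rad(m_ProjectionAngle)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    q = q @ rz.T
    # perspective divide onto the detector plane
    ratio = sdd / (sdd + q[:, 2])
    x_phys = q[:, 0] * ratio
    y_phys = q[:, 1] * ratio
    # physical detector coordinates -> pixel indices
    u = x_phys / im_sx + (dx - 1.0) / 2.0
    v = y_phys / im_sy + (dy - 1.0) / 2.0
    return u, v
```

With the defaults, the isocenter maps to the image center (511.5, 511.5) at every angle, and a point at (10, 0, 0) projects with the expected magnification sdd/sfd = 1.536 at angle 0; if this sketch faithfully reproduces the chain, the misalignment should only show up for off-center points at non-zero angles, which would help isolate the gantry-rotation step as the culprit.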