Abstract:
In the field of radiotherapy, although the image guidance technique based on cone-beam computed
tomography (CBCT) can effectively correct patient setup errors and monitor lesion volume changes, its inherent
scattering noise and reconstruction artifacts result in distorted image grayscale values, which limits its clinical
application. To achieve fast calibration of CBCT gray values in intra-fraction adaptive radiotherapy, this study proposes a registration-enhanced generative adversarial network (Reg-GAN) built on a deformable registration mechanism, which learns an efficient mapping from unpaired medical image data to perform grayscale calibration of CBCT images into pseudo-CT (synthetic CT, sCT).
The study included paired simulated CT (planning CT, pCT) and CBCT image data from 46 head and neck tumor
patients (acquisition interval <24 hours), and stratified random sampling was used to divide the dataset into a training
group (38 cases) and a validation group (8 cases). In the preprocessing stage, a rigid registration algorithm was applied to spatially align each pCT to its CBCT coordinate system, and voxel resampling was used to standardize voxel spacing. The Reg-GAN architecture is based on the Cycle-Consistent Adversarial Network (Cycle-GAN) and innovatively integrates a deep learning-based multimodal registration module; by jointly optimizing the image generation loss and the spatial deformation-field constraints, the model's robustness to noise and artifacts is significantly improved.
Quantitative evaluation, comparing the gray values of corresponding voxels in the same spatial coordinate system, shows that the gray-value difference between CBCT and pCT within anatomical structures ranges from 0 to 250 HU, while the difference between sCT and pCT ranges from -50 to 50 HU, reaching 0 HU in soft tissue and brain tissue. Moreover, the sCT generated by Reg-GAN achieves significant improvements in image quality metrics over the original CBCT: the mean absolute error (MAE) decreased from 52.5±26.6 HU to 36.6±11.6 HU (P=0.041), the peak signal-to-noise ratio (PSNR) increased from 25.1±3.1 dB to 27.1±2.4 dB (P=0.006), and the structural similarity index (SSIM) improved from 0.82±0.03 to 0.84±0.02 (P=0.022). Dosimetric validation was performed with a multimodal image fusion strategy: pCT served as the baseline image, sCT was rigidly registered to it, and the target volume and organs at risk were mapped by deformable contour propagation. Dose calculations in the treatment planning system (TPS) showed that the dose distributions and dose-volume histograms (DVH) derived from sCT and pCT were highly consistent, with P-values of all key dosimetric parameters >0.05 (no statistically significant difference), validating the dosimetric accuracy of sCT for adaptive radiotherapy.
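The reported MAE and PSNR are simple voxel-wise quantities. The minimal sketch below assumes a 12-bit HU data range for PSNR, which is an assumption rather than a value stated in the study; SSIM is typically computed with a library routine such as scikit-image's `structural_similarity`:

```python
import numpy as np

def mae_hu(img_a, img_b):
    """Mean absolute error between two HU images."""
    return np.abs(img_a.astype(float) - img_b.astype(float)).mean()

def psnr_db(img_a, img_b, data_range=4095.0):
    """Peak signal-to-noise ratio in dB; data_range is an assumed 12-bit span."""
    mse = ((img_a.astype(float) - img_b.astype(float)) ** 2).mean()
    if mse == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)
```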
In this study, the limitation that CBCT grayscale distortion imposes on dose calculation was effectively addressed through the synergistic optimization of deep registration and a generative adversarial network. The proposed Reg-GAN model not only enhances the workflow efficiency of image-guided radiotherapy; the excellent image quality and dosimetric performance of the generated sCT also provide reliable technical support for the clinical implementation of online adaptive radiotherapy.