Research on synthetic-CT generation from CBCT using a registration-based generative adversarial network (Reg-GAN)

  • Abstract: In tumor radiotherapy, image guidance based on cone-beam computed tomography (CBCT) can effectively correct patient setup errors and monitor changes in lesion volume, but the scatter noise and reconstruction artifacts inherent in CBCT images limit its clinical application; fast calibration of the CT numbers of CBCT images is therefore of great significance for improving treatment efficiency. This study proposes an improved generative adversarial network model incorporating a registration mechanism (registration-enhanced generative adversarial network, Reg-GAN), which corrects the CT numbers of CBCT images toward synthetic CT (sCT) through efficient mapping of unpaired medical image data. Planning CT (pCT) and CBCT images of 46 patients with head and neck tumors (acquisition interval <24 h) were studied; 38 patients were used for model training and 8 patients for validation and testing. In the preprocessing stage, the pCT was registered to the CBCT images, and the registered pCT served as the reference image for evaluating the image quality of the sCT. The results show that the CT-number difference between CBCT and pCT ranged from 0 to 250 HU (Hounsfield unit), whereas the difference between sCT and pCT ranged from −50 to 50 HU; for soft tissue and brain tissue the difference was 0 HU. Compared with the original CBCT, the mean absolute error (MAE) of the sCT images decreased from (52.5±26.6) HU to (36.6±11.6) HU (P=0.041<0.05), the peak signal-to-noise ratio (PSNR) increased from (25.1±3.1) dB to (27.1±2.4) dB (P=0.006<0.05), and the structural similarity index (SSIM) improved from 0.82±0.03 to 0.84±0.02 (P=0.022<0.05). Between the sCT and pCT images, the P-values of the key dosimetric parameters were all greater than 0.05, indicating no statistically significant differences. The sCT images generated by the registration adversarial network model show significantly improved image quality and high dosimetric consistency with pCT, providing reliable technical support for the clinical implementation of online adaptive radiotherapy.
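As a concrete reference for the image-quality comparison reported above, the sketch below shows one common way to compute MAE, PSNR, and SSIM between an sCT volume and the registered pCT reference. It is an illustrative example only: the array names, the fixed HU data range of 2000, and the use of scikit-image are assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch only: MAE / PSNR / SSIM between an sCT volume and the
# registered pCT reference. Array names, the HU data range, and the use of
# scikit-image are assumptions for demonstration, not details from the paper.
import numpy as np
from skimage.metrics import structural_similarity

def image_quality_metrics(sct_hu: np.ndarray, pct_hu: np.ndarray,
                          data_range: float = 2000.0):
    """Return (MAE [HU], PSNR [dB], SSIM) for two HU volumes of equal shape."""
    diff = sct_hu.astype(np.float64) - pct_hu.astype(np.float64)
    mae = np.mean(np.abs(diff))                    # mean absolute error in HU
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)  # peak signal-to-noise ratio
    ssim = structural_similarity(pct_hu.astype(np.float64),
                                 sct_hu.astype(np.float64),
                                 data_range=data_range)
    return mae, psnr, ssim

# Example with random stand-in volumes (replace with real, co-registered data):
pct = np.random.uniform(-1000.0, 1000.0, size=(64, 128, 128))
sct = pct + np.random.normal(0.0, 30.0, size=pct.shape)
print(image_quality_metrics(sct, pct))
```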

     

    Abstract: In radiotherapy, image guidance based on cone-beam computed tomography (CBCT) can effectively correct patient setup errors and monitor lesion volume changes, but its inherent scatter noise and reconstruction artifacts distort the image grayscale values, which limits its clinical application. To achieve fast calibration of CBCT HU (Hounsfield unit) values for intra-fraction adaptive radiotherapy, we propose a registration-enhanced generative adversarial network (Reg-GAN) built on a deformable registration mechanism, which achieves fast grayscale calibration of CBCT images into synthetic CT (sCT, also called pseudo-CT) by efficiently mapping unpaired medical image data. The study included paired simulation CT, also known as planning CT (pCT), and CBCT image data from 46 patients with head and neck tumors (acquisition interval <24 h). Stratified random sampling was used to divide the dataset into training (38 cases) and validation (eight cases) groups. In the preprocessing stage, a rigid registration algorithm was applied to spatially align the pCT with the CBCT coordinate system, and voxel resampling was used to standardize the spatial resolution. The Reg-GAN architecture is based on the cycle-consistent generative adversarial network (Cycle-GAN) and integrates a deep-learning-based multimodal registration module; by jointly optimizing the image generation loss and the spatial deformation-field constraints, it improves image quality and significantly increases robustness to noise and artifacts. Quantitative evaluation, comparing the HU values of corresponding voxels in the same spatial coordinate system, showed that the HU difference between CBCT and pCT within anatomical structures ranged from 0 to 250 HU, whereas the difference between sCT and pCT ranged from −50 to 50 HU and was 0 HU for soft tissue and brain tissue. Compared with the original CBCT, the sCT generated by Reg-GAN showed significant improvements in image quality metrics: the mean absolute error (MAE) decreased from (52.5±26.6) HU to (36.6±11.6) HU (P=0.041<0.05), the peak signal-to-noise ratio (PSNR) increased from (25.1±3.1) dB to (27.1±2.4) dB (P=0.006<0.05), and the structural similarity index (SSIM) improved from 0.82±0.03 to 0.84±0.02 (P=0.022<0.05). Dosimetric validation was performed with a multimodal image fusion strategy in which the pCT served as the baseline image and the sCT was rigidly registered to it, with the target volumes and organs at risk mapped onto the sCT through deformable contouring. Dose calculations in the treatment planning system (TPS) showed that the dose distributions and dose–volume histograms (DVHs) generated on sCT and pCT were highly consistent; the P-values of the key dosimetric parameters were all >0.05, with no statistically significant differences, validating the dosimetric accuracy of sCT for adaptive radiotherapy. In this study, the limitation that CBCT grayscale distortion imposes on dose calculation was effectively addressed through the synergistic optimization of deep registration and adversarial generation. The proposed Reg-GAN model not only enhances the workflow efficiency of image-guided radiotherapy but also produces sCT images with excellent image quality and dosimetric properties, providing reliable technical support for the clinical implementation of online adaptive radiotherapy.
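To make the joint optimization described above more concrete, the following is a minimal conceptual sketch of a registration-assisted correction loss in the spirit of Reg-GAN: a generator translates a CBCT slice to an sCT slice, a small registration network predicts a deformation field that warps the sCT toward the imperfectly aligned pCT, and the warped-image correction loss is combined with a smoothness penalty on the field. All network definitions, tensor shapes, and loss weights here are illustrative assumptions, not the authors' implementation; the adversarial and cycle-consistency terms of the full model are omitted.

```python
# Conceptual sketch of a registration-assisted correction loss for Reg-GAN-style
# training (2-D, toy networks). Shapes, architectures, and loss weights are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Maps a CBCT slice to a synthetic-CT slice (stand-in for the real generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinyRegNet(nn.Module):
    """Predicts a dense 2-D deformation field aligning the sCT to the pCT."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))  # (N, 2, H, W) flow

def warp(image, flow):
    """Warp `image` with `flow` (in normalized [-1, 1] units) via grid_sample."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)          # add predicted displacement
    return F.grid_sample(image, grid, align_corners=True)

def smoothness(flow):
    """Penalize large spatial gradients of the deformation field."""
    dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return dx + dy

# One illustrative training step on random stand-in slices.
G, R = TinyGenerator(), TinyRegNet()
opt = torch.optim.Adam(list(G.parameters()) + list(R.parameters()), lr=1e-4)
cbct, pct = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)

sct = G(cbct)                                 # CBCT -> sCT
flow = R(sct, pct)                            # deformation field sCT -> pCT
corr_loss = F.l1_loss(warp(sct, flow), pct)   # correction loss after warping
reg_loss = smoothness(flow)                   # deformation-field constraint
loss = corr_loss + 10.0 * reg_loss            # adversarial term omitted here
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

The design choice illustrated here is the key idea stated in the abstract: because the CBCT/pCT pairs are not perfectly aligned, the generator is not forced to match the pCT directly; instead, the registration network absorbs residual misalignment, so the generator learns intensity correction rather than anatomical deformation.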

     
