Dynamically Adaptive Image Restoration Based on Perceived Weather-Related Degradation

  • Abstract: Under complex weather conditions such as rain, haze, snow, and low illumination, image and video quality degrades significantly, impairing visual perception and reducing the utility of visual information. How to effectively remove the impact of complex weather on image quality has therefore long been a research hotspot. To address this problem, this paper proposes DegRestorNet, an all-in-one dynamically adaptive image restoration network based on weather degradation awareness. The method uses a weather degradation-aware mechanism to generate a degradation-aware scene descriptor carrying multidimensional information, which in turn guides the model to adaptively adjust its restoration strategy via dynamic convolution, dynamically adapting to different types and severities of quality degradation caused by rain, haze, snow, and low illumination and thereby achieving all-in-one image restoration under complex weather. First, a degradation-aware scene descriptor generator (DASDG) is designed, comprising a degradation-type recognition module and a degradation-severity estimation module, to produce the degradation-aware scene descriptor. Second, a dynamic convolution-based degradation-adaptive restoration network (DCDRN) is designed: built on a U-Net backbone, it introduces a degradation-aware dynamic convolutional Transformer module whose parameters are adjusted dynamically according to the scene descriptor, adapting the network to the degradation characteristics of different weather and enabling adaptive restoration of weather-degraded images. Finally, comprehensive experimental evaluations are conducted on the public complex-weather dataset CDD and on DSD, a complex-weather dataset built from the RAISE database using the CDD image-generation strategy. Subjective and objective comparisons with mainstream algorithms such as OKNet and RestorNet show that DegRestorNet achieves better overall performance in restoring images degraded by rain, haze, snow, low illumination, and other weather conditions. Ablation studies further verify the effectiveness of the proposed scene descriptor generator and degradation-adaptive restoration network, demonstrating the superiority of the method.
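The DASDG described above fuses the outputs of a degradation-type recognizer and a degradation-severity estimator into a single scene descriptor. A minimal pure-Python sketch of one plausible fusion scheme follows; the concatenation of type probabilities with probability-weighted severities is our illustrative assumption, not the paper's actual design, and all names here are hypothetical.

```python
# Illustrative sketch of assembling a degradation-aware scene descriptor
# from the two DASDG sub-modules described in the abstract. The fusion
# scheme (concatenation with probability-weighted severities) is an
# assumption for illustration, not the paper's exact architecture.

DEGRADATION_TYPES = ["rain", "haze", "snow", "low_light"]

def make_scene_descriptor(type_probs, severities):
    """Fuse per-type probabilities and per-type severity scores
    (both in [0, 1]) into one flat descriptor vector that downstream
    degradation-adaptive layers can condition on."""
    assert len(type_probs) == len(severities) == len(DEGRADATION_TYPES)
    # Weight each severity by how likely its degradation type is, then
    # concatenate raw probabilities with the weighted severities.
    weighted = [p * s for p, s in zip(type_probs, severities)]
    return list(type_probs) + weighted

# Toy example: an image judged mostly hazy, with moderate haze severity.
descriptor = make_scene_descriptor(
    type_probs=[0.05, 0.85, 0.05, 0.05],
    severities=[0.2, 0.6, 0.1, 0.3],
)
print(descriptor)  # 8-dimensional descriptor (4 probs + 4 weighted severities)
```

In a real network both inputs would come from learned classification and regression heads; the point of the sketch is only that type and severity are estimated separately and then fused into one conditioning vector.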

     

    Abstract: Images and videos often suffer severe quality degradation in complex weather conditions such as rain, haze, snow, and low illumination. This degradation significantly reduces the utility and reliability of visual information and impairs human understanding of scenes. More critically, it poses a substantial challenge to intelligent vision systems, such as those used in autonomous driving, surveillance, and robot perception, that rely heavily on high-quality visual input for accurate decision-making. Robust and generalizable image restoration techniques are therefore urgently needed to ensure reliable visual understanding under adverse environmental conditions. Recently, all-in-one image restoration approaches have attracted considerable attention because they handle multiple types of degradation within a single unified framework, reducing redundancy and improving generalization under diverse and complex conditions. Existing all-in-one restoration approaches fall broadly into two categories. The first employs a unified network architecture to process multiple types of degradation simultaneously. Although architecturally simple, these methods typically use fixed parameters, which limits their adaptability to the dynamically changing degradation patterns of real-world weather. The second adopts prompt-based learning mechanisms for degradation-aware restoration. Although more flexible, such methods often fail to model the complex, nonlinear relationships among degradation types, severity levels, and content-specific features. To overcome these limitations, we propose DegRestorNet, a novel all-in-one dynamically adaptive image restoration model designed for complex and mixed-weather scenarios.
DegRestorNet introduces a weather degradation-aware mechanism that generates multidimensional scene descriptors capturing the type and severity of the degradation observed in a given image. These descriptors guide the restoration strategy by dynamically adjusting convolutional operations, enabling the model to adapt flexibly to different degradations such as rain streaks, haze, snow, and low illumination. DegRestorNet consists of two major modules: a degradation-aware scene descriptor generator (DASDG) and a dynamic convolution-based degradation-adaptive restoration network (DCDRN). The DASDG contains two sub-modules: the first identifies the degradation types present (e.g., rain, haze, snow, and low illumination), while the second quantifies the severity of each type. Their outputs are fused into a unified degradation-aware scene descriptor that establishes a hierarchical representation of the degradation characteristics through a decoupled, layered parsing mechanism. This descriptor offers fine-grained prior guidance that improves the quality and accuracy of restoration. The DCDRN adopts an encoder-decoder architecture. In the encoder, a cross-attention mechanism integrates scene descriptor semantics with visual features at the feature extraction stage. In the decoder, a degradation-aware dynamic convolutional Transformer module adaptively generates convolution kernels and attention weights conditioned on the scene descriptor, allowing the network to adapt dynamically to varied degradation scenarios throughout the restoration process.
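The key mechanism in the decoder is a convolution whose kernel is predicted from the scene descriptor rather than fixed at training time. A minimal 1-D pure-Python sketch of this idea follows; the kernel-generator weights, shapes, and 1-D setting are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of descriptor-conditioned dynamic convolution, the
# core idea behind the DCDRN as described in the abstract. All names,
# shapes, and weights are illustrative assumptions, not the paper's code.

def linear(vec, weights, bias):
    """Affine map: `weights` is a list of rows, one per output unit."""
    return [sum(w * v for w, v in zip(row, vec)) + b
            for row, b in zip(weights, bias)]

def dynamic_conv1d(signal, descriptor, gen_w, gen_b):
    """Convolve `signal` with a kernel predicted from the scene descriptor.

    A different descriptor (e.g. 'heavy rain' vs. 'light haze') yields a
    different kernel, so the same layer adapts its filtering per image.
    """
    kernel = linear(descriptor, gen_w, gen_b)  # kernel depends on descriptor
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad  # 'same' zero padding
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

# Toy example: a 4-D descriptor (rain, haze, snow, low-light scores) is
# mapped to a 3-tap kernel by a fixed, made-up generator.
descriptor = [0.9, 0.1, 0.0, 0.0]           # mostly "rain"
gen_w = [[0.2, 0.0, 0.0, 0.0],              # 3 rows -> 3 kernel taps
         [0.5, 0.3, 0.0, 0.1],
         [0.2, 0.0, 0.0, 0.0]]
gen_b = [0.0, 0.1, 0.0]
out = dynamic_conv1d([1.0, 2.0, 3.0, 4.0], descriptor, gen_w, gen_b)
print(len(out))  # same length as the input signal
```

In the actual network the generator would be a learned module producing 2-D kernels (and attention weights) per layer, but the conditioning pattern, descriptor in, parameters out, is the same.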
Finally, comprehensive experimental evaluations were conducted on the public CDD dataset of weather images and a synthesized dataset of complex weather conditions named DSD, which was constructed based on the public RAISE dataset using the CDD image generation strategy. Compared with state-of-the-art image restoration methods, DegRestorNet achieved superior performance in restoring images affected by rain, haze, snow, and low-light conditions with significantly fewer parameters. The results of ablation studies further verify the effectiveness of the proposed degradation-aware scene descriptor and dynamic convolutional architecture. Overall, our results demonstrate the superiority and practical applicability of the proposed method in real-world scenarios.

     
