Weather Degradation Perception-Based Dynamically Adaptive Image Restoration

  • Abstract: Under complex weather conditions such as rain, haze, snow, and low illumination, image and video quality degrades significantly, impairing visual perception and reducing the utility of visual information. Effectively removing the impact of complex weather on image quality has therefore long been an active research problem. To address it, this paper proposes DegRestorNet, an all-in-one dynamically adaptive image restoration network based on weather degradation perception. The method uses a weather degradation perception mechanism to generate a degradation-aware scene descriptor carrying multi-dimensional information, which then guides the model to adjust its restoration strategy adaptively through dynamic convolution, so that it can handle quality degradations of different types and severities (rain, haze, snow, and low illumination) and achieve all-in-one image restoration under complex weather. First, a Degradation-Aware Scene Descriptor Generator (DASDG) is designed, comprising a degradation-type recognition module and a degradation-severity estimation module, to produce the degradation-aware scene descriptor. Second, a Dynamic Convolution-based Degradation-Adaptive Restoration Network (DARN) is designed: on top of a U-Net backbone, it introduces a degradation-aware dynamic convolutional Transformer module whose parameters are adjusted dynamically according to the scene descriptor, adapting to the degradation characteristics of different weather and enabling adaptive restoration of images degraded by complex weather. Finally, comprehensive experiments are conducted on DSD, a complex-weather dataset built on the RAISE database using the CDD image-synthesis strategy. Subjective and objective comparisons with mainstream algorithms such as OKNet and RestorNet show that DegRestorNet performs better overall on restoring images degraded by rain, haze, snow, and low illumination. Ablation studies further verify the effectiveness of the proposed scene descriptor generator and degradation-adaptive restoration network, demonstrating the superiority of the method.
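To make the descriptor-conditioned dynamic convolution concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: a per-sample depthwise kernel is generated from the scene descriptor and applied via a grouped convolution. The class name, the 8-dimensional descriptor layout, and all dimensions are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorConditionedConv(nn.Module):
    """Convolution whose depthwise kernel is predicted from a scene descriptor
    (hypothetical sketch; layer and dimension choices are assumptions)."""
    def __init__(self, channels: int, kernel_size: int = 3, desc_dim: int = 8):
        super().__init__()
        self.c, self.k = channels, kernel_size
        # Map the descriptor (e.g., degradation-type scores + severities)
        # to one depthwise kernel per channel.
        self.kernel_gen = nn.Linear(desc_dim, channels * kernel_size ** 2)

    def forward(self, x: torch.Tensor, descriptor: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # One kernel per (sample, channel), conditioned on that sample's degradation.
        kernels = self.kernel_gen(descriptor).view(b * c, 1, self.k, self.k)
        # Grouped-conv trick: fold the batch into channels so each sample
        # is filtered by its own dynamically generated kernel.
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=self.k // 2, groups=b * c)
        return out.view(b, c, h, w)

# Usage with an assumed 8-dim descriptor (4 type scores + 4 severity scores):
x = torch.randn(2, 16, 64, 64)
descriptor = torch.randn(2, 8)
print(DescriptorConditionedConv(16)(x, descriptor).shape)  # torch.Size([2, 16, 64, 64])

In the full DARN, descriptor-generated parameters of this kind would condition both the convolution kernels and the attention weights inside the Transformer blocks of the U-Net decoder.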

     

    Abstract: In complex weather conditions such as rain, haze, snow, and low illumination, captured images and videos often suffer from severe quality degradation. This degradation not only diminishes visual perception but also significantly reduces the utility and reliability of visual information. Such degradations impair human understanding of scenes and, more critically, pose substantial challenges to intelligent vision systems (such as autonomous driving, surveillance, and robotic perception) that rely heavily on high-quality visual input for accurate decision-making. Therefore, robust and generalizable image restoration techniques are urgently needed to ensure reliable visual understanding under adverse environmental conditions.

Recently, all-in-one image restoration approaches have attracted growing attention for their ability to handle various degradation types within a single unified framework, reducing model redundancy and improving generalization under diverse and complex conditions. Existing all-in-one approaches fall broadly into two categories. The first employs a unified network architecture to process multiple degradation types simultaneously; although architecturally simple, such methods typically use fixed parameters, which limits their adaptability to the dynamically changing degradation patterns of real-world weather. The second adopts prompt-based learning for degradation-aware restoration; while more flexible, these methods often fail to model the complex, nonlinear relationships among degradation types, severity levels, and content-specific features.

To overcome these limitations, we propose DegRestorNet, a novel all-in-one dynamically adaptive image restoration network tailored for complex and mixed weather scenarios. DegRestorNet introduces a weather degradation-aware mechanism that generates multidimensional scene descriptors capturing both degradation types and severities. These descriptors guide the restoration strategy by dynamically adjusting the convolutional operations, enabling the model to adapt flexibly to degradations such as rain streaks, haze, snow, and low illumination.

DegRestorNet consists of two major modules: the Degradation-Aware Scene Descriptor Generator (DASDG) and the Dynamic Convolution-based Degradation-Adaptive Restoration Network (DARN). DASDG contains two sub-modules: a degradation-type recognition module, which identifies the degradation types present (e.g., rain, haze, snow, and low illumination), and a degradation-severity estimation module, which quantifies the severity of each type. Their outputs are fused into a unified degradation-aware scene descriptor that establishes a hierarchical representation of the degradation characteristics through a decoupled, layered parsing mechanism, providing fine-grained prior guidance that improves both restoration quality and accuracy. DARN adopts an encoder-decoder architecture: in the encoder, a cross-attention mechanism integrates the scene descriptor's semantics with visual features during feature extraction; in the decoder, a degradation-aware dynamic convolutional Transformer module adaptively generates convolution kernels and attention weights conditioned on the scene descriptor, allowing the network to adapt dynamically to varied degradation scenarios throughout the restoration process.
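As a companion sketch (again an assumption-laden illustration rather than the paper's code), DASDG's decoupled parsing of degradation type and severity can be expressed as two heads over a shared feature extractor, with their outputs concatenated into the scene descriptor. The tiny backbone, the class name DASDGSketch, and the fixed set of four weather types are assumptions.

import torch
import torch.nn as nn

class DASDGSketch(nn.Module):
    """Toy descriptor generator: a shared backbone feeding a multi-label
    degradation-type head and a per-type severity head (illustrative only)."""
    def __init__(self, num_types: int = 4, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.type_head = nn.Linear(feat_dim, num_types)      # rain / haze / snow / low light
        self.severity_head = nn.Linear(feat_dim, num_types)  # severity of each type

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        f = self.backbone(img)
        type_probs = torch.sigmoid(self.type_head(f))      # multi-label: mixed weather allowed
        severities = torch.sigmoid(self.severity_head(f))  # normalized to [0, 1]
        # Descriptor = [type probabilities | severities], consumed by the restorer.
        return torch.cat([type_probs, severities], dim=-1)

descriptor = DASDGSketch()(torch.randn(2, 3, 128, 128))
print(descriptor.shape)  # torch.Size([2, 8])

In the encoder described above, such a descriptor could be injected into the visual features with standard cross-attention (e.g., torch.nn.MultiheadAttention, with the descriptor projected to key/value tokens), while the decoder would consume it through descriptor-conditioned dynamic convolutions like the earlier sketch.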
Finally, comprehensive experimental evaluations are conducted on DSD, a complex weather degradation dataset built on the RAISE dataset and synthesized with the CDD image-generation strategy. Compared with state-of-the-art image restoration methods, DegRestorNet achieves superior performance in restoring images affected by rain, haze, snow, and low-light conditions, while significantly reducing the number of model parameters. Ablation studies further verify the effectiveness of the proposed degradation-aware scene descriptor and the dynamic convolutional architecture, demonstrating the superiority and practical applicability of our method in real-world scenarios.

     
