Distributed gradient-based consensus optimization algorithm and convergence analysis

  • Abstract: Consensus optimization with set constraints in multi-agent networks is studied, and a fixed-step-size distributed algorithm based on a primal-dual gradient scheme is proposed. The algorithm's parameters, including the step size, affect its convergence, so convergence must be analyzed first and suitable parameters then chosen according to the convergence conditions. For general fixed-step-size iteration schemes, this paper first proposes a convergence analysis paradigm based on a Lyapunov function, analogous to Lyapunov stability analysis for general differential equations. Then, for the distributed gradient algorithm under consideration, a suitable Lyapunov function is constructed, and the admissible range of the algorithm parameters is obtained from the convergence conditions, avoiding tedious and complicated case-by-case arguments. The theory and method proposed here also provide a framework-level, systematic line of reasoning for other types of distributed algorithms.

     

    Abstract: In distributed optimization, a network of agents cooperatively solves an optimization problem; such problems have significant applications in many fields of science and engineering, such as metallurgical engineering. In complex industrial processes with multi-level characteristics, varying working conditions, and long production chains, numerous micro- and macro-level optimization and decision-making problems are encountered, such as product quality control, production planning, scheduling, and comprehensive energy deployment. The theory and methods of distributed optimization are key to promoting strategic decision-making in the integration of industrialization with the new-generation industrial revolution. Their development enhances the ability to handle large-scale, complex big-data problems, which has important practical value and economic benefits. In this study, consensus optimization with set constraints in multi-agent networks was explored, and a distributed algorithm with a fixed step size was proposed on the basis of a primal-dual gradient scheme. Parameters such as the step size affect the convergence of the algorithm; as such, convergence should be analyzed first, and appropriate parameters should subsequently be set in accordance with the convergence conditions. Existing works have constructed different Lyapunov functions by exploiting the specific iteration scheme of each algorithm when analyzing convergence. In contrast, this study proposes a convergence analysis paradigm based on a Lyapunov function for general fixed-step-size iteration schemes, which is analogous to Lyapunov stability analysis for general differential equations. A suitable Lyapunov function was constructed for the distributed gradient algorithm, and a parameter setting range was obtained in accordance with the convergence conditions. The proposed method avoids the tedious and complicated analysis of algorithm convergence and parameter assignment. The theory and method presented in this study also provide a framework-level, systematic demonstration method for other types of distributed algorithms and may be regarded as a future direction of distributed optimization.
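To make the setting concrete, the following is a minimal sketch (not the paper's exact scheme) of a fixed-step-size, projected primal-dual gradient method for consensus optimization with a set constraint. The local costs, the box constraint, the ring communication graph, and all variable names are illustrative assumptions; the convergence-critical role of the fixed step size `alpha` is exactly the kind of parameter the paper's Lyapunov-based analysis is meant to bound.

```python
import numpy as np

# Illustrative sketch: projected primal-dual gradient iteration for
#   min  sum_i f_i(x)   s.t.  x in X,
# with assumed local costs f_i(x) = 0.5*(x - b_i)^2 and X = [0, 1].
# Agents communicate over a 4-cycle; consensus is enforced through
# the graph Laplacian and dual variables.

n = 4                                  # number of agents
b = np.array([0.2, 0.4, 0.6, 0.8])    # local data; the optimum is mean(b) = 0.5
A = np.array([[0, 1, 0, 1],           # adjacency matrix of a ring graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A             # graph Laplacian

alpha = 0.05                          # fixed step size; must satisfy the
                                      # convergence condition from the analysis
x = np.zeros(n)                       # primal variables, one per agent
lam = np.zeros(n)                     # dual variables enforcing consensus

def proj(v):
    """Euclidean projection onto the constraint set X = [0, 1]."""
    return np.clip(v, 0.0, 1.0)

for _ in range(5000):
    grad = (x - b) + L @ lam + L @ x  # local gradient + dual/consensus terms
    x_new = proj(x - alpha * grad)    # projected primal gradient descent
    lam = lam + alpha * (L @ x)       # dual gradient ascent on consensus
    x = x_new

print(x)  # all agents agree on a value near the optimum 0.5
```

With a step size this small the iteration settles onto the consensual optimum; a step size that violates the convergence condition would instead oscillate or diverge, which is precisely why the parameter range derived from the Lyapunov analysis matters.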

     
