Abstract:
To address the tendency of permutation flow-shop scheduling problem (PFSP) solvers to become trapped in local optima, and their insufficient balance between global exploration and local exploitation when minimizing makespan, this study proposes an enhanced dung beetle optimization (DBO) algorithm. The PFSP is a strongly NP-hard combinatorial optimization problem, and traditional swarm intelligence algorithms often struggle to maintain an appropriate balance between exploration and exploitation, particularly on large-scale instances with complex search landscapes. Motivated by these limitations, the proposed algorithm introduces several improvements that collectively enhance search efficiency and robustness. First, an optimized Chebyshev chaotic map is incorporated into population initialization. By exploiting the nonlinear and ergodic properties of chaotic sequences, the initial population achieves greater diversity and a more uniform distribution across the search space. This broadens the initial exploration scope and helps the algorithm avoid the premature convergence caused by highly clustered initial solutions, improving optimization performance from the outset. Second, during the early stages of the search, an adaptive convergence factor strategy is introduced to guide the movement of individuals dynamically. This strategy adjusts the convergence factor according to iteration progress, allowing dung beetle individuals to switch flexibly between extensive exploration and focused exploitation. Consequently, the algorithm enhances information sharing among individuals, accelerates the search, and improves its ability to traverse complex solution landscapes. This dynamic adjustment mechanism preserves population diversity in early iterations while gradually strengthening convergence behavior in later stages.
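The two early-stage mechanisms described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Chebyshev order, the per-dimension seed values, and the decay exponent `p` of the convergence factor are illustrative assumptions.

```python
import numpy as np

def chebyshev_init(pop_size, dim, lb, ub, order=4):
    """Population initialization with a Chebyshev chaotic map (sketch).

    Iterates x_{k+1} = cos(order * arccos(x_k)) on [-1, 1], then maps each
    chaotic row onto the search bounds [lb, ub]. The seed vector and order
    are illustrative choices, not the paper's tuned parameters.
    """
    x = np.linspace(0.07, 0.93, dim)  # distinct, asymmetric seeds per dimension
    rows = []
    for _ in range(pop_size):
        x = np.cos(order * np.arccos(x))          # chaotic update, stays in [-1, 1]
        rows.append(lb + (x + 1.0) / 2.0 * (ub - lb))  # rescale to [lb, ub]
    return np.array(rows)

def convergence_factor(t, T, c_init=2.0, c_final=0.0, p=2.0):
    """Adaptive convergence factor (illustrative form): large early in the run
    to favor exploration, decaying nonlinearly toward c_final for exploitation."""
    return c_final + (c_init - c_final) * (1.0 - t / T) ** p
```

A step size scaled by `convergence_factor(t, T)` would then shrink as the iteration counter `t` approaches the budget `T`, matching the exploration-to-exploitation transition the abstract describes.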
In the later stages of the iteration, a fused strategy combining an improved lens-imaging reverse learning mechanism with a greedy selection mechanism is adopted to further improve both the depth and accuracy of the search. The lens-imaging strategy enables individuals to generate high-quality candidate solutions around promising regions, reinforcing local exploitation, while the reverse learning component adds global exploration capability by generating mirrored solutions that may lie closer to the global optimum. Combined with a greedy selection rule, the algorithm retains superior solutions and prevents the search from stagnating in local optima. Furthermore, an orthogonal experimental design is employed to determine the key parameters of the algorithm systematically. Orthogonal experiments quantify the influence of different parameter levels, enabling the selection of parameter combinations that yield stable, high-quality performance while reducing the empirical randomness of manual tuning. To evaluate the effectiveness of the proposed algorithm, extensive simulations were performed on the widely adopted Car, Rec, and Taillard benchmark instances. The results demonstrate that the improved algorithm significantly outperforms several classical swarm intelligence algorithms in solution quality, convergence behavior, and robustness across problem scales. Finally, the practical applicability of the method was validated by deploying it to optimize production scheduling at a steel-pipe manufacturing enterprise, where it achieved substantial reductions in makespan and gains in operational efficiency.
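The late-stage fused strategy can be sketched with the commonly used lens-imaging reverse learning formula combined with greedy retention. This is a sketch under assumptions: the scaling factor `k` and the clipping to bounds are standard choices in the literature, not necessarily the paper's improved variant.

```python
import numpy as np

def lens_imaging_obl(x, lb, ub, k=2.0):
    """Lens-imaging reverse (opposition-based) learning, common form (sketch).

    Reflects x through a 'lens' at the midpoint of the bounds with scaling
    factor k; k = 1 reduces to plain opposition-based learning. The value
    k = 2.0 here is an illustrative assumption.
    """
    cand = (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x / k
    return np.clip(cand, lb, ub)  # keep the reversed candidate feasible

def greedy_select(x, x_new, f):
    """Greedy selection: keep whichever solution has the smaller objective."""
    return x_new if f(x_new) < f(x) else x
```

For example, with bounds [0, 1] and `k = 2`, a point `x` maps to `0.75 - x/2`, and `greedy_select` then retains the original point or its reversed image, whichever scores better, which is how the fused strategy avoids discarding superior solutions.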