Abstract:
Low-altitude logistics unmanned aerial vehicles (UAVs) integrated with hierarchical collaborative technology mark a significant breakthrough in modern logistics systems. This technology addresses persistent challenges in logistics distribution, particularly operational efficiency, scalability, and environmental adaptability. The system is built upon a three-layer architecture (cooperative task allocation, cooperative trajectory planning, and dynamic trajectory re-planning) that enables UAVs to operate in a coordinated, intelligent, and responsive manner, thereby enhancing the overall performance of aerial delivery networks.

The cooperative task allocation layer distributes delivery orders among multiple UAVs. It tackles tightly coupled multi-constraint problems involving payload capacity, battery endurance, delivery deadlines, and airspace regulations. Several algorithmic families have been developed to address these challenges. Optimization-based methods, such as mixed-integer linear programming and dynamic hierarchical planning, offer mathematically rigorous solutions. Market-inspired mechanisms, such as auction bidding and game-theoretic coalition optimization, introduce economic principles to improve fairness and efficiency. Swarm intelligence algorithms, including ant colony optimization and genetic algorithms, provide robust solutions inspired by natural behaviors. Reinforcement learning techniques, including deep and dynamic reinforcement learning, enable UAVs to adapt to changing environments through continuous learning. Together, these approaches enhance the efficiency and flexibility of task allocation.

The cooperative trajectory planning layer generates safe and efficient three-dimensional flight paths, balancing obstacle avoidance, energy consumption, and timely delivery. Fine-grained optimization techniques ensure path feasibility and optimality under real-world constraints.
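As an illustration of the market-inspired allocation mechanisms surveyed above, the following minimal sketch awards each order to the lowest bidder in a single-round auction, with a UAV's bid equal to its travel distance and infeasible bids screened out by payload and range constraints. All identifiers, fields, and parameters here are illustrative assumptions, not any specific system's protocol.

```python
import math

def bid(uav, order):
    """A UAV's bid for an order: travel distance, or infinity if infeasible."""
    if order["weight"] > uav["capacity"]:
        return math.inf  # payload constraint violated
    dist = math.dist(uav["pos"], order["dest"])
    if dist > uav["range"]:
        return math.inf  # battery/range constraint violated
    return dist

def auction_assign(uavs, orders):
    """Greedily award each order to the lowest (best) bidder."""
    assignment = {}
    for order in orders:
        bids = {u["id"]: bid(u, order) for u in uavs}
        winner = min(bids, key=bids.get)
        if bids[winner] < math.inf:
            assignment[order["id"]] = winner
    return assignment

uavs = [
    {"id": "uav1", "pos": (0.0, 0.0), "capacity": 5.0, "range": 30.0},
    {"id": "uav2", "pos": (10.0, 0.0), "capacity": 2.0, "range": 30.0},
]
orders = [
    {"id": "o1", "dest": (9.0, 1.0), "weight": 1.0},
    {"id": "o2", "dest": (1.0, 1.0), "weight": 4.0},
]
print(auction_assign(uavs, orders))  # {'o1': 'uav2', 'o2': 'uav1'}
```

Order o1 goes to the nearer uav2, while o2 falls to uav1 because it exceeds uav2's payload capacity; richer protocols iterate such rounds with price updates.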
Swarm intelligence and evolutionary algorithms support decentralized path exploration and refinement. Reinforcement learning models, enhanced with deep learning and transfer learning, allow UAVs to adapt their flight strategies based on historical and environmental data. Hybrid frameworks integrate multiple methodologies to achieve robust, generalizable trajectory planning, particularly in complex urban environments.

The dynamic trajectory re-planning layer ensures real-time adaptability to environmental changes such as weather shifts, newly detected obstacles, or mission adjustments. It employs search-based methods such as random sampling for rapid route exploration, optimization algorithms for maintaining trajectory feasibility, and intelligent agent-based learning for adaptive decision-making. Physical models, such as artificial potential fields, simulate attractive and repulsive forces that guide UAVs around obstacles. Together, these techniques enhance the system's responsiveness and robustness, ensuring mission continuity under unpredictable conditions.

Despite these advances, several technical challenges persist. Strong coupling among multiple constraints complicates both task allocation and trajectory planning. Limited dynamic adaptability hinders responsiveness to rapidly changing environments. Large-scale coordination remains inefficient due to communication delays and computational complexity, and many current solutions are not yet integrated with real-world operational scenarios. To overcome these limitations and enable widespread deployment, future research should focus on cross-layer collaborative optimization, scenario-specific integration, large-scale swarm intelligence, dynamically robust design, and energy-efficient strategies.
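To make the artificial-potential-field model mentioned above concrete, the following sketch computes one re-planning step as a unit move along the combined attractive force toward the goal and repulsive forces away from nearby obstacles. The gains, influence radius, and step size are illustrative assumptions, not tuned values from any surveyed system.

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0,
             influence=5.0, step=0.2):
    """Return the next 2-D waypoint after one potential-field step."""
    # Attractive force pulls the UAV toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:
            # Repulsion grows sharply as the UAV nears the obstacle.
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

# One step from (4.5, 0.5) toward (10, 0) with an obstacle at (5.0, 0.5):
# repulsion dominates at close range, so the UAV backs away before detouring.
print(apf_step((4.5, 0.5), (10.0, 0.0), obstacles=[(5.0, 0.5)]))
```

The same force balance extends to three dimensions for aerial paths; in practice the method is combined with the search- and optimization-based re-planners above, since pure potential fields can stall in local minima.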