Abstract
This article presents a motion planning and control framework for flexible robotic manipulators, integrating deep reinforcement learning (DRL) with a nonlinear partial differential equation (PDE) controller. Unlike conventional approaches that focus solely on control, we demonstrate that the choice of desired trajectory itself significantly influences endpoint vibrations. To address this, a DRL motion planner, trained using the soft actor-critic (SAC) algorithm, generates optimized trajectories that inherently minimize vibrations. The nonlinear PDE controller then computes the torques required to track the planned trajectory, with closed-loop stability established via Lyapunov analysis. The proposed methodology is validated through both simulations and real-world experiments, demonstrating superior vibration suppression and tracking accuracy compared to traditional methods. The results underscore the potential of combining learning-based motion planning with model-based control for enhancing the precision and stability of flexible robotic manipulators.
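The abstract's key observation is that the reference trajectory itself, not only the controller, determines how much the flexible link vibrates. As a minimal illustration of this idea (not the paper's SAC-based planner), the sketch below generates a classical minimum-jerk joint trajectory, a standard smooth reference whose velocity and acceleration vanish at both endpoints and which therefore excites fewer structural modes than, say, a trapezoidal profile. The function name and parameters are hypothetical and chosen for this example only.

```python
import numpy as np

def minimum_jerk(q0, qf, T, n=101):
    """Illustrative minimum-jerk trajectory from q0 to qf over duration T.

    Returns time samples t and joint positions q following the classical
    5th-order polynomial s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5, which has
    zero velocity and acceleration at both endpoints.
    """
    t = np.linspace(0.0, T, n)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    q = q0 + (qf - q0) * s
    return t, q

# Example: 1-rad joint motion over 2 s; endpoint derivatives are zero,
# so the reference injects less energy into the flexible modes.
t, q = minimum_jerk(0.0, 1.0, 2.0)
```

A learned planner such as the SAC agent described in the abstract would, in effect, search over a much richer family of such profiles, shaping the trajectory to the manipulator's actual flexible dynamics rather than relying on a fixed polynomial.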
| Original language | English |
|---|---|
| Pages (from-to) | 8634-8641 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 10 |
| Issue number | 9 |
| DOIs | |
| Publication status | Published - Sept 2025 |
| Publication type | A1 Journal article-refereed |
Publication forum classification
- Publication forum level 2