Abstract
Loader cranes with multiple actuated joints are complex systems for human operators. Advanced assistance functions, such as end-effector velocity control in Cartesian space, allow the machine to be used to its full speed and potential, with actuator limits, load balance, singularities, and other complicating effects handled by the automated function. To this end, this paper presents a reinforcement learning-based policy optimization workflow for training and evaluating controllers using large-scale, parallelized invocations of forward kinematics. Monte Carlo evaluations of the closed-loop model are performed to inspect stability and performance across the loader crane's whole operational envelope for safe deployment on real machines. Our approach requires no explicit inverse-kinematics model and is free of complex or hard-coded actuator limits or objectives. Results of simulations and experiments on a real loader crane showcase the performance of our approach in comparison to Jacobian inverse-based methods.
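As a concrete reference point, the sketch below illustrates the kind of Jacobian inverse-based baseline the paper compares against, together with a simple Monte Carlo closed-loop evaluation loop of the sort the abstract describes. It uses a hypothetical planar three-link chain as a stand-in for the crane kinematics; the link lengths, rate limits, sampling ranges, and the damped-least-squares formulation are all illustrative assumptions, not the paper's model or results.

```python
import numpy as np

# Hypothetical planar 3-link chain standing in for the crane's boom kinematics.
# Link lengths, rate limits, and sampling ranges below are illustrative only.
LINKS = np.array([3.0, 2.5, 1.5])        # link lengths [m]
QDOT_MAX = np.array([0.3, 0.4, 0.6])     # joint-rate limits [rad/s]

def forward_kinematics(q):
    """End-effector (x, y) position of the planar chain."""
    angles = np.cumsum(q)                # absolute link angles
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def jacobian(q):
    """2x3 analytic Jacobian of the planar chain."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(LINKS[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(LINKS[i:] * np.cos(angles[i:]))
    return J

def dls_policy(q, v_des, damping=0.05):
    """Jacobian inverse-style baseline via damped least squares:
    qdot = J^T (J J^T + lambda^2 I)^-1 v_des, clamped to rate limits."""
    J = jacobian(q)
    qdot = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), v_des)
    return np.clip(qdot, -QDOT_MAX, QDOT_MAX)

def monte_carlo_eval(policy, n_rollouts=500, horizon=100, dt=0.05, seed=0):
    """Sample configurations and velocity commands across the (assumed)
    operational envelope and score closed-loop velocity-tracking error."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_rollouts):
        q = rng.uniform(low=[0.2, -1.2, -1.2], high=[1.4, 1.2, 1.2])
        v_des = rng.uniform(-0.5, 0.5, size=2)   # commanded Cartesian velocity
        for _ in range(horizon):
            qdot = policy(q, v_des)
            errors.append(np.linalg.norm(jacobian(q) @ qdot - v_des))
            q = q + dt * qdot                    # kinematic integration step
    return float(np.mean(errors)), float(np.max(errors))

mean_err, worst_err = monte_carlo_eval(dls_policy)
print(f"mean tracking error {mean_err:.3f} m/s, worst case {worst_err:.3f} m/s")
```

Even this simple baseline hints at why envelope-wide Monte Carlo sweeps matter: damped-least-squares tracking error grows near singular and rate-limited configurations, which are precisely the regions a learned policy must be audited over before deployment on a real machine.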
| Original language | English |
| --- | --- |
| Pages (from-to) | 484–496 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Robotics |
| Volume | 41 |
| Early online date | 2024 |
| DOIs | |
| Publication status | Published - 2025 |
| Publication type | A1 Journal article-refereed |
Keywords
- Loader crane
- motion control
- operator assistance function
- policy optimization
- redundancy resolution
- reinforcement learning
Publication forum classification
- Publication forum level 3
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering