TY - GEN
T1 - SingleDemoGrasp
T2 - IEEE International Conference on Automation Science and Engineering
AU - Sefat, Amir Mehman
AU - Angleraud, Alexandre
AU - Rahtu, Esa
AU - Pieters, Roel
N1 - Funding Information:
Project funding was received from the European Union’s Horizon 2020 research and innovation programme, grant no. 871449 (OpenDR) and no. 825196 (TRINITY).
Publisher Copyright:
© 2022 IEEE.
Jufoid=73680
PY - 2022
Y1 - 2022
AB - Learning-based grasping models typically require a large amount of training data and training time to generate an effective grasping model. Alternatively, small non-generic grasp models have been proposed that are tailored to specific objects, for example by directly predicting the object's location in 2D/3D space and determining suitable grasp poses by post-processing. In both cases, data generation is a bottleneck, as data needs to be collected and annotated separately for each individual object and image. In this work, we tackle these issues and propose a grasping model that is developed in four main steps: 1. Visual object grasp demonstration, 2. Data augmentation, 3. Grasp detection model training, and 4. Robot grasping action. Four different vision-based grasp models are evaluated with industrial and 3D-printed objects, a robot and a standard gripper, in both simulation and real environments. The grasping model is implemented in the OpenDR toolkit at: https://github.com/opendr-eu/opendr/tree/master/projects/control/single_demo_grasp.
KW - Deep Learning in Grasping and Manipulation
KW - Grasping
KW - Perception for Grasping and Manipulation
U2 - 10.1109/CASE49997.2022.9926463
DO - 10.1109/CASE49997.2022.9926463
M3 - Conference contribution
AN - SCOPUS:85141695477
T3 - IEEE International Conference on Automation Science and Engineering
SP - 390
EP - 396
BT - 2022 IEEE 18th International Conference on Automation Science and Engineering, CASE 2022
PB - IEEE
Y2 - 20 August 2022 through 24 August 2022
ER -