Robotic grasping in agile production

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review

Abstract

Recent developments in robotics and deep learning have enabled high-level robotic tasks to be learned from simulated or real data. In this chapter, the task of robot grasping is covered, where a robot manipulator learns a grasping model from perceptual data, such as RGB-D images or point clouds. The chapter is presented in the context of robotics for agile production, thereby providing requirements and limitations that are relevant for deep learning in robotics. An overview of different approaches is given, with special attention to the evaluation of robotic object grasping and the potential follow-up step of object manipulation. In addition, a list of data sets that utilize simulation to generate training data for object grasping is provided.

Original language: English
Title of host publication: Deep Learning for Robot Perception and Cognition
Editors: Alexandros Iosifidis, Anastasios Tefas
Publisher: Academic Press
Pages: 407-433
Number of pages: 27
ISBN (Electronic): 9780323857871
ISBN (Print): 9780323885720
DOIs
Publication status: Published - 2022
Publication type: A3 Book chapter

Keywords

  • Agile production
  • Deep learning
  • Grasp representation
  • Object pose estimation
  • Robot object grasping and manipulation

Publication forum classification

  • Publication forum level 2

ASJC Scopus subject areas

  • Computer Science(all)
