Grasping virtual fish: A step towards deep learning from demonstration in virtual reality
Conference paper, peer reviewed
Published version
Permanent link: http://hdl.handle.net/11250/2496162
Date published: 2018-03-26
Collections
- Publikasjoner fra CRIStin - SINTEF Ocean [1369]
- SINTEF Ocean [1443]
Original version
Robotics and Biomimetics (ROBIO), 2017 IEEE International Conference on. DOI: 10.1109/ROBIO.2017.8324578

Abstract
We present an approach to robotic deep learning from demonstration in virtual reality, which combines a deep 3D convolutional neural network for grasp detection from 3D point clouds with domain randomization to generate a large training data set. The use of virtual reality (VR) enables robot learning from demonstration in a virtual environment, where a human user can easily and intuitively demonstrate examples of how to grasp an object, such as a fish. From a few dozen such demonstrations, we use domain randomization to generate a large synthetic training data set of 76 000 example grasps of fish. After training on this data set, the network is able to guide a gripper to grasp virtual fish with good success rates. Our domain randomization approach is a step towards efficient robotic deep learning from demonstration in virtual reality.
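The data-expansion idea can be illustrated with a minimal sketch: each demonstrated grasp pose is randomly perturbed many times to synthesize a much larger training set. All names, the pose parameterization, and the noise magnitudes below are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_grasp(grasp_pose, n_variants=100):
    """Synthesize grasp examples by jittering one demonstrated pose.

    grasp_pose: (x, y, z, yaw) of the demonstrated gripper pose
    (a simplified parameterization assumed for this sketch).
    Returns an array of shape (n_variants, 4).
    """
    base = np.asarray(grasp_pose, dtype=float)
    # Small positional jitter (metres) and orientation jitter (radians);
    # the standard deviations are illustrative assumptions.
    pos_noise = rng.normal(0.0, 0.005, size=(n_variants, 3))
    yaw_noise = rng.normal(0.0, 0.05, size=(n_variants, 1))
    return base + np.hstack([pos_noise, yaw_noise])

# A few demonstrations expanded into a larger synthetic set:
demos = [(0.10, 0.20, 0.05, 0.3), (0.12, 0.18, 0.05, -0.1)]
synthetic = np.vstack([randomize_grasp(d, n_variants=200) for d in demos])
print(synthetic.shape)  # (400, 4)
```

In the paper the randomization is applied in the virtual scene (producing varied 3D point clouds alongside the grasp labels); this sketch only shows the label-side perturbation to convey how a few dozen demonstrations can grow into tens of thousands of examples.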