Vision-Based Methods for Joint State Estimation of Robotic Manipulators
Abstract
This thesis applies machine learning and computer vision techniques to a robotics engineering problem, using the two-armed Baxter research robot as the hardware platform. The challenge was to estimate the robot arm's joint angles from monocular camera images. After evaluating several traditional computer vision methods, we adopted a convolutional neural network approach, which offered better accuracy and outlier rejection. A simulation toolchain was developed to automatically generate labelled training images for the neural network, eliminating the tedious manual labelling such methods usually require. This introduced the challenge of the domain gap between simulated and real-world images, which was addressed with a generative adversarial network that transfers image textures. Both joint keypoint detection and joint angle estimation were evaluated on hardware, with ground-truth values accurately captured in a laboratory environment.
