Visual Task Specification User Interface for Uncalibrated Visual Servoing
Abstract
Today, robots work well in structured environments, where they complete tasks autonomously and accurately, as industrial robotics demonstrates. However, in unstructured and dynamic environments such as homes, hospitals, or disaster areas, robots are still of little assistance. Moreover, robotics research has focused on topics such as mechatronic design, control, and autonomy, while comparatively few works pay attention to human-robot interfacing. The result is a growing gap between expectations of robotics technology and its real-world capabilities.
In this work we present a human-robot interface for semi-autonomous, human-in-the-loop control that aims to tackle some of the challenges robots face in unstructured environments. The interface lets a user specify tasks for a robot to complete using uncalibrated visual servoing. Visual servoing is a technique that controls a robot using visual feedback. In particular, uncalibrated visual servoing is well suited to unstructured environments because it relies on neither camera calibration nor other prior models.
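To make the idea concrete, the following is a minimal toy sketch of uncalibrated image-based visual servoing, not the thesis implementation: the unknown image Jacobian relating joint motion to image-feature motion is estimated online with a Broyden rank-1 update, so no camera or robot model is ever supplied. All names and the linear toy "robot" are our own illustrative assumptions.

```python
import numpy as np

def broyden_update(J, dq, ds, alpha=1.0):
    """Rank-1 Broyden update of the Jacobian estimate.

    J  : current Jacobian estimate (m x n)
    dq : joint displacement over the last step (n,)
    ds : observed image-feature displacement (m,)
    """
    denom = dq @ dq
    if denom < 1e-12:          # no motion, nothing to learn from
        return J
    return J + alpha * np.outer(ds - J @ dq, dq) / denom

def servo_step(J, error, gain=0.5):
    """One servoing step: joint velocity from the image error via the
    pseudoinverse of the estimated Jacobian."""
    return -gain * np.linalg.pinv(J) @ error

# Toy stand-in for the unknown camera-robot mapping (never given to the
# controller): features = true_J @ q + offset.
true_J = np.array([[1.0, 0.3], [-0.2, 0.8]])
offset = np.array([1.0, -0.5])

q = np.zeros(2)                # joint configuration
target = np.array([0.0, 0.0])  # desired image-feature position
s = true_J @ q + offset        # observed image features
J_est = np.eye(2)              # rough initial Jacobian guess

for _ in range(50):
    dq = servo_step(J_est, s - target)
    q = q + dq
    s_new = true_J @ q + offset
    J_est = broyden_update(J_est, dq, s_new - s)
    s = s_new

print(np.linalg.norm(s - target))  # residual image error shrinks toward zero
```

Even with a crude initial Jacobian guess, the error-driven loop converges while the Broyden update refines the estimate from observed motion alone, which is the property that makes the uncalibrated approach attractive in unstructured scenes.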
The user visually specifies high-level tasks by combining a set of geometric constraints, and our interface offers a versatile set of tasks that span both coarse and fine manipulation. The contribution of this thesis is twofold. First, we develop an interface for visual task specification. Second, we conduct experiments both to explore the visual task specification technique, determining how best to use it in practice, and to assess the performance of the overall system.
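As a hypothetical illustration of constraint-based task specification (the function names and composition scheme here are ours, not the thesis interface), two common image-space constraints and their stacking into one task-error vector might look like:

```python
import numpy as np

def point_to_point(p, q):
    """Error is zero when image point p coincides with image point q."""
    return p - q  # 2-vector error

def point_to_line(p, a, b):
    """Error is zero when image point p lies on the line through a and b."""
    # Homogeneous line through a and b, then incidence of p with that line.
    line = np.cross(np.append(a, 1.0), np.append(b, 1.0))
    return np.array([np.append(p, 1.0) @ line])  # scalar error

def task_error(constraints):
    """A composite task stacks individual constraint errors into one
    vector, which a visual-servoing controller then drives to zero."""
    return np.concatenate([c() for c in constraints])

# Example: align two points and keep the first point on a diagonal line.
p1, p2 = np.array([0.2, 0.1]), np.array([0.2, 0.1])
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
e = task_error([lambda: point_to_point(p1, p2),
                lambda: point_to_line(p1, a, b)])
```

Coarse tasks (bring the gripper near an object) and fine tasks (insert a peg along a line) can both be expressed by choosing and stacking such constraints, which is what gives a constraint-based specification its range.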
