Abstract
In this contribution, two extended reality-based approaches to intuitive robot control are presented. The first is a mixed reality-based method that uses a head-mounted display, while the second is an augmented reality-based method that relies on gesture control via the Microsoft Kinect camera. Both methods are applied to a collaborative robot, and for each a human-robot interface is created to make interaction with the robot more intuitive and straightforward. The mixed reality-based method focuses on intuitive path planning for the collaborative robot and provides full motion control over the robot manipulator. The head-mounted display serves as the operator's interface, displaying virtual content that enables interaction with the robotic system. The user interface presents information about the robot's state, including joint positions, velocities, and the force exerted on the robot flange. Additionally, force-controlled motion can be executed by specifying control points around an object, and motion commands in joint and Cartesian space can be sent to the robot controller. The augmented reality-based approach, in turn, addresses intuitive robot control through gestures made by a human operator, supported by augmented reality. A set of gestures is developed to control the robot's movement in Cartesian space and to operate the gripper. This paper presents both approaches in detail, followed by a discussion of the advantages and drawbacks of each method.