A vision-based teleoperation method for a robotic arm with 4 degrees of freedom
ABSTRACT :

The ability to remotely control a robotic arm by tracking a human one is essential in situations where human involvement is needed but physical presence is impossible. Vision-based control offers an advantage over non-vision schemes because it is less intrusive. On the other hand, estimating the hand pose is difficult owing to the nature of the hand itself: the hand is highly articulated and prone to self-occlusion. In this paper, we present a method for controlling a robotic arm with 4 degrees of freedom. The arm is composed of 3 segments connected and driven by servo motors. The end effector of the arm (the 3rd segment) is a gripper that mimics the opening and closing movements of the human hand. The features required to control the arm are the 2D coordinates of the center of the human hand, its orientation, and its open/closed state. The results are reported and analyzed, limitations of the scheme are discussed, and possible future work is proposed.
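The mapping described above (hand center, orientation, and open/closed state driving four servos) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the joint names, frame size, and linear mappings are assumptions chosen for clarity.

```python
# Hypothetical servo limits in degrees; real hardware limits will differ.
SERVO_MIN, SERVO_MAX = 0.0, 180.0

def clamp(value, low=SERVO_MIN, high=SERVO_MAX):
    """Keep a servo command inside its mechanical range."""
    return max(low, min(high, value))

def hand_to_servo_commands(cx, cy, orientation_deg, hand_open,
                           frame_w=640, frame_h=480):
    """Map the three visual features (2D hand center, orientation,
    open/closed state) to four servo angles.

    cx, cy          : center of the hand in image pixels
    orientation_deg : in-plane hand orientation in degrees
    hand_open       : True if the hand is open, False if closed
    """
    base     = clamp(cx / frame_w * 180.0)            # horizontal position -> base joint
    shoulder = clamp(cy / frame_h * 180.0)            # vertical position -> shoulder joint
    wrist    = clamp((orientation_deg % 360.0) / 2.0) # orientation -> wrist joint
    gripper  = SERVO_MAX if hand_open else SERVO_MIN  # open/closed -> gripper
    return base, shoulder, wrist, gripper

# An open hand centered in the frame, rotated 90 degrees:
print(hand_to_servo_commands(320, 240, 90, True))  # -> (90.0, 90.0, 45.0, 180.0)
```

In practice the hand features would come from a per-frame hand detector, and the linear pixel-to-angle mapping would be replaced by a calibration tuned to the arm's workspace.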

EXISTING SYSTEM :

• A synchronized human-robot training set is generated from an existing dataset of labeled depth images of the human hand and simulated depth images of a robotic hand.
• The robot state is more accessible and relatively stable compared with the human hand, and many human hand datasets already exist.
• We propose a novel criterion for generating human-robot pairings from these results by using an existing dataset of labeled human hand depth images, manipulating the robot and recording the corresponding joint angles and images in simulation, and performing extensive evaluations on a physical robot.

DISADVANTAGE :

• One problem with current vision-based methods is that the teleoperator's hand must stay inside the limited view range of the camera system.
• Motion capture systems provide accurate tracking data but can be expensive, and the correspondence problem between markers on the fingers and the cameras still needs to be solved.
• The teleoperator's hand easily disappears from the camera's field of view if the arm movement is relatively large.
• Although a multi-camera system could be one solution, we address this problem with a cheap 3D-printed camera holder that can be mounted on the teleoperator's forearm.

PROPOSED SYSTEM :

• We propose a novel vision-based teleoperation method called Transteleop, which extracts coherent pose features between paired human and robot hands based on image-to-image translation methods.
• We proposed an end-to-end network, TeachNet, which was trained with a consistency loss function to control a five-fingered robotic hand from simulated data.
• They constructed a reward function for a reinforcement learning algorithm from translated instructions and evaluated the proposed method on a coffee-machine operation task.
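The consistency loss mentioned above encourages the human-image branch and the robot-image branch of the network to produce matching pose features for the same pose. A minimal sketch of such a loss, written here as a plain mean-squared distance between two embedding vectors (the actual TeachNet loss may differ in form and weighting):

```python
def consistency_loss(human_embedding, robot_embedding):
    """Mean squared distance between the pose embedding from the
    human-image branch and the one from the robot-image branch.
    A value of 0 means the two branches agree perfectly."""
    assert len(human_embedding) == len(robot_embedding)
    return sum((h - r) ** 2
               for h, r in zip(human_embedding, robot_embedding)) / len(human_embedding)

# Identical embeddings incur no penalty; mismatched ones do.
print(consistency_loss([1.0, 2.0], [1.0, 2.0]))  # -> 0.0
print(consistency_loss([0.0, 0.0], [2.0, 0.0]))  # -> 2.0
```

During training this term would be added to the task loss (e.g. joint-angle regression) so that minimizing it pulls the two branches into a shared pose-feature space.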

ADVANTAGE :

• It is essential to design an efficient network that can learn the corresponding robot pose feature in the human pose space.
• Because end-to-end methods depend on massive human-robot teleoperation pairings, we aim to explore an efficient method that collects synchronized hand data for both the robot and the human.
• This dataset generation method, which maps the keypoint positions and link directions of the robot from the human hand through an improved mapping method and then manipulates the robot and records its state in simulation, is efficient and reliable.
• We used the time to complete an in-hand grasp-and-release task as a usability metric.
