Vision and force based autonomous coating with rollers

Abstract

Coating rollers are widely used in structural painting, in preference to brushes and sprayers, because they deposit a thicker paint layer, yield better color consistency, and allow easy customization of the holder frame and nap. In this paper, we introduce a cost-effective method that employs a general-purpose robot (Sawyer, Rethink Robotics) for autonomous coating. To sense the position and shape of the target object to be coated, the robot is paired with an RGB-Depth camera. The combined system autonomously recognizes the number of faces of the object as well as their positions and surface normals. Unlike related work based on two-dimensional RGB image processing, all analyses and algorithms here operate on three-dimensional point cloud data (PCD). The object model learned from the PCD is then analyzed autonomously to plan motions that avoid collisions between the robot arm and the object. To achieve human-level coating quality with minimal equipment, we combine our own passive impedance control with the robot's built-in active impedance control. The former is realized by mounting an ultrasonic sensor at the robot's end-effector, working together with a customized compliant mass-spring-damper roller to keep a precise distance between the end-effector and the surface to be coated while maintaining a fixed contact force. Altogether, the control approach mimics human painting, as evidenced by experimental measurements of coating thickness. Coating of two different polyhedral objects is also demonstrated to validate the overall method.
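The paper does not publish its implementation, but the face-recognition step it describes (counting an object's planar faces and estimating their positions and surface normals from PCD) can be sketched with iterative RANSAC plane segmentation. The following numpy-only sketch is illustrative; all function names, thresholds, and iteration counts are assumptions, not the authors' code:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, dist_thresh=0.01, rng=None):
    """RANSAC plane fit: returns ((n, d), inlier_mask) with n·p ≈ d for inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ p0
        inliers = np.abs(points @ n - d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

def segment_faces(points, min_inliers=50, max_faces=10):
    """Iteratively extract planar faces; each normal can orient the roller pass."""
    faces = []
    remaining = points
    for _ in range(max_faces):
        if len(remaining) < min_inliers:
            break
        model, inliers = fit_plane_ransac(remaining)
        if model is None or inliers.sum() < min_inliers:
            break
        faces.append((model[0], model[1], remaining[inliers]))
        remaining = remaining[~inliers]  # peel off the found face, repeat
    return faces
```

The number of extracted faces, together with each face's normal, would then feed the collision-aware motion planner described in the abstract.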
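The passive impedance scheme in the abstract (a compliant mass-spring-damper roller plus an ultrasonic distance sensor holding a fixed contact force) can be illustrated with a one-dimensional sketch. All gains and parameters below are hypothetical placeholders, not values from the paper:

```python
# Hypothetical parameters for the compliant roller (not taken from the paper).
K_SPRING = 500.0   # N/m, roller spring stiffness
C_DAMP   = 20.0    # N*s/m, roller damping
F_TARGET = 5.0     # N, desired contact force on the surface
KP_DIST  = 0.002   # m/N, gain mapping force error to a distance correction

def roller_force(compression, compression_rate):
    """Passive mass-spring-damper roller: force from spring compression and its rate."""
    return K_SPRING * compression + C_DAMP * compression_rate

def distance_setpoint(force_measured, current_setpoint):
    """Move the end-effector closer when force is low, away when it is high,
    using the ultrasonic distance reading as the controlled variable."""
    return current_setpoint - KP_DIST * (F_TARGET - force_measured)
```

In steady state the roller compresses until the spring force matches `F_TARGET`, so the end-effector tracks the surface at a fixed offset, which is the behavior the abstract attributes to the passive impedance stage.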

Publication
In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)