How 3D Sensors Are Helping Pick-and-Place Robots

Louise Davis

Terry Arden reports on empowering robots with 3D smart sensors

The use of robots for industrial automation applications is growing at a considerable pace. More and more manufacturers are streamlining their production lines by taking simple, repetitive tasks out of the hands of workers and instead using small- to medium-sized collaborative robots that can perform more accurately and efficiently.
Although robots can perform elaborate tasks such as visual guidance or motion-based scanning, the majority of today’s robot automation applications are pick-and-place – which requires the robot system to locate and move parts from one cell to another. Parts can be positioned systematically or randomly on moving conveyors, in stacked bins or on pallets.
Such systems usually involve a robotic arm equipped with a vacuum- or pneumatic-based gripper that allows the robot to contact the part on a variety of surfaces and transport it (while avoiding collisions) to a target destination. Some specialised applications require mechanical grippers with ‘fingers’ to pick up, manipulate and place the part.

Making robots smart

Robots are not ‘smart’ enough to perform pick-and-place applications on their own. This is because they can’t ‘see’ or ‘think’. As a result, robots require machine vision to visualise the scene, process information to make control decisions and execute precision-based mechanical movements.

To provide these critical functions, manufacturers can pair 3D smart sensors with robots to create a complete automation solution.

2D-driven systems can only locate parts on a flat plane relative to the robot. Robotic systems equipped with 3D vision, on the other hand, can identify parts randomly posed in three dimensions (i.e. X, Y, Z), and accurately discover each part’s 3D orientation. This is a key capability for effective robotic pick-and-place – 3D delivers both position and orientation.

Smart 3D snapshot sensors

Snapshot sensors use structured light and a stereo camera design to capture high-density 3D point clouds of a part or feature from two different vantage points – all in a single ‘snapshot’ scan.

Each 3D point cloud consists of millions of data points that accurately represent the true 3D shape of the target’s surface. Rich 3D point clouds are especially useful in robotic applications because the scan data identifies both the position and orientation of a part, which pick-and-place processes require.
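To make that idea concrete, here is a minimal sketch (not the sensor’s onboard algorithm) of how a part’s position and orientation can be estimated from a point cloud: the centroid of the points gives a position, and a principal-component analysis of the same points gives a dominant axis for orientation. The data in the example is synthetic.

```python
import numpy as np

def estimate_part_pose(points):
    """Estimate a rough pick pose from a 3D point cloud.

    points: (N, 3) array of X, Y, Z samples belonging to one part.
    Returns the centroid (position) and the principal axes (orientation frame).
    """
    centroid = points.mean(axis=0)          # part position
    centered = points - centroid
    # PCA: eigenvectors of the covariance matrix give the part's
    # dominant directions, i.e. an orientation frame for the part.
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    axes = eigvecs[:, ::-1]                 # longest axis first
    return centroid, axes

# Toy example: a flat, elongated part lying at 30 degrees on a plane.
rng = np.random.default_rng(0)
part = rng.uniform([-50, -10, -1], [50, 10, 1], size=(5000, 3))
angle = np.radians(30)
rot = np.array([[np.cos(angle), -np.sin(angle), 0],
                [np.sin(angle),  np.cos(angle), 0],
                [0, 0, 1]])
cloud = part @ rot.T + np.array([200.0, 150.0, 40.0])

position, orientation = estimate_part_pose(cloud)
print("position (X, Y, Z):", position.round(1))
print("long axis direction:", orientation[:, 0].round(2))
```

In a real application the cloud would first be segmented so that only the points belonging to one part are passed to a step like this.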

A smart 3D snapshot sensor not only acquires 3D data but also analyses the scan to identify discrete parts and their location and orientation. The sensor communicates this X, Y, Z and angle information to an industrial robot. The smart sensor requires no additional external software or PC to interface with the robot in this manner. Everything is onboard the sensor.

For example, Gocator can be mounted on a robot end effector, and hand-eye calibration can be performed to determine the transformation from snapshot (sensor) coordinates to robot coordinates. Gocator offers built-in tools that locate parts and communicate these measurements as strings to a robot controller over a TCP/IP socket.
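As a rough illustration of the robot-controller side of that string interface, the following Python sketch opens a TCP/IP connection and parses one measurement line. The IP address, port and the ‘X,Y,Z,angle’ message layout are assumptions chosen for the example, not the documented Gocator output format.

```python
import socket

SENSOR_IP = "192.168.1.10"   # assumed sensor address
SENSOR_PORT = 3000           # assumed string-output port

def read_part_pose():
    """Connect to the sensor, read one line and parse it as a pose.

    Assumes the sensor job is configured to send 'X,Y,Z,angle\n' per part;
    the real message layout is whatever the sensor job is set up to emit.
    """
    with socket.create_connection((SENSOR_IP, SENSOR_PORT), timeout=5.0) as sock:
        buffer = b""
        while not buffer.endswith(b"\n"):
            chunk = sock.recv(1024)
            if not chunk:
                raise ConnectionError("sensor closed the connection")
            buffer += chunk
    x, y, z, angle = (float(v) for v in buffer.decode().strip().split(","))
    return x, y, z, angle

if __name__ == "__main__":
    x, y, z, angle = read_part_pose()
    print(f"part at ({x:.2f}, {y:.2f}, {z:.2f}) mm, rotated {angle:.1f} deg")
```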

Engineers using Universal Robots, a leading collaborative robot arm platform, can now easily integrate Gocator smart 3D snapshot sensors into their applications. The sensor provides the UR robot not only with the ‘eyes’ it needs to ‘see’, but also the ‘mind’ it needs to ‘think’ and ‘do’.

UR and smart sensor integration

Sensor-robot integration is achieved through the Gocator URCap plug-in, an application that users install on the UR robot. This plug-in triggers scans on the sensor and retrieves the positional information of the calibration target in the sensor’s field of view. (Note: ball-bars are the most commonly used targets in sensor-robot calibration.)
After a sufficient number of scans, the calibration is saved to the UR robot, and hand-eye calibration (between the sensor and the robot flange) is complete.
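What the hand-eye calibration produces is a fixed transform between the robot flange and the sensor, so that a part measured in sensor coordinates can be chained into robot base coordinates. The sketch below shows that transform chain with 4x4 homogeneous matrices; the numeric values are illustrative, not output from the URCap.

```python
import numpy as np

def transform(rotation_deg_z, translation):
    """Build a 4x4 homogeneous transform from a Z rotation and a translation."""
    a = np.radians(rotation_deg_z)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0, 0, 1]]
    T[:3, 3] = translation
    return T

# Illustrative poses (millimetres); real values come from the robot controller
# and from the saved hand-eye calibration.
base_T_flange   = transform(90, [400, 0, 500])   # current robot flange pose
flange_T_sensor = transform(0, [0, 50, 80])      # hand-eye calibration result
sensor_T_part   = transform(15, [12, -3, 250])   # part pose reported by the sensor

# Chain the transforms to express the part in robot base coordinates.
base_T_part = base_T_flange @ flange_T_sensor @ sensor_T_part
print("pick position in base frame:", base_T_part[:3, 3].round(1))
```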

After the user has performed the hand-eye calibration, they can add programming nodes in the UR robot’s interface to tell the robot to connect to the sensor, load a job on the sensor, trigger a scan and return positional measurements in the X, Y and Z axes.
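That connect/load/trigger/measure sequence maps onto a simple pick-and-place cycle. The sketch below uses stub classes with hypothetical method names standing in for the URCap programming nodes and UR motion commands; it shows the order of operations rather than a real API.

```python
class SensorStub:
    """Placeholder for the sensor-side steps (hypothetical API, not the URCap)."""
    def connect(self): print("connect to sensor")
    def load_job(self, name): print(f"load job '{name}'")
    def trigger_scan(self):
        print("trigger snapshot scan")
        return 12.0, -3.0, 250.0, 15.0   # X, Y, Z (mm) and angle (deg)

class RobotStub:
    """Placeholder for the UR motion commands (hypothetical API)."""
    def move_to(self, pose): print("move to", pose)
    def close_gripper(self): print("close gripper")
    def open_gripper(self): print("open gripper")

def pick_cycle(sensor, robot, job_name, place_pose):
    """Connect, load a job, trigger a scan, then pick and place the part."""
    sensor.connect()
    sensor.load_job(job_name)
    x, y, z, angle = sensor.trigger_scan()
    robot.move_to((x, y, z, angle))   # pick pose, assumed already in base coordinates
    robot.close_gripper()
    robot.move_to(place_pose)
    robot.open_gripper()

pick_cycle(SensorStub(), RobotStub(), "pick_demo_job", (500.0, 200.0, 100.0, 0.0))
```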

In summary

Automation of repetitive tasks such as pick-and-place delivers many benefits to manufacturing, including increased accuracy and reduced waste. And now, with easy UR integration provided by the Gocator URCap, users can get a complete vision-guided robotic solution up and running with minimal cost and development time.

Terry Arden is CEO of LMI Technologies
