As part of a research project, scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a robot that can independently assemble the individual components of an Ikea chair without interruption in less than ten minutes. The robot consists of an Ensenso N35 3D camera from IDS and two robot arms equipped with grippers to pick up objects.
The robot hardware is designed to mimic how people assemble objects: the eyes are replaced by a 3D camera and the arms by industrial robot arms capable of moving in six axes. Each arm is equipped with parallel grippers for picking up objects. Force sensors at the wrists measure how strongly the fingers grip and how firmly they press objects together. The robot starts the assembly process by taking 3D images of the parts lying on the ground to create a map of the estimated positions of the various components. This task is performed by an Ensenso 3D camera. The camera works according to the projected texture stereo vision principle, which imitates human vision: two cameras capture the same scene from two different positions. Although both see the same scene content, each object appears at a different image position depending on the camera's viewpoint. Matching algorithms compare the two images, search for corresponding points and record all point displacements in a disparity map. From this map, the Ensenso software determines the 3D coordinates of every image pixel, in this case of every point on the chair components.
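The triangulation step behind the disparity map can be sketched in a few lines. The camera parameters below are illustrative placeholders, not the Ensenso N35's actual calibration; the relation used is the standard stereo formula Z = f·B/d.

```python
import numpy as np

# Illustrative stereo parameters (not the Ensenso N35's real calibration)
FOCAL_PX = 1400.0   # focal length in pixels
BASELINE_M = 0.10   # distance between the two camera viewpoints in metres

def disparity_to_depth(disparity_px: np.ndarray) -> np.ndarray:
    """Triangulate depth from a disparity map: Z = f * B / d."""
    depth = np.full_like(disparity_px, np.inf, dtype=float)
    valid = disparity_px > 0          # disparity 0 means no match was found
    depth[valid] = FOCAL_PX * BASELINE_M / disparity_px[valid]
    return depth

# Toy disparity map: nearer points shift more between the two images
disparity = np.array([[70.0, 35.0],
                      [14.0,  0.0]])
print(disparity_to_depth(disparity))
# nearest point (d = 70 px) -> 2 m; farthest matched point (d = 14 px) -> 10 m
```

Large displacements thus correspond to nearby surfaces, small ones to distant surfaces, which is exactly the information the robot needs to build its map of component positions.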
The challenge is to locate the components as precisely, quickly and reliably as possible in a cluttered scene. This is where the camera's powerful projector comes in: using a pattern mask, it casts a high-contrast texture onto the object surfaces, even under difficult lighting conditions. The projected texture supplements the weak or non-existent natural surface structure of the chair components.
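Why the projected texture matters can be illustrated with a toy sum-of-squared-differences (SSD) matching cost, a common stereo-matching measure (the specific matching algorithm inside the Ensenso software is not disclosed here, so this is only a schematic example): on a textureless surface every candidate disparity looks equally good, while a textured surface yields one clear best match.

```python
import numpy as np

def ssd_costs(left_row, right_row, x, patch, max_disp):
    """SSD matching cost of the left patch at x against each candidate disparity."""
    ref = left_row[x:x + patch]
    return np.array([np.sum((ref - right_row[x - d:x - d + patch]) ** 2)
                     for d in range(max_disp)])

x, patch, max_disp = 8, 4, 5

# Textureless surface: every patch looks the same, so matching is ambiguous
flat = np.full(16, 100.0)
print(ssd_costs(flat, flat, x, patch, max_disp))   # all costs zero: no unique minimum

# Projected texture: distinct intensities single out one disparity
rng = np.random.default_rng(0)
textured = rng.uniform(0, 255, 16)
shifted = np.roll(textured, -2)   # simulate a true disparity of 2 px
costs = ssd_costs(textured, shifted, x, patch, max_disp)
print(int(np.argmin(costs)))      # -> 2, the correct disparity
```

On the flat surface the cost curve is completely flat, so no displacement (and hence no depth) can be recovered; the projected pattern restores a unique minimum.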
Although not required for this application, the N35 model used here goes one step further: thanks to the integrated FlexView projector technology, the pattern projected onto the components' surfaces can be shifted to vary the texture. Acquiring multiple image pairs of the same scene with different textures yields many more image points, so the chair components are captured in 3D at a much higher resolution and are easier for the robot to recognise. Another advantage is the robot hand-eye calibration function of the Ensenso software. Using a calibration plate, it determines the position of the camera coordinate system (here, the stationary camera) relative to the robot's base coordinate system, so that component positions seen by the camera can be expressed in robot coordinates. This enables the robot's hand to react precisely to the image information and reach its target exactly.
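The result of such a hand-eye calibration is essentially a rigid transform between the camera frame and the robot base frame. A minimal sketch, using an invented rotation and translation rather than any real calibration data, shows how a component position measured by the camera is converted into robot coordinates:

```python
import numpy as np

# Hypothetical hand-eye calibration result: the fixed camera's pose in the
# robot base frame (rotation R and translation t are illustrative only).
theta = np.pi / 2                       # camera rotated 90 deg about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 0.8])           # camera offset 0.5 m along x, 0.8 m up

T_base_cam = np.eye(4)                  # 4x4 homogeneous transform
T_base_cam[:3, :3] = R
T_base_cam[:3, 3] = t

def camera_to_base(point_cam: np.ndarray) -> np.ndarray:
    """Transform a 3D point from camera coordinates into robot base coordinates."""
    p = np.append(point_cam, 1.0)       # homogeneous coordinates
    return (T_base_cam @ p)[:3]

# A chair component detected 1 m in front of the camera, on its optical axis
print(camera_to_base(np.array([0.0, 0.0, 1.0])))   # -> [0.5, 0.0, 1.8]
```

Once every detected part is expressed in the base frame, the motion planner can command the grippers to those positions directly.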
Furniture assembly in less than ten minutes
"For a robot, putting together an Ikea chair with such precision is more complex than it looks," explained Professor Pham Quang Cuong of NTU. "The job of assembly, which may come naturally to humans, has to be broken down into different steps, such as identifying where the different chair parts are, the force required to grip the parts, and making sure the robotic arms move without colliding into each other. Through considerable engineering effort, we developed algorithms that will enable the robot to take the necessary steps to assemble the chair on its own." The result: the NTU robot installs the Stefan chair from Ikea in just 8 minutes and 55 seconds. According to Professor Pham Quang Cuong, artificial intelligence will make the application even more independent and promising in the future: "We are looking to integrate more artificial intelligence into this approach to make the robot more autonomous so it can learn the different steps of assembling a chair through human demonstration or by reading the instruction manual, or even from an image of the assembled product.”