
Cleaning and tidying technology for home-assistant robots

In everyday life, human beings use many types of appliances and tools to perform household tasks; such objects are naturally designed for human use. The IRT project has developed a robot that is capable of cleaning and tidying up rooms. Our robot can also perform other routine tasks, such as (1) carrying a tray from a table to the kitchen, (2) gathering clothes from rooms and putting them into a washing machine, and (3) cleaning the floor with a broom. We have also developed the technology needed for a single robot to perform each of these tasks in succession. The home-assistant robot announced here has served as a platform for developing and proving the various technical elements; the design of eventual robot products will incorporate changes based on further study of the needs of society, and of society's acceptance of such a robot.

Characteristics of recognition and behavior system used for performing daily tasks

1. Recognition
Home-assistant robots use cameras and LRFs (laser range finders) as sensors. Our robot performs object recognition with these sensors: it recognizes objects such as trays, chairs, and washing machines by matching the image data obtained from the sensors against stored 3D geometric models. The recognition result gives the robot the object's pose, which the robot uses when handling the object. This approach enables the robot to recognize appliances and tools even when their surfaces lack texture. In addition, although it had previously been difficult for robots to recognize flexible objects such as clothes, we have developed a method for extracting and learning image features such as wrinkles on clothes. This enables the robot to search for clothes that need to be washed.
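The pose-estimation step behind this model matching can be illustrated with a minimal sketch. Assuming point correspondences between the stored 3D model and the sensed data are already known (a real system would establish them with feature matching or ICP), the object's rigid transform can be recovered with the Kabsch/Procrustes method. All names and values here are illustrative, not the project's actual code:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Recover the rigid transform (R, t) mapping stored model points onto
    observed sensor points, via the Kabsch / Procrustes method.
    Assumes known point correspondences between the two sets."""
    mc = model_pts.mean(axis=0)
    oc = observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    D = np.diag([1.0] * (len(mc) - 1) + [d])
    R = Vt.T @ D @ U.T
    t = oc - R @ mc
    return R, t

# Example: the model seen rotated 90 degrees about z and translated.
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
observed = model @ Rz.T + np.array([0.5, -0.2, 0.1])
R, t = estimate_pose(model, observed)
```

The recovered `(R, t)` is exactly the pose information the text describes: once the robot knows where the object frame sits in the world, handling points stored on the 3D model can be transformed into reachable targets.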
2. Behavior generation
We developed a motion generation system based on 3D geometric models. Appliances, tools, and the robot itself are modeled as 3D solid shapes with handling points on their surfaces. This approach can be used when the robot recognizes a target object by means of external sensing, as described in section (1); the robot's behavior is generated from the recognition result. Although this process may introduce errors in recognition and robot motion, our approach allows re-planning so that such errors can be avoided. The motion generation also includes self-collision avoidance and joint-angle-limit avoidance.
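One piece of this pipeline, validating candidate arm configurations against joint limits and falling back to an alternative when a candidate is rejected, can be sketched as follows. The joint limits, function names, and seven-DOF assumption are illustrative stand-ins, not the project's actual planner:

```python
import numpy as np

# Hypothetical joint limits for a 7-DOF arm, in radians.
Q_LO = np.full(7, -2.5)
Q_HI = np.full(7,  2.5)

def valid(q):
    """Accept a candidate arm configuration only if every joint is within
    its angle limits (a full system would also run self-collision checks)."""
    return bool(np.all((q >= Q_LO) & (q <= Q_HI)))

def plan_motion(candidates):
    """Return the first candidate configuration that passes validation.
    Skipping rejected candidates stands in for re-planning around errors."""
    for q in candidates:
        if valid(q):
            return q
    return None

# Usage: the first candidate violates a joint limit, so planning moves on.
bad = np.array([3.0, 0, 0, 0, 0, 0, 0])   # joint 0 beyond the +2.5 rad limit
good = np.zeros(7)
chosen = plan_motion([bad, good])
```

In a full system the candidate list would come from inverse kinematics on the handling points of the recognized object, and validation would include collision geometry, but the accept-or-replan structure is the same.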
3. Finding failures and generating recovery behaviors
Human beings constantly use their senses to monitor conditions while handling a target object. Home-assistant robots need a similar ability; in particular, such robots should not give up when several tasks are to be performed sequentially. Vision sensors, force sensors, and comparison of the planned state against the present state are used to judge failure conditions. For example, during the execution of a task, steps such as the following are monitored: (1) the target clothes are picked up, (2) a button on the washing machine is pushed, and (3) a broom is held in the correct position. If a failure is observed, the robot recognizes the failure condition, generates a new behavior plan, and attempts the task again. This ability enables our robot to perform several daily tasks sequentially.
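The monitor-and-retry behavior described above can be sketched as a small control loop. The failure test (planned-versus-sensed pose deviation) and every function name here are simplifications introduced for illustration; the actual system also fuses vision and force sensing:

```python
import numpy as np

def deviates(planned, sensed, tol=0.05):
    """Failure test: does the sensed state deviate from the planned state
    by more than a tolerance? (Stand-in for pose comparison.)"""
    return float(np.linalg.norm(np.asarray(planned) - np.asarray(sensed))) > tol

def run_task(execute, sense, replan, planned_state, max_attempts=3):
    """Execute a step, compare the sensed outcome against the plan, and
    re-plan and retry on failure instead of giving up immediately."""
    for _ in range(max_attempts):
        execute()
        if not deviates(planned_state, sense()):
            return True                  # step succeeded; move to next task
        planned_state = replan()         # recovery: generate a new plan
    return False                         # give up only after max_attempts

# Usage: a simulated step that fails once, then succeeds after re-planning.
state = {"tries": 0}
def execute(): state["tries"] += 1
def sense():   return [0.2] if state["tries"] < 2 else [0.0]
def replan():  return [0.0]
ok = run_task(execute, sense, replan, planned_state=[0.0])
```

Bounding the retries keeps a stuck step from blocking the rest of a task sequence, which is what lets the robot chain several household tasks together.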

Hardware configuration of the IRT home assistant robot

  • Dual arms (7 DOFs in each arm) and a waist joint
  • Three fingers (2 DOFs in each finger) mounted on each arm
  • PWS-type wheelbase to move freely in a room
  • External sensors such as a stereo camera, LRFs (laser range finders), and force sensors