Mitsubishi Electric develops teaching-less robot system technology
Mitsubishi Electric Corporation has announced that it has developed a teaching-less robot system technology that enables robots to perform tasks such as sorting and arranging as fast as humans, without having to be taught by specialists. The system incorporates Mitsubishi Electric’s Maisart AI technologies, including high-precision speech recognition, which allows operators to issue voice instructions to initiate work tasks and then fine-tune robot movements as required.
The technology is expected to be applied in facilities such as food-processing factories, where items change frequently — a factor that has until now made it difficult to introduce robots. Mitsubishi Electric aims to commercialise the technology in or after 2023, following further performance enhancements and extensive verification.
The key feature of the system is that robot movements are self-programmed or adjusted based on simple commands issued by a non-specialist operator, either by voice or via a device menu. According to the company, its proprietary voice-recognition AI is the first of its kind to accurately recognise voice instructions even in noisy environments. Sensors capture 3D information (images and distances) about the work area, which is processed with augmented reality (AR) technology into simulations that allow the operator to visualise the expected results.
The company says that programming and adjustments require one-tenth or less of the time needed with conventional systems.
Responding to voice or menu instructions, the system scans the work surroundings with a three-dimensional sensor and then automatically programs the robot’s movements. Movements can be fine-tuned via further commands from the operator.
Mitsubishi Electric says that its voice-recognition AI provides the first voice-instruction user interface to be deployed by an industrial robot manufacturer.
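As a rough sketch of that instruct–scan–program–adjust loop, the Python fragment below shows how a structured instruction plus a simplified 3D scan could yield a waypoint program that is then nudged by a follow-up command. The data structures, function names and parameter values (such as the 10 cm approach clearance) are entirely hypothetical illustrations, not Mitsubishi Electric's implementation.

```python
# Hypothetical sketch of the instruct -> scan -> program -> adjust loop.
# All names and values are illustrative, not Mitsubishi Electric's API.
from dataclasses import dataclass, field

@dataclass
class WorkspaceScan:
    """Simplified stand-in for a 3D sensor scan (images and distances)."""
    item_positions: dict  # e.g. {"chicken": (x, y, z)} in metres

@dataclass
class MotionProgram:
    waypoints: list = field(default_factory=list)

    def shift(self, dx=0.0, dy=0.0, dz=0.0):
        """Apply a fine-tuning offset from a follow-up operator command."""
        self.waypoints = [(x + dx, y + dy, z + dz) for x, y, z in self.waypoints]

def generate_program(instruction: dict, scan: WorkspaceScan) -> MotionProgram:
    """Turn a structured instruction into pick-and-place waypoints,
    repeating the cycle once per requested item."""
    pick = scan.item_positions[instruction["what"]]
    place = instruction["where"]
    clearance = 0.10  # metres above the work surface; an assumed value
    cycle = [
        (pick[0], pick[1], pick[2] + clearance),     # approach above item
        pick,                                        # grasp
        (place[0], place[1], place[2] + clearance),  # carry above target
        place,                                       # release
    ]
    return MotionProgram(waypoints=cycle * instruction["how_many"])

# Example: one cycle of the workflow, then a small correction.
scan = WorkspaceScan(item_positions={"chicken": (0.40, 0.10, 0.02)})
program = generate_program({"what": "chicken", "how_many": 3,
                            "where": (0.70, 0.25, 0.03)}, scan)
program.shift(dx=0.02)   # operator: "a little more to the right"
print(program.waypoints)
```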
As an example, in a food-processing factory, a non-specialist could instruct a robot simply by saying something like “Pack three pieces of chicken in the first section of the lunch box.” The AI can infer implied meanings if a voice instruction is ambiguous, such as determining how much motion compensation is required if instructed “A little more to the right.” Alternatively, a tablet equipped with menus can be used to issue instructions or to select categories such as ‘where’, ‘what’ and ‘how many’ to generate simple programs.
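For the instruction side, a toy slot-filling parser along the following lines could map such a phrase onto the ‘what’, ‘how many’ and ‘where’ categories and turn a vague correction into a small offset. The vocabulary, regular expressions and 2 cm default step are assumptions made purely for illustration and bear no relation to the actual Maisart speech models.

```python
# Illustrative slot-filling parser for instructions like
# "Pack three pieces of chicken in the first section of the lunch box."
# A toy stand-in for the intent-recognition AI described in the article.
import re

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_instruction(text: str) -> dict:
    """Extract 'what', 'how_many' and 'where' slots from a phrase."""
    text = text.lower()
    count = next((n for word, n in NUMBER_WORDS.items() if word in text), 1)
    what_match = re.search(r"(?:pieces? of|of)\s+(\w+)", text)
    where_match = re.search(r"in the (.+?)(?:\.|$)", text)
    return {
        "what": what_match.group(1) if what_match else None,
        "how_many": count,
        "where": where_match.group(1) if where_match else None,
    }

def parse_adjustment(text: str, default_step=0.02) -> tuple:
    """Map an ambiguous correction such as 'a little more to the right'
    onto a small Cartesian offset in metres; the step size is assumed."""
    text = text.lower()
    step = default_step * (0.5 if "little" in text else 1.0)
    if "right" in text:
        return (step, 0.0, 0.0)
    if "left" in text:
        return (-step, 0.0, 0.0)
    return (0.0, 0.0, 0.0)

print(parse_instruction("Pack three pieces of chicken in the first section of the lunch box."))
# {'what': 'chicken', 'how_many': 3, 'where': 'first section of the lunch box'}
print(parse_adjustment("A little more to the right"))  # (0.01, 0.0, 0.0)
```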
The tablet can also be used to view stereoscopic AR simulations that allow the operator to confirm that instructions will have the intended results. For added convenience, the system can also recommend the ideal positioning of a robot in an AR virtual space without requiring a dedicated marker.
By enabling the self-programming of robot movements, including obstacle avoidance, the system reduces the workload associated with gathering environmental information, inputting data and confirming operations using simulators or actual equipment. As a result, the company says, these cumulative processes can be completed in one-tenth or less of the time required by conventional methods.
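To make the obstacle-avoidance idea concrete, the sketch below runs a plain breadth-first search over a 2D grid slice of the workspace to find a collision-free route between two points. The grid size and obstacle layout are invented; the company has not disclosed its actual planning or 3D-representation methods, so this is only a generic illustration of self-programmed, obstacle-aware motion.

```python
# Generic obstacle-aware path generation on a 2D grid (illustrative only).
from collections import deque

def plan_path(start, goal, obstacles, size=10):
    """Breadth-first search from start to goal on a size x size grid,
    skipping cells occupied by obstacles. Returns a list of cells."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    if goal not in came_from:
        return None          # no collision-free route found
    path, cell = [], goal
    while cell is not None:  # walk back from goal to start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

# Example: route around a small wall of obstacle cells.
print(plan_path((0, 0), (5, 5), obstacles={(2, y) for y in range(0, 5)}))
```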