Researchers unveil automated robot visual mapping technique

QUT

Wednesday, 10 July, 2024

Researchers at QUT have developed an automated system that improves how robots map and navigate the world by making vision-based mapping systems more adaptable to different environments.

Lead researcher Dr Alejandro Fontan Villacampa from the QUT School of Electrical Engineering & Robotics said Visual SLAM (simultaneous localisation and mapping) is the technology that helps devices like drones, autonomous vehicles and robots navigate.

“It enables them to create a map of their surroundings and keep track of their location within that map simultaneously,” he said.

“SLAM systems traditionally rely on specific types of visual features: distinctive patterns within images used to match and map the environment. Different features work better in different conditions, so switching between them is often necessary, but this switching has been a manual and cumbersome process, requiring a lot of parameter tuning and expert knowledge.”
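To make the problem concrete, the sketch below (illustrative only, not code from the QUT system) shows how a vision pipeline might extract and match two common feature types with OpenCV. Each feature type carries its own parameters and even its own descriptor distance, which is why switching between them has traditionally demanded hand tuning.

```python
# Minimal sketch (illustrative, not AnyFeature-VSLAM code) of feature
# extraction and matching with OpenCV. Each feature type has its own
# parameters AND its own descriptor distance, so switching features
# means changing the matching setup as well.
import cv2

def match_features(img_a, img_b, feature="orb"):
    if feature == "orb":
        # ORB produces binary descriptors, compared with Hamming distance.
        detector = cv2.ORB_create(nfeatures=2000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    elif feature == "sift":
        # SIFT produces float descriptors, compared with Euclidean (L2) distance.
        detector = cv2.SIFT_create(nfeatures=2000)
        matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    else:
        raise ValueError(f"unknown feature type: {feature}")

    kp_a, des_a = detector.detectAndCompute(img_a, None)
    kp_b, des_b = detector.detectAndCompute(img_b, None)
    # Best (lowest-distance) matches first.
    return sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
```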

Fontan said QUT’s new system, AnyFeature-VSLAM, builds this automation into ORB-SLAM2, a visual SLAM system widely used around the world.

“It enables a user to seamlessly switch between different visual features without laborious manual intervention,” he said. “This automation improves the system’s adaptability and performance across various benchmarks and challenging environments.”
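The article does not describe the system’s internals, but the “any feature” idea can be pictured as a plug-in interface: if the rest of the pipeline depends only on a common extractor contract, swapping features becomes a configuration change. The sketch below is a hypothetical illustration; the names are not from the paper.

```python
# Hypothetical plug-in registry illustrating the "any feature" idea
# (illustrative names, not the AnyFeature-VSLAM API). The SLAM front
# end calls detectAndCompute() without caring which feature it got.
import cv2

FEATURE_REGISTRY = {
    "orb":   lambda: cv2.ORB_create(nfeatures=2000),
    "sift":  lambda: cv2.SIFT_create(nfeatures=2000),
    "akaze": lambda: cv2.AKAZE_create(),
}

def make_extractor(name):
    """Build a feature extractor from a configuration string."""
    try:
        return FEATURE_REGISTRY[name]()
    except KeyError:
        raise ValueError(f"unsupported feature type: {name}")

# Switching features is now a one-line configuration change:
#   extractor = make_extractor("sift")
#   keypoints, descriptors = extractor.detectAndCompute(image, None)
```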

Research supervisor Professor Michael Milford, Director of the QUT Centre for Robotics, said the key innovation of AnyFeature-VSLAM was its automated tuning mechanism.

“By integrating an automated parameter tuning process, the system optimises the use of any chosen visual feature, ensuring optimal performance without manual adjustments,” he said. “Extensive experiments have shown the system’s robustness and efficiency, outperforming existing methods in many benchmark datasets.”
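The article does not spell out the tuning mechanism itself. As a rough, hypothetical illustration of the concept, the toy grid search below picks detector parameters that maximise cross-checked matches on sample image pairs; the real system’s optimisation is certainly more sophisticated.

```python
# Toy illustration of automated parameter tuning (a stand-in, not
# QUT's actual mechanism): grid-search ORB parameters and keep the
# setting that yields the most cross-checked matches on sample pairs.
import itertools
import cv2

def tune_orb(image_pairs):
    grid = {
        "nfeatures":     [1000, 2000, 4000],
        "scaleFactor":   [1.1, 1.2, 1.3],
        "fastThreshold": [10, 20, 40],
    }
    best_params, best_score = None, -1
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        detector = cv2.ORB_create(**params)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        score = 0
        for img_a, img_b in image_pairs:
            _, des_a = detector.detectAndCompute(img_a, None)
            _, des_b = detector.detectAndCompute(img_b, None)
            if des_a is None or des_b is None:
                continue  # a setting that finds no features scores zero
            score += len(matcher.match(des_a, des_b))
        if score > best_score:
            best_params, best_score = params, score
    return best_params
```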

Fontan said the development was a promising step forward in visual SLAM technology.

“By automating this tuning process, we are not only improving performance but also making these systems more accessible and easier to deploy in real-world scenarios.”

The new development will be presented at the Robotics: Science and Systems (RSS) 2024 conference in Delft, the Netherlands. Milford said the RSS conference was one of the most prestigious events in the field, attracting the world’s leading robotics researchers.

“The presentation of AnyFeature-VSLAM at RSS 2024 highlights the importance and impact of this research. The conference will provide a platform for showcasing this breakthrough to an international audience.”

“Having our research accepted for presentation at RSS 2024 is a great honour,” Milford added. “It shows the significance of our work and the potential it has to advance the field of robotics.”

The project was partially funded by an Australian Research Council Laureate Fellowship and the QUT Centre for Robotics.

Image caption: Dr Alejandro Fontan Villacampa and Professor Michael Milford.
