Vision-based collaborative robots for exploration in uneven terrains


Exploration tasks in unknown environments have become a relevant approach in search and rescue robotics. Ground robots are a safer alternative to human rescuers for initial exploration. However, exploration progress is often limited by uneven terrains that exceed the kinematic capabilities of robots, including those with complex locomotion systems. This work proposes an innovative solution based on collaborative behaviours to overcome uneven terrains. A method employing two collaborative robots designed to operate in a marsupial configuration to surmount uneven terrains has been implemented. These robots, denoted as R1 (equipped with a mobile ramp) and R2 (serving as an explorer), interact synergistically to expand the explored area autonomously. A state machine has been implemented to manage the progression of the mission, based on a perception (RGB-D) system, for both decision-making and autonomous execution of the process. In the first stage, the terrain and the ascent zones to be explored are characterized using point clouds and unsupervised learning. In the second stage, the interaction between the robots is managed by controlling the ascent of R2 over the ramp of R1 using artificial vision algorithms and beacons. Outdoor tests have been performed to validate the method. The main results show an effectiveness of 95% in automatically identifying access zones.
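The first stage described above, characterizing ascent zones from point clouds with unsupervised learning, can be sketched in a minimal form. This is an illustrative stand-in, not the paper's implementation: the function names (`kmeans_1d`, `classify_ascent_zones`) and the `max_climb` threshold are our own assumptions, and a real system would cluster full 3-D features from the RGB-D point cloud rather than just heights.

```python
# Hypothetical sketch of stage one: cluster point-cloud heights into two
# terrain levels and flag the step between them as an ascent candidate if
# it lies within an assumed robot-plus-ramp climbing limit.
import numpy as np

def kmeans_1d(values, iters=20):
    """Tiny deterministic 2-cluster k-means on 1-D data (a stand-in for a
    library clustering call). Initialises centres at the min and max."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def classify_ascent_zones(points, max_climb=0.15):
    """Cluster point heights (z column) into low/high terrain. The zone is
    an ascent candidate if the height step between the cluster means is
    within max_climb (metres, an assumed capability limit)."""
    z = points[:, 2]
    _, centers = kmeans_1d(z)
    step = float(abs(centers[0] - centers[1]))
    return step <= max_climb, step

# Synthetic point cloud: flat ground at z = 0 and a ledge at z = 0.10 m.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.random(50), rng.random(50), np.full(50, 0.00)])
ledge = np.column_stack([rng.random(50), rng.random(50), np.full(50, 0.10)])
reachable, step = classify_ascent_zones(np.vstack([ground, ledge]))
```

On this synthetic cloud the step between the two clustered levels is 0.10 m, inside the assumed 0.15 m limit, so the zone is flagged as reachable; in the paper's pipeline this decision would feed the state machine that triggers the ramp-deployment stage.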