Robot Aids Military With 3D Maps

Product News, Monday, August 27, 2012

Ann R. Thryft, Senior Technical Editor, Materials & Assembly for Design News.
Original article found here

Image courtesy of Hordur Johannsson, Massachusetts Institute of Technology.

 

A navigation system being built at the Massachusetts Institute of Technology will help robots autonomously navigate constantly changing environments by building three-dimensional maps that they continuously update.

MIT's system, a project of its Computer Science and Artificial Intelligence Laboratory (CSAIL), is built to navigate entirely on land. It uses a low-cost camera such as the one in Microsoft's Kinect motion-sensing input device, originally built for the Xbox 360 game platform.

The researchers are developing the navigation system for robots that can move through new and constantly changing environments with little or no human input. Robots with this ability could be used to explore unknown environments, for example in military operations. They could also help blind people move through public places, such as shopping malls and hospitals, without human aid, said Seth Teller, head of CSAIL's Robotics, Vision, and Sensor Networks group and principal investigator of the human-portable mapping project, in a press release.

Military applications could include mapping a bunker or cave network to enable a quick exit or re-entry. "Or a hazmat team could enter a biological or chemical weapons site and quickly map it on foot, while marking any hazardous spots or objects for handling by a remediation team coming later," said Teller.

The research team is developing algorithms based on Simultaneous Localization and Mapping (SLAM), said Maurice Fallon, CSAIL research scientist, in the press release. The SLAM-based technique will let robots constantly update maps of their environment and keep track of their own location in it as they learn new information.
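As a rough illustration of that loop, the minimal Python sketch below keeps a simple 2D landmark map and a pose estimate, and updates both on every step: predict the pose from odometry, correct it against landmarks already in the map, then fold new observations into the map. It is a toy for intuition only, not the CSAIL implementation; the landmark names, the blending weight alpha, and the averaging step are all assumptions.

```python
import numpy as np

pose = np.zeros(2)   # estimated robot position (x, y), in meters
landmarks = {}       # the evolving map: landmark id -> estimated (x, y)

def slam_step(odometry, observations, alpha=0.5):
    """One SLAM-style update.
    odometry: (dx, dy) motion estimated from the wheels.
    observations: {landmark_id: (x, y) position relative to the robot}."""
    global pose
    # Predict: dead-reckon the new pose from the wheel odometry.
    pose = pose + np.asarray(odometry)

    # Correct: pull the pose toward agreement with landmarks already mapped.
    corrections = [landmarks[lid] - np.asarray(rel) - pose
                   for lid, rel in observations.items() if lid in landmarks]
    if corrections:
        pose = pose + alpha * np.mean(corrections, axis=0)

    # Update: fold the new observations into the map so it keeps evolving.
    for lid, rel in observations.items():
        estimate = pose + np.asarray(rel)
        landmarks[lid] = (0.5 * (landmarks[lid] + estimate)
                          if lid in landmarks else estimate)

# Example: drive forward 1 m twice while watching a "door" landmark.
slam_step(odometry=(1.0, 0.0), observations={"door": (2.0, 0.0)})
slam_step(odometry=(1.0, 0.0), observations={"door": (1.0, 0.0)})
print(pose, landmarks)   # pose and map stay mutually consistent
```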

A lot of research has already been done on maps that robots can use to navigate a given area, for example by estimating the distance between themselves and nearby walls and planning routes around obstacles, said Fallon. But these maps are mostly built for a single, one-time use and cannot be adjusted as the surroundings change over time. "If you see objects that were not there previously, it is difficult for a robot to incorporate that into its map," he said.

The team also includes John J. Leonard, professor of mechanical and ocean engineering, and graduate student Hordur Johannsson.

The team previously tested the approach on robots equipped with expensive laser scanners, but has since implemented it with a Kinect-type camera in a robotic wheelchair, a portable sensor suite, and a PR2 robot developed by Willow Garage. On these platforms, the system can continuously locate the robotic hardware within a 3D map of its surroundings while traveling at speeds of up to 1.5 meters per second.

The Kinect sensor's visible-light video camera and infrared depth sensor scan the robot's surroundings as it moves through a new, unexplored area, while the robot builds up a 3D model of the walls of a room and the objects inside it. Map details can include location information about the edges of walls and the objects they enclose.
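For context on how a depth frame becomes 3D geometry, the sketch below back-projects a Kinect-style depth image into a point cloud using a standard pinhole-camera model. The focal lengths and principal point are typical Kinect values assumed for illustration; they are not quoted in the article, and the real system builds a richer map than a raw point cloud.

```python
import numpy as np

# Approximate Kinect intrinsics (assumed for illustration).
FX, FY = 525.0, 525.0    # focal lengths in pixels
CX, CY = 319.5, 239.5    # principal point in pixels

def depth_to_points(depth):
    """depth: HxW array of distances in meters.
    Returns an Nx3 array of points in the camera frame
    (x right, y down, z forward)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Example: a synthetic 480x640 frame of a flat wall 2 m in front of the sensor.
cloud = depth_to_points(np.full((480, 640), 2.0))
print(cloud.shape)   # (307200, 3)
```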

When the robot visits the same area again, the system compares the images it took previously with the features of the new image it creates until it detects a match. Once the system has decided on its location, any features that have appeared since it took the previous picture of that location are incorporated into the map by combining the new and old images.
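The place-recognition step can be pictured as comparing a descriptor of the current view against descriptors stored from earlier visits and accepting the best match above a similarity threshold. The sketch below uses cosine similarity on toy descriptor vectors; the descriptor format, threshold, and place names are placeholders, not details from the MIT system.

```python
import numpy as np

def recognize_place(current_desc, stored_descs, threshold=0.9):
    """Return the id of the stored place whose descriptor best matches the
    current view, or None if nothing is similar enough."""
    best_id, best_score = None, threshold
    current = current_desc / np.linalg.norm(current_desc)
    for place_id, desc in stored_descs.items():
        score = float(current @ (desc / np.linalg.norm(desc)))  # cosine similarity
        if score > best_score:
            best_id, best_score = place_id, score
    return best_id

# Example with toy 4-D descriptors for two previously visited places.
stored = {"hallway": np.array([1.0, 0.0, 0.2, 0.1]),
          "lab":     np.array([0.0, 1.0, 0.1, 0.3])}
print(recognize_place(np.array([0.9, 0.1, 0.2, 0.1]), stored))  # -> hallway
```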

While the system is making and updating maps, it also continuously estimates the robot's motion, using onboard sensors to measure how far its wheels have rotated. The system can determine the robot's position within a building by combining this motion data with visual information from the camera and depth sensor, which also serves as a form of error correction, said Fallon.
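One simple way to picture that combination is a weighted blend of the wheel-odometry pose with the pose implied by matching the camera and depth data against the map, so the visual estimate corrects the drift that accumulates in dead reckoning. The blend below is a stand-in for a proper probabilistic filter and is an assumption, not the team's method.

```python
import numpy as np

def fuse_pose(odometry_pose, visual_pose, visual_weight=0.3):
    """Blend a dead-reckoned pose with a visually estimated pose.
    Both are (x, y, heading) arrays; visual_weight controls how strongly the
    camera/depth match pulls the estimate back toward the map."""
    odometry_pose = np.asarray(odometry_pose, dtype=float)
    visual_pose = np.asarray(visual_pose, dtype=float)
    return (1 - visual_weight) * odometry_pose + visual_weight * visual_pose

# Example: the wheels say the robot is at (5.0, 0.0) facing 0 rad, but
# matching the depth data against the map says (4.8, 0.1, 0.02).
print(fuse_pose([5.0, 0.0, 0.0], [4.8, 0.1, 0.02]))  # -> [4.94 0.03 0.006]
```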

 
