Teaching Flying Robots Their Environment
Image Source: Popular Science
Flying robot drones are learning the differences between cars, trees, and other potential obstacles through a major interdisciplinary project sponsored by the US Office of Naval Research.
Working with $7.5 million from the Office of Naval Research, the scientists aim to build an autonomous, fixed-wing surveillance drone that can navigate through an unfamiliar city or forest at 35 miles an hour.
The group’s inspiration is the flight of pigeons. In flight, pigeons estimate the distance between themselves and objects ahead by quickly processing blurry, low-resolution images. A robot drone will need the same capabilities. Pigeons also tend to make decisions at the last moment, within five feet of an obstacle.
The first step is to teach robots to differentiate between obstacles and empty space. Engineers have already figured out how to train point-and-shoot cameras to spot faces in a photo: In a process called supervised learning, a technician feeds millions of images into a computer and tells it to output a “1” when the image contains a human face and a “0” when it does not.
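A minimal sketch of that supervised-learning setup, assuming scikit-learn and toy stand-in data; real face detectors train on millions of human-labeled images:

```python
# Supervised learning sketch: a human labels each image 1 ("face") or
# 0 ("no face"), and a model learns to reproduce those labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 flattened 8x8 grayscale "images"
# with human-assigned labels (random here, purely for illustration).
X = rng.random((200, 64))
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The trained model outputs 1 ("face") or 0 ("no face") for new images.
print(clf.predict(rng.random((1, 64))))
```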
This style of supervised learning would be an impossibly labor-intensive way to train a drone. A human would have to label not just faces but every possible object the robot might encounter. Instead, Yann LeCun, a professor of computer and neural science at New York University who leads the drone’s vision team, is developing software that will allow the drone to draw conclusions about what it’s seeing with much less human coaching.
By mimicking the efficient parallel processing the brain’s visual cortex uses to classify objects, the software extracts features from each raw video frame quickly. As a result, the drone’s human instructors need to show it only a few hundred to a few thousand examples of each category of object (“car,” “tree,” “grass”) before it can begin to classify those objects on its own.
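To illustrate why good features make a few hundred labeled examples go a long way, here is a toy sketch (not the team’s actual software): the category names match the article, but the feature vectors and the nearest-centroid rule are assumptions for demonstration.

```python
# Classify frames into categories from a few hundred labeled feature
# vectors per class, using a simple nearest-centroid rule.
import numpy as np

rng = np.random.default_rng(1)
categories = ["car", "tree", "grass"]

# Pretend each image patch has already been reduced to a 32-dim feature
# vector by the learned filter bank (stood in by synthetic data here).
train = {c: rng.normal(loc=i, size=(300, 32)) for i, c in enumerate(categories)}
centroids = {c: feats.mean(axis=0) for c, feats in train.items()}

def classify(feature_vec):
    # Pick the category whose mean feature vector is closest.
    return min(centroids, key=lambda c: np.linalg.norm(feature_vec - centroids[c]))

print(classify(rng.normal(loc=1, size=32)))  # likely "tree"
```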
Once the scientists have taught the drone to see, they will have to teach it to make decisions. That means grappling with the inherent ambiguity of visual data: deciding whether that pattern of pixels ahead is a tree branch or a shadow.
Drew Bagnell and Martial Hebert, roboticists at Carnegie Mellon University, are developing algorithms that will help the robot handle visual ambiguity the way humans do: by making educated guesses. “They can say, ‘I’m 99 percent sure there’s a tree between 12 meters and 13 meters away,’ and make a decision anyway,” Bagnell says.
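A toy illustration of that “educated guess” logic, using the numbers from Bagnell’s example; the function name, threshold, and reaction distance are assumptions, not details from the project:

```python
# Act once confidence is high enough, even though the distance
# estimate is only an interval, not an exact value.
def decide(p_tree, dist_min_m, dist_max_m, reaction_dist_m=15.0):
    """Return an action given P(obstacle is a tree) and a distance interval."""
    if p_tree >= 0.99 and dist_max_m <= reaction_dist_m:
        return "bank away"      # sure enough, and close enough to matter
    return "hold course"        # keep flying, keep gathering pixels

print(decide(p_tree=0.99, dist_min_m=12.0, dist_max_m=13.0))  # "bank away"
```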
Making those decisions will take a lot of computing power. The drone will need to process 30 images per second while contemplating its next move. LeCun says a processor that can run his algorithms at a trillion operations per second would do the trick, but the challenge is to build all that power into a computer light and efficient enough to fly. The best candidate is a processor LeCun designed with Eugenio Culurciello of Purdue University: a low-power computer the size of a DVD case called NeuFlow, which LeCun is confident he can speed up to a trillion operations per second by the group’s 2015 deadline.
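The back-of-envelope budget implied by those two figures (my arithmetic, not the team’s): a one-teraop-per-second processor handling 30 frames per second leaves roughly 33 billion operations to spend on each frame.

```python
# Compute budget per frame implied by the article's numbers.
ops_per_second = 1e12      # NeuFlow target: one trillion operations/second
frames_per_second = 30     # required video processing rate

ops_per_frame = ops_per_second / frames_per_second
print(f"{ops_per_frame:.2e} ops per frame")  # ~3.33e+10
```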
Once they’ve built a robot that can learn, see and make decisions fast enough to avoid obstacles, they still have to teach it to fly. Russ Tedrake, an MIT roboticist, is already using motion-capture cameras and a full-scale prototype of the final drone to model the maneuvers it will need to perform. If the team succeeds, the result will be a robot that can descend into a forest and lose today’s drones in the trees.
As the drone flies, its onboard camera will feed video to software that applies a series of filters to each frame. The first filters pick up patterns among small groups of pixels that indicate simple features, like edges. Next, another series of filters looks for larger patterns, building upward from individual pixels to objects to complex visual scenes. Within hundredths of a second, the software builds a low-resolution map of the scene ahead. Finally, it will compare the objects in view to ones it has “seen” before, classifying them as soon as it has enough information to make an educated guess.
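A rough sketch of that kind of filter cascade, assuming NumPy/SciPy; the real system learns its filters from data, whereas the fixed Sobel edge kernel and mean-pooling here are stand-ins for illustration:

```python
# Stage-by-stage filtering: edges first, then a coarser map of the scene.
import numpy as np
from scipy.signal import convolve2d

frame = np.random.default_rng(2).random((64, 64))  # stand-in video frame

# Stage 1: small filters respond to simple features such as edges.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edges = np.abs(convolve2d(frame, sobel_x, mode="same"))

# Stage 2: pool neighborhoods to build a coarser map of larger structures.
coarse = edges.reshape(16, 4, 16, 4).mean(axis=(1, 3))  # 64x64 -> 16x16

# The low-resolution map is what gets matched against objects "seen" before.
print(coarse.shape)  # (16, 16)
```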
SOURCE Popular Science
By 33rd Square | Subscribe to 33rd Square