by Shawn Ballard, Washington University in St. Louis Communications Specialist
In geospatial exploration, the quest to efficiently identify regions of interest has recently taken a leap forward with visual active search (VAS). This modeling framework uses visual cues to guide exploration, with potential applications ranging from detecting wildlife poaching to search-and-rescue missions to identifying illegal trafficking activity.
A new approach to VAS developed at the McKelvey School of Engineering at Washington University in St. Louis combines deep reinforcement learning, where a computer can learn to make better decisions through trial and error, with traditional active search, where human searchers go out and verify what’s in a selected region. The team that developed the novel VAS framework includes Yevgeniy Vorobeychik and Nathan Jacobs, professors of computer science and engineering, and Anindya Sarkar, a doctoral student in Vorobeychik’s lab. The team presented its work Dec. 13 at the Neural Information Processing Systems conference in New Orleans.
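To make the interaction between the learned policy and human verification concrete, here is a minimal toy sketch of that kind of search loop: a policy scores grid cells using rough visual hints, a simulated "human searcher" oracle verifies the selected cell, and the feedback updates the policy's scores. All names, numbers, and the greedy scoring rule are illustrative assumptions for this sketch; the team's actual framework trains a deep reinforcement learning policy on real imagery.

```python
import random

# Toy grid world: each cell may or may not contain a rare target.
# A policy proposes one unexplored cell per step; an oracle (standing in for
# a human search team) reveals the ground truth, and the feedback updates
# the policy's scores. Illustrative sketch only, not the authors' method.

GRID_SIZE = 10          # 10 x 10 search grid
NUM_TARGETS = 5         # rare targets hidden in the grid
QUERY_BUDGET = 20       # limited number of regions a human team can verify

random.seed(0)
cells = [(r, c) for r in range(GRID_SIZE) for c in range(GRID_SIZE)]
targets = set(random.sample(cells, NUM_TARGETS))

def visual_prior(cell):
    # Noisy stand-in for visual cues: slightly higher scores near true targets.
    near_target = any(abs(cell[0] - t[0]) + abs(cell[1] - t[1]) <= 1 for t in targets)
    return random.random() + (0.5 if near_target else 0.0)

scores = {cell: visual_prior(cell) for cell in cells}

found = 0
explored = set()
for step in range(QUERY_BUDGET):
    # Greedy policy: query the highest-scoring unexplored cell.
    cell = max((c for c in cells if c not in explored), key=scores.get)
    explored.add(cell)

    # Oracle feedback: the human searcher verifies the selected region.
    hit = cell in targets
    found += hit

    # Feedback update: boost unexplored neighbors of hits, dampen those of misses.
    for other in cells:
        if other in explored:
            continue
        dist = abs(other[0] - cell[0]) + abs(other[1] - cell[1])
        if dist <= 2:
            scores[other] += 0.3 if hit else -0.1

print(f"Found {found} of {NUM_TARGETS} targets with {QUERY_BUDGET} queries")
```

The point of the sketch is the objective Jacobs describes below: with a fixed verification budget, the goal is to maximize how many targets are found, not how quickly any single one is reached.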
“VAS improves on traditional active search more or less depending on the search task,” Jacobs said. “If a task is relatively easy, then improvements are modest. But if an object is very rare — for example, an endangered species that we want to locate for purposes of wildlife conservation — then improvements offered by VAS are substantial. Notably, this isn’t about finding things faster. It’s about finding as many things as possible given limited resources, especially limited human resources.”