Dispatch from the TGI Consortium: Interactive approach to geospatial search combines aerial imagery, reinforcement learning

Photo collage of 2023 TGI Fellows.

Comparison of search pathway using visual active search (VAS) (left) and the most competitive state-of-the-art approach, greedy selection (right). The VAS framework developed by McKelvey engineers quickly learns to take advantage of visual similarities between regions.

by Shawn Ballard, Washington University in St. Louis Communications Specialist

When combating complex problems like illegal poaching and human trafficking, efficient yet broad geospatial search tools can provide critical assistance in finding and stopping the activity. A visual active search (VAS) framework for geospatial exploration, developed by researchers in the McKelvey School of Engineering at Washington University in St. Louis, uses a novel visual reasoning model and aerial imagery to learn how to search for objects more effectively.

The team led by Yevgeniy Vorobeychik and Nathan Jacobs, professors of computer science & engineering, aims to shift computer vision – a field typically concerned with how computers learn from visual information – toward real-world applications and impact. Their cutting-edge framework combines computer vision with adaptive learning to improve search techniques by using previous searches to inform future searches.

“This work is about how to guide physical search processes when you’re constrained in the number of times you can actually search locally,” Jacobs said. “For example, if you’re only allowed to open five boxes, which do you open first? Then, depending on what you found, where do you search next?”
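The budget-constrained, adaptive process Jacobs describes can be illustrated with a small sketch. This is not the team's actual VAS model (which uses learned visual reasoning over aerial imagery); it is a simplified greedy loop, with hypothetical inputs, showing the core idea of exploiting visual similarity between regions: after each query, regions that look like a "hit" are promoted and regions that look like a "miss" are demoted.

```python
import numpy as np

def adaptive_search(features, labels, budget, boost=0.5):
    """Greedy adaptive search sketch (illustrative, not the VAS framework).

    features: (n, d) array of per-region visual feature vectors (hypothetical)
    labels:   length-n 0/1 array; 1 means the region contains a target
    budget:   number of regions we are allowed to inspect
    """
    n = len(labels)
    scores = np.ones(n)                  # uniform prior over regions
    explored = np.zeros(n, dtype=bool)
    found = []
    for _ in range(budget):
        # pick the highest-scoring unexplored region
        masked = np.where(explored, -np.inf, scores)
        i = int(np.argmax(masked))
        explored[i] = True
        # cosine similarity between the queried region and every region
        sims = features @ features[i] / (
            np.linalg.norm(features, axis=1)
            * np.linalg.norm(features[i]) + 1e-9)
        if labels[i]:                    # hit: favor visually similar regions
            found.append(i)
            scores += boost * sims
        else:                            # miss: penalize similar regions
            scores -= boost * sims
    return found
```

With a budget of two queries over four regions whose first two share similar features and contain targets, the loop finds the first target and then, guided by similarity, immediately queries its visual neighbor rather than an unrelated region.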