TGI Spotlight: Rethinking Multimodal Localization


Saint Louis University researchers Nan Cen, Flavio Esposito, and Wei Wang consider the autonomous vehicle they use to study multimodal localization optimization as part of a seed grant from the Taylor Geospatial Institute.

May 23, 2024

By Bob Grant

Localization, precisely pinpointing the position of a person, autonomous vehicle, or anything else on planet Earth, is one of the holy grails of geospatial technology. There are various localization methods, including GPS, beacon-based localization, LiDAR, radar, ultrasonic localization, optical signal-based localization, IoT-based localization (e.g., WiFi, BLE, ZigBee, LoRa), real-time kinematic (RTK) positioning, and more.

The accuracy, robustness, and security of localization are three aspects essential to the successful functioning of myriad instruments, devices, and applications. And different modes of localization are deployed for different objectives. But merging these different systems and tasking them with cooperative localization in real time, at scale, and in complex environments remains a grand challenge.

Saint Louis University researchers Flavio Esposito, Wei Wang, and Nan Cen aim to solve that problem by merging multiple localization systems to pinpoint objects in space at resolutions that are impossible when employing any one system alone. They received a grant from the Taylor Geospatial Institute (TGI) in 2023 for their project, “Rethinking multimodal localization systems at scale in challenging scenarios,” with Esposito as the PI and Cen and Wang as co-PIs.

“Multimodal localization is a very hot topic right now,” said Wang. “All these sensors, they have different physical layers, they have different granularity, they have different time schedules, so the data is not synchronized. There are major challenges we want to solve.” 

Wang noted that pursuing multimodal localization systems could help autonomous vehicles—his area of research—navigate with increased accuracy in difficult scenarios, such as in crowded urban areas, during storms, and in GPS-denied areas like parking garages. “Due to safety concerns, autonomous vehicles require multiple localization systems (radar, LiDAR, RTK-GPS, and vision-based SLAM) at the same time to avoid potential system failure,” he said. “And right now, in an ideal scenario, the accuracy for localization and mapping for autonomous vehicles can only achieve 10 centimeters, or five centimeters at most. In practice, the localization accuracy will be much lower.”

“People have been doing localization for years, but not at this scale,” added Esposito. “People have been localizing fairly well with Ultra Wide Band technologies, mostly for indoor applications, but scaling localization systems under challenging conditions, outdoors and without relying on GPS, is an open challenge.” Esposito said that his group published a conference paper in 2022 that implemented a standard localization management function for 5G core networks for the first time ever. This could lend another facet to localization that adds to existing methods. “Rain hinders visibility in a self-driving car, just like it does to human eyes,” he said. “It does not matter how many cameras we have, if they are all occluded by weather conditions. That’s why the multimodal approach is necessary.”

This foundational geospatial work could have applications beyond self-driving cars, according to the TGI-funded researchers. “Self-driving tractors are actually being used today. But they don’t navigate very well among crops,” Esposito said. “Robots can be used for agriculture as well, whether it’s a drone or a ground robot. And so being accurate about where they are, and where they are taking pictures, is going to help plant scientists as well.” National security is also likely to benefit from enhanced localization technologies. “Military and NGA [National Geospatial-Intelligence Agency] are going to be interested in this for sure,” he added. “Definitely the technology can be used elsewhere.”

Research Outputs

Conference Proceedings: 

  • “LightThief: Your Optical Communication Information is Stolen behind the Wall” in USENIX Security Symposium 2023
    1. The goal of this paper is to explore the vulnerability of visible light communication techniques. We designed a very small, ultra-low-power backscatter tag that converts the visible light signal to an RF signal. The tag can do this without the user noticing, and it does not require a battery at all.
    2. VIDEO — https://www.youtube.com/watch?v=d0JTDeCj-us&t=35s
  • “Key Establishment for Secure Asymmetric Cross-Technology Communication” in ACM ASIA Conference on Computer and Communications Security 2024
    1. The goal of this paper is to conduct asymmetric key establishment for heterogeneous IoT devices. Moreover, to our knowledge, this is the first work to explore the key establishment problem for heterogeneous IoT devices.