Cars Beyond Sensors: LLMs and SenseRAG Reduce Trajectory Prediction Errors by 70%
Teams building autonomous driving (AD) systems focus on better sensors and models, but in the next few years, LLM-based AD systems will understand the full context beyond what sensors can see directly.
In a recent study, SenseRAG showed that combining LLMs with proactive information retrieval can cut trajectory prediction errors by more than 70%.
It’s a proactive approach: SenseRAG integrates real-time, multimodal sensor data into a unified knowledge base the LLM can read and understand, and lets the AD system actively query that knowledge base for the context it needs.
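To make that concrete, here is a minimal Python sketch of what such a pipeline could look like: multimodal sensor readings are summarized into a small knowledge base, and the most relevant entries are retrieved to build an LLM prompt. The names (SensorReading, SceneKnowledgeBase, build_prompt) and the keyword-overlap retrieval are my own simplifications for illustration, not SenseRAG's actual implementation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record type; SenseRAG's real schema is richer than this.
@dataclass
class SensorReading:
    source: str        # e.g. "camera", "lidar", "v2x"
    timestamp: float   # seconds since scene start
    description: str   # natural-language summary of the reading

class SceneKnowledgeBase:
    """Unified store of multimodal sensor summaries the LLM can query."""
    def __init__(self) -> None:
        self._readings: List[SensorReading] = []

    def ingest(self, reading: SensorReading) -> None:
        self._readings.append(reading)

    def retrieve(self, query: str, top_k: int = 3) -> List[SensorReading]:
        # Toy relevance score: word overlap between the query and each summary.
        def score(r: SensorReading) -> int:
            return len(set(query.lower().split()) & set(r.description.lower().split()))
        return sorted(self._readings, key=score, reverse=True)[:top_k]

def build_prompt(kb: SceneKnowledgeBase, question: str) -> str:
    """Assemble retrieved context plus the prediction task for the LLM."""
    context = "\n".join(
        f"[{r.source} @ {r.timestamp:.1f}s] {r.description}"
        for r in kb.retrieve(question)
    )
    return f"Context:\n{context}\n\nTask: {question}"

if __name__ == "__main__":
    kb = SceneKnowledgeBase()
    kb.ingest(SensorReading("camera", 12.0, "pedestrian waiting at the crosswalk ahead"))
    kb.ingest(SensorReading("v2x", 12.1, "traffic light ahead turns red in 2 seconds"))
    print(build_prompt(kb, "predict the trajectory of the vehicle ahead"))
```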
The team tested two approaches:
1) GPT-4 with sensor data processing
2) GPT-4 with sensor data processing + active environmental querying
The second approach made far better predictions by understanding the broader situation.
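Under the same assumptions as the sketch above, the difference between the two approaches could look roughly like this: the baseline prompts the model with the processed sensor data alone, while the second variant first lets the model ask for the environmental context it is missing and answers that query from the knowledge base. The llm argument is a placeholder for any text-in, text-out callable (e.g. a thin wrapper around your chat-completion client); none of this is the paper's actual code.

```python
def predict_baseline(llm, sensor_summary: str) -> str:
    """Approach 1: prompt the LLM with the processed sensor data only."""
    return llm(
        f"Sensor data:\n{sensor_summary}\n\n"
        "Predict the trajectory of the agent ahead."
    )

def predict_with_querying(llm, sensor_summary: str, kb: "SceneKnowledgeBase") -> str:
    """Approach 2: the model first asks for missing environmental context."""
    # Step 1: the LLM names the environmental information it still needs.
    follow_up = llm(
        f"Sensor data:\n{sensor_summary}\n\n"
        "What additional environmental information would most improve a "
        "trajectory prediction? Answer with one short search query."
    )
    # Step 2: answer that query from the knowledge base, then predict with it.
    extra = "\n".join(r.description for r in kb.retrieve(follow_up))
    return llm(
        f"Sensor data:\n{sensor_summary}\n"
        f"Additional context:\n{extra}\n\n"
        "Predict the trajectory of the agent ahead."
    )

# Usage (with my_llm as your own model wrapper and kb from the earlier sketch):
# prediction = predict_with_querying(my_llm, "lead vehicle braking, 18 m ahead", kb)
```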
Let’s dive into the details.
Importance of SenseRAG
Systems such as SenseRAG can fundamentally change how self-driving vehicles build awareness of their environment.
Traditional computer vision approaches are strictly bound by their training labels and struggle with novel scenarios.