Modern machine perception is powerful yet brittle: it fails under subtle adversarial changes in its input data and lacks mechanisms to learn from its errors. We address this challenge by progressing from diagnosing such failures to developing a framework for persistent learning. We first investigate the sources of this fragility, demonstrating how both naturally occurring adversarial artifacts, such as specular highlights, and conditions of data scarcity fundamentally limit model robustness. We then replace passive reliance on curated, high-quality data with active, multi-agent refinement: a Worker-Supervisor loop that iteratively critiques and corrects outputs to meet complex, rule-rich guidelines at inference time. While this system achieves dynamic error correction, it rarely retains what it learns. We therefore propose to tackle this lack of retention with an experience memory that records validated fixes as reusable insights, retrieves them when similar contexts recur, and, where available, grounds them across viewpoints and time. Together, these components turn momentary fixes into long-term skills, paving the way for more capable and reliable perception in fields such as augmented reality and robotics.
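The abstract describes the Worker-Supervisor loop only at a high level. As a minimal illustrative sketch, not the talk's actual implementation, one round of inference-time critique-and-correct might look like the following; the names `worker`, `supervisor`, and `Critique` are hypothetical stand-ins, assuming the worker produces an output and the supervisor checks it against the guidelines:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Critique:
    passed: bool           # did the output satisfy all guidelines?
    violations: List[str]  # human-readable rule violations, if any

def refine(
    worker: Callable[[str, List[str]], str],  # drafts or repairs an output
    supervisor: Callable[[str], Critique],    # checks output against guidelines
    task: str,
    max_rounds: int = 3,
) -> str:
    """Worker-Supervisor loop: the worker drafts an output, the supervisor
    critiques it against rule-rich guidelines, and the worker revises until
    the critique passes or the round budget is exhausted."""
    feedback: List[str] = []
    output = worker(task, feedback)
    for _ in range(max_rounds):
        critique = supervisor(output)
        if critique.passed:
            break
        feedback = critique.violations  # feed violations back to the worker
        output = worker(task, feedback)
    return output
```

In this framing, the supervisor's critique is the only feedback channel, which is why, without a memory, each correction is discarded once the loop terminates.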
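The proposed experience memory is likewise described only in outline. One plausible minimal realization, assuming an embedding function over contexts, a cosine-similarity threshold, and a flat record store (all of which are assumptions, not details from the talk), stores validated fixes keyed by context and retrieves them when a similar context recurs:

```python
import math
from dataclasses import dataclass, field
from typing import Callable, List, Optional

Vector = List[float]

def _cosine(a: Vector, b: Vector) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class Insight:
    context_vec: Vector  # embedding of the context where the fix applied
    fix: str             # the validated correction, stored as reusable text

@dataclass
class ExperienceMemory:
    embed: Callable[[str], Vector]  # hypothetical context embedder
    insights: List[Insight] = field(default_factory=list)

    def record(self, context: str, validated_fix: str) -> None:
        """Store a fix that the supervisor has already validated."""
        self.insights.append(Insight(self.embed(context), validated_fix))

    def retrieve(self, context: str, threshold: float = 0.8) -> Optional[str]:
        """Return the fix from the most similar past context, if close enough."""
        if not self.insights:
            return None
        q = self.embed(context)
        best = max(self.insights, key=lambda i: _cosine(q, i.context_vec))
        return best.fix if _cosine(q, best.context_vec) >= threshold else None
```

Connecting the two sketches, a correction validated inside the refinement loop would be recorded once and retrieved as a prior hint whenever a similar context recurs, which is what turns a momentary fix into a long-term skill.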
Event Host: Vanshika Vats, PhD Student, Computer Science & Engineering
Advisor: James Davis