BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Events - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://events.ucsc.edu
X-WR-CALDESC:Events for Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260225T090000
DTEND;TZID=America/Los_Angeles:20260225T120000
DTSTAMP:20260426T053601Z
CREATED:20260210T221905Z
LAST-MODIFIED:20260210T221905Z
UID:10009196-1772010000-1772020800@events.ucsc.edu
SUMMARY:Liu\, C. (CSE) - Enabling LLM Unlearning at Inference Time by Decomposing Detection and Intervention
DESCRIPTION:Machine unlearning addresses the “right to be forgotten” under GDPR and enables privacy\, copyright\, and safety compliance in large language models. Training-based unlearning can remove targeted behavior on benchmarks\, but it scales poorly\, can degrade utility\, and can fail under adversarial prompting that recovers supposedly forgotten content. This prospectus proposes inference-time behavioral unlearning: rather than modifying weights to “erase” knowledge\, we detect when a query targets forgotten content and intervene in generation so the system behaves like a model never trained on that content. We formalize this approach as Detect-Intervene Decomposition and instantiate it with three complementary methods operating at the embedding\, token\, and reasoning levels under different access capabilities. Comprehensive experiments across entity unlearning\, hazardous knowledge removal\, and copyright protection demonstrate that our methods match or exceed training-based approaches while being orders of magnitude faster and preserving model utility. As LLMs increasingly operate as services with restricted weight access\, inference-time unlearning provides the only practical path for responsible AI deployment that respects privacy\, safety\, and legal requirements. \nEvent Host: Chris Liu\, Ph.D. Student\, Computer Science and Engineering \nAdvisor: Yang Liu \nZoom – https://ucsc.zoom.us/j/94799852992?pwd=EBFQe4U2lRNro1oJ8F36bgORhT2xSv.1 \nPasscode – 242384
URL:https://events.ucsc.edu/event/liu-c-cse-enabling-llm-unlearning-at-inference-time-by-decomposing-detection-and-intervention/
LOCATION:
CATEGORIES:Ph.D. Presentations
ATTACH;FMTTYPE=image/jpeg:https://events.ucsc.edu/wp-content/uploads/2026/01/ph.d.-presentation-graphic-option-1-1.jpg
END:VEVENT
END:VCALENDAR