BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Events - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://events.ucsc.edu
X-WR-CALDESC:Events for Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20261101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20270314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20271107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260309T080000
DTEND;TZID=America/Los_Angeles:20260309T170000
DTSTAMP:20260516T152237Z
CREATED:20260225T190019Z
LAST-MODIFIED:20260225T190019Z
UID:10009358-1773043200-1773075600@events.ucsc.edu
SUMMARY:Statistics Seminar: Evaluating Predictive Algorithms Under Missing Data
DESCRIPTION:Presenter: Amanda Coston\, Assistant Professor\, University of California Berkeley \nDescription: Performance evaluation plays a central role in decisions about whether and how predictive algorithms should be deployed in high-stakes settings. Yet\, in many real-world domains\, evaluation is fundamentally difficult: the data available for assessment are often biased\, incomplete\, or noisy\, and the act of deploying a model can itself alter which outcomes are observed. As a result\, standard evaluation practices may substantially misrepresent both overall model performance and disparities across groups. In this talk\, we examine several common threats to valid evaluation—including measurement error\, selection bias\, and distribution shift—and present principled evaluation methods that enable valid performance assessment under these challenges when appropriate conditions are met. \nBio: From UC Berkeley website: Amanda Coston is an assistant professor of statistics at UC Berkeley. Her research addresses real-world data problems that challenge the validity\, reliability\, and equity of algorithmic decision support systems and data-driven policy-making. Her work draws on techniques from causal inference\, machine learning\, and nonparametric statistics. She earned her PhD in machine learning and public policy at Carnegie Mellon University and subsequently completed a postdoc at Microsoft Research on the Machine Learning and Statistics Team. She also holds a Bachelor of Science in Engineering from Princeton in computer science and a certificate in the Princeton School of Public and International Affairs. \nHosted by: Statistics Department
URL:https://events.ucsc.edu/event/statistics-seminar-evaluating-predictive-algorithms-under-missing-data/2026-03-09/1/
LOCATION:CA
CATEGORIES:Lectures & Presentations,Seminars
ATTACH;FMTTYPE=image/png:https://events.ucsc.edu/wp-content/uploads/2026/02/BElogoWHITE.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20260309T160000
DTEND;TZID=America/Los_Angeles:20260309T170000
DTSTAMP:20260516T152237Z
CREATED:20260225T190019Z
LAST-MODIFIED:20260225T190019Z
UID:10009357-1773072000-1773075600@events.ucsc.edu
SUMMARY:Statistics Seminar: Evaluating Predictive Algorithms Under Missing Data
DESCRIPTION:Presenter: Amanda Coston\, Assistant Professor\, University of California Berkeley \nDescription: Performance evaluation plays a central role in decisions about whether and how predictive algorithms should be deployed in high-stakes settings. Yet\, in many real-world domains\, evaluation is fundamentally difficult: the data available for assessment are often biased\, incomplete\, or noisy\, and the act of deploying a model can itself alter which outcomes are observed. As a result\, standard evaluation practices may substantially misrepresent both overall model performance and disparities across groups. In this talk\, we examine several common threats to valid evaluation—including measurement error\, selection bias\, and distribution shift—and present principled evaluation methods that enable valid performance assessment under these challenges when appropriate conditions are met. \nBio: From UC Berkeley website: Amanda Coston is an assistant professor of statistics at UC Berkeley. Her research addresses real-world data problems that challenge the validity\, reliability\, and equity of algorithmic decision support systems and data-driven policy-making. Her work draws on techniques from causal inference\, machine learning\, and nonparametric statistics. She earned her PhD in machine learning and public policy at Carnegie Mellon University and subsequently completed a postdoc at Microsoft Research on the Machine Learning and Statistics Team. She also holds a Bachelor of Science in Engineering from Princeton in computer science and a certificate in the Princeton School of Public and International Affairs. \nHosted by: Statistics Department
URL:https://events.ucsc.edu/event/statistics-seminar-evaluating-predictive-algorithms-under-missing-data/2026-03-09/2/
LOCATION:CA
CATEGORIES:Lectures & Presentations,Seminars
ATTACH;FMTTYPE=image/png:https://events.ucsc.edu/wp-content/uploads/2026/02/BElogoWHITE.png
END:VEVENT
END:VCALENDAR