Presenter: Jason Yik, PhD Candidate, Harvard SEAS
Description: Recent research on neuromorphic accelerators has investigated their efficiency and performance benefits for machine learning (ML) inference at the edge. This talk will focus on the performance implications of the fully on-chip, manycore distributed-memory architecture used by current neuromorphic accelerators. For conventional architectures, the roofline model is a well-known way to characterize performance bounds and bottlenecks. For neuromorphic architectures, we show that the analogous bounds take a different shape, a floorline, and we demonstrate how to optimize ML deployments using the floorline as a performance guide.
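For context, the classic roofline model caps a kernel's attainable throughput at the lesser of peak compute and memory bandwidth times operational intensity. Below is a minimal sketch of that standard formulation with hypothetical hardware numbers; the talk's floorline model is its own result and is not reproduced here.

    # Classic roofline bound: attainable FLOP/s is limited by either peak
    # compute or by memory bandwidth times operational intensity (FLOPs/byte).
    def roofline_bound(peak_flops: float, mem_bw: float, intensity: float) -> float:
        return min(peak_flops, mem_bw * intensity)

    # Hypothetical accelerator: 10 TFLOP/s peak compute, 1 TB/s memory bandwidth.
    print(roofline_bound(10e12, 1e12, 2.0))   # 2.0e12 FLOP/s -> memory-bound kernel
    print(roofline_bound(10e12, 1e12, 50.0))  # 1.0e13 FLOP/s -> compute-bound kernel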
Bio: Jason Yik is a PhD candidate at Harvard SEAS, with a research focus on neuromorphic computing architectures. His prior work includes designing benchmark frameworks and tools for neuromorphic research, as well as modeling and optimizing neuromorphic system performance. He is currently an intern with the ASIC architecture team at Cerebras Systems.
Hosted by: Professor Soumya Bose, ECE Department
Zoom Link: https://ucsc.zoom.us/j/97975378707?pwd=ljcgaCfhMmhZ88Vt5dqQUBVQRjehOx.1
Room: E2-192