- GRASS: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients
Aashiq Muhamed, Oscar Li, David Woodruff, Mona Diab, Virginia Smith
EMNLP 2024
[arxiv]
- OmniPred: Language Models as Universal Regressors
Xingyou Song*, Oscar Li*, Chansoo Lee, Bangding Yang, Daiyi Peng, Sagi Perel, Yutian Chen
Under review at TMLR, 2024
[arxiv]
- Noise-Reuse in Online Evolution Strategies
Oscar Li, James Harrison, Jascha Sohl-Dickstein, Virginia Smith, Luke Metz
NeurIPS 2023
[arxiv][video][poster]
- Label Leakage and Protection in Two-party Split Learning
Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, Chong Wang
ICLR 2022
[OpenReview][video]
- Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution
Oscar Li*, Amrith Setlur*, Virginia Smith
NeurIPS 2021
[arxiv][video]
- Is Support Set Diversity Necessary for Meta-Learning?
Amrith Setlur*, Oscar Li*, Virginia Smith
NeurIPS 2020 MetaLearn Workshop
[arxiv][poster]
- This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen*, Oscar Li*, Daniel Tao, Alina Barnett, Cynthia Rudin, Jonathan Su
NeurIPS 2019 (spotlight, top 3% of papers)
[arxiv][talk (05:12)]
- Interpretable Image Recognition with Hierarchical Prototypes
Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin
AAAI HCOMP 2019
[arxiv]
- Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Oscar Li*, Hao Liu*, Chaofan Chen, Cynthia Rudin
AAAI 2018
[arxiv]