me

I’m a final-year PhD student in Harvard’s Machine Learning Foundations and Theory of Computation groups. I’m advised by Sham Kakade and Leslie Valiant. Previously, I studied math as an undergraduate at Princeton. In summer 2021, I interned with the ML group at Microsoft Research NYC, where I had the pleasure of working with Cyril Zhang and Surbhi Goel.

My research, currently focused on the scientific study of deep learning, is motivated by the following claims:

  • If we understand AI systems better, we will have a better shot at making them safer and foreseeing future technological developments.
  • It is crucial to build an understanding of cutting-edge methods, and for our insights to generalize across changes in algorithms and scale.
  • The shortest path to scientific understanding blends theory and empirics, applied to both clean toy models and messy real-world systems.

In the past, I’ve also worked on game theory and computational complexity theory.

bedelman@g.harvard.edu | Google Scholar

Research

The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains
with Ezra Edelman, Surbhi Goel, Eran Malach, and Nikolaos Tsilivis
preprint | Blog post

Distinguishing the Knowable from the Unknowable with Language Models
with Gustaf Ahdritz, Tian Qin, Nikhil Vyas, and Boaz Barak
preprint

Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models
with Hanlin Zhang, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, and Boaz Barak
Secure and Trustworthy LLMs Workshop @ ICLR 2024 | Blog post

Feature emergence via margin maximization: case studies in algebraic tasks
with Depen Morwani, Costin-Andrei Oncescu, Rosie Zhao, and Sham Kakade
ICLR 2024 (spotlight) | Blog post

Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck
with Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang
NeurIPS 2023 (spotlight)

Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
with Boaz Barak, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang
NeurIPS 2022

Inductive Biases and Variable Creation in Self-Attention Mechanisms
with Surbhi Goel, Sham Kakade, and Cyril Zhang
ICML 2022

The Multiplayer Colonel Blotto Game
with Enric Boix-Adserà and Siddhartha Jayanti
Games and Economic Behavior (full version), EC 2020 (extended abstract)

Causal Strategic Linear Regression
with Yonadav Shavit and Brian Axelrod
ICML 2020

SGD on Neural Networks Learns Functions of Increasing Complexity
with Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Fred Zhang, and Boaz Barak
NeurIPS 2019 (spotlight)

Matrix Rigidity and the Croot-Lev-Pach Lemma
with Zeev Dvir
Theory of Computing, 2019

Teaching

Spring 2021 Teaching fellow for CS 229br: Biology and Complexity
Received Certificate of Distinction in Teaching from Harvard University

Spring 2020 Teaching fellow for CS 228: Computational Learning Theory
Gave three lectures on “Mysteries of Generalization in Deep Learning”

Tutorials

How to Achieve Both Transparency and Accuracy in Predictive Decision Making: An Introduction to Strategic Prediction
with Chara Podimata and Yonadav Shavit
FAccT 2021

Recent talks

January 2024 Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models
NYC Crypto Day

February 2023 Studies in feature learning through the lens of sparse Boolean functions
Seminar in Mathematics, Physics and Machine Learning, University of Lisbon

November 2022 Hidden progress in deep learning
Statistical Learning Theory and Applications, MIT course

September 2022 Sparse feature emergence in deep learning
Alg-ML seminar, Princeton University

May 2022 Towards demystifying the inductive bias of attention mechanisms
Collaboration on the Theoretical Foundations of Deep Learning

February 2022 Towards demystifying transformers & attention
New Technologies in Mathematics Seminar, Harvard Center of Mathematical Sciences and Applications

Miscellanea