Stanford By the Bay
- Oli Dinov
- Aug 23, 2025
- 2 min read
Updated: Aug 30, 2025
Stanford has been a steady presence at our conference. Some of the most important talks we've hosted—on retrieval, optimization, and large-model training—have come from people working in and around its labs.
This November, the tradition continues as we welcome Peter Norvig, Education Fellow at Stanford, an AI pioneer, former Director of Research at Google, and co-author of Artificial Intelligence: A Modern Approach. His talk, “The Future of Programming with AI,” explores how AI is less about producing exact answers and more about operating effectively within uncertainty. Norvig’s presence reflects a broader shift in the field, from technical precision to system-level reasoning.
In 2023, we hosted Omar Khattab, a Stanford PhD researcher whose work on ColBERT and DSPy demonstrated how neural retrieval systems can handle large-scale knowledge. His focus on lightweight, interpretable, and fast architectures challenges assumptions about what modern NLP systems should look like.
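For readers new to ColBERT, its central trick, late interaction, is compact enough to sketch. Below is a minimal NumPy rendering of the MaxSim scoring step, with random vectors standing in for the trained BERT token embeddings and none of the real system's indexing or compression:

```python
import numpy as np

def maxsim_score(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """ColBERT-style late interaction: each query token embedding is
    matched against its best-scoring document token embedding, and the
    per-token maxima are summed into one document score.

    query_embs: (num_query_tokens, dim), rows L2-normalized
    doc_embs:   (num_doc_tokens, dim),  rows L2-normalized
    """
    # (num_query_tokens, num_doc_tokens) matrix of cosine similarities
    sim = query_embs @ doc_embs.T
    # Keep each query token's best document-token match, then sum.
    return float(sim.max(axis=1).sum())

# Toy example; random vectors stand in for encoder outputs.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 128));  q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(50, 128)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```

Because documents are represented token by token rather than as a single vector, the match stays inspectable: you can see which document token each query token paired with.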
That same year, David Hall presented Levanter, a JAX-based framework for training foundation models. Developed at Stanford, Levanter is built to be scalable, legible, and reproducible—introducing innovations like named tensors and easy-to-integrate model parallelism without compromising efficiency.
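Named tensors are the easiest of those ideas to illustrate. Levanter implements them through its Haliax library; the toy wrapper below is not Haliax's actual API, just a NumPy sketch of why addressing axes by name rather than by position removes a whole class of transposition bookkeeping:

```python
import numpy as np

class Named:
    """Toy named-axis array (illustrative only, not Haliax's real API)."""
    def __init__(self, data: np.ndarray, axes: tuple[str, ...]):
        assert data.ndim == len(axes)
        self.data, self.axes = data, axes

    def dot(self, other: "Named", axis: str) -> "Named":
        # Contract over the named axis, wherever it sits in each operand;
        # no manual transposes or axis-index arithmetic needed.
        i, j = self.axes.index(axis), other.axes.index(axis)
        out = np.tensordot(self.data, other.data, axes=(i, j))
        out_axes = tuple(a for a in self.axes + other.axes if a != axis)
        return Named(out, out_axes)

# A "batch x embed" activation times an "embed x vocab" projection:
x = Named(np.ones((2, 8)), ("batch", "embed"))
w = Named(np.ones((8, 16)), ("embed", "vocab"))
y = x.dot(w, axis="embed")
print(y.axes, y.data.shape)  # ('batch', 'vocab') (2, 16)
```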
In 2021, we convened a panel of technical leaders to discuss the future of hardware-software co-design, including Anna Goldie from Google Brain and Stanford. The discussion, which also featured voices from Oxide, Faros AI, MLCommons, and Autodesk, tackled how emerging compute platforms are changing software development.
Separately, Matei Zaharia shared his work on training optimization techniques like PipeDream and FlexFlow, and introduced retrieval-based NLP architectures such as ColBERT-QA and Baleen—models that run faster, cost less, and adapt more easily than traditional LLMs.
Going further back to 2019, Shoumik Palkar introduced Weld, a system that optimizes across data processing libraries like NumPy and Pandas by compiling them into a single efficient runtime. Weld’s approach—eliminating redundant computation across modular systems—made a strong case for rethinking the software stack in AI.
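The waste Weld eliminates is easy to picture. In an ordinary pipeline, each library call materializes a full intermediate array that the next call must read back from memory; Weld compiles the whole chain into one fused loop. The snippet below models that difference in plain Python (Weld itself emits native code, so its fused loop is fast rather than slow):

```python
import numpy as np

x = np.arange(100_000, dtype=np.float64)

# Library-style pipeline: each step allocates and writes a full
# intermediate array before the next step reads it back.
unfused = ((x * 2.0) + 1.0).sum()

# What a Weld-style compiler would emit instead: a single pass that
# applies both operations and the reduction with no intermediates.
# (This Python loop only models the fused dataflow; Weld generates
# efficient machine code for it.)
fused = 0.0
for v in x:
    fused += v * 2.0 + 1.0

assert np.isclose(unfused, fused)
```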
Our history with Stanford is rooted in a shared commitment to advancing ideas that are technically rigorous, open, and built for real-world impact. Stanford is renowned for producing some of the world’s most influential computer scientists, and over the years many of its alumni have been frequent and valued speakers at our events, sharing groundbreaking research. This year, we are excited to continue that tradition by hosting two exceptional representatives from Stanford:
- Cameron Pfiffer, our PC member and speaker, presenting how Letta turns stateless AI into persistent, memory-first agents via MemGPT-inspired virtual context, with ADE debugging and self-editing memory (a toy sketch of the memory pattern follows below).
- Vinay Rao, discussing the challenges of deploying AI models safely and sharing strategies for responsible, secure AI development.
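To ground Cameron's topic: MemGPT-style designs keep a small, always-in-context "core memory" that the agent can rewrite through tool calls, while paging everything else to external storage, much as an operating system manages virtual memory. The sketch below illustrates only that pattern; the class and method names are invented and are not Letta's API:

```python
class MemoryFirstAgent:
    """Toy MemGPT-style virtual context (names invented; not Letta's
    real API). A small, editable "core memory" always rides along in
    the prompt; everything else is paged to an external archive and
    retrieved on demand."""

    def __init__(self):
        self.core_memory = {"persona": "helpful assistant", "user": ""}
        self.archive = []  # unbounded external store

    def build_prompt(self, user_message: str) -> str:
        # Core memory is injected into every context window.
        core = "\n".join(f"{k}: {v}" for k, v in self.core_memory.items())
        return f"[CORE MEMORY]\n{core}\n[USER]\n{user_message}"

    # Tools the model can call to edit its own memory:
    def core_memory_replace(self, field: str, value: str) -> None:
        self.core_memory[field] = value  # self-editing memory

    def archive_insert(self, fact: str) -> None:
        self.archive.append(fact)        # page out to storage

    def archive_search(self, query: str) -> list[str]:
        # Stand-in retrieval; real systems use embeddings, not substrings.
        return [f for f in self.archive if query.lower() in f.lower()]

agent = MemoryFirstAgent()
agent.core_memory_replace("user", "name: Ada; prefers concise answers")
agent.archive_insert("User mentioned working on a training run.")
print(agent.build_prompt("How is my run going?"))
print(agent.archive_search("training"))
```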
Stanford: Driving Innovation, Ethics, and Impact in AI
Within Stanford, the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI)—founded in 2019—has become a global leader in responsible, cross-disciplinary AI research, education, and policy. HAI’s mission is simple yet profound: advance AI to serve humanity.
From launching the Center for Research on Foundation Models (CRFM) to publishing the globally recognized AI Index Report, Stanford HAI is helping build not only the technology but the world it enters. With $40M+ invested in interdisciplinary research, 300+ researchers supported, and 50+ industry collaborations, HAI continues to drive innovation while championing ethics, education, and policy.