Laurence Aitchison
I’m passionate about pushing the boundaries of large language model (LLM) research, with work spanning multiple exciting directions:
- Pretraining Dynamics: Uncovering the fundamental principles of how LLMs learn, through groundbreaking work on hyperparameter transfer for weight decay and function-space learning rates
- Efficiency: Developing methods to make LLMs more computationally efficient and accessible
- Mechanistic Interpretability: Breaking open the “black box” of LLMs through innovative approaches like random baselines for sparse autoencoders (SAEs), residual stream analysis, and Jacobian SAEs
- AI Agents: Exploring the frontier of self-improving LLM systems
As a Lecturer (US Assistant Professor) at the University of Bristol, I lead research at the intersection of machine learning and artificial intelligence. While my current focus is on LLMs, my academic journey includes significant contributions to probabilistic and Bayesian machine learning, as well as computational neuroscience (PhD at the prestigious Gatsby Unit, UCL). For a deeper dive into my research trajectory, please see my CV or Publications.
Let’s Connect!
I’m always excited to discuss:
- Potential research collaborations
- PhD opportunities for motivated candidates
- Industry consulting partnerships
Reach out via email (laurence.aitchison@gmail.com) to start a conversation!
News
- Jan-March 2025: New papers
- Jan 2025: One paper accepted at ICLR 2025
- Nov 2024: One paper accepted at 3DV 2025
- Sept 2024: Two papers accepted at NeurIPS 2024
- May 2024: Three papers accepted at ICML 2024
- May 2024: New paper: How to set AdamW’s weight decay as you scale model and dataset size