I am interested in developing safe and robust machine learning systems that behave the way we expect and are thus able to work with humans effectively. My work has often explored inverse reinforcement learning and imitation learning in order to learn from humans, and I'm interested in using this knowledge and generative AI to create insight and develop personalised, human-centric decision-making systems.
My current work focusses mostly on applying reinforcement learning to Large Vision-Language(-Action) Models.
See some of my highlighted publications here: Research
Recent News
- [Dec 2024] Our first agent, Proxy, has achieved state-of-the-art results (82%) on the WebVoyager benchmark! [post]
- [Sep 2024] Joined Convergence Labs Ltd. as a Member of Technical Staff! We're a seed-stage startup building a future of abundance by automating workflows with AI.
- [Sep 2024] New work accepted to NeurIPS 2024: Discovering Preference Optimization Algorithms with and for Large Language Models!
- [May 2024] Dense Reward for Free in Reinforcement Learning from Human Feedback accepted to ICML!
- [Mar 2024] I'm back at Spotify as a full-time Research Scientist working on RLHF for internal LLMs and products including AI Playlist.
- [Feb 2024] Offered an OpenAI Superalignment Fellowship to work on activation steering in LLMs.
- [Jan 2024] How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions accepted to ICLR 2024!
- [Sep 2023] Two papers accepted to NeurIPS 2023: AllSim: Systematic Simulation and Benchmarking of Repeated Resource Allocation Policies in Multi-User Systems with Varying Resources and GAUCHE: A Library for Gaussian Processes in Chemistry! Also a fun workshop paper: Optimising Human-AI Collaboration by Finding Convincing Explanations!
- [Jun 2023] Starting as a Research Scientist Intern at Spotify as part of the Satisfaction, Interaction and Algorithms (SIA) team, working on integrating large language models into the content moderation pipeline using culturally finetuned models.
- [Jan 2023] Taking part as a SERI-MATS Scholar working with Owain Evans, investigating deception in large language models along with methods to detect lying using only black-box access.
- [Sep 2022] Accepted to NeurIPS 2022: Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning!
- [Jan 2022] Again, two papers accepted to ICLR! Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies and POETREE: Interpretable Policy Learning with Adaptive Decision Trees.
- [Oct 2021] The Medkit-learn(ing) Environment: Medical Decision Modelling through Simulation is accepted to the Datasets and Benchmarks track at Neural Information Processing Systems (NeurIPS).
- [Jan 2021] Two papers accepted to ICLR! Scalable Bayesian Inverse Reinforcement Learning and Generative Time-series Modeling with Fourier Flows.
- [Sep 2020] Graduated from the MPhil in Machine Learning and Machine Intelligence and started a PhD in Machine Learning at the University of Cambridge, supervised by Mihaela van der Schaar.
- [May 2020] My first ever paper is accepted to ICML 2020: Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift.