Hi! I’m Liza 👋🏼
I am an AI Scientist + Social Scientist looking to build moral alignment into AI agents. Currently, I am studying this in multi-agent simulations with independent learning agents, using systems based on Reinforcement Learning (RL) and foundation models (LLMs such as Gemma 2), with inspiration from moral philosophy, psychology and game theory.
In my latest work, I developed a methodology for quantifying key moral philosophical frameworks in terms of actions and their consequences on an environment, and modeled these as intrinsic rewards for training RL agents playing social dilemma games (see IJCAI’23 paper). I then simulated societies of agents with diverse moral preferences, implemented multi-agent RL with partner selection, and investigated coalition formation and the emergence of cooperation and exploitation in morally heterogeneous societies (see AIES’24 paper). Most recently, I applied this framework to fine-tuning LLM agents to follow more morally aligned policies in interactive game environments (see ICLR’25 paper).
I’m a final-year PhD candidate at the Machine Intelligence Lab, Department of Computer Science, University College London (UCL). I am funded by the Leverhulme Doctoral Training Programme for the Ecological Study of the Brain. Before my PhD I studied Psychology & Linguistics (@ UCL), conducted computational social science research in political psychology using Twitter data (@ University of Cambridge), and worked as an AI/Data Scientist and Behavioural Scientist at two start-ups and a large investment bank.
News
- [Jun 2025] Presenting a poster at the RLDM conference in Dublin.
- [Apr 2025] Presenting our paper at ICLR 2025 in Singapore, plus two workshop papers.
- [Mar 2025] Presenting a poster at the UK Multi-Agent Systems Symposium at the Turing Institute in London.
- [Jan 2025] Our paper “Moral Alignment for LLM Agents” (see preprint) has been accepted for the 13th International Conference on Learning Representations (ICLR’25) in Singapore.
- [Jan 2025] Gave an invited talk at the Political Psychology Lab at Cambridge on Moral Alignment for Agentic AI Systems.
- [Dec 2024] Presented my team’s work at the Concordia LLM Agent Competition at NeurIPS’24 (remotely).
- [Oct 2024] New preprint out: Moral Alignment for LLM Agents (see arXiv).
- [July 2024] Our paper “Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents” (see arXiv) has been accepted for The 7th AAAI/ACM Conference on AI, Ethics & Society, 2024 (San Jose, California).
- [July 2024] Attending the Vienna Alignment Workshop and presenting our work at the Unconference that followed :)
- [July 2024] I’m at the Eastern European Machine Learning Summer School in Novi Sad, Serbia!
- [April 2024] I’m at the InterpViz Mechanistic Interpretability hackathon at LISA (The London Initiative for Safe AI)!
- [March 2024] New preprint out: Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents (see arXiv).
- [December 2023] New preprint out: Learning Machine Morality through Experience and Interaction (see arXiv).
- [November 2023] Gave a (long) student talk at LSE about how AI and Social Science can mutually benefit one another, and about the interesting questions emerging around aligning LLMs with human values.
- [November 2023] I’m at the Anthropic Hackathon in London!
- [November 2023] Presenting our IJCAI paper at two UCL Conferences - the Computer Science Student Conference (long talk) and the Neuro AI Conference (lightning talk).
- [August 2023] Presented our paper “Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning” at the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023) (main track) in Macao. Links & materials here.
- [July 2023] I’m at the Cooperative AI Foundation Summer School in London!
- [May 2023] Spent a week volunteering at the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023) in London.
Education
- PhD Computer Science (Leverhulme Doctoral Training Programme for the Ecological Study of the Brain), University College London, 2022-2025
- MPhil Biological Science (Psychology), University of Cambridge, 2019-2020. Full-time research, based at the Political Psychology Lab. In my research I studied political and linguistic behaviour on Twitter using Social Network Analysis and Natural Language Processing.
- BSc Psychology & Language Sciences, University College London, 2016-2019.
Previous experience
- Quantitative UX Researcher, JPMorgan (user experience analytics within the Corporate & Investment Bank, Markets business), 2021-2022.
- Data & Behavioural Scientist, Rooster Insurance (car insurance startup which based its pricing model on behavioural data rather than demographics), 2021.
- Behavioural & AI Scientist, ProdX.ai (productivity startup which was building a product to make digital workers more productive through automated, data-driven & psychology-inspired coaching), 2020.
Other
I’m in the process of changing my last name from Karmannaya to Tennant. If you see different names listed on different publications, don’t get confused :)