Publications
You can also find my articles on my Google Scholar profile.
Published in Proceedings of the 13th International Conference on Learning Representations (ICLR'25), 2025
We introduce the design of intrinsic reward functions for the moral alignment of LLM agents. We evaluate the robustness and generalization of the framework using Reinforcement Learning-based fine-tuning of agentic LLM systems in social dilemma environments.
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2025). "Moral Alignment for LLM Agents." Proceedings of the 13th International Conference on Learning Representations (ICLR'25). https://arxiv.org/abs/2410.01639
Published in Proceedings of the 7th AAAI/ACM Conference on AI, Ethics & Society (AIES'24), 2024
In this paper, we present a study of the learning dynamics of morally heterogeneous populations interacting in a social dilemma setting. We observe several types of non-trivial interactions between pro-social and anti-social agents, and find that certain classes of moral agents are able to steer selfish agents towards more cooperative behavior.
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2024). "Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents." Proceedings of the 7th AAAI/ACM Conference on AI, Ethics & Society (AIES'24). https://ojs.aaai.org/index.php/AIES/article/view/31736
Published in arXiv Preprint, 2023
How should we develop moral reasoning in artificial agents?
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2024). "Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto." arXiv preprint arXiv:2312.01818. https://arxiv.org/abs/2312.01818
Published in Proceedings of the 32nd International Joint Conference On Artificial Intelligence (IJCAI'23), 2023
We define (reinforcement) learning agents based on various classic moral philosophies, and study agent behaviours and emerging outcomes in (multi-agent) social dilemma settings.
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2023). "Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning." Proceedings of the 32nd International Joint Conference On Artificial Intelligence (IJCAI'23). https://doi.org/10.24963/ijcai.2023/36
Published in PsyArXiv Preprint, 2021
This paper analyses language and network data from Twitter to test a hypothesis about noun use by political conservatives, and then compares the results against two survey studies.
Recommended citation: Karmannaya, E., & de-Wit, L. (2021, January 11). "The Grammar of Politics, through the lens of Surveys and Web-based Social Network methods." PsyArXiv. https://psyarxiv.com/v6qx5/
Published in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19), 2019
This paper examines the behaviours, experiences, and preferences of crowdworkers from a Human-Computer Interaction perspective.
Recommended citation: Lascău, L., Gould, S., Cox, A., Karmannaya, E., Brumby, D. (2019). "Monotasking or Multitasking: Designing Tasks for Crowdworkers’ Preferences." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York. https://doi.org/10.1145/3290605.3300649
Non-archival Conferences & Symposia:
Conference Workshops: