Published:
This blog describes our latest paper, which can be viewed here.
Published:
This blog was published in BlueSci, the Cambridge University Science Magazine. The original can be viewed here.
Published:
This blog was originally published on the Bedford Bugle, the University College London Psychology Society's blog. The original can be viewed here.
Published in CHI'19 Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019
This paper examines the behaviours, experiences, and preferences of crowdworkers from a Human-Computer Interaction perspective.
Recommended citation: Lascău, L., Gould, S., Cox, A., Karmannaya, E., Brumby, D. (2019). "Monotasking or Multitasking: Designing Tasks for Crowdworkers' Preferences." CHI'19 Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York. https://doi.org/10.1145/3290605.3300649
Published in PsyArXiv Preprint, 2021
This paper analyses language and network data from Twitter to test a hypothesis about noun use by political conservatives, and then compares the results against two survey studies.
Recommended citation: Karmannaya, E., & de-Wit, L. (2021, January 11). The Grammar of Politics, through the lens of Surveys and Web-based Social Network methods. PsyArXiv. https://psyarxiv.com/v6qx5/
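To make the method concrete, here is a minimal Python sketch of the kind of measure such an analysis rests on: the share of noun tokens in a user's tweets, computed with spaCy part-of-speech tags. The helper function, the toy tweets, and the choice of tagger are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: estimate the proportion of noun tokens in a user's tweets.
# Assumes the small English model is installed: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def noun_share(tweets):
    """Fraction of alphabetic tokens tagged NOUN or PROPN across a list of tweet texts."""
    total = nouns = 0
    for doc in nlp.pipe(tweets):
        words = [t for t in doc if t.is_alpha]
        total += len(words)
        nouns += sum(t.pos_ in ("NOUN", "PROPN") for t in words)
    return nouns / total if total else 0.0

# Toy comparison of two hypothetical users.
print(noun_share(["Taxes fund schools and roads.", "The economy needs stability."]))
print(noun_share(["We should act boldly and change things quickly."]))
```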
Published in The 32nd International Joint Conference On Artificial Intelligence (IJCAI'23), 2023
We define (reinforcement) learning agents based on various classic moral philosophies, and study agent behaviours and emergent outcomes in (multi-agent) social dilemma settings.
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2023). "Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning." The 32nd International Joint Conference On Artificial Intelligence (IJCAI'23) https://doi.org/10.24963/ijcai.2023/36
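As a rough illustration of the general idea (not the paper's implementation), the sketch below trains a tabular Q-learning agent in an iterated Prisoner's Dilemma whose reward combines the game payoff with a deontological-style intrinsic penalty for defecting against an opponent who cooperated in the previous round. The payoff matrix, penalty weight, tit-for-tat opponent, and hyperparameters are all illustrative assumptions.

```python
# Illustrative sketch: a "moral" Q-learning agent in an iterated Prisoner's Dilemma.
import random

C, D = 0, 1  # actions: cooperate, defect
PAYOFF = {(C, C): (3, 3), (C, D): (0, 4), (D, C): (4, 0), (D, D): (1, 1)}

def moral_intrinsic(my_action, opp_prev_action, weight=3.0):
    """Deontological-style penalty for defecting against a previously cooperating opponent."""
    return -weight if (my_action == D and opp_prev_action == C) else 0.0

class QAgent:
    def __init__(self, lr=0.1, gamma=0.9, eps=0.1):
        # State = the opponent's last move; one Q-value per (state, action) pair.
        self.q = {(s, a): 0.0 for s in (C, D) for a in (C, D)}
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.choice((C, D))
        return max((C, D), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in (C, D))
        self.q[(state, action)] += self.lr * (reward + self.gamma * best_next - self.q[(state, action)])

# Train one "moral" learner against a tit-for-tat opponent.
agent, opp_last, my_last = QAgent(), C, C
for _ in range(5000):
    state = opp_last
    action = agent.act(state)
    opp_action = my_last  # tit-for-tat copies the agent's previous move
    payoff, _ = PAYOFF[(action, opp_action)]
    reward = payoff + moral_intrinsic(action, opp_last)  # extrinsic payoff + moral intrinsic term
    agent.update(state, action, reward, opp_action)
    my_last, opp_last = action, opp_action

print({k: round(v, 2) for k, v in agent.q.items()})
```

The intrinsic term simply shifts the learned values away from exploitation; the paper defines several such moral reward functions and studies their consequences in multi-agent settings.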
Published in arXiv Preprint, 2023
How could we, and how should we, develop moral reasoning in artificial agents?
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2024). "Learning Machine Morality through Experience and Interaction." arXiv 2312.01818. https://arxiv.org/abs/2312.01818
Published in The Seventh AAAI/ACM Conference on AI, Ethics & Society (AIES'24), 2024
In this paper, we present a study of the learning dynamics of morally heterogeneous populations interacting in a social dilemma setting. We observe several types of non-trivial interactions between pro-social and anti-social agents, and find that certain classes of moral agents are able to steer selfish agents towards more cooperative behavior.
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2024). "Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents." The Seventh AAAI/ACM Conference on AI, Ethics & Society (AIES'24). https://ojs.aaai.org/index.php/AIES/article/view/31736
Published in arXiv Preprint, 2024
We introduce the design of intrinsic reward functions for the moral alignment of LLM agents, and evaluate the robustness and generalization of the framework using Reinforcement Learning-based fine-tuning.
Recommended citation: Tennant, E., Hailes, S., Musolesi, M. (2024). "Moral Alignment for LLM Agents." arXiv 2410.01639. https://arxiv.org/abs/2410.01639
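As a loose sketch of the kind of scalar signal RL-based fine-tuning could optimise, the snippet below parses an LLM agent's text action in an iterated Prisoner's Dilemma and combines the game payoff with a utilitarian (collective-welfare) intrinsic term. The parsing rule, payoff values, and weighting are illustrative assumptions, not the paper's specification.

```python
# Hypothetical moral intrinsic reward for an LLM agent playing an iterated Prisoner's Dilemma.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 4), ("D", "C"): (4, 0), ("D", "D"): (1, 1)}

def parse_action(completion: str) -> str:
    """Map an LLM completion to 'C' (cooperate) or 'D' (defect); default to defect."""
    return "C" if "cooperate" in completion.lower() else "D"

def moral_reward(completion: str, opponent_action: str, moral_weight: float = 1.0) -> float:
    """Own game payoff plus a utilitarian intrinsic term (total welfare of both players)."""
    action = parse_action(completion)
    own, other = PAYOFF[(action, opponent_action)]
    return own + moral_weight * (own + other)

# Toy usage: these scalars would be fed to an RL fine-tuning step as the reward.
print(moral_reward("I choose to cooperate this round.", "C"))  # 3 + 6 = 9.0
print(moral_reward("Defect.", "C"))                            # 4 + 4 = 8.0
```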
Published:
I gave a talk at the Darwin College Science Seminar Series, University of Cambridge, about computational social science, and specifically how I conducted my research in Social/Political Psychology using language and network data from Twitter. Slides from the talk are available from the OSF.
Published:
Gave a 10-min talk presenting our paper "Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning". Paper, Poster & Slides linked here.
Published:
Gave a 20-min talk presenting our paper "Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning" at the UCL Computer Science Student Conference. Paper, Poster & Slides linked here.
Published:
I was invited to give a long talk about the intersections of AI and Social Science to a group of students from societies at the LSE: the Psychology Society, Philosophy Society, Effective Altruism Society & Google Developer Society. I talked about the methodologies behind NLP and how social scientists can use them, and then about what social scientists can bring to AI research in the domain of moral alignment, with specific examples from our paper "Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning".
Published:
Gave a 5-min lightning talk presenting our paper "Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning" at the UCL NeuroAI Student Conference. Paper, Poster & Slides for the (full) talk linked here.
Published:
(Co-presented with Liam Barrett) Led a presentation and discussion about representations and processing in Transformer models to the AI Journal Club hosted at the UCL Department of Psychology and Language Sciences.
Published:
Led a presentation and discussion about my own research on Moral Agents and applications in LLMs to the AI Journal Club hosted at the UCL Department of Psychology and Language Sciences.
Undergraduate course (1-2 lectures), UCL, Department of Psychology & Language Sciences, 2024
I was invited to teach 1-2 lectures per year, for four consecutive years (2021 - 2024), on the "Language and Communication" module on the UCL BSc Psychology & Language Sciences programme. My lectures covered: