Carleton University - School of Computer Science Honours Project
Winter 2024
Editing Morals in Language Models
ABSTRACT
Recent advances in model editing have made it increasingly feasible to modify
factual knowledge stored in language models. Yet the modification of moral judgments—a
crucial aspect of aligning models with human values—has received far less attention.
In this work, we introduce COUNTERMORAL, a novel dataset crafted to assess
how well current model editing techniques modify moral judgments across diverse
ethical frameworks. We apply various editing techniques to multiple language
models and evaluate their performance. Our findings reveal both notable successes
and open challenges, paving the way for future research in developing ethically
aligned language models.