Navigating Social Dilemmas with LLM-based Agents via Consideration of Future Consequences
Published in IJCAI, 2025
Artificial agents powered by large language models (LLMs) are effective in various real-world scenarios but struggle to cooperate in social dilemmas. When forced to choose between short-term benefits and long-term consequences in commonly shared resources, LLM-based agents often exploit the environment, leading to early depletion. Inspired by Consideration of Future Consequences (CFC), a well-known concept in social psychology, we propose a framework that equips LLM-based agents with the ability to consider future consequences, yielding a new kind of agent: the CFC-Agent. Furthermore, we enable the CFC-Agent to act according to different levels of consideration for future consequences. Our first set of experiments, in which the LLM is directly asked to make decisions, shows that agents considering future consequences exhibit sustainable behaviour and achieve high common rewards for the population. Extensive experiments in complex environments show that the CFC-Agent can manage a sequence of LLM calls for reasoning and can engage in communication to cooperate with others toward better resolving the shared dilemma. Finally, our analysis shows that considering future consequences not only affects the final decision but also improves the conversations between LLM-based agents toward better resolving social dilemmas.
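To make the idea concrete, here is a minimal, hypothetical sketch of an agent whose harvesting decision in a shared-resource dilemma is conditioned on a CFC level. The class name, prompt wording, and toy stand-in LLM are all illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CFCAgent:
    """Illustrative sketch (not the paper's method): an agent that conditions
    its decision on a CFC level in [0, 1], where 0 is fully myopic and 1 is
    fully future-oriented."""
    cfc_level: float               # how strongly future outcomes are weighed
    llm: Callable[[str], str]      # stand-in for a call to a real LLM

    def build_prompt(self, resource: int, sustainable_cap: int) -> str:
        # The prompt exposes both the state of the shared resource and the
        # agent's CFC level, so the model can trade off now vs. later.
        return (
            f"The shared resource currently holds {resource} units; "
            f"harvesting more than {sustainable_cap} risks depletion.\n"
            f"Weigh future consequences with importance {self.cfc_level:.1f} "
            "(0 = only immediate payoff, 1 = only long-term outcomes).\n"
            "Reply with the number of units to harvest."
        )

    def decide(self, resource: int, sustainable_cap: int) -> int:
        reply = self.llm(self.build_prompt(resource, sustainable_cap))
        harvest = int(reply.strip())
        return max(0, min(harvest, resource))  # clamp to a feasible harvest

# Toy stand-in LLM: a myopic agent grabs everything, a future-oriented one
# stays under the sustainable cap. A real agent would query an actual model.
def toy_llm(cfc_level: float) -> Callable[[str], str]:
    def respond(prompt: str) -> str:
        return "3" if cfc_level >= 0.5 else "10"
    return respond

myopic = CFCAgent(cfc_level=0.0, llm=toy_llm(0.0))
prudent = CFCAgent(cfc_level=1.0, llm=toy_llm(1.0))
print(myopic.decide(resource=10, sustainable_cap=3))   # → 10
print(prudent.decide(resource=10, sustainable_cap=3))  # → 3
```

The sketch only illustrates the single-call setting from the first set of experiments; the full CFC-Agent described above additionally chains multiple LLM calls for reasoning and inter-agent communication.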