Human Trust of AI Agents

Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.”

Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, which is mainly driven by the increased prevalence of 'zero' Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to LLMs' perceived reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects' behaviour and their beliefs about LLMs' play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.
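The 'zero' Nash equilibrium mentioned in the abstract follows from iterated reasoning: if everyone best-responds to everyone else's expected guess, guesses shrink toward zero. A minimal sketch of this level-k logic, assuming the standard p-beauty setup (guesses in [0, 100], target = p × mean, with p = 2/3; the paper's exact parameters are not given in the abstract):

```python
# Level-k reasoning in a p-beauty contest.
# Assumed (hypothetical) parameters: guesses in [0, 100], winner is
# whoever is closest to p * (mean of all guesses), p = 2/3.

P = 2 / 3  # assumed contest multiplier; the paper's value is not stated here

def level_k_guess(k: int, level0: float = 50.0) -> float:
    """Guess of a level-k player who assumes everyone else is level-(k-1).

    Level-0 players guess the midpoint (50); each higher level
    best-responds by multiplying the previous level's guess by p.
    As k grows, the guess converges to the unique Nash equilibrium, 0.
    """
    guess = level0
    for _ in range(k):
        guess *= P
    return guess

if __name__ == "__main__":
    for k in (0, 1, 2, 3, 10):
        print(f"level-{k} guess: {level_k_guess(k):.2f}")
```

This makes the paper's finding concrete: choosing zero against LLMs amounts to believing the LLM opponents will carry the iteration to its limit, i.e. that they are highly rational (or cooperative enough to coordinate on the equilibrium).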


Posted on April 16, 2026 at 5:41 AM • 0 Comments



Source: https://www.schneier.com/blog/archives/2026/04/human-trust-of-ai-agents.html