Knowing Isn't Understanding: Re-grounding Generative Proactivity with Epistemic and Behavioral Insight

Daily Information Dashboard · 2026-02-16
Category: Research/Papers
Source: arxiv_search
Score: 95
Published: 2026-02-16T23:28:17Z

AI Summary

The paper argues that generative agents should shift from answering known questions to making constrained proactive interventions when the user's knowledge is incomplete, which is essential for avoiding misdirection and improving the quality of human–AI collaboration.
#arXiv #paper #Research/Papers

Excerpt

Generative AI agents typically equate understanding with resolving explicit queries, an assumption that confines interaction to what users can articulate. The assumption breaks down when users themselves are unaware of what is missing, risky, or worth considering. In such conditions, proactivity is not merely an efficiency enhancement but an epistemic necessity. We call this condition epistemic incompleteness: a state in which effective partnership depends on engaging with unknown unknowns. Existing approaches to proactivity remain narrowly anticipatory: they extrapolate from past behavior and presume that goals are already well defined, and thereby fail to support users meaningfully. Surfacing possibilities beyond a user's current awareness is not inherently beneficial, however. Unconstrained proactive interventions can misdirect attention, overwhelm users, or introduce harm. Proactive agents therefore require behavioral grounding: principled constraints on when, how, and to what extent an agent should intervene. We advance the position that generative proactivity must be grounded both epistemically and behaviorally. Drawing on the philosophy of ignorance and research on proactive behavior, we argue that these theories offer critical guidance for designing agents that engage responsibly and foster meaningful partnership.