Enterprises increasingly deploy agentic AI for decisions ranging from highly structured operational choices to strategic judgments. Empirical evidence is limited on how decision rights should be allocated between agents and humans, what levels of autonomy are appropriate, and which governance mechanisms can both ensure trust and drive value.
Building on an AI decision rights matrix (decision impact × decision structure) that we developed in 2025, we will create a framework that links agentic decision contexts, governance design principles, and evidence on performance outcomes.
This study will use semi-structured interviews with senior executives in enterprises piloting or operating AI agents.
We will focus on the following research questions:
- In which decision classes should AI agents decide and act, and under what conditions?
- Which governance policies and practices effectively constrain risk while enabling learning and scale?
- How do enterprises embed agent decisions in workflows, and what performance changes do they observe?
SEEKING: Executives in enterprises that are using or piloting AI agents to participate in the research. We aim for three interviews per enterprise, with interviewees in roles spanning strategy/monetization, technology/operations, and people/governance.
CONTACT: Ina M. Sebastian