
Abstract
Agentic AI systems, which can pursue complex goals with limited direct supervision, hold significant potential for societal benefit when integrated responsibly. While these systems can help people achieve their goals more efficiently, they also introduce risks. This white paper proposes a definition of agentic AI systems, identifies the parties involved in their lifecycle, and argues for establishing baseline responsibilities and safety best practices for each party. Its key contribution is an initial set of practices for keeping agent operations safe and accountable, intended as building blocks toward agreed-upon baseline standards. The paper also outlines the open questions and uncertainties that must be resolved before these practices can be formalized, and highlights categories of indirect impacts from the widespread adoption of agentic AI systems that will likely require additional governance frameworks.
Agentic AI research grant program
A grant program awarded grants of $10,000 to $100,000 to fund research into the impacts of agentic AI systems and methods for ensuring their safety. The program is now closed.

