Institutional Alignment
Defined roles, norms, and protocols that govern agent behavior -- just as a courtroom has a judge, attorney, and jury regardless of who fills those seats.
RLHF (one human correcting one AI) doesn't scale. Google's research argues for institutional templates -- defined roles and protocols that persist regardless of which specific agent fills them. A courtroom works because the roles of judge, attorney, and jury are defined, not because of the specific people occupying those seats.
steadybase's AppManifest system is institutional alignment made real. Each app declares its agent's role, which services it uses, and which signals it publishes and subscribes to. The identity of the specific AI model matters less than the role protocol it fulfills. Swap Claude for Gemini in the defaultModel field and the institutional structure holds.
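To make the idea concrete, here is a minimal sketch of such a manifest as plain data. The field names role, services, publishes, and subscribes are assumptions for illustration; only defaultModel is named in the text. The point is that the "institution" is everything in the manifest except which model fills the role:

```python
# Hypothetical sketch of an AppManifest-style declaration.
# Field names other than defaultModel are illustrative assumptions.

def role_protocol(manifest):
    """The institutional contract: every field except which model fills the role."""
    return {k: v for k, v in manifest.items() if k != "defaultModel"}

triage_app = {
    "role": "ticket-triager",
    "services": ["ticket-store", "notifier"],
    "publishes": ["ticket.triaged"],
    "subscribes": ["ticket.created"],
    "defaultModel": "claude",
}

# Swap the model; the role protocol -- the institutional structure -- is unchanged.
swapped = {**triage_app, "defaultModel": "gemini"}
assert role_protocol(swapped) == role_protocol(triage_app)
```

Under this framing, replacing the agent is a one-field change, while the roles and signal contracts the rest of the system depends on stay fixed.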
This is why steadybase isn't locked to a single LLM provider. The institution is the product, not the model.
See this in production
These research concepts power real agent deployments today. See how organizations are using the Agent Operating System.
Agents as a Service →