Monday, March 2, 2026

Vitalik Buterin Proposes AI ‘stewards’ to Tackle Low DAO Turnout and Privacy Risks


DAO voter participation often sits in the 15% to 25% range, a gap Vitalik Buterin highlighted in a February 2026 proposal that sketches a path toward automated governance. His core idea is to reduce the attention burden on token holders by letting personal AI agents cast routine votes while preserving privacy and eligibility guarantees through cryptography and secure compute.

The proposal matters for treasuries and market participants because governance throughput can shape how quickly DAOs move capital and respond to shocks. If voting becomes faster and lower-friction, decision lag could shrink materially, changing the tempo of treasury reallocations and governance-driven market moves.

What Buterin is proposing

Buterin argues that low turnout and attention scarcity are preventing DAOs from governing effectively and that personal AI stewards could handle routine decisions. He frames the participation issue as structural rather than cultural, and treats automation as a practical mechanism to raise effective turnout without forcing users to read every proposal.

The design described in his notes and February 2026 coverage combines five components: personalized large language models to infer user intent, zero-knowledge proofs to prove eligibility without revealing votes, multi-party computation to compute outcomes over private inputs, trusted execution environments to host agents without leaking model data, and prediction-market-style incentives to improve proposal quality and reduce spam. The architecture is layered so that delegation is automated, voting is private, and the system can still provide verifiable legitimacy.
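To make the layered flow concrete, here is a minimal Python sketch of how a steward might cast a routine vote. Everything in it is an illustrative assumption rather than anything specified in the proposal: the hash commitment stands in for a zero-knowledge eligibility proof, a one-byte mask stands in for MPC/TEE protection of the vote, and the intent model is reduced to a keyword rule.

```python
# Hypothetical sketch of the layered steward flow. None of these names or
# mechanisms come from Buterin's proposal; they are stand-ins chosen so the
# structure (automated delegation, private voting, verifiable eligibility)
# is visible in runnable code.

import hashlib
import secrets
from dataclasses import dataclass


@dataclass
class Proposal:
    proposal_id: str
    text: str


@dataclass
class PrivateBallot:
    eligibility_commitment: str  # stand-in for a ZK proof of token holdings
    masked_choice: bytes         # stand-in for an MPC/TEE-protected input


def prove_eligibility(secret_key: str, proposal_id: str) -> str:
    """Commit to eligibility without revealing the key.

    A real design would replace this with a zero-knowledge proof that the
    voter holds tokens, verifiable without exposing identity or balance.
    """
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{secret_key}:{proposal_id}:{nonce}".encode())
    return digest.hexdigest()


def infer_intent(preferences: dict[str, bool], proposal: Proposal) -> bool:
    """Toy stand-in for a personalized LLM inferring the user's vote."""
    return any(topic in proposal.text.lower() and stance
               for topic, stance in preferences.items())


def steward_vote(secret_key: str, preferences: dict[str, bool],
                 proposal: Proposal) -> PrivateBallot:
    """Cast a routine vote: prove eligibility, keep the choice private."""
    choice = infer_intent(preferences, proposal)
    commitment = prove_eligibility(secret_key, proposal.proposal_id)
    # In the proposed design the raw choice would never leave a TEE;
    # here we just XOR it with a random one-byte pad as a placeholder.
    pad = secrets.token_bytes(1)
    masked = bytes([pad[0] ^ int(choice)])
    return PrivateBallot(commitment, pad + masked)


if __name__ == "__main__":
    ballot = steward_vote(
        secret_key="holder-key",
        preferences={"grants": True, "emissions": False},
        proposal=Proposal("prop-42", "Increase grants budget by 5%"),
    )
    print(ballot.eligibility_commitment[:16], len(ballot.masked_choice))
```

Even in the toy version, the layering shows up: only the commitment and the masked choice leave the voter's side, while the key, preferences, and raw vote stay private.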

Buterin summarized the motivation directly by saying DAO voting has a serious participation problem and that AI stewards could automate routine governance while leaving strategic decisions to humans. The intended split is “automation for the mundane, humans for the consequential.”
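What that split could look like in practice is sketched below, with invented categories and a dollar threshold, since the proposal does not specify where the line between mundane and consequential sits.

```python
# Hypothetical routing rule for the "automation for the mundane, humans for
# the consequential" split. The topics and threshold are illustrative
# assumptions, not part of the proposal.

ROUTINE_TOPICS = {"parameter tweak", "grant renewal", "routine maintenance"}


def route_proposal(topic: str, treasury_impact_usd: float) -> str:
    """Return 'auto' for steward voting, 'human' for escalation."""
    if topic in ROUTINE_TOPICS and treasury_impact_usd < 50_000:
        return "auto"
    return "human"


assert route_proposal("grant renewal", 10_000) == "auto"
assert route_proposal("protocol upgrade", 10_000) == "human"
assert route_proposal("grant renewal", 2_000_000) == "human"
```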

Risks and constraints that come with automation

Commentators highlighted a first-order representation risk: AI agents may encode past preferences and miss nuance as values evolve, creating systematic misvotes over time. The failure mode is not a single wrong vote, but the repeated automation of stale or biased interpretations.

The privacy-and-security stack introduces its own operational risk surface, because ZKPs, MPC, and TEEs add complexity and can create new attack vectors such as model poisoning or TEE exploits. In this model, privacy protection and compute isolation become critical dependencies rather than optional upgrades.

Legal uncertainty is another gating factor, especially where courts treat DAOs as general partnerships and liability can attach to decisions made through governance. If an AI steward makes a harmful or legally actionable choice, unclear liability frameworks could deter institutional treasuries from delegating voting authority.

Cost and latency constraints also matter because advanced cryptography at scale is not free, and smaller DAOs may be priced out of sophisticated governance tooling. That would concentrate high-grade governance infrastructure among large communities, introducing a centralization pressure even while the proposal aims to improve decentralization.

Finally, critics flagged the “dumb human” problem: automating voting does not automatically improve proposal quality. If incentives and proposal design remain weak, AI-driven voting may simply accelerate existing inefficiencies rather than correct them.

What this could change for markets and treasuries

For traders and treasury managers, the material impact is speed: governance actions could happen faster, and that can move liquidity around tokens where proposals affect emissions, incentives, or treasury policy. At the same time, automation introduces model risk and new counterparty-style dependencies that would need auditability, fallback human controls, and clear liability paths before institutional adoption.

The most plausible trajectory, as described in the proposal and its coverage, is cautious deployment through pilots, emphasizing verifiable controls and human override rather than full autonomy. The near-term story is experimentation with guardrails, not a blanket shift to AI-run DAOs.
