For the last decade, "AI in finance" meant automation. Rule-based systems that route invoices based on vendor codes. Machine learning models that flag anomalous transactions for human review. Bots that execute trades based on predefined strategies within parameters set by human portfolio managers. All of these are genuinely useful. All of them are, fundamentally, faster versions of things humans were already doing — with a human in the loop for anything consequential.
Autonomous AI agents are a qualitatively different category. The difference isn't about capability or intelligence. It's about the relationship between the system and its objectives. An automation follows instructions — it executes a defined process. An autonomous agent pursues objectives — it figures out what actions are required and takes them. That distinction, seemingly abstract, has concrete and significant financial implications.
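The distinction can be made concrete in code. Below is a toy sketch, with all names invented and no real framework behind it: the automation applies a fixed sequence of steps, while the agent decides at runtime which actions its objective requires, here a latency objective like the infrastructure example that follows.

```python
# Toy sketch of the distinction, using invented names rather than any real
# framework. The automation applies a fixed sequence of steps; the agent
# keeps choosing whichever available action best serves its objective.

def run_automation(steps, state):
    """Automation: execute a predefined process, step by step."""
    for step in steps:
        state = step(state)
    return state

def run_agent(objective_met, actions, state, max_iters=10):
    """Agent: decide at runtime which actions the objective requires."""
    for _ in range(max_iters):
        if objective_met(state):
            break
        # Greedy toy planner: simulate each action, keep the best outcome.
        state = min((act(state) for act in actions),
                    key=lambda s: s["latency_ms"])
    return state

# Objective: keep latency below 200ms. Both candidate actions cost money;
# the agent decides which to take, and how many times.
def add_replica(s):
    return {"latency_ms": s["latency_ms"] * 0.7,
            "spend_usd": s["spend_usd"] + 40}

def switch_provider(s):
    return {"latency_ms": s["latency_ms"] * 0.9,
            "spend_usd": s["spend_usd"] + 5}

state = run_agent(lambda s: s["latency_ms"] < 200,
                  [add_replica, switch_provider],
                  {"latency_ms": 420, "spend_usd": 0})
```

The automation's cost is fixed in advance; the agent's spend is a consequence of pursuing the objective, which is exactly why its financial authority has to be governed rather than scripted.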
The financial implications of pursuing objectives
Consider three representative agent deployments that enterprises are running in 2026:
An infrastructure management agent with the objective "keep API response latency below 200ms." When latency degrades, the agent diagnoses the cause. If additional compute resources would solve the problem, it provisions them — spending money. If a different data provider would be faster, it switches — terminating one contract and beginning another. The objective requires financial action. A human in the loop for every spending decision defeats the purpose of the automation.
A treasury management agent with the objective "maximize yield on idle cash." The agent monitors yield rates across eligible instruments in real time. When rates shift, it rebalances — selling one position and buying another. These are financial transactions, initiated by the agent, in response to market conditions that change continuously. Human approval at each rebalancing decision would introduce lag that makes the optimization worthless.
An accounts payable agent with the objective "ensure approved invoices are paid within 24 hours of approval." The agent monitors the approval queue, validates payment details, checks budget availability, and initiates payment transfers. Every action is a financial action. The entire value proposition is that payments happen at machine speed without human initiation at each step.
In each case, the agent needs to transact — not just recommend. Not just queue a request for human review. Actually move money. And the financial infrastructure under it needs to be designed for that use case, not adapted from infrastructure designed for human-initiated transactions.
Why the proxy model breaks at scale
The dominant pattern for agent payments today is the proxy model: the agent is granted access to a human's financial account (or a company's payment processor integration) and executes transactions on behalf of that account. The human retains legal ownership; the agent borrows authority through API credentials or delegated access.
This model is understandable as a starting point — it reuses existing financial infrastructure without requiring new legal or compliance frameworks. But it breaks under pressure in specific ways that matter for production deployment:
Attribution and liability are misaligned. When an agent transacts under a human's financial identity, the legal attribution of the transaction is to the human — but the human didn't make the decision. If the transaction causes loss, the human is liable for a decision they didn't take. This attribution mismatch creates legal risk that scales with the authority granted to the agent. As agents take more consequential financial actions, the proxy model drifts further from the actual decision-making structure.
Human-designed constraints are wrong for agents. The spending limits, fraud detection models, and velocity controls on human financial accounts are calibrated for human behavior. A human making 500 small payments in an hour is suspicious; an agent managing accounts payable is doing its job. A human spending money in 40 countries in a single day is suspicious; a treasury management agent optimizing across global yield opportunities is functioning correctly. The fraud models trigger on agent behavior, creating operational friction that accumulates at scale.
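The calibration mismatch can be shown in a few lines. The thresholds below are invented for illustration: the same hour of accounts-payable activity trips a velocity rule tuned for humans while looking entirely normal under one tuned for agents.

```python
# Illustrative thresholds only (invented numbers): the same hour of activity
# looks fraudulent under a human-calibrated velocity rule and perfectly
# normal under one calibrated for agent workloads.

HUMAN_PROFILE = {"max_payments_per_hour": 20, "max_countries_per_day": 3}
AGENT_PROFILE = {"max_payments_per_hour": 2000, "max_countries_per_day": 60}

def flagged(activity, profile):
    """Return True if the activity trips the profile's velocity controls."""
    return (activity["payments_per_hour"] > profile["max_payments_per_hour"]
            or activity["countries_per_day"] > profile["max_countries_per_day"])

# An accounts payable agent doing its job: 500 payments across 40 countries.
ap_agent_hour = {"payments_per_hour": 500, "countries_per_day": 40}

human_view = flagged(ap_agent_hour, HUMAN_PROFILE)   # blocked as fraud
agent_view = flagged(ap_agent_hour, AGENT_PROFILE)   # normal operation
```

The rule itself isn't wrong; it's the profile it was calibrated against. Per-agent profiles are one reason each agent needs its own financial identity rather than a borrowed human one.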
The architecture doesn't scale horizontally. A single human's financial identity can't credibly support hundreds of concurrent agents transacting simultaneously. The operational security model — one set of API credentials with one set of access permissions — breaks when the system it's powering is a fleet of agents with different scopes, different transaction patterns, and different risk profiles. You want each agent to have its own financial identity with its own policy constraints, independently auditable.
What genuine financial autonomy requires
An agent with genuine financial autonomy — the architecture that actually fits the use case — has four properties:
Its own wallet with its own on-chain identity. Not borrowed credentials from a human account. A wallet with a unique address, controlled by keys associated with that specific agent. Transactions are attributable to the agent itself, not to the human on whose behalf the agent is technically authorized to act.
A spend policy defining the scope of autonomous action. What amounts, counterparties, and transaction types can the agent execute without explicit approval? The policy is the contract between the agent owner and the agent — the formal statement of what autonomous action is authorized. Enforced at the wallet level, not the application level.
Direct transactional capability. The ability to initiate payments to other agents, services, and counterparties as a first-class operation — not as a side effect of human-facing payment infrastructure that happens to accept API calls.
An auditable transaction history. A record of every financial action the agent took, with the policy context that authorized each action. The audit trail that makes agent financial autonomy governable rather than just technically possible.
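A minimal sketch can show how the four properties fit together. Everything below is invented for illustration, with no real chain, custody, or settlement backend behind it: a per-agent wallet identity, a spend policy enforced at the wallet layer, direct payment capability, and an audit record that carries the policy decision for every action.

```python
# Minimal sketch of the four properties (invented names, no real backend):
# per-agent identity, wallet-level policy enforcement, direct payment,
# and an audit trail with policy context.
import uuid
from dataclasses import dataclass, field

@dataclass
class SpendPolicy:
    max_per_tx_usd: float           # amount cap per transaction
    allowed_counterparties: set     # whitelist of payees
    allowed_tx_types: set           # e.g. {"invoice", "rebalance"}

    def authorizes(self, amount, counterparty, tx_type):
        return (amount <= self.max_per_tx_usd
                and counterparty in self.allowed_counterparties
                and tx_type in self.allowed_tx_types)

@dataclass
class AgentWallet:
    agent_id: str
    policy: SpendPolicy
    # Placeholder address: a real deployment would derive this from the
    # agent's own keypair, not a UUID.
    address: str = field(default_factory=lambda: "0x" + uuid.uuid4().hex)
    audit_log: list = field(default_factory=list)

    def pay(self, amount, counterparty, tx_type):
        """Enforcement lives here, at the wallet, not in the application."""
        ok = self.policy.authorizes(amount, counterparty, tx_type)
        self.audit_log.append({
            "agent": self.agent_id,      # attributed to the agent itself,
            "wallet": self.address,      # under its own identity, with the
            "amount_usd": amount,        # policy decision that applied
            "counterparty": counterparty,
            "tx_type": tx_type,
            "authorized": ok,
        })
        if not ok:
            raise PermissionError("outside spend policy; escalate to a human")
        return {"status": "settled", "from": self.address}

wallet = AgentWallet(agent_id="ap-agent-01",
                     policy=SpendPolicy(5_000, {"vendor:acme"}, {"invoice"}))
receipt = wallet.pay(1_200, "vendor:acme", "invoice")   # within policy
```

Note that denied attempts are logged too: the audit trail records every decision the policy made, not just the payments that settled, which is what makes the agent's autonomy reviewable after the fact.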
The infrastructure question
Genuine financial autonomy for agents isn't a theoretical future state. The infrastructure to build it exists today: non-custodial agent wallets with wallet-level policy enforcement, USDC settlement on programmable blockchains, and KYA compliance frameworks that link agent identity to verified ownership chains. What isn't yet universal is the engineering decision to use it.
The path of least resistance — proxy models on human financial infrastructure — is technically simpler in the short term. But every agentic finance system will eventually face the attribution problem, the constraint calibration problem, and the horizontal scaling problem. Building on infrastructure designed for agents avoids all three from the start. Proco is that infrastructure. The question isn't whether to build on agent-native rails — it's when.
Further reading: Why Your AI Agent Needs Its Own Wallet · KYA — Know Your Agent · Start building — GitHub →