AI Agent Wallets
How AI Agents will transact on the web
TL;DR
AI agents need a wallet to transact on the internet, which represents a massive market
This is hard because of institutional and compliance challenges (PCI DSS etc)
But these challenges present a very valuable opportunity to tackle the hardest channel first: TradFi
AI agent wallet key requirements:
Deep Trust
Interoperability
Reliability
Purchase execution
Distribution through partnering with AI app developers and fintechs, and by creating a trusted 3rd-party brand
Seriously, read the rest of this essay, I have many thoughts
Where we are right now
Currently, our entire web model is based on two repeated steps: (1) view and (2) act. This is because the web is built for humans, and its interfaces are built for human experiences. This paradigm extends to every part of the web, even the underlying financial infrastructure. These payment rails are built for humans. The checkout process when you buy shoes is built for humans. Paying your rent via Zelle is built for humans. Ordering and paying for Uber Eats is built for humans.
But this is now changing. With the advent of AI, there's significant interest in building web agents that can navigate the web completely autonomously. This is a paradigm shift away from the bots of yesterday, which required pre-defined instructions.
New agents are able to navigate the web, use tools, and execute on behalf of their user. For example, shopping agents (like fetchr.so and vetted.ai) search the web for products and return results to their users. In production GA, Perplexity has shown that users can shop directly within its experience by integrating with Stripe Pay & Shopify Pay.
Visual financial experiences are the universal gateway for financial transactions because they were originally built for humans. Ultimately, this raises the question: how do we get agents to autonomously execute financial transactions on an internet built for humans?
The Problem
AI agents are an LLM engine with tools to execute a task on behalf of a user. These tools help the AI agent execute on an ultimate goal, meaning agents are limited by the tools they have access to. Since there are no financial tools available, they aren't able to make financial decisions on behalf of the user, like paying for coffee or paying for private information.
To enable financial tools, financial data has to be transmitted at some point in the stack. The problem is that LLM providers (OpenAI, Anthropic) can't handle payment data, because sending personally identifiable information (PII) to external LLMs is highly risky. Credit card and billing information may get processed by the LLM, and having PII processed in an insecure environment creates operating challenges for both the LLM provider and the developer.
AI agent frameworks can't process PII either, because they suffer from a similar problem to the providers: having credit card and PII data flowing through the underlying system on the server side requires an inordinate amount of effort to create fully secure environments.
Finally, and perhaps most importantly, both of these layers are required to be Payment Card Industry Data Security Standard (PCI DSS) compliant to actually process financial transactions such as credit card payments. Any system that holds, processes, transmits, or interacts with credit card or financial data needs to be PCI compliant. This presents a large problem for application-layer AI agents that need to transact securely.
How it’s done right now
There are a few options right now for implementing an AI agent with financial autonomy. The easiest is asking your customer for their credit card directly, which puts the app developer in a precarious position: not only are they not PCI compliant, they're also potentially exposing customer credit card information insecurely, which creates liability for their app.
Other options include a makeshift wallet built on AWS Secrets Manager (or another "secure" location), with temp variables injecting the credit card info. Unfortunately, the data is still exposed to the LLM when it reads the DOM, or visually in the UI for VLMs. This leads to the same PCI compliance and insecure-transaction problems described earlier.
The only company with actual agentic shopping right now is Perplexity, and it only enables payments on Stripe-powered checkouts by directly partnering with Stripe to expose the underlying API structure that executes the transaction.
Without financial autonomy, these LLMs, and AI agents generally, are no more than fancy search and information-acquisition engines, requiring humans in the loop to finalize and re-execute decisions.
The solution
It would be too onerous for every single AI agent application developer to apply for Stripe Issuing or to maintain PCI compliance. The solution is an embedded wallet that handles PCI compliance, virtual card issuance, and KYC/KYB processes. The embedded wallet handles the user's PII and financial data without exposing it to the underlying models or providers.
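To make the separation concrete, here is a minimal sketch of how an embedded wallet could keep card data out of the agent's context. All names (`AgentWallet`, `issue_virtual_card`, `pay`) are illustrative, not a real API: the point is that the agent only ever holds an opaque token, while the raw card number stays inside the PCI-scoped boundary.

```python
class AgentWallet:
    """Holds PII and payment credentials inside a PCI-compliant boundary.

    The agent only ever sees an opaque token, never the card number.
    """

    def __init__(self):
        self._vault = {}   # PCI-scoped storage, never exposed to the LLM
        self._next_id = 0

    def issue_virtual_card(self, card_number: str, billing_name: str) -> str:
        """Store real card details and return an opaque token for the agent."""
        token = f"vcard_{self._next_id}"
        self._next_id += 1
        self._vault[token] = {"number": card_number, "name": billing_name}
        return token

    def pay(self, token: str, merchant: str, amount_cents: int) -> dict:
        """Execute a payment using the vaulted card. The agent passes only
        the token, merchant, and amount; card data never leaves the vault."""
        if token not in self._vault:
            raise KeyError("unknown card token")
        # A real system would call the card network / processor here.
        return {"status": "approved", "merchant": merchant, "amount": amount_cents}


wallet = AgentWallet()
token = wallet.issue_virtual_card("4242424242424242", "Ada Lovelace")
receipt = wallet.pay(token, "coffee-shop.example", 450)
print(token, receipt["status"])  # → vcard_0 approved
```

Everything the LLM sees (the token, the merchant, the amount) is safe to log and transmit; everything PCI-scoped stays behind the `pay` call.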
Building this product
Building an AI agent's wallet comes with a few essential requirements:
Enable deep levels of trust with the 3rd party customer
Any AI wallet that emerges will need to address the major concern of trust among its users. Even if a wallet were PCI compliant, there's a larger social change that has to occur before these types of transactions become normal. Despite this, there's a huge opportunity at the inception of this technology. Similar to Airbnb's early trust questions, which were resolved as technology, verification systems, and social culture shifted, this shift will also occur in the AI agent financial space.
Developer Experience & Interoperability
Similar to Stripe's early playbook, there needs to be a dead-simple developer experience. The SDK, implementation, and deployment need to be so simple that production deployment takes less than a day. The wallet needs to handle KYC/KYB, and eventually Know Your Agent (KYA), requirements, and act much like an embedded iframe inside the deployed agent.
Secondly, with so many emerging agent frameworks and providers, the wallet has to be interoperable across frameworks and hosted providers. Every agent and every payment terminal needs to interoperate through the same well-defined payment actions.
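One way to picture "the same well-defined payment actions" is a small framework-agnostic contract that every adapter implements. This is a hypothetical sketch; the method names (`authorize`, `capture`) and the `InMemoryWallet` stand-in are illustrative, not a real SDK.

```python
from abc import ABC, abstractmethod


class PaymentActions(ABC):
    """The shared contract every agent framework adapter implements."""

    @abstractmethod
    def authorize(self, merchant: str, amount_cents: int) -> str:
        """Reserve funds; returns an authorization id."""

    @abstractmethod
    def capture(self, auth_id: str) -> bool:
        """Settle a previously authorized payment; False if unknown/settled."""


class InMemoryWallet(PaymentActions):
    """Toy implementation standing in for the hosted wallet service."""

    def __init__(self):
        self._auths = {}
        self._counter = 0

    def authorize(self, merchant: str, amount_cents: int) -> str:
        auth_id = f"auth_{self._counter}"
        self._counter += 1
        self._auths[auth_id] = (merchant, amount_cents)
        return auth_id

    def capture(self, auth_id: str) -> bool:
        return self._auths.pop(auth_id, None) is not None


wallet = InMemoryWallet()
auth = wallet.authorize("shoes.example", 8999)
print(wallet.capture(auth))  # → True: settled once
print(wallet.capture(auth))  # → False: cannot capture twice
```

Because every framework talks through `PaymentActions`, swapping the agent framework or the hosted provider never changes the payment surface.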
High reliability
It goes without saying that every financial technology needs high reliability to be an effective solution. Specifically, AI agents need to access their wallets at any time, and the payment rails need to be running all the time.
Ability to interface or act on behalf of the agent to complete a purchase
At its core, the wallet needs to take money from a customer, verify that the transaction should occur, and then pay the merchant. This simple paradigm is not significantly different from existing solutions in procurement, marketplaces, and merchant-of-record businesses that need to execute on behalf of their customers. We can leverage pre-existing financial patterns to authorize these transactions.
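The three-step flow (collect, verify, pay out) can be sketched as a single guarded transfer. This is a toy model, not an implementation; the function name and the boolean `verified` flag are illustrative stand-ins for the real verification pipeline.

```python
def process_purchase(customer_balance_cents: int, amount_cents: int, verified: bool):
    """Collect from the customer, verify, then pay the merchant.

    Returns (new_customer_balance, merchant_payout); raises if the
    transaction fails verification or the funds aren't there.
    """
    if not verified:
        raise PermissionError("transaction not verified")
    if amount_cents > customer_balance_cents:
        raise ValueError("insufficient funds")
    return customer_balance_cents - amount_cents, amount_cents


balance, payout = process_purchase(10_000, 2_500, verified=True)
print(balance, payout)  # → 7500 2500
```

Verification here is the interesting part: in practice it's where the merchant-of-record patterns from procurement and marketplaces (spend limits, approval rules, dispute handling) would plug in.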
But Stripe?
The core threat to this business is Stripe: they have the distribution, developer love, and infrastructure to build this out. I believe they won't, for four core reasons:
Consumer management & processing
Stripe's core business model is to provide developers with easy-to-use APIs for building financial products. If they wanted to move into direct customer fund management, they would have built out a core bank already. Managing B2B customers is demanding enough; scaling to millions of direct customers, each with their own nuances, adds complexity that is better left at the application level.
Enterprise focus
Stripe's innovation happened because of its ease of use and simplicity, and its focus is now primarily on B2B deals with larger e-commerce partners like Walmart and Shopify. Enterprise deals require a large amount of custom buildout, so spending significant resources validating AI-native payment flows and mechanisms that haven't been invented yet doesn't make sense for them.
New layer of authentication & fraud prevention
Stripe's existing infrastructure is built for human-vs-bot prevention: primarily, differentiating between humans actually attempting to purchase and bots attempting to scalp. In this new paradigm, fraud prevention takes on a different meaning: differentiating between bots sent by humans ("good" AI agents) and bots sent for scalping or other nefarious activities ("bad" AI agents). This fundamentally requires rewriting Stripe's entire Radar infrastructure and its core payments processing. It's much easier to build something from scratch than it is to retrofit an existing system.
Limited payment network
Stripe's Payment Intents system only works with Stripe's own payment processing, which severely bottlenecks capabilities. Credit card details are the universal interface across processors; visual web checkouts need credit card information to process payments globally.
Massive TAM x AI Agents
The existing payments space is already enormous, with credit card GTV accounting for $5.6T, so integrating with and supporting this market is critical to garnering widespread adoption. With e-commerce transaction volume at $1.2T, it's clear that the vast majority of online payments go through credit cards.
E-commerce sales in the US were $1.119T in 2023. Conservatively assume 10% of these transactions are executed by AI agents in the future, representing roughly $112B of GTV. A 1% transaction fee on that volume represents about $1.1B in revenue.
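Spelled out, the back-of-envelope math is:

```python
us_ecommerce_2023 = 1.119e12   # $1.119T US e-commerce sales (2023)
agent_share = 0.10             # assume 10% of transactions executed by AI agents
take_rate = 0.01               # 1% transaction fee

agent_gtv = us_ecommerce_2023 * agent_share
revenue = agent_gtv * take_rate
print(f"GTV: ${agent_gtv / 1e9:.0f}B, revenue: ${revenue / 1e9:.2f}B")
# → GTV: $112B, revenue: $1.12B
```

Both the 10% agent share and the 1% take rate are assumptions; the model is linear in each, so halving either halves the revenue estimate.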
Existing fintechs in the space, such as Stripe, Adyen, PayPal (Braintree), and Square, are multi-billion-dollar companies that operate at the processor level. Getting there required convincing early e-commerce sites to integrate their solutions, and those integrations have since given them staying power. AI agents present the next frontier of financial transactions: just as Stripe convinced merchants to use its checkout, we can develop network effects by partnering with early application-layer AI agents.
Distribution
The key enabling factor behind embedded AI agent wallets will be the network effects they garner from usage. As more customers (merchants and consumers) use AI agents and their wallets, more payment networks will enable them, and ultimately there will be agent-to-agent interactions.
Partnering at 3 different levels is essential to get the flywheel going:
Developer level
Partnering with AI agent frameworks and developer brands in the AI agent space is critical. AI agent apps are being built right now, so it's essential that this is a tier-1 product these developers love, to create brand loyalty. Investing in open source and giving back to developers through OSS contributions and networks, and being deeply embedded with them, is critical.
Application level
At the application level, developers and early-stage startups are building products that use AI agents right now. The earliest companies will grow significantly in the next few years, and partnering with them early allows us to embed, develop network effects, and grow alongside them. These application-level products (shopping agents, coding agents, financial agents, data agents, etc.) give us second-order distribution, and clear branding positions us as a trusted 3rd party. Furthermore, value is being generated at the application layer from AI-native offerings (e.g., AI receptionists), pushing growth at this layer for the future.
Credit Providers
Early on, we need to use the existing payment rails, which means working with Banking-as-a-Service (BaaS) providers. Sponsor banks, embedded financial institutions, and especially embedded credit products will be our final leverage in getting distribution. Similar to Bilt, credit and lending institutions (e.g., Wells Fargo) desire increased deposits (through funded accounts) and additional customers for their credit products. Partnering with fintechs (Stripe) and BaaS providers allows us to harness their existing rails and distribution.
Risks
There are risks; here are some, and how we address them:
Trust
Risk: Trusting new financial systems is scary for businesses and consumers.
Strategy: Social culture has already shifted significantly around using credit, loans, and new financial products.
Humans in the loop via purchase-flow callbacks provide final approval for the AI agent to complete its task, meaning the ultimate decision still rests with the human cardholder.
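A purchase-flow callback can be as simple as a function the human (or their policy) supplies, which the wallet consults before completing any charge. This is an illustrative sketch; `execute_purchase` and `human_approval` are hypothetical names.

```python
def execute_purchase(merchant: str, amount_cents: int, approve_callback) -> dict:
    """Only completes the purchase if the human-supplied callback approves it."""
    if not approve_callback(merchant, amount_cents):
        return {"status": "declined_by_user"}
    # ...the wallet would charge the vaulted card here...
    return {"status": "completed", "merchant": merchant, "amount": amount_cents}


def human_approval(merchant: str, amount_cents: int) -> bool:
    """A simple stand-in policy: auto-approve anything under $50."""
    return amount_cents <= 5_000


print(execute_purchase("coffee-shop.example", 450, human_approval)["status"])
# → completed
print(execute_purchase("tv-store.example", 99_900, human_approval)["status"])
# → declined_by_user
```

In production the callback would be a push notification or in-app prompt rather than a threshold, but the shape is the same: the agent proposes, the cardholder disposes.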
Abuse and Legal
Risk: The legal framework around AI agents and finance hasn't been fleshed out yet, and there are serious risks of fraud and abuse.
Strategy: The fact that there isn't a clear legal framework around AI in finance is actually a benefit. Similar to Coinbase, once we're deeply embedded into these products, we're able to shape the narrative around financially autonomous AI agents.
On abuse: this is a serious risk factor, but it's the same risk that any payment processor or merchant takes on. Differentiating between "good" bots and "bad" bots requires two steps: (1) Know Your Agent (KYA), which verifies intent and processes, and (2) approvals from humans in the purchase flow.
X can build this
Risk: {Stripe, Google, OpenAI, etc} can build this.
Strategy: While Google and OpenAI have the ability, solving financial problems isn't in their core mission right now. And we addressed Stripe's challenges above.
Developer adoption
Risk: Developers are finicky and have exacting standards for the tools they come to love.
Strategy: It's true that developers are the hardest people to get to open their wallets, so by building infrastructure that developers love, we can win adoption first and monetize through transaction volume later.
LLM reliability
Risk: This all depends on LLMs getting smarter over time. What if they don't, and this is as good as it gets?
Strategy: We're not actually dependent on the LLMs getting better; we're dependent on the AI agents getting better. The tooling and layers surrounding the LLM will improve, which improves reliability, which improves trust downstream.
Final Thoughts
It's clear that AI agents will be the next frontier of technology. We're already seeing AI agents used in RPA processes (Basepilot, Induced, etc.) as well as vertical agents in coding (Cursor Composer), finance (TaxGPT), and other areas of our lives. The common thread is that all of these AI agents will need to transact on the internet in a novel way. The traditional financial system isn't going away, and we need backwards compatibility with it in order to move forward.

