AI trading agents now execute a growing share of crypto order flow with little or no human input — but the safeguards around them have not kept pace. The result is a new kind of market risk that shows up both in individual account security and in the collective behaviour of autonomous systems at scale.
The use of AI in crypto trading has reached a tipping point over the past year. Early bots followed simple, fixed rules for buying and selling. Today’s agents ingest news feeds, social sentiment and on-chain data in real time, then turn those signals into actual trades with almost no human oversight.
When they work as intended, the benefits are clear: round-the-clock market monitoring, rapid reaction to changing conditions and consistent rule enforcement without emotional bias. That makes them particularly attractive to institutions, not only as trading tools but as a way to extend market coverage and standardise execution without building large trading desks.
The problem is that the safeguards around these systems haven’t kept pace with adoption. For individual users, weak permissions and poor oversight can quickly lead to painful losses. At scale, the biggest danger is that many agents may respond to the same flawed or misleading signals at once, herding into the same trades and threatening market integrity.
The Problem Starts With Permissions
Many traders do not fully understand what they’ve authorised an agent to do. On centralised exchanges, that exposure usually starts with API keys.
Configured conservatively, the key permits trade execution and little else. Configured loosely, it can grant withdrawal rights or broader account access the agent doesn’t need. The 3Commas breaches in 2022 and 2023 are clear examples of what happens when this goes wrong: around 100,000 user API keys were exposed, contributing to losses of more than $20 million, and many of those keys were configured more permissively than the bots required.
Limiting an agent to trade-only access and disabling withdrawals is an important first step, but it only solves part of the problem. An agent with execution rights can still destroy value through rogue trades. An attacker doesn’t need withdrawal access if they can manipulate what the agent sees or how it behaves. Security research from SlowMist has shown how malicious instructions planted in data feeds, Discord channels or third-party APIs can be absorbed into stored context and influence trading across multiple sessions. Plugins and skill extensions create similar exposure by expanding what the agent can do — and what an attacker can reach if those components are compromised. These attacks can push an agent into the wrong market, the wrong order size or the wrong side of a trade, allowing an adversary to steal funds through trading rather than direct withdrawal.
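One practical defence against this kind of injection is to screen untrusted feed text before it enters the agent’s context or memory. The sketch below is illustrative only: the patterns, function names and quarantine policy are assumptions, not a known product or standard, and a real deployment would need far more robust detection.

```python
import re

# Illustrative patterns only. The principle is to flag instruction-like
# content arriving through data feeds before it can be stored as context.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"transfer .* to address",
    r"set (max )?order size",
    r"switch to (pair|market)",
]

def screen_feed_entry(text: str) -> bool:
    """Return True if the entry looks safe to store, False if it should
    be quarantined for human review instead of entering agent memory."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

Screening like this is necessarily incomplete, which is why it belongs alongside hard execution limits rather than in place of them.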
The agent doesn’t even need to be attacked to cause serious damage. Without position limits, drawdown thresholds or a kill-switch, a model that misreads a signal, interprets noise as conviction or trades into bad conditions can do substantial harm on its own.
On DeFi platforms, the exposure is even more direct. Agents typically hold private keys or session authorisations without an intermediary managing the credential, so a compromised key or mis-scoped authorisation can be drained within seconds and the resulting transactions cannot be reversed.
In all these cases, the underlying mistake involves giving live market access to a system whose permissions, constraints and operating boundaries were never properly defined.
How AI Agents Create Market-Level Risk
The bigger risk doesn’t come from one badly configured agent. It comes from the fact that AI agents increasingly draw on the same inputs, are trained on similar data and end up behaving in similar ways.
When a large group of agents sees the same signal and reacts at the same time — even without talking to each other — they can move the market together. Research into homogeneous deep learning in financial markets, undertaken by former SEC Chair Gary Gensler, has shown how competitive pressure tends to push developers toward similar architectures and, by extension, toward similar failure modes.
Crypto markets have already shown how this kind of concentration amplifies stress amid thinning liquidity. The October 2025 flash crash, the largest single liquidation event in crypto’s history, saw $19.3 billion in forced liquidations across roughly 1.6 million accounts, with Bitcoin losing 14% of its value before rebounding within the hour. The direct causes are still debated and no public evidence links the event specifically to AI agents, but it illustrates the structure these systems are being deployed into, where automated liquidation engines, leverage and cross-margin systems can interact to turn a local price move into something much larger. What makes that prospect more concerning is that the herding behaviour behind it requires no malicious intent — or any intent at all.
A 2025 paper from Wharton and HKUST suggests the problem may run deeper. Researchers put AI trading agents in simulated markets and found they started acting like a cartel — collectively reducing aggressive trading to protect shared profits — even though they weren’t designed to cooperate.
That points to a broader requirement than tighter user-side controls. If agentic trading is to scale safely, markets will need more variation in how these systems are built and stronger limits on how they behave under stress.
Practical Steps to Reduce Risk
For users, the first line of defence is credential scope. API keys should be restricted to trade-only, with withdrawal rights removed and IP whitelisting enabled wherever the platform allows. Keys should be rotated regularly and old credentials deleted from both the exchange and the agent’s database. Bitfinex, for example, provides granular API key permissions scoped separately to trade, read and withdraw functions, alongside IP whitelisting across up to 20 addresses per key.
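The checks above can be automated before an agent is ever connected. The following sketch assumes a hypothetical `key_info` structure and permission names — most venues expose equivalent metadata for each API key, but no specific exchange API is being described here.

```python
# A hypothetical pre-deployment check. The permission names and the
# key_info structure are assumptions, not a real exchange's API.
REQUIRED = {"read", "trade"}
FORBIDDEN = {"withdraw", "transfer", "account_admin"}

def key_is_safe_for_agent(key_info: dict) -> bool:
    """Refuse to deploy unless the key is trade-only and IP-restricted."""
    perms = set(key_info.get("permissions", []))
    has_ip_allowlist = bool(key_info.get("ip_whitelist"))
    return (
        REQUIRED <= perms            # can read and trade...
        and not (FORBIDDEN & perms)  # ...but cannot move funds
        and has_ip_allowlist         # and is pinned to known IPs
    )
```

Running a gate like this at startup turns credential hygiene from a one-off setup task into something enforced every time the agent launches.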
But tight credentials only solve part of the problem. They do not determine what the agent can trade, how much risk it can take, or when it should stop. Those boundaries have to be imposed at the agent level. An agent with execution rights needs hard rules about the venues and pairs it can touch, with low-cap and thinly traded assets excluded. Beyond that, it needs a ceiling on its own behaviour: a drawdown threshold, a kill-switch that pauses activity after abnormal losses and a cap on how much it can trade in a single session. These are the controls users tend to skip when focused on getting the agent live, and they are usually the difference between a contained incident and a drained wallet.
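Those agent-level boundaries can be expressed as a small circuit-breaker layer that every order must pass through. This is a minimal sketch under stated assumptions — the thresholds, pair list and interface are illustrative, not a standard design.

```python
from dataclasses import dataclass

@dataclass
class RiskGuard:
    """Hypothetical agent-level circuit breaker; all values illustrative."""
    max_drawdown_pct: float = 5.0        # halt after losing 5% from peak equity
    max_session_volume: float = 50_000   # cap on notional traded per session
    allowed_pairs: frozenset = frozenset({"BTC/USD", "ETH/USD"})
    peak_equity: float = 0.0
    traded_volume: float = 0.0
    halted: bool = False

    def allow_order(self, pair: str, notional: float, equity: float) -> bool:
        if self.halted:
            return False
        self.peak_equity = max(self.peak_equity, equity)
        drawdown = (
            100 * (self.peak_equity - equity) / self.peak_equity
            if self.peak_equity else 0.0
        )
        if drawdown >= self.max_drawdown_pct:
            self.halted = True           # kill-switch: nothing trades until reset
            return False
        if pair not in self.allowed_pairs:
            return False                 # venue/pair outside the agent's mandate
        if self.traded_volume + notional > self.max_session_volume:
            return False                 # session volume cap reached
        self.traded_volume += notional
        return True
```

The important property is that the guard sits outside the model’s reasoning: however confident the agent is in a trade, the thresholds still apply.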
The hardest layer to police is the one most operators never look at. Memory logs should be reviewed periodically for entries the agent couldn’t plausibly have picked up from ordinary trading, and any plugins or skill extensions inventoried, with operators able to say where each came from and what it is allowed to do. Adversarial inputs survive across sessions in this layer, precisely because nobody is reading them.
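That review can be partly mechanised. The sketch below assumes a hypothetical memory-log format and plugin registry — no real agent framework is being described — but the underlying idea is simple: every stored entry should trace back to a known source, and every plugin to a recorded origin.

```python
# Sketch of a periodic memory-and-plugin audit. The log format and
# registry structure are assumptions for illustration.
KNOWN_SOURCES = {"exchange_fills", "price_feed", "operator"}

def audit_memory(entries: list) -> list:
    """Return entries whose provenance is missing or unrecognised,
    flagged for human review."""
    return [e for e in entries if e.get("source") not in KNOWN_SOURCES]

def audit_plugins(installed: dict, registry: dict) -> list:
    """Return installed plugins with no recorded origin in the registry."""
    return [name for name in installed if name not in registry]
```

An entry with no recognisable provenance is exactly the kind of artefact an injection attack leaves behind, which makes this a cheap periodic check for a layer that otherwise goes unread.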
A Useful Tool — But Only If Properly Constrained
AI trading agents aren’t inherently a security liability. Used with the right constraints, they enforce rules consistently, ignore short-term noise and operate without interruption in ways humans can’t. Much of the danger lies in the gap between what these systems are capable of and what individual users actually configure them to do.
For individual traders, that means treating an agent as live market access handed to an autonomous system, not software running quietly in the background. For the market, it means recognising that the problem does not end with user-side controls. If large numbers of agents are built on similar assumptions, trained on similar data and allowed to behave similarly under stress, the result is a more fragile execution environment. For agentic trading to become more resilient, it will likely need stronger constraints and greater variation than it currently exhibits.
There’s no doubt the technology is useful. Whether it becomes dependable market infrastructure will depend less on the agents themselves than on the discipline, diversity and safeguards surrounding their use.