Posted by: Cheyenne Vyska on behalf of Brian Monroe - 11/25/2025
ACFCS Special Contributor Report: When it Comes to Oversight of Securities Space, Are Regulators Ready for AI-Enabled, Algorithmic Financial Crime?
The skinny:
- In this special ACFCS Contributor Report, a law student tackles a mushrooming conundrum for the multi-trillion-dollar securities space: what human, firm or third-party technology vendor is responsible if AI engines and their enigmatic algorithms run amok?
- Moreover, what should trading firms and broker-dealers large and small be doing now to better position themselves as agile, adroit market movers – arming themselves with AI to stay competitive but not allowing these thinking machines to make decisions a human wouldn't?
- In that same vein, Jara Kider offers some insight and guidance on what the main regulators of the U.S. securities sector could do – taking cues from a government agency overseeing aviation. The parallel is an apt one – and just might prevent a high-flying stock market from crashing down.
By Jara Kider
A third-year law student (3L) at the Benjamin N. Cardozo School of Law
November 25, 2025
jkider@law.cardozo.yu.edu
With editing and minor content contributions from ACFCS Chief Correspondent, Brian Monroe
In 2012, Knight Capital Group crumbled because of a software glitch. The firm lost $440 million when its trading software placed millions of erroneous orders. This went on for 45 minutes before the trading "kill switch" was flipped.
One would think Knight Capital's collapse would have made traders and trading firms proceed with caution around computer trading software. Instead, high-frequency trading bots, AI investment advisors, and auto arbitrage systems are now standard in the field.
Not only do these computer-based programs operate at an unprecedented speed, but they also work in the shadows.
The Securities and Exchange Commission (SEC) and Financial Industry Regulatory Authority (FINRA) currently enforce laws written before technological advancements in trading.
These laws are designed to detect human-based fraud. Now, with machines making financial decisions, being taught how to perform trades (and subsequently, how to be deceptive), the issue that regulators must grapple with is whether they can recognize when machines break the law.
This raises the question: how can any human regulator keep up with and comprehend these computer-based, artificial intelligence (AI)-enhanced programs?
Today's Regulations are Built for People
The SEC's regulations aim to deter human misconduct: fraud, insider trading, and market manipulation. The Exchange Act, in particular, guards against intentional human misconduct.
Similarly, the Investment Advisers Act outlines conduct directly tied to investment advisers.
Both regulations highlight the human element.
Moreover, it is not just the SEC’s regulations that are built for people.
FINRA is a self-regulatory organization responsible for overseeing broker-dealers, trading practices, suitability, and investor protection.
FINRA is the front line, helping to shape trading behavior to protect investors.
But those regulators face herculean challenges – from human reviewers understanding how and where to focus their resources to choosing which technologies and programs to deploy to uncover market miscreants, schemes and scammers.
One of the most basic questions is who is making the trading decisions and whom to blame and penalize when things go wrong – the trading firms using the programs, the backend tuners and testers, or the third-party technology vendors.
Today’s high-frequency trading bots and auto arbitrage systems in some respects operate and act autonomously.
What do we mean by these terms?
High-frequency trading bots are automated computer systems that execute a large number of trades extremely fast.
Auto arbitrage systems are software programs that identify and execute trades to profit from temporary price differences of the same or related products across exchanges.
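To make the mechanics concrete, here is a minimal, hypothetical sketch of the cross-exchange price check an auto arbitrage system performs. The venue names, quotes and fee figure are illustrative assumptions, not real market data; a production system would run the same comparison continuously against live order-book feeds.

```python
# A minimal sketch of the cross-exchange check an auto arbitrage system runs.
# The Quote structure, venue names, prices and fee are hypothetical
# illustrations, not live market data or any firm's actual code.

from dataclasses import dataclass

@dataclass
class Quote:
    exchange: str
    bid: float   # highest price a buyer on that venue will pay
    ask: float   # lowest price a seller on that venue will accept

def find_arbitrage(quotes: list[Quote], fee_per_leg: float = 0.01):
    """Return (buy_venue, sell_venue, spread) when buying on the cheapest
    venue and selling on the richest one is profitable after fees."""
    best_ask = min(quotes, key=lambda q: q.ask)   # cheapest place to buy
    best_bid = max(quotes, key=lambda q: q.bid)   # best place to sell
    spread = best_bid.bid - best_ask.ask
    if best_bid.exchange != best_ask.exchange and spread > 2 * fee_per_leg:
        return best_ask.exchange, best_bid.exchange, spread
    return None

# Example: the same security quoted on two venues a few cents apart.
quotes = [Quote("EXCH_A", bid=100.02, ask=100.05),
          Quote("EXCH_B", bid=100.11, ask=100.14)]
print(find_arbitrage(quotes))  # ('EXCH_A', 'EXCH_B', ~0.06)
```

Real systems repeat this comparison thousands of times per second, which is precisely why no human reviewer can watch them in real time.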
These computer programs alone can facilitate strategies that impact the markets, including quote stuffing, spoofing, electronic front-running, wash trading, and mass misinformation spreading.
What does this look like on real and virtual trading floors?
- Quote stuffing is when the market is flooded with a massive number of buy and sell orders that are then cancelled.
- Spoofing is when a large order is placed with no intention of ever being executed. This is to create a false sense of supply and demand.
- Electronic front-running is when traders use non-public information of a large impending client order and execute their own trade first, thus profiting from the predictable price movement.
- Wash trading is when the same financial instrument is bought and sold at the same time to create a false impression of market demand.
All of these strategies are illegal in the United States, and all of them can inflict serious harm, including market distortion, financial fraud, and investor losses.
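As an illustration of how such activity might be flagged, here is a minimal surveillance sketch, in Python, of the quote-stuffing pattern described above: a participant flooding the book with orders and cancelling nearly all of them within a short window. The event format, window and thresholds are assumptions made for the example, not actual SEC or FINRA surveillance parameters.

```python
# A toy detector for the quote-stuffing pattern: many new orders, almost all
# cancelled, inside a short time window. Thresholds and the event layout are
# illustrative assumptions only.

from collections import defaultdict

def flag_quote_stuffing(events, window_ms=1_000, min_orders=500, min_cancel_ratio=0.95):
    """events: iterable of (timestamp_ms, participant_id, action), where action
    is 'NEW' or 'CANCEL'. Returns participants whose order flow inside a
    window looks like stuffing (huge volume, nearly all of it cancelled)."""
    counts = defaultdict(lambda: {"NEW": 0, "CANCEL": 0})
    window_start = None
    flagged = set()
    for ts, participant, action in sorted(events):
        if window_start is None or ts - window_start > window_ms:
            counts.clear()            # simple fixed window for the sketch
            window_start = ts
        counts[participant][action] += 1
        new = counts[participant]["NEW"]
        cancelled = counts[participant]["CANCEL"]
        if new >= min_orders and cancelled / new >= min_cancel_ratio:
            flagged.add(participant)
    return flagged
```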
As regulations currently stand, they are prepared to prosecute human misconduct, but remain largely silent on computer-generated misconduct.
Since high-frequency trading bots and auto arbitrage systems act autonomously, to whom do the regulations apply?
Additionally, current regulations rely on enforcement chiefly through “after-the-fact” investigations. Regulations and their associated program requirements and penalties for failings are essentially remedial measures, not proactive. By the time enforcement comes, the trades have already been executed, and the market has already been manipulated.
Accountability is also blurred after the fact.
Current regulations call for individuals to explain their actions during the investigation stage. However, computer trading software is not always explainable.
Computer models learn and train themselves after their initial creation. This makes it difficult for the creators to explain the model as it continues to change autonomously.
This creates a regulatory paradox: How can computer trading software users be held accountable when they cannot explain their evolving, expanding and ravenous models? The SEC’s regulations were designed with human fraudsters in mind, not advanced computer systems.
Algorithmic Financial Crime
High-Frequency Trading Algorithms
High-frequency trading algorithms are increasingly popular amongst today’s traders.
Essentially, these high-frequency trading algorithms exploit microsecond market data. These software programs use low-latency connections and real-time market data to maximize profit by exploiting market inefficiencies.
Human traders simply cannot act on microsecond market data; high-frequency trading algorithms can, giving their operators an unfair advantage, in part because smaller industry players cannot afford the high cost of building these programs.
Additionally, these algorithms manipulate the market via spoofing and quote stuffing. It is essential to note that both spoofing and quote stuffing are illegal.
Regardless, these programs can unintentionally collude or manipulate spreads. High-frequency trading algorithms, once deployed, learn from other high-frequency trading algorithms. This causes a wider range of high-frequency trading algorithms to act similarly.
The result: groups of algorithms that all function similarly without human intent – and potentially without company or examiner oversight.
It is important to note that computer-based programs rely on data. With the human element removed and replaced with automation, individuals must ensure that the machine is getting data from a reliable source.
Without reliable data, the algorithm will not function properly. This opens the door for data integrity, accuracy, and availability issues.
AI Investment Advice
With generative AI now available to the masses, often for free, these tools also offer stock recommendations.
I searched "what stocks should I buy today" in ChatGPT. The prompt returned three stocks to invest in: (1) PLD (Prologis, Inc.), (2) SPG (Simon Property Group), and (3) DLR (Digital Realty Trust, Inc.).
The AI engine made these picks for me with little fiduciary oversight. In short, the program lacks standards and duties similar to those of human advisors.
If the advice generated is misleading or simply biased, who is liable? It is not as if a human is manually responding to ChatGPT inquiries. Should securities regulators or other enforcement bodies target the creators of ChatGPT or the individuals whose data trained the underlying model?
Under current rules, it is far from clear whom to hold accountable for the advice rendered by AI programs.
Deepfake Data and Synthetic Market Manipulation
So far, this piece has focused largely on companies and insiders using AI to gain or create unfair advantages.
But there is a whole world of fraudsters outside the firm using AI tools to fool companies, dupe investors with inflated hype attributed to fictitious or impersonated leaders, and target and drain investment portfolios directly.
For example, AI programs can create false financial news.
Programs like Sora can create life-like videos of influential and powerful company executives seemingly issuing material statements. The AI-generated fake news can manipulate the market before the truth is publicly stated.
The speed at which fake data and news are generated is beyond any human reaction.
The SEC’s Regulations are Lagging
Global threat actor groups supercharged by AI have driven international fraud figures sharply upward, with some estimates suggesting AI-enabled fraud has quadrupled in just the last few years.
So how are regulators responding?
The SEC has a Market Abuse Unit (MAU). The MAU is a specialized team that uses data to detect market manipulation and financial crimes. The unit relies heavily on human analysts, though it does focus on emerging technologies that can facilitate fraud and misconduct.
The issue today is that the MAU faces a challenge many other government oversight bodies are wrestling with: updating its software quickly enough to detect financial fraud schemes that are designed to evade detection.
While the MAU has committed to combating computer-generated financial fraud, its sophistication and resources are likely not comparable to those of many private trading firms. Traders work on their algorithms daily, editing and morphing the programs to achieve the maximum profits possible.
The cold reality: The MAU likely can’t compete with the daily editing, tuning and pruning of such programs.
The MAU will consistently play a game of catch-up, racing to edit its technology to detect fraud stemming from computer trading programs. If the MAU does not adopt technological advancements as fast as private trading firms, it will always be behind.
Regulators will not be able to protect consumers from financial fraud if this is the case.
Private firms have AIs running at what can feel like the speed of light, while regulators are still adopting what many consider dated, even antiquated technology.
How Can the SEC Fix It?
First, the SEC needs to recognize, and codify in its rules, this notion: any time automated technology can think for itself and take decision-making out of the hands of human traders, there is immense risk.
Coding, running and tuning an AI system that needs sufficient, accurate and reliable data inherently carries risk.
Machines malfunction, data can be faulty, and, oftentimes, it is hard to understand how complex models function. The SEC needs to approach its rulemaking with this sentiment in mind.
For instance, the SEC can mandate that firms register and disclose all of their trading algorithms – in essence create a regulatory key to view the secret sauce of their “black box.” This means that all trading algorithms will be thoroughly documented and submitted to regulators – though many trading firms and their third-party technology vendors may chafe at such a rule, stating the information is proprietary.
Registration and disclosure are essentially a permit to operate the program. The SEC should require explainability of algorithms. If the individual software engineers cannot explain the model’s reasoning, then the company will not be considered to be in compliance.
There is no excuse for not knowing the reasoning behind a decision that the algorithm acts upon. The requirement of explainability adds a layer of accountability to those deploying the algorithm.
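As a concrete, and purely hypothetical, illustration of what explainability could look like in practice, the sketch below logs every automated order together with the inputs the model saw, its raw output and a plain-language reason, so a human reviewer can reconstruct the decision after the fact. The field names and values are assumptions made for the example, not any regulator's required format.

```python
# A minimal decision-log sketch: every automated order is recorded with the
# data it acted on and a human-readable reason. Field names are illustrative
# assumptions, not a prescribed regulatory schema.

import datetime
import json

def log_decision(model_id, model_version, inputs, signal, action, reason,
                 path="decision_log.jsonl"):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,          # which algorithm acted
        "model_version": model_version,
        "inputs": inputs,              # market data the model saw
        "signal": signal,              # the model's raw output
        "action": action,              # the order actually sent
        "reason": reason,              # plain-language explanation for reviewers
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: a momentum model buys 100 shares and says why.
log_decision(
    model_id="momentum-v2", model_version="2025.11.01",
    inputs={"symbol": "XYZ", "mid_price": 41.37, "one_min_return": 0.004},
    signal=0.82,
    action={"side": "BUY", "qty": 100, "order_type": "LIMIT", "limit": 41.40},
    reason="one-minute momentum above 0.003 threshold; within position limit",
)
```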
Although the MAU exists, it is not enough. The SEC should create a specific unit for AI trading systems. This unit should be able to test computer trading software to determine that it will not participate in financial fraud.
The SEC also needs to enforce liability for autonomous systems. Even if the system acts unintentionally, those benefiting from the system should be liable for its actions. There needs to be a fiduciary duty associated with the use of computer trading systems.
Additionally, the SEC needs to craft rules that are fair to both small and large industry players. It is important for the SEC to understand that not all industry players should be treated the same.
With big industry players diving headfirst into autonomous systems, the SEC may begin to craft rules that treat the use of computer-based programs as the industry norm.
This would put small industry players at a regulatory disadvantage. Small players will not be able to comply with the same regulatory framework as large firms due to cost.
Where FINRA Fits in
Like the SEC's regulations, FINRA's rules were built to regulate human traders and brokers, not autonomous algorithms.
FINRA's rules depend on human judgment. The rules use terms such as "suitability," "fair dealing," and "reasonable basis." All of these terms apply to humans, not to AI systems that make decisions without human intent.
FINRA Rule 2111 requires brokers to make investment recommendations based on what is appropriate for the clients.
However, what happens when AI models are making the recommendations?
AI systems cannot use the same “reasonable basis” as a human, as required by the rule.
If an AI system recommends an investment that is quite risky to a retiree, who is responsible?
FINRA’s rules as they currently stand appear ill-suited for AI-generated recommendations and an after-the-fact blame game.
FINRA has a Market Regulation Program. The program conducts market surveillance to identify potential market manipulation and fraudulent activities. To date, FINRA’s Market Regulation Program detects fraudulent activities such as wash trades, layering, and spoofing. However, with the use of AI, a new issue arises: how can FINRA detect newly invented AI market distortions?
FINRA's current tools would be hard-pressed to detect machine-learned fraudulent activity. On top of that, AI algorithms can alter their behavior to avoid detection, as they operate best in the shadows.
FINRA fines firms, traders, and brokers who conduct fraudulent activity. But, what can FINRA do to an algorithm?
The firms, traders, and brokers could simply claim that the AI model learned the fraudulent behavior “on its own.” This dynamic could allow firms, traders, and brokers to escape accountability for the actions of their AI algorithms.
How Can FINRA Fix It?
Similar to the SEC, FINRA can require AI and algorithmic registration. FINRA can require firms, traders, and brokers to register material algorithms.
Registration of such algorithms can include:
- purpose and risk,
- data sources used for model training, and
- risk controls and human oversight.
This registration does not require revealing proprietary code; rather, FINRA would just need risk factors, controls, and accountability structures.
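To show what such a filing might contain, here is a hypothetical registration record covering the items above. Every field name and value is an assumption made for illustration; FINRA has not published such a schema.

```python
# A hypothetical algorithm-registration record mirroring the items above:
# purpose and risk, training-data sources, and risk controls with named
# human oversight. Illustrative only, not an actual FINRA form.

algorithm_registration = {
    "firm_id": "CRD-000000",                      # placeholder identifier
    "algorithm_name": "Equity Market-Making v3",
    "purpose_and_risk": {
        "strategy": "two-sided quoting in listed equities",
        "principal_risks": ["runaway quoting", "adverse selection", "stale data"],
    },
    "training_data_sources": ["consolidated market data feed", "internal fill history"],
    "risk_controls_and_oversight": {
        "max_order_rate_per_second": 50,
        "max_open_notional_usd": 5_000_000,
        "kill_switch": True,
        "responsible_supervisor": "designated principal (name on file)",
    },
}
```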
FINRA can create an “AI and Algorithmic Supervision Rule.” FINRA can require firms to designate an individual responsible for each AI system or algorithm.
Stress testing should be required on a regular basis. Protocols for shutting down the system should be mandated. This means every system needs to have a “kill switch.”
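To make the "kill switch" idea concrete, here is a minimal sketch of a guard that halts all order flow once a loss limit or message-rate limit is breached. The limits, class and function names are assumptions for illustration, not a standard or required implementation.

```python
# A minimal "kill switch" sketch: a guard object sits between the strategy
# and the exchange and refuses to pass orders once a limit is breached.
# All limits and names here are illustrative assumptions.

class KillSwitch:
    def __init__(self, max_loss_usd=250_000, max_orders_per_second=100):
        self.max_loss_usd = max_loss_usd
        self.max_orders_per_second = max_orders_per_second
        self.realized_loss = 0.0
        self.orders_this_second = 0
        self.halted = False

    def record_fill(self, pnl_usd):
        """Called after every fill; trips the switch if losses pile up."""
        if pnl_usd < 0:
            self.realized_loss += -pnl_usd
        if self.realized_loss >= self.max_loss_usd:
            self.halted = True

    def allow_order(self):
        """Called before every order; in practice a timer resets the counter each second."""
        self.orders_this_second += 1
        if self.orders_this_second > self.max_orders_per_second:
            self.halted = True
        return not self.halted

def send_order(order, kill_switch):
    if not kill_switch.allow_order():
        raise RuntimeError("Kill switch tripped: trading halted")
    # ...hand the order to the execution venue here...
```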
FINRA could also create a regulatory AI testing lab. This would be modeled on how the Federal Aviation Administration (FAA) tests flight software for planes.
Firms can submit AI models for simulation. The analysts at the lab will be able to discern how the model behaves under certain market conditions. Any system that poses a high risk to the industry can be flagged, approved under certain circumstances, or not allowed at all. FINRA must make clear that firms cannot blame the autonomous learning aspect of AI algorithms to circumvent responsibility.
FINRA needs to firmly state that anyone who uses an algorithm is accountable for its actions. This closes the “loophole” of firms skirting responsibility by blaming the AI algorithm itself.
In Summary, AI Is Reshaping Markets Faster Than Regulators
The next wave of financial crime will not look like unscrupulous brokers. It will look like autonomous computers acting in ways that humans cannot comprehend.
AI is reshaping the market faster than the SEC and FINRA can update their regulations.
Unless the SEC and FINRA devote dedicated units and significant development resources to combating computer-generated financial crime, they will always be playing catch-up.
Today, the SEC and FINRA are not just responsible for overseeing traders; they are also responsible for overseeing the programs those traders deploy. If the SEC and FINRA do not act, we will likely witness the next flash crash, this time unfolding in milliseconds, in the near future.
About the author
Jara Kider is a third-year law student (3L) at the Benjamin N. Cardozo School of Law.
During her time in law school, she has had the opportunity to work in the Anti-Money Laundering department at a large financial institution.
Jara is currently enrolled in a Global Corporate Compliance course, taught by Professor Barry Koch.
Through this course, Jara has become more interested in financial crimes and compliance programs. She enjoys examining programs that help businesses navigate these and other related regulatory frameworks.