Brokerage regulators are urging firms to be vigilant about the risk of hallucinations when using generative artificial intelligence tools in their operations.
The Financial Industry Regulatory Authority released its 2026 regulatory oversight report this week, an annual analysis from the organization sharing insights from its oversight of registrants to “help firms enhance their resilience and strengthen their compliance programs,” according to Chief Regulatory Operations Officer Greg Ruppert.
This year’s report includes a new section on gen AI, stressing that while FINRA’s rules are “technology neutral,” existing rules will apply with gen AI as they would for any other tech tool, including those on supervision, communications, recordkeeping and fair dealing.
According to FINRA, the top use of gen AI among member firms is “summarization and information extraction,” which it defined as using AI tools to condense large volumes of text and “extracting specific entities, relationships or key information from unstructured documents.”
Firms are also using AI for question answering, “sentiment analysis” (i.e., assessing whether a text’s tone is positive or negative), language translation, financial modeling and “synthetic data generation,” which refers to artificial datasets that resemble real-world data but are created by computer algorithms or models, among other uses.
To safeguard against regulatory slips, FINRA urged firms to develop procedures that catch instances of hallucinations, defined as when an AI model generates inaccurate or misleading information (such as a misinterpretation of rules or policies, or inaccurate client or market data that can influence decision-making).
According to FINRA, firms should also watch out for bias, in which a gen AI tool’s outputs are incorrect because the model was trained on limited or wrong data, “including outdated training data leading to concept drifts.”
Firms’ cybersecurity policies should also consider the risks associated with the use of gen AI, whether by the firm itself or a third-party vendor. Additionally, FINRA cautioned firms to test their gen AI tools, suggesting that registrants focus on areas including privacy, integrity, reliability and accuracy, as well as monitor prompts, responses and outputs to confirm the tools are working as expected.
“This may include storing prompt and output logs for accountability and troubleshooting; tracking which model version was used and when; and validation and human-in-the-loop review of model outputs, including performing regular checks for errors and bias,” the report read.
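The practices the report describes — storing prompt and output logs, tracking which model version was used and when, and flagging outputs for human review — can be sketched in a few lines. This is a minimal illustration, not anything prescribed by FINRA; the record fields and function names here are hypothetical.

```python
import datetime
import hashlib

def log_interaction(log, prompt, output, model_version):
    """Append one prompt/output record to an audit log.

    Each record captures the prompt, the model's output, the model
    version, and a timestamp, so reviewers can later troubleshoot
    and spot-check outputs for errors or bias. Field names are
    illustrative, not drawn from any regulatory guidance.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        # A digest lets reviewers verify the record was not altered after logging.
        "digest": hashlib.sha256((prompt + output).encode()).hexdigest(),
        # Flipped to True once a human reviewer signs off on the output.
        "human_reviewed": False,
    }
    log.append(record)
    return record

log = []
rec = log_interaction(
    log,
    "Summarize account activity for Q3.",
    "Net inflows rose 4% quarter over quarter.",
    "vendor-model-2026.1",
)
print(rec["model_version"])
```

Keeping the model version in every record is what makes the “which model version was used and when” requirement auditable: if a vendor silently upgrades a model, the log shows exactly which outputs came from which version.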
In the report, FINRA also focused on the emerging trend of AI agents, which can autonomously perform tasks on behalf of their users, including planning, making decisions and taking actions “without predefined rules or logic programming.” Despite the potential efficiency benefits, FINRA urged firms to consider the risks, including the possibility that agents acting autonomously may act “beyond the user’s actual or intended scope and authority.”
“The rapidly evolving landscape and capabilities of AI agents may call for supervisory processes that are specific to the type and scope of the AI agent being implemented,” the report read.
In a FINRA podcast discussion on the new report, Ornella Bergeron, a senior vice president in Member Supervision who leads FINRA’s Risk Monitoring Program, said regulators observed firms “taking a conservative and measured approach” before incorporating AI tools, especially with customer-facing interactions.
“So, I also want to encourage firms to continue to have those ongoing discussions with their risk monitoring teams as gen AI issues arise or as they’re planning to do more in this space,” she said.
In last year’s report, FINRA noted that firms were “proceeding cautiously” with the use of gen AI technology, opting to explore or implement third-party vendor-supported gen AI tools. The organization also highlighted the gen AI threat posed by “bad actors,” who use the tools for business email impersonation and ransomware attacks.
The 2026 report also includes information on consistent areas of focus for FINRA examiners, including cybersecurity and cyber-enabled fraud, anti-money laundering, Regulation Best Interest and the Consolidated Audit Trail, among others.
