The SEC has therefore rushed in with a radically new, invasive, and unworkable compliance regime that abandons the traditional remedy of disclosures to investors. But the commission gets both the problem and the solution wrong.
One of the SEC’s principal concerns about financial professionals’ use of AI appears to be that targeted and behavioral advertising techniques powered by AI are simply too persuasive to be allowed in the world of finance. But anyone who has ever ignored an online ad will intuitively understand why that argument is overblown. Research indicates that these marketing techniques likely have a far smaller impact on investor behavior than the SEC fears, and that targeted ads can even lead to smarter, not mindless, shopping.
Moreover, the SEC’s idea that AI-related technology can somehow create new conflicts of interest plainly doesn’t make sense. The main source of potential conflict between a broker or adviser and an investor is the fact that a salesperson is in the business of, well, sales. This is understood by anyone who has ever bought a car. There’s nothing about new technology that changes the salesperson’s interest in making more sales. Salespeople may also know more about the product (an “information asymmetry” as economists say) than those to whom they sell it, but if new technology gives them an even greater leg up in terms of knowledge, the right remedy is the very technique the SEC would toss in the trash: disclosure.
What of the arguments that certain advanced AI models are too complex and inscrutable — with inner workings often described as “black boxes” — for effective disclosure? While it’s true that the step-by-step logic of certain AI models might be obscure, this does not pose the insurmountable challenge that the SEC thinks it does. Even the use of a completely opaque, black-box AI model by a financial professional changes little about the ultimate potential conflict that stems from the broker’s or adviser’s stake in the sale. That interest can be seen in the AI tool’s output — its placement of a particular product in front of the customer — regardless of what goes into the model’s “thought” process. Just as it’s possible to explain, for example, that nicotine is addictive without getting into the weeds of biochemistry or epidemiology, it’s equally possible to understand that your broker will make more when you buy more (or when you buy this versus that) without the need to study a textbook on machine learning.
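To make that point concrete, consider a minimal sketch of how a platform could disclose the conflict using nothing but the model’s output. Everything here is hypothetical and purely illustrative (the function names and the commission table are invented for the example, not drawn from any SEC rule or firm’s practice): the AI model is treated as an opaque box, and the disclosure depends only on what it recommends.

```typescript
// Hypothetical sketch: disclosing a conflict from a black-box model's output alone.
// The model's internals never matter; we only look at what it recommends.

type Recommendation = { productId: string; productName: string };

// Illustrative compensation schedule: what the firm earns on each product sold.
const firmCommission: Record<string, number> = {
  "FUND-A": 0.5, // firm earns 0.5% if the customer buys this product
  "FUND-B": 0.1,
};

// The AI model is treated as an opaque function: customer profile in, recommendation out.
type BlackBoxModel = (customerProfile: unknown) => Recommendation;

function recommendWithDisclosure(model: BlackBoxModel, customerProfile: unknown): string {
  const rec = model(customerProfile); // no inspection of the model's "thought" process
  const commission = firmCommission[rec.productId] ?? 0;
  return `${rec.productName} — Disclosure: our firm earns ${commission}% if you buy this product.`;
}
```

However inscrutable the model, the customer still learns the one fact that matters: the firm makes more when they buy this.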
Therefore, neither the complexity of the technology that delivers information to an investor nor the financial professional’s incentive structure makes it impossible to explain a conflict of interest to a customer. In fact, technology makes disclosure easier, not harder. Through user-experience design and touches as simple as labeling paid-for content as “Ad” or “Sponsored,” digital platforms can signal whether there’s a financial stake behind a piece of information. There’s no reason these practices can’t be adapted to disclose conflicted financial products or services.
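A sketch of what that adaptation might look like, assuming a simple listing format of my own invention (the field names and badge wording are illustrative, not a prescribed standard): the same labeling pattern users already know from search results and social feeds, applied to financial products.

```typescript
// Hypothetical sketch: adapting "Ad" / "Sponsored" labels to financial product listings.
// A listing gets a badge whenever the firm has a financial stake in the customer's choice.

type Listing = {
  name: string;
  sponsored: boolean;      // firm was paid to promote this product
  revenueSharing: boolean; // firm earns more if the customer picks this product
};

function renderListing(item: Listing): string {
  const badges: string[] = [];
  if (item.sponsored) badges.push("Sponsored");
  if (item.revenueSharing) badges.push("Firm earns a commission");
  const label = badges.length ? ` [${badges.join(" · ")}]` : "";
  return `${item.name}${label}`;
}

console.log(renderListing({ name: "Growth Fund X", sponsored: true, revenueSharing: true }));
// -> "Growth Fund X [Sponsored · Firm earns a commission]"
```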
The SEC’s getting it wrong isn’t just embarrassing; it’s consequential. Applying unworkable rules to new technology means we’ll get less of it. That’s a problem because digital, automated, and intelligent technologies are how we get more, cheaper, and better financial services.
Digital investment technologies have made markets accessible to more investors. If the SEC keeps AI out of the mix, consumers will lose out. They might lose tools that help deliver information in their native language and at their reading level. They might not get generative chatbots that can answer user-specific questions and follow-ups. And they might be denied the broad category of technologies that help some of the world’s most adept financial firms spot risks and opportunities.
Don’t be fooled by the SEC’s poorly reasoned fears. Financial technology, including AI, holds the promise of making the world better for investors — provided the techno-pessimists’ rules don’t get in the way.