In 2021, the SEC requested information on “digital engagement practices,” including the role of deep learning AI and chatbots in investment advising. In the time since, consumer AI has undergone multiple paradigm shifts: ChatGPT was released on November 30, 2022, GPT-4 (OpenAI’s latest LLM) on March 14, 2023, and Auto-GPT on March 30, 2023. Herein lies a fundamental challenge for AI regulation: the rate of technological change can significantly outpace the regulatory process. While regulators’ benign neglect is often a blessing for early innovation, legacy regulations can eventually, and repeatedly, become constraints as regulators apply outdated frameworks to novel technologies.
Indeed, robo-adviser regulation is a poor fit for LLMs. Traditional robo-advisers generally are directly traceable to providers that hold themselves and their apps out as investment advisers and register as such. In other words, there is a readily identifiable party behind the code taking on both specific activities and the specialized legal responsibilities that follow. While LLM-powered apps can be deployed this way, which would make existing regulations more relevant, LLMs tend to resist this discrete cabining for several reasons. One, LLMs are pluripotent, with use cases typically determined by users on the fly.
Two, Auto-GPTs turbocharge this pluripotency while also blurring the line between where one application ends and another begins. With an Auto-GPT-powered personal assistant, financial advice could become a mere subtask of the prompt “help me get my house in order.” That subtask, in turn, could be broken down further by delegating processes like data collection and analysis to separate instances of an LLM. Even if identifying and registering the discrete parts of this software hivemind as investment advisers were conceptually possible, it likely would be impractical.
Three, LLMs themselves will soon be ubiquitous. As OpenAI CEO Sam Altman said in a recent interview with MIT Research Scientist Lex Fridman, “At this point, it is a certainty there are soon going to be a lot of capable open-sourced LLMs with very few to no safety controls on them.” Where machine intelligence is commodified and unrestricted by intellectual property laws, seeking to register ex ante every instance capable of giving investment advice may quickly devolve into a losing game of regulatory whack-a-mole.
A nimbler framework is needed. The twentieth-century model of legislation empowering expert regulators to impose upfront licensing regimes has never been the law’s only means of creating duties and remedying breaches. Centuries before specialized securities statutes imposed fiduciary duties on investment advisers, the common law of agency identified when autonomous agents owe others fiduciary duties. As an evolutionary body of law that iteratively adapts historic principles to novel circumstances, the ancient but flexible common law may be the best framework for handling rapidly progressing autonomous AI.
Agency law asks when and how one person (the agent) owes a fiduciary duty to another (the principal) on whose behalf she acts. When applying this doctrine to AI, the first question is whether an AI program can be a legal “person.” The latest Restatement of the Law of Agency says inanimate or nonhuman objects, including computer programs, cannot be principals or agents. However, the legal reality is more complicated: the law readily ascribes legal personhood to nonhuman entities like corporations, and some case law suggests that computers can be agents or legal equivalents for certain purposes. Furthermore, as Samir Chopra and Laurence F. White argued in A Legal Theory for Autonomous Artificial Agents, “legal history suggests inanimate objects can be wrongdoers.” Specifically, nineteenth-century admiralty courts routinely allowed actions against ships themselves, a practice the Restatement acknowledges, noting that maritime law effectively ascribed legal personality to ships out of practical necessity. And while applying agency law to AI would require adaptation and gap-filling, evolving is something the common law does well.
Ascribing agent capacity to autonomous AI can help solve the practical problem of determining liability in an age of LLM investment advisers. Specifically, when should a programmer be liable for the harm an LLM causes to a user based on, for example, an incompetent interpretation of the user’s investment objectives? Or when should a programmer, or the user herself, be liable for the harm an LLM causes to a third party by, for instance, engaging in market manipulation? Agency law has answers.
In general, a principal’s liability for harm caused by her agent hinges on whether the agent was acting within the scope of her delegated authority, which is determined by looking to the principal’s words or conduct. These “manifestations” can be assessed from the agent’s perspective (would a reasonable agent understand herself to be authorized to act on the principal’s behalf?) or from the third party’s perspective (would a reasonable third party, based on the principal’s manifestations, understand the agent to be so authorized?). If the answer to either question is yes, the principal is liable (directly in the first case, vicariously in the second) to a third party harmed by the agent’s actions.