Op-Ed Contributor

Leave A.I. Alone

December was a big month for advocates of regulating artificial intelligence. First, a bipartisan group of senators and representatives introduced the Future of A.I. Act, the first federal bill solely focused on A.I. It would create an advisory committee to make recommendations about A.I. — on topics including the technology’s effect on the American work force and strategies to protect the privacy rights of the people it affects. Then the New York City Council approved a first-of-its-kind bill that, once signed into law, will create a task force to examine the city’s use of automated decision systems, with the ultimate goal of making those algorithms fairer and more transparent.

Perhaps not coincidentally, these efforts overlap with increasing calls to regulate artificial intelligence, as well as with claims by the likes of Elon Musk and Stephen Hawking that it poses a threat to humanity’s very survival.

But this push for broad legislation to regulate A.I. is premature.

To begin with, even experts can’t agree on what, exactly, constitutes artificial intelligence. Take the recent report released by the AI Now Institute, which aims to create a framework for implementing A.I. ethically. Even while focusing on A.I., the report acknowledges that no commonly accepted definition of the technology exists, describing it only loosely as “a broad assemblage of technologies … that have traditionally relied on human capacities.”

“Artificial intelligence” is all too frequently used as a shorthand for software that simply does what humans used to do. But replacing human activity is precisely what new technologies accomplish — spears replaced clubs, wheels replaced feet, the printing press replaced scribes, and so on. What’s new about A.I. is that this technology isn’t simply replacing human activities, external to our bodies; it’s also replacing human decision-making, inside our minds.

The challenges created by this novelty should not obscure the fact that A.I. itself is not one technology, or even one singular development. Regulating an assemblage of technology we can’t clearly define is a recipe for poor laws and even worse technology.

Indeed, the challenges A.I. poses aren’t entirely new. We’ve already successfully regulated it in the past — we just didn’t call it “artificial intelligence.” In the 1960s and 1970s, for example, the financial industry began to rely on complex statistical modeling and huge computerized databases to make credit decisions, automating what had been a more manual process of approving or denying credit to borrowers.

The ethical and legal challenges associated with these models so captivated the public’s attention that in the summer of 1970, Newsweek ran a cover story titled “Is Privacy Dead?” detailing the “massive flanking attack” of computers on modern society. Growing awareness of that threat led to broad appeals that echo modern proposals to regulate A.I. “Eventually we have to set up an agency to regulate the computers,” Senator Sam Ervin of North Carolina said in 1970.

But instead of regulating all computers, the government took a targeted approach tailored to specific problems, passing laws like the Equal Credit Opportunity Act of 1974. That act was meant to reduce credit discrimination against minority groups and to increase consumers’ ability to understand what the models were doing — if consumers didn’t like their credit score, the act at least ensured they could learn how to improve it.

That law and others offer valuable lessons today, illustrating the importance of focusing on specific issues — in this case, transparency in credit decisions — and tailoring their solutions accordingly. Any regulation aimed at the range of systems we call “A.I.” should seek to be just as specific.

With the thorny exception of cybersecurity, the United States’ regulatory approach to information technology is arguably the most successful model for governing technology in existence — fostering innovation while ensuring that the technology we use doesn’t break down or seriously jeopardize our safety. The backbone of this approach is a set of regulations tailored to the specific problems created by any given technology.

Within the United States’ vast framework of laws and regulatory agencies already lie answers to some of the most vexing challenges created by A.I. In the financial sector, for example, the Federal Reserve applies supervisory guidance known as SR 11-7, which addresses the risks created by the complex algorithms used by today’s banks. SR 11-7’s solution to those risks is called “effective challenge,” which seeks to embed critical analysis into every stage of an algorithm’s life cycle — from thoroughly examining the data used to train the algorithm to explicitly outlining the assumptions underlying the model, and more. While SR 11-7 is among the most detailed attempts at governing the challenges of complex algorithms, it’s also one of the most overlooked.

This is not, of course, to suggest that artificial intelligence should never be regulated. But if the past is any guide, treating it as a collection of separate technologies, in separate sectors, is destined to be the most effective way to harness the benefits it creates — and to control the dangers it poses.

Andrew Burt is chief privacy officer and legal engineer at Immuta, a data management platform for data science, which enables companies to create and manage A.I. and machine learning models.
