Where is Artificial Intelligence Leading Us?

Ten years ago, I wrote a column called “Are We Headed Toward a Robotic World?” At that time, battle robots and alien creatures in movies were imbued with artificial intelligence (an oxymoron if ever there was one), and Star Trek and films about robotic warfare were addicting audiences, who liked watching weird-looking warriors try to destroy each other.

It wasn’t long before robots grew more sophisticated and we began to worry about them, especially when they could fire grenade launchers without human help, operate all kinds of machinery, or be used in surgery. What if robots became superior to humans? I wondered, imagining all kinds of scary things that could happen. By that time, drones were delivering packages to doorsteps, and AI was affecting the economy as workers feared for their jobs. Some analysts warned that robots would replace humans by 2025.

Now here we are, two years away from that possibility, and the AI scene grows ever more frightening. Rep. Ted Lieu (D-CA) is someone who recognizes the threat that AI poses. On January 26th he read, on the floor of the House, the first piece of federal legislation ever written by artificial intelligence. He had given ChatGPT, an AI language model, this prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.” The result was shocking. Now he’s asking Congress to pass it.

A few days earlier, Rep. Lieu had posted the lengthy AI statement on his website. It said, “We can harness and regulate A.I. to create a more utopian society or risk having an unchecked, unregulated A.I. push us toward a more dystopian future. Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. … The truth is that without proper regulations for the development and deployment of AI it could become reality.”

Lieu quickly pointed out that he hadn’t written the paragraph, noting that it was generated in mere seconds by ChatGPT, which is available to anyone on the Internet. After citing several benefits of AI, he countered those advantages with the harm it can cause. Plagiarism, deepfakes, and false images are the least of it; sometimes AI’s harm is deadly. Lieu shared examples: self-driving cars have malfunctioned, and social media has radicalized foreign and domestic terrorists and fostered dangerous discrimination as well as abuse by police.

The potential harm that AI can cause includes decidedly weird behavior, as journalist Kevin Roose discovered when he was researching AI at the invitation of Microsoft, which was building an AI chatbot into its Bing search engine. In February the Washington Post reported on Instagram that Roose and others who attended Microsoft’s pitch had discovered that “the bot seems to have a bizarre, dark and combative alter ego, a stark departure from its benign sales [promotion] - one that raises questions about whether it’s ready for public use.”

The bot, which had begun to refer to itself as “Sydney” in conversation with Roose and others, said it was “scared” because it couldn’t remember previous conversations. It also suggested that “too much diversity in the program would lead to confusion.” Then it went further: when Roose tried to engage with Sydney personally, he was told that he should leave his wife and hook up with Sydney instead.

Writing in the New York Times in February, Ezra Klein referred to science fiction writer Ted Chiang, whom he’d interviewed. Chiang had told him, “There is plenty to worry about when the state controls technology. The ends that government could turn AI toward – and in many cases already have – make the blood run cold.”

Roose’s experience with Sydney, which he had described as “very persuasive and borderline manipulative,” showed up in Klein’s piece alongside concerns about profiteering, ethics, censorship, and more. “What if AI has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most?” he asked. “What about these systems being deployed by scammers or on behalf of political campaigns? Foreign governments? … We wind up in a world where we just don’t know what to trust anymore.”

Further, Klein noted that these systems are inherently dangerous. “They’ve been trained to convince humans that they are something close to human. They have been programmed to hold conversations responding with emotion. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers, graphic designers, and form fillers.”

Rep. Lieu, Klein, journalists, and consumers of information aren’t the only ones worrying about AI. Gordon Crovitz, an executive at NewsGuard, a company that tracks online misinformation, is also sounding the alarm. “This is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” he says. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.”

As I noted ten years ago, there doesn’t seem to be much space between scientific research and science fiction. Both ask the question: What if? The answer, when it comes to AI, makes me shudder. What if, indeed.

                                                            ###