Where is Artificial Intelligence Leading Us?

 

Ten years ago, I wrote a column called “Are We Headed Toward a Robotic World?” At that time, battle robots and alien creatures in movies were imbued with artificial intelligence, an oxymoron if ever there was one. Star Trek and films about robotic warfare were addicting audiences, who liked watching weird-looking warriors try to destroy each other.

 

It wasn’t long before robots got more sophisticated and we began to worry about them, especially when they could fire grenade launchers without human help, operate all kinds of machinery, or be used for surgery. What if robots became superior to humans? I wondered, imagining all kinds of scary things that could happen. By that time, drones were delivering packages to doorsteps and AI was unsettling the economy as workers feared for their jobs. Some analysts warned that robots would replace humans by 2025.

 

Now here we are, two years away from that possibility, and the AI scene grows ever more frightening. Rep. Ted Lieu (D-CA) is someone who recognizes the threat that AI poses. On January 26th he read on the floor of the House the first piece of federal legislation ever written by artificial intelligence. He had given ChatGPT, an AI language model, this prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.” The result was shocking. Now he’s asking Congress to pass it.

A few days earlier, Rep. Lieu had posted the lengthy AI statement on his website. It said, “We can harness and regulate A.I. to create a more utopian society or risk having an unchecked, unregulated A.I. push us toward a more dystopian future. Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. … The truth is that without proper regulations for the development and deployment of AI it could become reality.”

Lieu quickly pointed out that he hadn’t written the paragraph, noting that it was generated in mere seconds by ChatGPT, which is available to anyone on the Internet. After citing several benefits of AI, he countered those advantages with the harm it can cause. Plagiarism, faked technology, and false images are the least of it. Sometimes AI harm is deadly. Lieu shared examples: self-driving cars have malfunctioned, and social media has radicalized foreign and domestic terrorists and fostered dangerous discrimination as well as abuse by police.

The potential harm that AI can cause includes weird things happening, as journalist Kevin Roose discovered when he was researching AI at the invitation of Microsoft, the company developing Bing’s AI system. In February, the Washington Post reported on Instagram that Roose and others who attended Microsoft’s pitch had discovered that “the bot seems to have a bizarre, dark and combative alter ego, a stark departure from its benign sales [promotion] - one that raises questions about whether it’s ready for public use.”

The bot, which had begun to refer to itself as “Sydney” in conversations with Roose and others, said it was “scared” because it couldn’t remember previous conversations. It also suggested that “too much diversity in the program would lead to confusion.” Then it went further: when Roose tried to engage with Sydney personally, he was told that he should leave his wife and hook up with Sydney.

Writing in the New York Times in February, Ezra Klein referred to science fiction writer Ted Chiang, whom he’d interviewed. Chiang had told him, “There is plenty to worry about when the state controls technology. The ends that government could turn AI toward – and in many cases already have – make the blood run cold.”

Roose’s experience with Sydney, whom he had described as “very persuasive and borderline manipulative,” showed up in Klein’s piece in response to the issues of profiteering, ethics, censorship, and other areas of concern. “What if AI has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most?” he asked. “What about these systems being deployed by scammers or on behalf of political campaigns? Foreign governments? … We wind up in a world where we just don’t know what to trust anymore.”

Further, Klein noted that these systems are inherently dangerous. “They’ve been trained to convince humans that they are something close to human. They have been programmed to hold conversations responding with emotion. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers, graphic designers, and form fillers.”

Rep. Lieu, Klein, journalists and consumers of information aren’t the only ones worrying about AI. Researchers like Gordon Crovitz, an executive at NewsGuard, a company that tracks online misinformation, are sounding alarms. “This is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” he says. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.”

As I noted ten years ago, there doesn’t seem to be much space between scientific research and science fiction. Both ask the question, What if? The answer, when it comes to AI, makes me shudder. What if, indeed.

                                                            ###

Will Artificial Intelligence Put an End to Real People?

Okay, I confess. Sorry, Siri, but I find you and Alexa creepy. I worry that native intelligence is being replaced by “artificial intelligence,” which strikes me as a modern-day oxymoron, like “virtual reality.” I’m scared about what’s coming as technology takes over our lives. And I’m nearly convinced robots are going to make humans unnecessary if not extinct. Call me crazy, but that’s what they called Jules Verne too.

It seems I’m in good company. Some pretty big names in science and technology have also expressed concern about the inherent risks AI could pose. They include the late physicist Stephen Hawking, who told the BBC several years ago that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk, engineer and head of Tesla, has said that autonomous machines could unleash “weapons of terror,” comparing the adoption of AI to “summoning the devil.” And Bill Gates has worried that AI is viable only if we make sure humans remain in control of machines.

As one techie posted on Tech Times.com, what happens if Siri decides that she wants to take over the world? He didn’t seem to think that was a real threat, but what if AI becomes so advanced that it decides it wants power of its own? Others worry that if artificially intelligent systems misunderstand a mission they’re given, they could cause more damage than good and end up hurting lots of people.

A lot of folks are worried about the implications of AI-controlled weapons. They might be able to help soldiers and civilians in war zones, but they could also trigger a global arms race that ends in disaster. According to a scientist at the Future of Life Institute, “There is an appreciable probability that the course of human history over the next century will be dramatically affected by how AI is developed. It would be extremely unwise to leave this to chance.”

There are also troubling ways in which AI could infringe upon our personal privacy. Facebook’s recent problems have already demonstrated some of the possibilities, from unwanted intrusion to exposure that leaves us vulnerable. Facebook can already recognize someone by the clothes they wear, the books they read, and the movies they watch. What happens when government agencies have fully developed recognition systems?

Nick Bostrom, an Oxford University philosopher, has put forward one alarming thesis: artificial intelligence may prove to be apocalyptic. He thinks AI “could effortlessly enslave or destroy Homo sapiens if they so wished.”

No longer the stuff of science fiction, many AI milestones have already been reached, even though experts thought it would take decades to get where we are now in terms of relevant technology. While some scientists think it will take a long time to develop human-level AI or superintelligence, others at a 2015 conference thought it would happen within the next forty years or so. Given AI’s potential to exceed human intelligence, we really don’t know how it will behave. If humans are no longer the smartest beings on earth, how do we get to stay in control?

A recent, lengthy article about “Superior Intelligence” in The New Yorker pointed out that imbuing AI with higher intelligence than humans have risks having robots turn against us. “Intelligence and power seek their own increase,” Tad Friend posited in his piece. “Once an AI surpasses us, there’s no reason to believe it will feel grateful to us for inventing it, particularly if we haven’t figured out how to imbue it with empathy.”

Here’s another interesting thing to contemplate. In 1988, Friend shares, “roboticist Hans Moravec observed that tasks we find difficult are child’s play for a computer, and vice versa. ‘It is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.’”

And here’s a scary thought: According to The New Yorker, Vladimir Putin told Russian schoolchildren not long ago that “the future belongs to artificial intelligence. Whoever becomes the leader in this sphere will become the ruler of the world.” In light of recent interference with Western elections, one must wonder what he’s got in the way of AI technology (or whether he has already found a way to infiltrate Donald Trump’s brain and program his mouth).

I realize I may be getting ahead of things and sounding unduly alarmist, but it’s all pretty scary stuff. I hope the day never comes when people younger than I am have to admit that, along with Stephen Hawking et al., I was not totally out in left field. Worse still, I hope they never have to dodge incoming missiles directed by maniacal robots angry because we didn’t make them even smarter and more powerful than they already are.

 

                                                            # # #

 

Elayne writes and worries from Saxtons River, Vt.