Sunday, April 30, 2023

AI thinks therefore AI am

If you haven't noticed, artificial intelligence (AI) has quite suddenly become a huge story. Just last month, 60 Minutes ran a significant piece on the issue, and it was not only very entertaining but downright scary. The self-learning soccer robots really got my attention: after just two days, they were running soccer plays I've never even seen in a FIFA event. The excitement and concern started at the end of 2022 with the release of ChatGPT. This from Wikipedia:

ChatGPT launched as a prototype on November 30, 2022, and garnered attention for its detailed responses and articulate answers across many domains of knowledge. The advent of the chatbot has increased competition within the space, motivating the creation of Google's Bard and Meta's LLaMA.

And just this week, Geoffrey Hinton, the Godfather of AI, left his job at Google because of his concerns about the technology he helped develop.

This from the Times:

"Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google so he can freely speak out about the risks of A.I.  A part of him, he said, now regrets his life’s work. (clip)

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft. (clip)

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I.  In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

One of the segments of the 60 Minutes story discussed how these learning systems have not just the ability but an apparent propensity to make things up. They call these fabrications "hallucinations."

Down the road, Hinton is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

This is what used to be called the Singularity, and I wrote about it 20 years ago.

The concept and the term "singularity" were popularized by Vernor Vinge in 1983, in an article claiming that once humans create intelligence greater than their own, there will be a technological and social transition that signals the end of the human era.

He wrote that he would be surprised if it occurred before 2005 or after 2030.

This week, the White House is gathering as many of the tech folks as it can to do something before we find ourselves in one big pickle.

René Descartes would be pleased, I suppose.