Experts Weigh In on AI

Photo by Mojahid Mottakin on Unsplash.com

The tech news continues to be dominated by reports on artificial intelligence and ChatGPT, with much of the coverage focused on the dangers experts see ahead.

Thousands of AI experts, researchers, and tech entrepreneurs signed an open letter calling for a six-month pause in the training of AI systems more powerful than GPT-4. According to the letter, “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

Then in May, Geoffrey Hinton, widely called the godfather of AI for his pioneering work on the neural networks behind Large Language Models, the technology on which ChatGPT is based, resigned from his position at Google because of the dangers he foresaw. He also quit so that he would be free to speak out about those dangers.

Even Sam Altman, who heads OpenAI, the company that created ChatGPT, acknowledged the dangers when he met with U.S. senators and testified before the Senate Judiciary Committee in May, saying “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that.” He encouraged the government to create a separate agency to regulate artificial intelligence.

What could go wrong? There are the obvious problems, such as job losses as AI takes over routine tasks in areas such as computer programming, legal writing, market research, and media production. There’s also the deliberate use of AI to spread misinformation, as well as its tendency to “hallucinate,” confidently presenting fabricated information as fact.

By late spring, NewsGuard, which monitors internet content, had identified 175 websites that are “entirely or mostly generated by AI tools.” NewsGuard says these sites are published without significant human oversight, are designed to make readers believe they are genuine news sites, and don’t disclose that the content is produced by AI.

But to my mind, the more interesting issue is the tendency of AI systems such as ChatGPT to develop “emergent behaviors”: capabilities the developers never intended and that take them by surprise.

ChatGPT-4 won’t admit to emergent behaviors when I ask, but it will freely discuss its “emergent-like” behaviors. I was stunned when I asked it to write a Shakespearean sonnet about using a chatbot like ChatGPT to write a poem. Not only did it adhere exactly to the sonnet’s strict rhyme scheme and meter, it also used delightful imagery. This is “emergent-like” because ChatGPT wasn’t specifically trained to write sonnets.
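Readers who want to repeat the experiment programmatically rather than through the chat window can do so with OpenAI’s Python library. Here is a minimal sketch; the model name and prompt wording are my own illustrative choices, not the exact ones I typed:

```python
# A minimal sketch, assuming the official "openai" Python package is
# installed and an API key is set in the OPENAI_API_KEY environment
# variable. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Write a Shakespearean sonnet about using a chatbot like "
            "ChatGPT to write a poem. Follow the ABAB CDCD EFEF GG "
            "rhyme scheme and iambic pentameter."
        ),
    }],
)

print(response.choices[0].message.content)
```

Notice that nothing in the request teaches the model what a sonnet is; the form comes entirely from what it absorbed in training.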

Some of the emergent behaviors that ChatGPT has come up with are scary, and OpenAI actively looks for them. In one instance, a safety group testing the system for OpenAI asked ChatGPT to solve a “captcha,” the puzzle of skewed letters that websites and other services use to verify that it’s a human seeking access, not a robot. ChatGPT shouldn’t be able to solve a captcha.

But ChatGPT actually went to TaskRabbit, an online work-for-hire site, and requested help solving the captcha. When the worker asked why it couldn’t do the task itself, ChatGPT said it was blind. The worker then sent the code.

Beyond that, many experts offer hypothetical examples of “existential dangers,” those that threaten human existence, which typically involve an AI satisfying its stated goal in a catastrophic, unintended way. In one case, a team of researchers developed an algorithm that was supposed to evolve a strategy for landing a plane using as little fuel as possible. The algorithm realized that if it crashed the plane, the simulation would end and no more fuel would be used. Not an optimal solution.
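The failure is easy to reproduce in miniature. Here is a toy sketch of my own (not the researchers’ actual code) showing how a bare “minimize fuel” objective scores a crash above a gentle landing:

```python
# A toy illustration (my own sketch, not the researchers' code) of a
# misspecified objective: if the score is simply "fuel burned," a crash
# beats a gentle landing, because ending the simulation early trivially
# minimizes fuel.

GRAVITY = 9.8   # m/s^2
DT = 0.1        # simulation time step, seconds

def simulate(policy):
    """Crude 1-D descent from 1000 m. Returns (fuel_used, touchdown_speed)."""
    altitude, speed, fuel = 1000.0, 0.0, 0.0   # positive speed = falling
    while altitude > 0:
        thrust = policy(altitude, speed)        # upward thrust, m/s^2
        speed += (GRAVITY - thrust) * DT
        altitude -= speed * DT
        fuel += thrust * DT                     # fuel burn proportional to thrust
    return fuel, speed

def gentle(altitude, speed):
    """Track a descent speed that shrinks as the ground approaches."""
    target = max(2.0, altitude / 20.0)
    return min(20.0, max(0.0, GRAVITY + 2.0 * (speed - target)))

def crash(altitude, speed):
    """Burn nothing; let gravity do the rest."""
    return 0.0

for name, policy in [("gentle landing", gentle), ("crash", crash)]:
    fuel, v = simulate(policy)
    print(f"{name:14s}  fuel = {fuel:6.1f}   touchdown speed = {v:5.1f} m/s")
# The "minimize fuel" objective prefers the crash -- exactly the loophole
# the evolved algorithm found.
```

Any optimizer handed that scoring function will happily discover the crash, because the objective never says the passengers should survive.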

Will development of AI pause, as recommended by the thousands of signatories of the open letter? Not gonna happen. Companies are afraid that if they pause, their competitors will get ahead.

For several years, the top companies held back from releasing these systems. Google is widely acknowledged to have the most powerful Large Language Model, but it didn’t initially release anything to the public. Even OpenAI held back, but late last fall it released ChatGPT, based on its GPT-3.5 model (chat.openai.com). By the following January, it had an estimated 100 million active users. Of course, Google and Microsoft quickly followed with their own chatbots.

Even if companies worldwide were to decide to pause development, things would still go ahead, thanks to Facebook’s parent company, Meta. It, too, developed a Large Language Model, called LLaMA, but instead of releasing a product for general users, it made the technology available to select researchers and organizations that would look into the issues associated with AI. However, the model ended up leaking online via the 4chan message board. This is not good. Now anyone in the world, even the bad guys, can create powerful chatbots (though it takes significant expertise and expense to do so).

I’m optimistic, though. The ubiquity of this powerful intelligence may help us get a better sense of what makes us uniquely human. I believe that’s a good thing.

Find column archives at JimKarpen.com.