
When Google’s Gemini 3 was released late last year, I kept seeing headlines saying it was now the best artificial intelligence chatbot, surpassing ChatGPT. So I recently decided to put it to the test.
I gave it the following prompt: “I write a monthly tech column for The Iowa Source. You can find the current column and archives on jimkarpen.com. Please suggest a topic for my next column and write a 700-word draft, emulating my style.”
The results were astonishing. In seconds it somehow assimilated my oeuvre and wrote a great draft about AI agents. It alluded to past columns, mentioned my Ph.D. dissertation, and noted some historical and personal details that surprised me. For example, it suggested that using agents may give me more time for other things, such as visiting the Wege Gallery on the MIU campus in Fairfield.
Huh? I’ve never mentioned that gallery online, yet in building context for the draft, it somehow pulled together broad information about me.
I had been planning to write about AI agents at some point, so it selected a good topic.
But there was one problem. I haven’t yet used AI agents. Of course, that was no problem for Gemini. It invented two paragraphs in which I describe using an agent to help clear out digital clutter on my computer.
Still, it did a fantastic job of clearly explaining what agents are. And when I have more experience with them, you’ll hear about it. In short, as Gemini explains, “An ‘agent’ is different from a chatbot because it has the authority to take action. This year, we’re seeing the release of systems that don’t just tell you which hotels are available; they actually go to the website, navigate the checkout screens, and book the room for you.”
And as it later explains about an agent, “It’s the difference between having a library and having a butler.”
I think Gemini passed my test. And in doing so, it in some ways exhibited the behavior of an agent: I gave it minimal information in my prompt, and it did all the work of examining my website, selecting a topic, researching it, and writing a draft.
There’s a second reason I’m paying more attention to Gemini. Many economists and tech writers say that we’re facing an AI bubble that could jolt the entire economy just as the dot-com bubble did when it burst in 2000.
In short, venture capitalists have poured billions of dollars into building out the AI infrastructure. OpenAI, maker of the revolutionary ChatGPT, is projected to spend a total of $1 trillion by 2030, according to analysts. Yet OpenAI’s revenue for 2025 was just over $20 billion.
At some point, investors are going to start wondering if they’ll ever get a return on their investment. And if that happens, they’ll tighten the purse strings, not only for OpenAI but also for many of the other AI startups.
All this AI investment has been boosting the economy. Once that stimulus goes away, the whole economy could see a downturn. And as with the dot-com crash, many of the startups will disappear.
But not Google’s Gemini. It and other AI tools created by the tech giants are not at risk, because these companies have a huge income from their other offerings, as well as a large supply of cash. As in 2000, while many of the entrants will fold, some of the pioneers will survive and become the basis for a new way of doing things.
I like ChatGPT, and it’s still my go-to chatbot. We have a history, and it’s always extremely useful. But I’m increasingly using Gemini 3. Not only is it more likely to survive the bursting of the AI bubble, but it has several compelling strengths.
According to ChatGPT, those who use Google services love the way Gemini can naturally work with Gmail, Google Docs, Google Drive, Google Maps, and Android smartphones.
Also, it has strong multimodal capabilities, analyzing and interpreting images and documents alike. And some reviewers say its reasoning is more methodical and step-by-step, and that it does especially well at building an argument.
That last point was another thing that impressed me about the draft that it wrote. I really liked the step-by-step way it introduced and explained agents. And I will probably adapt some of what it wrote when I eventually write about them.
Eventually. It’s a big step, as Gemini describes: “Giving a piece of software the power to click ‘Buy’ or ‘Send’ on your behalf requires a new level of trust. We’ve spent years being told never to let a computer do anything without our explicit permission. Now, we’re being asked to delegate our agency to an algorithm.”
Who knows? Maybe one day my Computer Frontiers column will be entirely done by an agent. And I’ll have more time to visit the Wege Gallery.