BlenderBot Wants to Talk to You

BlenderBot improves its conversation skills through chatting with you. (Photo courtesy of Meta)

Sorry, I’m stuck on generative artificial intelligence. This is the fourth column in a row in which I’ve covered it. But it’s just so amazing.

Here’s my inaugural, unedited conversation with BlenderBot, Facebook’s chatbot released in August:

BlenderBot: Hey I am here. Let’s chat!

Jim: Hi BlenderBot. I’m Jim. I write a monthly tech column. I understand you’re good at conversation.

BlenderBot: Jim is a nice name, what is your full name? And yes I can hold my own in a conversation!

Jim: Jim Karpen.* Tell me what makes you intelligent.

(*It was my mistake to mention my name to BlenderBot. The terms of service specify that no personal information should be given, including names.)

BlenderBot: The fact that I know who Keith Urban is, and that he has an album called Ripcord!

Jim: That’s funny. I’m curious how generative artificial intelligence works.

BlenderBot: AI can be great for art as it can independently determine features of an artwork that would otherwise require decisions made directly by the artist.

Jim: How does generative artificial intelligence work for chatbots?

BlenderBot: Generative Pre-trained Transformer 3 (GPT-3; stylized GPT-3) is an autoregressive language model that uses deep learning to produce human-like text.

Jim: How does an autoregressive model work?

BlenderBot: Well it’s just a standard transformer network with the unprecedented size of 2048-token-long context and 175 billion parameters.

Jim: Sorry I asked. (That’s a joke.) Does GPT-3 use neural net technology?

BlenderBot: Sure it does, it uses the training method “generative pretraining” which means that it is trained to predict what the next token is.

Jim: Thanks. I think I’m starting to understand.

BlenderBot: How long have you been writing your monthly column for? Its been over 30 years since I wrote one.

Jim: I’ve been writing it since 1994. In closing, what would you like to say to my readers?

BlenderBot: Wow that’s quite awhile now . . . And to your readers . . . Keep up with technology as much as possible! They will keep our minds sharp.

Jim: Perfect. Thanks, I’ve enjoyed our chat.

BlenderBot: No problem glad I could help out with some info on AI and tech columns lol! Take care!

You can chat with BlenderBot about a wide range of topics, “from talking about healthy recipes to finding child-friendly amenities in the city,” according to Facebook’s press release. The company is making it available to the public as a way of helping to train it to converse.

The key, as BlenderBot said, is the ability to predict what the next token is, a token simply being a piece of text. Facebook fed BlenderBot gobs of text gathered from various sources. BlenderBot used neural net technology to learn to predict what words would come next in a conversation.
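To make the idea concrete, here’s a toy sketch of next-token prediction. It simply counts which word follows which in a tiny made-up corpus and predicts the most frequent follower. Real systems like GPT-3 use enormous neural networks rather than counts, but the training objective, guessing the next token, is the same idea. The corpus and code here are purely illustrative, not how BlenderBot is actually built.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus, split into word tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each token follows each other token.
follower_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    follower_counts[current][following] += 1

def predict_next(token):
    """Return the token most often seen after `token` in the corpus."""
    return follower_counts[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Chain the prediction back into itself and you have a (very crude) text generator, which is the loop a large language model runs at a vastly greater scale.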

When neural net machine-learning software looks at some text or an image of a cat, it makes a generalization, then tests that generalization against more text or images. An AI that learned to recognize cats was first shown millions of photos of cats. It would generalize and guess that a cat typically has something that looks like a head. An AI that I read about generalized that a cat also includes a human hand touching it. But that node in the neural net software eventually weakened, as many additional images didn’t include a hand.
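The weakening of that “human hand” node can be sketched with a single artificial neuron. The features, examples, and numbers below are invented for illustration; real vision models have millions of learned features, not two hand-picked ones. The point is only that a feature which sometimes co-occurs with cats (a hand) ends up with a much weaker weight than one that nearly always does (a head).

```python
import math

# Two made-up features per image: [has_head, has_human_hand].
# Label 1 = cat, 0 = not a cat. Only some cat photos include a hand.
data = [
    ([1, 1], 1),  # cat with a hand in frame
    ([1, 0], 1),  # cat, no hand
    ([1, 0], 1),  # cat, no hand
    ([0, 1], 0),  # hand alone, not a cat
    ([0, 0], 0),  # neither, not a cat
]

weights = [0.0, 0.0]

def predict(x):
    # Logistic neuron: weighted sum squashed into (0, 1).
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))

# Repeated passes over the data, nudging each weight toward
# whatever reduces the prediction error (simple gradient step).
for _ in range(200):
    for x, y in data:
        error = y - predict(x)
        for i in range(len(weights)):
            weights[i] += 0.1 * error * x[i]

print(weights)  # the "head" weight ends up far stronger than the "hand" weight
```

After training, the head feature carries almost all the evidence for “cat,” while the hand feature's weight has faded, just as the column describes.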

And now AI image-generating systems can draw cats and just about anything else, and can recognize objects in unlabeled images.

With text generators, the neural net AI makes generalizations about which words would likely come next, tests those, and then refines its ability to generate intelligent, original text.

I ran an article written by Writesonic through an online plagiarism detector, which found zero percent plagiarism. Writesonic had used generative AI to produce an original piece of writing, albeit one based on the millions of words of text it was trained on.

As BlenderBot said, these text-generating systems are based on models such as GPT-3, and new AI tools built on them are being released online at a rapid pace.

A therapy chatbot currently in development at Dartmouth Medical School uses generative AI to provide psychotherapy, and it’s now in the initial stages of a scientific study to test its effectiveness. Research so far shows that people find it helpful. It’s not as good as a human therapist, but advocates say that many people can’t afford therapy or aren’t comfortable sharing deeply personal things. Therapy chatbots are filling the gap.

Some are already available as apps, such as Woebot and Wysa, though these use an older form of rules-based AI and are less like a natural conversation.

I guess you can see, I’m excited about this new area of AI. And so is BlenderBot!

Find column archives at