‘Agentic’ AI is a buzzword made up of marketing fluff and real promise

For technology adopters looking for the next big thing, “agentic AI” is the future. At least, that’s what the marketing pitches and tech industry T-shirts say.

What makes an artificial intelligence product “agentic” depends on who’s selling it. But the promise is usually that it’s a step beyond today’s generative AI chatbots.

Chatbots, however useful, are all talk and no action. They can answer questions, retrieve and summarize information, write papers and generate images, music, video and lines of code. AI agents, by contrast, are supposed to be able to take actions on a person’s behalf.

But if you’re confused, you’re not alone. Google searches for “agentic” have skyrocketed from near obscurity a year ago to a peak earlier this fall.

A new report Tuesday by researchers at the Massachusetts Institute of Technology and the Boston Consulting Group, who surveyed more than 2,000 business executives around the world, describes agentic AI as a “new class of systems” that “can plan, act, and learn on their own.”

“They are not just tools to be operated or assistants waiting for instructions,” says the MIT Sloan Management Review report. “Increasingly, they behave like autonomous teammates, capable of executing multistep processes and adapting as they go.”

How to know if it’s an AI agent or just a fancy chatbot

AI chatbots — such as the original ChatGPT that debuted three years ago this month — rely on systems called large language models that predict the next word in a sentence based on the huge trove of human writings they’ve been trained on. They can sound remarkably human, especially when given a voice, but are effectively performing a kind of word completion.
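That word-completion idea can be illustrated with a deliberately tiny sketch. The toy model below just counts which word follows which in a small sample of text and predicts the most frequent follower; real large language models use neural networks trained on vast amounts of writing, but the underlying task — pick a likely next token — is the same.

```python
from collections import Counter, defaultdict

# Toy "next-word prediction": tally which word follows which
# in a tiny corpus, then predict the most frequent follower.
corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

A model like this can only echo patterns it has seen, which is why chatbots built on the same principle at massive scale can sound fluent while still, in effect, completing sentences.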

That’s different from what AI developers — including ChatGPT’s maker, OpenAI, and tech giants like Amazon, Google, IBM, Microsoft and Salesforce — have in mind for AI agents.

“A generative AI-based chatbot will say, ‘Here are the great ideas’ … and then be done,” said Swami Sivasubramanian, vice president of Agentic AI at Amazon Web Services, in an interview this week. “It’s useful, but what makes things agentic is that it goes beyond what a chatbot does.”

Sivasubramanian, a longtime Amazon employee, took on his new role helping to lead work on AI agents in Amazon’s cloud computing division earlier this year. He sees great promise in AI systems that can be given a “high-level goal” and break it down into a series of steps and act upon them. “I truly believe agentic AI is going to be one of the biggest transformations since the beginning of the cloud,” he said.
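The loop Sivasubramanian describes — take a high-level goal, break it into steps, act on each — can be sketched in a few lines. Everything here is a hypothetical stand-in: a real system would call a language model to plan and external services to act, not the hard-coded `plan` and `act` functions below.

```python
# Minimal sketch of an agentic loop: goal -> plan of steps -> actions.
# `plan` and `act` are hypothetical placeholders for illustration only.

def plan(goal):
    """Hypothetical planner: decompose a high-level goal into ordered steps."""
    return [f"research {goal}", f"compare options for {goal}", f"execute {goal}"]

def act(step):
    """Hypothetical tool call: carry out one step and report the result."""
    return f"done: {step}"

def run_agent(goal):
    results = []
    for step in plan(goal):          # a chatbot would stop after suggesting ideas;
        results.append(act(step))    # an agent carries the steps out
    return results

print(run_agent("book a flight"))
```

The distinction the article draws lives in that `for` loop: instead of returning the plan as text and stopping, the system executes each step on the user's behalf.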

For most consumers, the first encounters with AI agents could be in realms like online shopping. Set a budget and some preferences, and AI agents can buy things or arrange travel bookings using your credit card. In the longer run, the hope is that they can do more complex tasks with access to your computer and a set of guidelines to follow.

“I’d love an agent that just looked at all my medical bills and explanations of benefits and figured out how to pay them,” or another one that worked like a “personal shield” fighting off email spam and phishing attempts, said Thomas Dietterich, a professor emeritus at Oregon State University who has worked on developing AI assistants for decades.

Dietterich has some quibbles with certain companies using “agentic” to describe “any action a computer might do, including just looking things up on the web,” but he has no doubt that the technology has immense possibilities as AI systems are given the “freedom and responsibility” to refine goals and respond to changing conditions as they work on people’s behalf.

“We can imagine a world in which there are thousands or millions of agents operating and they can form coalitions,” Dietterich said. “Can they form cartels? Would there be law enforcement (AI) agents?”

“Agentic” is a trendy buzzword based on an older idea

Milind Tambe has been researching AI agents that work together for three decades, since the first International Conference on Multi-Agent Systems gathered in San Francisco in 1995. Tambe said he’s been “amused” by the sudden popularity of “agentic” as an adjective. Previously, the word describing something that has agency was mostly found in other academic fields, such as psychology or chemistry.

But computer scientists have been debating what an agent is for as long as Tambe has been studying them.

In the 1990s, “people agreed that some software appeared more like an agent, and some felt less like an agent, and there was not a perfect dividing line,” said Tambe, a professor at Harvard University. “Nonetheless, it seemed useful to use the word ‘agent’ to describe software or robotic entities acting autonomously in an environment, sensing the environment, reacting to it, planning, thinking.”

The prominent AI researcher Andrew Ng, co-founder of the online learning company Coursera, helped popularize the adjective “agentic” more than a year ago to encompass a broader spectrum of AI tasks. At the time, he appreciated that it was mainly “technical people” who used the word.

“When I see an article that talks about ‘agentic’ workflows, I’m more likely to read it, since it’s less likely to be marketing fluff and more likely to have been written by someone who understands the technology,” Ng wrote in a June 2024 blog post.

Ng didn’t respond to requests for comment on whether he still thinks that.