If you’re feeling confused by generative AI — what it does, how it works, and what it means for jobs — relax. Many of the finest minds in the world are right there with you.
“Nobody knows anything at all,” said Ethan Mollick, an associate professor at The Wharton School, during his Talent Connect talk in October. “It’s really important to recognize that, and it’s a really hard thing to absorb. People keep asking me, ‘What companies are the leaders in this?’ And I’m like, ‘There are no companies that are leaders.’ This came out 10 months ago, right?”
Before the crowd could sigh in relief, Ethan urged them to play with the technology now rather than waiting for a careful roll-out. “The way you find out what to do with this,” he said, “is you use it and you have your people use it and you have a way of your people telling you how they’re using it so you can figure out what it’s good for and what it’s not.”
In a talk that was more intellectual whirlwind than podium lecture, Ethan shared his insights on what gen AI’s good for, what it’s not, and how it’s changing the world — in ways both expected and unexpected.
1. The lowest performers will get the biggest lift from AI
To gauge the impact of gen AI on performance, Ethan and colleagues from Harvard, MIT, and the University of Warwick recently conducted a study at Boston Consulting Group (BCG).
They gave 758 BCG consultants a brief tutorial in GPT-4 and then asked them to perform 18 different consulting-style tasks. The results? Employees who used the AI accomplished 12% more work, completed those tasks 25% faster, and produced 40% higher quality results than those who didn’t.
To put that in perspective, Ethan said that when steam power was introduced to factories in the early 1800s, it increased productivity by 18% to 22%. “That’s a huge difference, right?” he said, referring to gen AI. “As HR people, this should be ringing every bell for you.”
What was most interesting about Ethan’s research is that he and his colleagues found the lowest performers at BCG — “who were still very elite,” he said — got a 43% to 46% boost in performance, while performers at the top experienced only a 17% jump. “We’re seeing this in field after field,” Ethan said, “that AI is boosting the lowest performers up to the 70th or 80th percentile.”
2. The technology is evolving so quickly, current models will soon be obsolete
For those still trying to get a handle on AI, Ethan offered a bit of a history lesson. “First of all, let’s give some context,” he said. “AI is a really confusing term because when we talk about AI, it means something completely different than what it did if you were at this event last year.”
Old school AI, he explained, uses data and predictive modeling to figure out what might happen in the future. That type of AI still exists and plays a vital role in business. But in 2017, he said, Google researchers outlined a new architecture, the transformer, that pays attention to the context of an entire sentence or paragraph to work out what each word means. That architecture became the foundation of large language models (LLMs), effectively kicking off generative AI. “So, if you hear that AI is just fancy auto-complete,” he said, “that’s what they mean.”
The large language model became a reality for the public in November 2022, when OpenAI launched ChatGPT. Since then, the technology has evolved so quickly that current versions could be obsolete by next year. OpenAI recently launched GPT-4 — “which is 10 times better than GPT-3.5,” Ethan said — and Google plans to launch Gemini, its new version of gen AI, soon.
Because of this, Ethan stressed that companies need to use a “frontier model” of gen AI, meaning one that’s at the cutting edge of the technology, such as ChatGPT, Bing, Bard, Gemini, or Claude (created by Anthropic, which recently formed a partnership with Amazon). “This is very important because you need massive amounts of specialized processors to build the most advanced model,” he explained, “and there are really only three or four companies on the entire planet that have the processing capacity to do this.”
3. Smart companies will consider HR their new R&D lab
Normally, when a new technology is introduced, it becomes the domain of the IT team. Ethan thinks that’s a mistake. “Generative AI can’t be centralized,” he said. “It has to be distributed.” Translation: Put it in the hands of the people.
But as employees start to fiddle around with AI, who should guide them? According to Ethan, the best folks are teachers, HR managers, and learning professionals. “That’s because generative AI works more like people,” Ethan said. “It’s terrible software, but good people.”
When you use search engine software to find information, for example, you usually get predictable results: a variety of articles and research on a certain subject. But when you interact with gen AI, it’s more like a conversation — and who hasn’t had a conversation that’s gone in unexpected directions?
“You have to get a sense of where ChatGPT’s head is at and when it makes a mistake, you need to redirect it,” he said. “It’s about guiding and helping and developing, rather than ordering and instructing. People who are really good at that kind of thing [like learning professionals or HR managers] can have a pretty good intuition for how the AI works.”
This is why he believes that “HR is the new R&D lab inside companies now.”
4. Employees are going to use gen AI — but in different ways
Ethan told the crowd that it doesn’t really matter whether you allow the use of generative AI at your company or not: Your employees are going to use it anyway. “If you ban ChatGPT,” he said, “everyone’s just going to use it on their phones.”
People are likely to use gen AI in one of two ways: as “centaurs” or as “cyborgs.” A centaur, he explained, is a mythological half-human, half-horse, and a good analogy for the tasks in which people divide the work between themselves and AI. Someone might be good at writing, for example, but lousy at math. In that case, they’d handle the writing and enlist AI’s help with the numbers.
A cyborg, by contrast, integrates AI into bits of every project. To illustrate this, Ethan said that when he was writing a book recently, he leaned into AI for help completing sentences.
Both centaurs and cyborgs can be what Ethan calls “secret cyborgs,” employees who are using AI on the job and not telling anyone. How should companies handle this? “You should just say, ‘We’re not going to fire anyone for two years because they’ve been using or could be replaced by generative AI,’” he said. “‘Instead, over the next 12 weeks, we’re going to give a million dollars cash at the end of every week to whoever comes in with the best prompt and gives it to us.’” Companies, he said, could see a huge boost in profit and productivity with this approach.
Final thoughts: Welcome to the Jagged Frontier
Most of us are familiar with the “wild frontier” or the “digital frontier.” But “jagged frontier”?
Welcome to the volatile new world of gen AI. “What that means,” Ethan explained, “is that AI has weird abilities and lack of abilities that don’t match what we expect humans to do.”
To grasp this concept, consider a wall that surrounds a fortress. Here and there, it may jut into the countryside. Inside the wall are all the things that AI can do. Outside are the things it can’t. The problem is that the wall is invisible. No one really knows what it’s capable of.
It’s easy, for example, for ChatGPT to write a sonnet. But a paragraph of exactly 50 words? Outside its capabilities. “A 50-word paragraph is really hard,” Ethan said, “because it doesn’t see the words, or count the words, the way we do.”
That’s why you need to pay attention, learn how gen AI works, and carefully check its work. In other words, trust your gut — which is more valuable than ever. “When you’re dealing with the jagged frontier,” Ethan said, “you can’t fall asleep at the wheel.”