Originally published in Forbes on April 27, 2023, as a Forbes Technology Council post.
The unprecedented rise of AI-powered chat tools like ChatGPT and Bing is astonishing, and these tools are set to upend many aspects of our lives. None other than Bill Gates hailed generative AI (GenAI) tools as the most important advance in technology since the introduction of the graphical user interface in the early 1980s.
The potential of chatbots as productivity-enhancing tools raises interesting and challenging questions about the definition of “work” and how we value it: Is an employee who uses AI doing less work? Or are they now doing more work, better and faster? Does that mean you pay them less? Or more? How do you know if they’re using AI tools at all? What are the policies for when they do or don’t?
Brave New World?
There is widespread concern that large language models (LLMs) like ChatGPT are coming for our jobs. But in focusing on worst-case scenarios, we’re missing a more subtle point: What does the adoption of LLMs mean for how we define and understand employee performance and productivity?
It’s easy to identify the jobs that will be affected; the more repeatable, the more automatable. Clerical and other service functions, particularly those that involve responding to customer inquiries, come to mind. These jobs have already been impacted by developments like robotic process automation (RPA). With LLMs, those roles are at even greater risk.
No longer is an RPA bot playing “Mad Libs” in response to a customer inquiry, filling in discrete fields of a pre-created form. An LLM can now craft a human-like response to each question, complete with an appropriate emotional tone. For this repeatable work, AI will only make the output better, faster and easier to produce. The math is pretty simple.
The Role Of Knowledge Workers
The equation gets more complicated with high-level knowledge workers, whose qualities of creativity and judgment make them much harder to quantify and measure.
Few roles epitomize “knowledge work” quite like software development, the target of GitHub Copilot, an AI tool launched last summer. Copilot promises to speed software development by drawing context from comments and code as they are written and offering suggestions for whole lines of code or entire functions. A study by GitHub found that developers who used the tool completed some tasks in half the time, and 60% of users reported feeling more fulfilled in their jobs because they could focus on more satisfying tasks.
Microsoft is now pushing Copilot into the Office suite, promising further disruption to knowledge workers across a much wider variety of roles and industries. Already, several professions, including law and investment banking, stand to be significantly affected by the addition of AI tools to established workflows. It is not hard to imagine an AI providing high-quality redlines to a run-of-the-mill commercial agreement or creating a standard financial model and associated PowerPoint presentation.
However, it is almost impossible to imagine any legal or banking client being comfortable moving forward with that output without the thorough review of a human expert. The value of AI tools depends heavily on two things: the quality of the prompts and search queries, and human review of the results.
The Role Of Leadership Teams
GenAI tools have come of age, and despite the efforts of some, there is likely no putting the genie back in the bottle. The question is how to take advantage of the genie’s powers while avoiding unintended consequences.
• First, protect your IP: Right now, every query helps train these models to deliver better results, which also means every query is captured and stored. Sensitive data, whether company secrets or personally identifiable information (PII), should not be shared with LLMs.
• Be careful how it’s used: Raw output from GenAI tools is relatively easy to identify (for now) and thus may strike recipients as insincere. Using a GenAI tool to plumb the entire corpus of human knowledge to craft a great message is a good idea, but as with all GenAI results, it is important that a human reviews, edits and approves the final copy.
• Take a balanced approach: Like much in life, moderation is key. Almost all employees will improve their productivity by leveraging GenAI in some aspect of their work, while none should turn their job over to it. Train your employees when and where to use GenAI to improve their personal productivity and follow up to check proper usage.
• Experiment, iterate, learn, share: We are in uncharted territory. Create an open dialog within your team or company on great uses of GenAI to share and reinforce learnings, while providing corrective nudges to limit use cases that are out of bounds.
• Measure the impact: As employees adopt LLMs, some teams may get more done with fewer people. This creates opportunities to increase output, reassign staff to other activities or even explore novel ideas like the four-day work week.
Surviving In The Age Of AI
AI tools augment employees’ own expertise and insight, but human expertise and judgment will ultimately ensure that AI results are meaningful, accurate and relevant to the task at hand.
The employee—whether executive, manager, developer or customer service rep—who can better guide an AI tool will get better results and create more value, faster. Human judgment has not been replaced. In fact, it will be even more valuable, and available, now that people can spend less time “doing” and more time “deciding.”
Finally, it is important to note that these AI models are trained on large data sets of human-generated data. They are unlikely to be able to invent, in the purest sense of the word. That too will remain a human endeavor.