AI improves knowledge worker results, including productivity and quality, according to the findings of a 2023 US research study. As professional communicators, we are knowledge workers as well, and the results of this study point to potential benefits of using AI for relevant communication tasks.
Researchers from Harvard Business School conducted the project with 758 consultants from the global management consulting firm Boston Consulting Group (7% of its workforce), with impressive results. They said they examined the “performance implications of AI on realistic, complex, and knowledge-intensive tasks.”
The researchers found that “some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI.”
As knowledge workers ourselves, professional communicators need to call on knowledge of current affairs, including business trends, to address emerging issues our organizations are facing. We need to find communicative ways to help solve internal organizational problems such as employee engagement, and to use our professional knowledge to develop creative ways of communicating effectively. Unlike advertising agencies, we can’t simply call on ‘creatives’ to come up with creative solutions; we need to use our own knowledge as the basis for being creative ourselves.
Participants were randomly assigned to one of three conditions:
- No AI access
- GPT-4 access
- GPT-4 access with a prompt engineering overview
The study’s abstract said, “For each one of a set of 18 realistic consulting tasks within the frontier of AI capabilities, consultants using AI were significantly more productive (completing 12.2% more tasks on average, and completing tasks 25.1% more quickly), and produced significantly higher quality results (more than 40% higher quality compared to a control group).” Impressive!
Consultants across the skills distribution benefited significantly from AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores.
However, for a task selected to be ‘outside the frontier,’ consultants using AI were 19% less likely to produce correct solutions compared to those without AI. In other words, the tasks had to suit the capability of AI.
AI use in professional communication
Digital strategist Bruno Amaral wrote an excellent article in February 2024, “AI Adoption in Public Relations – How it started and how it’s going.” He gives good context on the adoption of AI by communication professionals. This includes work by Stephen Waddington, who concludes that AI can be useful in the following areas:
- Production of text and images
- Editing and summarizing
- Assessment and model creation
- Planning and decision making
Also, Waddington published guidelines on how to use ChatGPT to follow all the steps of writing a press release.
He adapted the AI tools to the four stages identified by Cutlip et al. in their book Effective Public Relations (2000):
- Research
- Planning
- Action / Communication
- Evaluation
Image, opposite: 2024 Global Comms Report, Cision. The chart reflects the perspective of 427 senior-level professionals across 10 countries: U.S., Canada, France, Germany, Sweden, U.K., Australia, China, Hong Kong, and Singapore.
If AI improves knowledge worker results, why aren’t we using more AI tools in professional communication?
As shown in the Harvard Business School study above, AI improves knowledge worker results, and the study’s participants used skills very similar to those of communication professionals. So why has our profession been slow to adopt AI? Amaral suggests the following possible reasons for comms pros not using AI as much as they could:
- People in PR/comms don’t have access to these tools or know how to use them.
- We still haven’t set up a process for AI use in-house and in agencies.
- AI doesn’t provide much added value.
- AI still doesn’t have the level of quality that humans provide.
- We’re at the Peak of Inflated Expectations on Gartner’s Hype Cycle. [See below.]
- PR is a profession where sensitivity to stakeholders, publics, and context can’t be communicated to Large Language Models (LLMs).
1. People in PR don’t have access to AI tools
This isn’t really an issue: there is an abundance of tools. The real problem is that some of them promise more than they deliver, and some will fade away, so there is a risk in relying on them too heavily.
2. We still don’t have a process for AI in professional communication
Comms already has a process, from research to evaluation of results. We can try to fit the tools to that process, or see where AI can improve the output for our organization or client. For example, we can write a press release for a generic audience, then have AI create variations for different audiences or scan it to extract a fact sheet.
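As a minimal sketch of the "one release, many audiences" idea: the function below only builds the instruction text for an LLM, leaving the model call itself abstract (any chat-style API could be plugged in). The function and variable names are illustrative, not from any specific tool.

```python
# Sketch: turning one generic press release into audience-specific LLM prompts.
# Only the prompt construction is shown; the actual model call is left out.

def build_variation_prompt(press_release: str, audience: str) -> str:
    """Build an LLM prompt asking for an audience-specific rewrite."""
    return (
        f"Rewrite the following press release for {audience}. "
        "Keep all facts unchanged; adjust tone, emphasis, and jargon.\n\n"
        f"{press_release}"
    )

audiences = ["trade journalists", "investors", "internal staff"]
release = "ACME Corp. today announced its new recycling programme..."

# One prompt per audience, each ready to send to a chat model.
prompts = [build_variation_prompt(release, a) for a in audiences]
```

The same template approach works for the fact-sheet case: swap the rewrite instruction for an extraction instruction and keep the release text unchanged.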
3. There’s not enough added value yet
Fiction made us believe AI would replace humans; the general response to ChatGPT’s launch in PR has been that AI will take jobs away from people who don’t use AI to do their jobs better or faster.
Yet, with all the biases and the uncertain quality of AI output, it is only natural that we don’t feel the added value yet. We are still not at the level of an AI assistant that executes complex tasks on demand, and we won’t be there anytime soon, according to Bruno Amaral. He has seen some experiments with Auto-GPT, an autonomous AI that runs a series of tasks in an attempt to reach a described result; from his experiments with the open-source version, it is still at a very early stage.
As the Harvard Business School study of a similar profession showed, AI improves knowledge worker results emphatically, so we should work with more intent to explore the ways AI can make professional communication more effective.
4. AI still doesn’t have the level of quality humans provide
AI image generation still can’t draw hands, it gets confused by cables, and some images are just too perfect to be realistic. Amaral says in his article that, from his experience, the same happens if you give ChatGPT a description of code you want it to produce: descriptions that are too long are more prone to errors. The best approach is to provide small requirements, test, provide feedback, and keep every working version saved so you can go back and try other requirements.
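That "small requirements, test, keep working versions" workflow can itself be sketched as a loop. Here the LLM call is a hypothetical `generate` callable passed in by the caller, and `passes_tests` stands in for whatever check the user runs; only the control flow is the point.

```python
# Sketch of the iterative workflow Amaral describes: apply one small
# requirement at a time and only keep versions that pass the user's tests.

def refine(code: str, requirements: list, generate, passes_tests) -> str:
    """Apply requirements one at a time, keeping the last version that works."""
    last_good = code
    for req in requirements:
        candidate = generate(last_good, req)
        if passes_tests(candidate):
            last_good = candidate  # save the working version before moving on
        # otherwise discard the candidate; the next attempt starts from last_good
    return last_good

# Toy stand-ins to show the control flow (no real LLM involved):
fake_generate = lambda code, req: code + f"\n# implemented: {req}"
always_ok = lambda code: True

result = refine("# base script", ["parse input", "write report"],
                fake_generate, always_ok)
```

The key design choice is that a failed candidate never overwrites `last_good`, which is exactly the "keep any working version saved" advice.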
5. We’re at the Peak of Inflated Expectations on the Hype Cycle
This argument is hard to deny; Gartner presents the Hype Cycle on its website.
6. PR is a profession where sensitivity to stakeholders, publics, and context can’t be communicated to Large Language Models
IBM says that large language models (LLMs) are a category of foundation models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks:
LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amount of data used to train them. They have the ability to infer from context, generate coherent and contextually relevant responses, translate to languages other than English, summarize text, answer questions (general conversation and FAQs) and even assist in creative writing or code generation tasks.
Here is a list from IBM’s LLM web page that notes some of the most important areas where LLMs benefit organizations:
- Text generation: language generation abilities, such as writing emails, blog posts or other mid-to-long form content in response to prompts that can be refined and polished. An excellent example is retrieval-augmented generation (RAG).
- Content summarization: summarize long articles, news stories, research reports, corporate documentation and even customer history into thorough texts tailored in length to the output format.
- AI assistants: chatbots that answer customer queries, perform backend tasks and provide detailed information in natural language as a part of an integrated, self-serve customer care solution.
- Code generation: assists developers in building applications, finding errors in code and uncovering security issues in multiple programming languages, even “translating” between them.
- Sentiment analysis: analyze text to determine the customer’s tone in order to understand customer feedback at scale and aid in brand reputation management.
- Language translation: provides wider coverage to organizations across languages and geographies with fluent translations and multilingual capabilities.
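To make one of these areas concrete, the sentiment-analysis case can be sketched as a prompt plus a parsing step. The model call is stubbed out with a fake reply; the function names are illustrative, not from IBM’s tooling.

```python
# Sketch: sentiment analysis via an LLM, with the model call stubbed out.
# The prompt constrains the model to a fixed label set so the reply is parseable.

def sentiment_prompt(feedback: str) -> str:
    """Build a prompt asking the model for exactly one sentiment label."""
    return (
        "Classify the sentiment of this customer feedback as exactly one of "
        "POSITIVE, NEGATIVE, or NEUTRAL.\n\n" + feedback
    )

def parse_label(model_reply: str) -> str:
    """Pick out the first recognised label in the model's reply."""
    for label in ("POSITIVE", "NEGATIVE", "NEUTRAL"):
        if label in model_reply.upper():
            return label
    return "UNKNOWN"

# Stand-in for a real model reply to the prompt above:
label = parse_label("Sentiment: negative")
```

Constraining the output to a small label set, then parsing defensively, is what makes this usable "at scale" as IBM’s list suggests: thousands of feedback items can be classified and aggregated without a human reading each reply.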
The question remains as to how we can further adapt LLMs to the greater sophistication needed for human relationships, interactions and context.
How can we accelerate the adoption of AI tools?
Amaral believes we first need to do a better job at identifying the concrete needs of the PR profession. Where are our pain points? Once those are identified we can look at the existing tools and filter out those that are mere interfaces to OpenAI’s tools. Most of all, he feels we need to share more with the community:
This should not be a race to find the best tool and gain competitive advantage; there is always a better tool around the corner, a new shiny object. My belief is that by sharing more we can provide better service, grow the profession, and make it rise to the C-level of companies.
- Research findings published in July 2023 revealed that participants using ChatGPT were able to write business documents faster and at better quality. The improvements rest on a person checking the AI-generated text, then editing and correcting it, according to Jakob Nielsen, an international expert in user experience (UX) techniques.
- My article, “ChatGPT lifts workplace writing productivity and quality,” also discusses the positive way AI improves knowledge worker results, with strong implications for our communication profession.