Artificial intelligence (AI) is being used at a rapidly increasing rate. But one of the pitfalls of AI output is the quality and accuracy of the resulting information: garbage in becomes garbage out, and relying on that output uncritically is a big mistake. What can you do about this?
AI tools like the ChatGPT chatbot are least useful when we ask them general questions and simply hope their answers are true. When the information sources for an AI tool are of good quality, such as credible websites, reliable survey data, and research papers, the results will be of high quality too.
However, experts say that if you don’t carefully specify where the source data should come from, as much as 70% of what you get back will be inaccurate or out of date, and therefore likely to lack relevance.
On the other hand, when you ask chatbots to work with specific data, they generate quality answers and useful advice.
Directing chatbots to specific news and business sources also helps reduce the production and spread of misinformation: high-quality media outlets such as The New York Times and The Washington Post, reliable business websites such as Gallup and McKinsey, and academic publications such as the Harvard Business Review and the MIT Sloan Management Review.
Comms industry data should always be checked for validity
We need to assess the quality of many business surveys before getting chatbots to source information from them. Many surveys supposedly relevant to the PR industry, for instance, don’t stand up to scrutiny. Results from a recent “global” survey of 2,000 respondents were dubious because they were spread across 40 countries. That averages only 50 respondents per country, all of them recruited online, so the results need to be treated very cautiously.
Provoke Media recently conducted such a survey: “A survey of 406 communication professionals across the globe was conducted in March 2023.” The survey “covered every corner of the globe,” with “406 communicators residing across five continents,” but the report didn’t specify the number of respondents in each country. We can, however, draw our own broad conclusions from the regional percentages in the report’s chart:
Asia-Pacific 23% x 406 = 93 respondents
Europe 26% x 406 = 106 respondents
North America 42% x 406 = 171 respondents
Do they really think totals of 93 to 171 respondents for these three major regions represent a sound database on which to base a range of broad international conclusions?
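For anyone who wants to reproduce this arithmetic, here is a minimal sketch in Python that recomputes the regional counts from the 406 total and the percentages quoted above, plus the per-country average from the earlier 2,000-respondent example. The figures come straight from the report as cited; rounding to whole respondents is my own assumption.

```python
# Sanity check of the survey arithmetic quoted above.
# The 406 total and regional percentages are taken from the Provoke Media
# report as cited in the text; rounding to whole respondents is assumed.

total_respondents = 406
regional_share = {
    "Asia-Pacific": 0.23,
    "Europe": 0.26,
    "North America": 0.42,
}

for region, share in regional_share.items():
    # e.g. Asia-Pacific: 0.23 * 406 = 93.38, rounded to 93 respondents
    print(f"{region}: {round(share * total_respondents)} respondents")

# The earlier "global" survey example: 2,000 respondents across 40 countries.
print(f"Average per country: {2000 // 40} respondents")
```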
A trusted source to check AI data
Brian X. Chen, the New York Times personal tech columnist, said in a July 2023 article that after testing many AI tools, he concluded that for research it was crucial to rely on trusted sources and to quickly double-check the data for accuracy. He eventually found a free web app that delivered the quality he was after: Humata.AI, which has become popular among academic researchers and lawyers.