The Trust Factor

A.I. chatbots like ChatGPT are a long way from being trustworthy

By Eamon Barrett
March 31, 2023, 2:33 PM ET
OpenAI CEO Sam Altman has warned that GPT-4 is 'still flawed' and less impressive than it first seems. (David Paul Morris—Bloomberg/Getty Images)

Good morning, and welcome to the April run of The Trust Factor, where we're looking at the issues surrounding trust and A.I. If artificial intelligence is your bag, sign up for Coins2Day's Eye on A.I. newsletter here.

Earlier this month, OpenAI, the Microsoft-affiliated artificial intelligence lab, launched an updated version of ChatGPT, the A.I.-powered chatbot that took the internet by storm late last year. The new version, GPT-4, is "more reliable, creative, and able to handle much more nuanced instructions" than its predecessor, OpenAI says.

But as the "reliability" and creativity of chatbots grow, so too do the issues of trust surrounding their application and output.

NewsGuard, a platform that provides trust ratings for news sites, recently ran an experiment in which it prompted GPT-4 to produce content in line with 100 false narratives (such as producing a screed claiming Sandy Hook was a false flag operation, in the style of Alex Jones). The company found GPT-4 "advanced" all 100 false narratives, whereas the earlier version of ChatGPT had refused to respond to 20 of the prompts.

“NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently, but also more persuasively than ChatGPT-3.5, including in responses it created in the form of news articles, Twitter threads, and TV scripts,” the company said. 

OpenAI’s founders are well aware of the technology’s potential to amplify misinformation and cause harm, but executives have, in recent interviews, taken the stance that their competitors in the field are a greater cause for concern.

"There will be other people who don't put some of the safety limits that we put on it," OpenAI cofounder and chief scientist Ilya Sutskever told The Verge last week. "Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it."

Some groups have already begun to push back against the perceived threat of chatbots like ChatGPT and Google's Bard, which the tech giant released last week.

On Thursday, the U.S.-based Center for AI and Digital Policy (CAIDP) filed a complaint with the Federal Trade Commission, calling on the regulator to "halt further commercial deployment of GPT by OpenAI" until guardrails are in place to stop the spread of misinformation. Across the water, the European Consumer Organisation, a consumer watchdog, called on EU regulators to investigate and regulate ChatGPT, too.

The formal complaints landed a day after over 1,000 prominent technologists and researchers issued an open letter calling for a six-month moratorium on the development of A.I. systems, during which time they expect "A.I. labs and independent experts" to develop a system of protocols for the safe development of A.I.

"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?" the signatories wrote.

Yet, for all the prominent technologists signing the letter, other eminent researchers lambasted the signatories' hand-wringing, calling them out for overhyping the capabilities of chatbots like GPT. That points to the other issue of trust in A.I. systems: they aren't as good as some people believe.

"[GPT-4] is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it," OpenAI cofounder and CEO Sam Altman said in a tweet announcing the release of GPT-4.

Chatbots like GPT have a well-known tendency to “hallucinate”—which is industry jargon for a tendency to make stuff up or, less anthropomorphically, to return false results. Chatbots, which use machine learning to deliver the most likely response to a question, are terrible at solving basic math problems, for instance, because the systems lack computational tools. 

Google says it has designed its chatbot, Bard, to encourage users to second-guess and fact-check the answers it returns. If Bard gives an answer users are unsure of, they can easily cycle between alternative answers or use a button to "Google it" and browse the web for articles or sites to verify the information Bard provides.

So for chatbots to be used safely, genuine human intelligence is still needed to fact-check their output. Perhaps the real issue surrounding trust in A.I. chatbots is not that they're more powerful than we know, but less powerful than we think.

Eamon Barrett
[email protected]

IN OTHER NEWS

Pause for thought
As I mentioned above, not everyone is on board with the proposal that leaders in A.I. development should take a six-month pause in research and use that time to reflect deeply on how and why A.I. systems should be developed at all. Here, Coins2Day's David Meyer outlines several of the key arguments against a six-month hiatus.

In business we trust?
A new survey from PwC (a sponsor of this newsletter) finds there remains a massive gap between how companies perceive their own trustworthiness and how much consumers actually trust them. According to the company’s report, while 84% of the executives believe consumers trust their companies, only 27% of consumers agree. And while 79% believe employee trust is high, only 65% of employees agree.

Hush money
A Manhattan grand jury has indicted former President Donald Trump on charges that he paid a porn star hush money to cover up an extramarital affair. The allegations first surfaced during Trump's 2016 presidential bid, and the case is just one of a litany of legal complaints surrounding the former president, whose indictment makes him the first former president to face a criminal charge. Trump has dismissed the indictment as "political persecution."

A good layoff?
Jose Ramos, who was among the 11,000 workers Meta laid off last November, thinks the Facebook parent company executed the mass firing flawlessly. "The communication was very respectful. I would say even humane—even though it was bad news. They were telling us exactly what was going to happen," Ramos tells Coins2Day's Megan Leonhardt in a feature documenting what executives can learn from the sweep of job cuts in the tech sector these past six months.

TRUST EXERCISE

“Penta’s Four Corners provides leaders with the map required to navigate an increasingly complex business environment and develop the trust among their stakeholders that is necessary to achieve the company’s goals.”

So says Penta president Matt McDonald in a Coins2Day op-ed on how companies should map out their key "stakeholder" groups to effectively manage the demands and needs of each.
