News outlet CNET said on Wednesday that it had found errors in articles generated using an artificial intelligence-powered tool.
According to CNET Editor in Chief Connie Guglielmo, the outlet has since paused use of the AI tool for article production.
The disclosure comes at a time when a new AI chatbot, ChatGPT, is going viral.
Guglielmo, however, said the news outlet had been using an “internally designed AI engine,” not ChatGPT, to help write 77 stories published since November.
She said this amounted to about 1% of the total content published on CNET during the same period and was done as part of a “test” project for the CNET Money team “to help editors create a set of basic explainers around financial services topics.”
“Editors generated the outlines for the stories first, then expanded, added to and edited the AI drafts before publishing,” Guglielmo wrote. “After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit.”
The result of the audit, she said, was that CNET identified additional stories that required correction, “with a small number requiring substantial correction.” CNET also identified several other stories with “minor issues such as incomplete company names, transposed numbers, or language that our senior editors viewed as vague.”
A separate correction suggests the AI tool may also have plagiarized material.