CNET’s AI-Assisted Articles Are Making Big Mistakes
A 163-word correction was recently issued for an article written by an AI engine
The tech site CNET started publishing explainers and short articles written by an artificial intelligence engine a few weeks ago, and the machines are making some alarming mistakes.
According to Vice, CNET posted an article titled “What Is Compound Interest?” on Jan. 12, attributed to “CNET Money” (later revealed to be an AI engine) and supposedly edited by real-life humans. Four days after publication, this remarkable correction was added:
“Correction, 1:55 p.m. PT Jan. 16: An earlier version of this article suggested a saver would earn $10,300 after a year by depositing $10,000 into a savings account that earns 3% interest compounding annually. The article has been corrected to clarify that the saver would earn $300 on top of their $10,000 principal amount. A similar correction was made to the subsequent example, where the article was corrected to clarify that the saver would earn $304.53 on top of their $10,000 principal amount. The earlier version also incorrectly stated that one-year CDs only compound annually. The earlier version also incorrectly stated how much a consumer would pay monthly on a car loan with an interest rate of 4% over five years. The earlier version also incorrectly stated that a savings account with a slightly lower APR, but compounds more frequently, may be a better choice than an account with a slightly higher APY that compounds less frequently. In that example, APY has been corrected to APR.”
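The arithmetic at issue is simple enough to check. Here is a quick sketch of the corrected figures; note that the daily compounding frequency in the second example is an inference from the $304.53 figure, since CNET’s correction doesn’t spell out how that example compounds:

```python
def compound_interest(principal, annual_rate, periods_per_year, years=1):
    """Interest earned on top of the principal with periodic compounding."""
    growth = (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
    return principal * growth - principal

# $10,000 at 3% compounded annually earns $300 in interest after one year,
# not $10,300 as the original article claimed ($10,300 is the new balance).
print(round(compound_interest(10_000, 0.03, 1), 2))    # 300.0

# The corrected $304.53 figure is consistent with daily compounding at 3%.
print(round(compound_interest(10_000, 0.03, 365), 2))  # 304.53
```

This is also why the APY/APR swap mattered: APY already folds the compounding frequency into the quoted rate, so comparing one account’s APY against another’s APR, as the original article reportedly did, isn’t an apples-to-apples comparison.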
Futurism uncovered this AI writing trend earlier this month, though the practice seems to have started “quietly” in November. According to that publication, the 70+ AI-assisted articles seem optimized for search engines.
CNET’s editor-in-chief Connie Guglielmo explained the company’s reasoning behind the AI articles on Jan. 12, the same day the compound interest article appeared. (Her post also appears to have been updated on Jan. 16, the day that article’s major correction was issued.)
“[Our] goal: to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective,” she wrote. “Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions? Will this enable them to create even more deeply researched stories, analyses, features, testing and advice work we’re known for?”
But if these articles are indeed “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish” as Guglielmo suggests, how did so many errors appear in a fairly straightforward financial explainer? A statement sent to Vice suggested that the company is “actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process, as humans make mistakes, too,” which essentially puts as much blame on the human editor as the machine. Whoever is to blame, this is not a good way to build trust in artificial intelligence.