People Are Using Artificial Intelligence to Flood the Internet With Inaccurate Content

Published: August 13, 2024
This picture taken on Jan. 23, 2023 in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT. ChatGPT is a conversational artificial intelligence software application developed by OpenAI. (Image: LIONEL BONAVENTURE/AFP via Getty Images)

With the emergence of artificial intelligence (AI) platforms like OpenAI’s GPT-4o, Google’s Gemini, and a seemingly endless number of other AI tools, the internet is being flooded with inaccurate content that blurs the line between truth and fiction.

Particularly alarming is a flood of junk content making its way into scientific journals, as described in a recent article published on Phys.org.

Some of the studies are so blatantly wrong as to seem like pranks. Many open with the telltale phrase, “Certainly, here is a possible introduction for your topic.”

As with many fields, AI is helping scientists cut down on time and effort in their work. But the gaffes come as a reminder that the output needs to be thoroughly vetted by human eyes before being publicized.

One AI-generated infographic of a rat with preposterously large genitals was actually published by the well-known and established academic publisher Frontiers before being retracted.

Another study was retracted from a legitimate scientific publisher after it was found to include an obviously AI-generated graphic showing legs with odd, multi-jointed bones that resembled hands.

ChatGPT the culprit

While these two examples involve images, text generated by the wildly popular ChatGPT, launched in November 2022, is believed to be the primary vehicle for inaccurate content, whether produced intentionally or otherwise.

Careless mistakes are highlighting the problem. A study recently published by Elsevier, a renowned scientific publisher, began with the words, “Certainly, here is a possible introduction for your topic.”

Unfortunately for Elsevier, the study went viral before it was noticed and retracted. 

According to Andrew Gray, a librarian at University College London who analyzed a number of papers for AI influence, the problem of AI-generated content is enormous.

According to Phys.org, “He determined that at least 60,000 papers involved the use of AI in 2023 — over one percent of the annual total.”

Gray found this influence after searching for words that AI platforms tend to overuse, such as “meticulous,” “intricate,” or “commendable.” 
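Gray’s approach can be illustrated with a minimal sketch: scan a text for words AI models are said to overuse and count the hits. The word list here is assumed from the three examples quoted in the article; Gray’s actual analysis used a larger vocabulary and corpus-level statistics.

```python
import re
from collections import Counter

# Words the article says AI platforms tend to overuse.
# This short list is illustrative; the real analysis used many more.
TELLTALE_WORDS = {"meticulous", "intricate", "commendable"}

def flag_telltale_words(text: str) -> Counter:
    """Count occurrences of suspected AI-overused words in a text."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in TELLTALE_WORDS)

sample = ("We performed a meticulous analysis of this intricate "
          "dataset, and the results are commendable.")
print(flag_telltale_words(sample))
# → Counter({'meticulous': 1, 'intricate': 1, 'commendable': 1})
```

A raw count like this is only a weak signal on its own; a study such as Gray’s compares word frequencies against pre-ChatGPT baselines to estimate how many papers were likely AI-assisted.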

He anticipates that the problem will only grow in 2024. 

According to U.S.-based Retraction Watch, an organization that reports on scientific paper retractions, over 13,000 papers were retracted in 2023, the most on record.

AI ruining the internet

According to a yet-to-be-peer-reviewed paper published by Google researchers, the misuse of artificial intelligence is creating vast amounts of fake content.

Images, videos, and text are all either being manipulated by AI or created out of thin air and then being published online as legitimate and human-made. 

The researchers say that users are harnessing AI tech to “blur the lines between authenticity and deception.”

Ironically, in analyzing a number of reports on generative AI, the researchers found some 200 AI-generated news articles that themselves reported on the misuse of generative AI to publish bogus articles.

The researchers concluded, “Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse. Most of these were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit.”

The problem is only going to get worse as AI tools rapidly advance, and internet users will need increasingly sophisticated detection tools to distinguish AI-generated content from legitimate content produced by a human being.

The researchers warn that “the mass production of low quality, spam-like and nefarious synthetic content” overloads users with the task of verification.

This “risks increasing people’s skepticism towards digital information altogether.”