Report: 90% of Online Content May Be AI-Generated Garbage by 2026
A recent report from Europol warns that by 2026, as much as 90 percent of online content could be generated by artificial intelligence, raising concerns that, in a few short years, the web could be even more jammed with useless garbage than it is today.
Futurism reports that a recent study by Europol suggests that by 2026, up to 90 percent of online content could be artificially generated. This staggering figure has sent ripples through various sectors, from journalism and art to technology and law enforcement. Synthetic media, which refers to content generated or manipulated using artificial intelligence, is not a new phenomenon. However, its rapid proliferation has raised eyebrows and concerns alike.
“In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report states. While AI-generated content has its merits — such as enhancing user experience in gaming or streamlining customer service — it also opens the door to more nefarious uses. “The increase in synthetic media and improved technology has given rise to disinformation possibilities,” the report adds.
The report states: “On a daily basis, people trust their own perception to guide them and tell them what is real and what is not. Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?”
The report also raises existential questions for artists, writers, and other content creators. In a world increasingly dominated by AI-generated content, what is the role of human creativity? Will artists and writers adapt to this new landscape, or will they be overshadowed by algorithms that can produce content at scale?
Breitbart News previously reported on Amazon removing AI-generated “garbage books” which falsely use the real names of authors.
Decrypt reports that when professor Jane Friedman discovered books she didn’t write being attributed to her on Amazon, she was met with initial resistance from the e-commerce giant, which did not want to remove the bogus titles from sale. The titles, which Friedman referred to as “garbage books,” were likely created using generative AI and included guides like “Your Guide to Writing a Bestseller eBook on Amazon,” “Publishing Power: Navigating Amazon’s Kindle Direct Publishing,” and “Promote to Prosper: Strategies to Skyrocket Your eBook Sales on Amazon.”
Friedman’s complaints to Amazon were initially met with a refusal to remove the listings, as she could not prove that she owned the trademark on her own name. Friedman said that after she admitted she was unable to demonstrate ownership of a trademark on her own name, Amazon told her the books would remain available for purchase.
Breitbart News will continue to report on the emergence of AI content generation.
I would guess 99% of the internet is already garbage written by humans so I don't think we will even notice!
Google is paying a small army of work from home folks to 'classify content' for AI interpretation. I'm going to try to land a job as one of them just to see exactly what they are doing...
I found this list of AI disasters in a very sneaky article by consulting giant Boston Consulting Group ($11B annual revenue):
Generative AI (Artificial Intelligence)
Source: https://www.bcg.com/x/artificial-intelligence/generative-ai
An excerpt that was hidden by including it as drop-down content that would not show up on the Wayback Machine:
On the page, the reader must click the “(+ or -)” next to “THE ETHICAL ISSUES TIED TO GENERATIVE AI GOVERNANCE” to show the drop-down list of horrific ethical and operational issues possible with AI.
Compiler’s Note: This title is actually not a title. It is the location of a drop-down list that is kept separate from the rest of the text so that it will not show up on the Wayback Machine later on, or in screenshots or site scrapes. Here is the list of nasty possible outcomes from AI that BCG does not want you to see:
“THE ETHICAL ISSUES TIED TO GENERATIVE AI GOVERNANCE
As users experiment with these systems, there are serious ethical issues that need to be addressed:
1) Unknown Capabilities. Large generative AI systems such as ChatGPT have exhibited a massive capability overhang—skills and dangers that are not planned for in the development phase, and are generally unknown and unexpected even to the developers. This can pose a serious threat if the right guardrails are not in place to effectively manage unexpected usage.
2) Bias and Toxicity. Outputs from generative AI will be as biased as the data it is trained on. Many popular language models today are trained on the wilds of the internet, where there is plenty of bias—along with toxic language and ideas.
3) Data Leakage. Many companies have quickly put policies in place to forbid employees from entering sensitive information into ChatGPT, fearing that it could get incorporated into the AI model and re-emerge in public.
4) Hallucination. ChatGPT can make arguments that sound extremely convincing but are 100% wrong. Developers refer to this as “hallucination,” a potential outcome that limits the reliability of the answers coming from AI models.
5) Lack of Transparency. Generative AI models currently provide no attribution for the facts underlying the content they generate, which makes it impossible to verify the correctness of generated claims—further increasing the danger posed by AI-model hallucinations.
6) Copyright Controversies. Since the data sets used by AI models are derived from the public internet, a legal question arises: Does the content those models create amount to the duplicating of copyrighted works?”
[Compiler’s Note: This entire section needs a detailed article of its own. To include it as ‘hidden’ drop-down text in BCG’s main promotional article for generative Artificial Intelligence and Machine Learning programs says it all. The people behind the promotion of this technology are very aware of the hell they are potentially unleashing: uncontrollable AI.]