AI Appears to Be Slowly Killing Itself

As Aatish Bhatia writes for The New York Times, a growing body of research shows that training generative AI models on AI-generated content causes them to degrade. In short, training on AI output creates a flattening cycle similar to inbreeding; the AI researcher Jathan Sadowski dubbed the phenomenon “Habsburg AI” last year, a reference to Europe’s famously inbred royal family.
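
To make the idea concrete, here is a minimal, hypothetical sketch (mine, not from the Times piece or the underlying papers): a toy “model” that just fits a mean and a spread to its data is retrained, generation after generation, only on samples drawn from the previous generation’s fit. With small training sets, the spread of the data shrinks steadily, a bare-bones analogue of the degradation the research describes.

```python
# Toy illustration of recursive training on synthetic data (assumed setup,
# not the method used in any of the cited studies).
import numpy as np

rng = np.random.default_rng(0)

n_chains = 500       # independent toy runs, averaged to smooth out randomness
n_samples = 20       # a deliberately small per-generation "training set"
n_generations = 25

# Generation 0: the "human-written" data, one row per run, standard normal.
data = rng.normal(0.0, 1.0, size=(n_chains, n_samples))

for gen in range(1, n_generations + 1):
    # "Train" each toy model by fitting a mean and a spread to its current data.
    mu = data.mean(axis=1, keepdims=True)
    sigma = data.std(axis=1, keepdims=True)
    # The next generation sees only synthetic samples from those fitted models.
    data = rng.normal(mu, sigma, size=(n_chains, n_samples))
    if gen % 5 == 0:
        print(f"generation {gen:2d}: average spread = {sigma.mean():.3f}")

# Typical output: the average spread falls from about 1.0 toward roughly 0.4
# by the end, i.e. each generation's data is less diverse than the data it
# learned from.
```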

And per the NYT, the rising tide of AI content on the web might make it much more difficult to avoid this flattening effect.

AI models are ridiculously data-hungry, and AI companies have relied on vast troves of data scraped from the web to train the ravenous programs. As it stands, though, neither AI companies nor their users are required to attach disclosures or watermarks to the AI content they generate, which makes it that much harder for AI makers to keep synthetic content out of their training sets.
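
To illustrate why that matters for a training pipeline, here is a hypothetical sketch: if scraped documents carried a reliable, machine-readable disclosure flag (the field and function names below are invented for illustration), filtering synthetic text out of a training set would be trivial. Absent any such requirement, AI makers are left trying to detect synthetic text after the fact.

```python
# Hypothetical filtering step, assuming a disclosure flag that real scraped
# web data generally does not carry.
from typing import TypedDict


class Document(TypedDict):
    text: str
    ai_generated: bool  # hypothetical disclosure flag


def keep_human_written(corpus: list[Document]) -> list[Document]:
    """Drop anything the (hypothetical) metadata flags as AI-generated."""
    return [doc for doc in corpus if not doc["ai_generated"]]


corpus: list[Document] = [
    {"text": "A reported news story.", "ai_generated": False},
    {"text": "Paragraphs produced by a chatbot.", "ai_generated": True},
]

print(keep_human_written(corpus))
# Without such a flag, this one-liner becomes a hard detection problem instead.
```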

futurism.com/ai-slowly-killing-itself
