AI Appears to Be Slowly Killing Itself


As Aatish Bhatia writes for The New York Times, a growing pile of research shows that training generative AI models on AI-generated content causes the models to erode. In short, training on AI content produces a flattening cycle similar to inbreeding; last year, AI researcher Jathan Sadowski dubbed the phenomenon "Habsburg AI," a reference to Europe's famously inbred royal family.
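This flattening cycle can be illustrated with a toy simulation (an illustrative sketch of the general idea, not taken from the article or the underlying research): fit a simple statistical model to data, generate "synthetic" data from the fitted model, refit on that synthetic data, and repeat. Over many generations, the fitted distribution tends to drift and narrow, losing the variety of the original data.

```python
import numpy as np

# Toy analogue of training AI on AI-generated data: each "generation"
# fits a Gaussian to samples drawn from the previous generation's fit.
# Function and parameter names here are illustrative, not from the article.

rng = np.random.default_rng(0)

def simulate_collapse(generations=50, n_samples=100):
    mu, sigma = 0.0, 1.0            # the "real" data distribution
    sigma_history = [sigma]
    for _ in range(generations):
        # generate synthetic data from the current model
        data = rng.normal(mu, sigma, n_samples)
        # refit the model on its own output
        mu, sigma = data.mean(), data.std()
        sigma_history.append(sigma)
    return sigma_history

stds = simulate_collapse()
```

Because each fit is made on a finite sample of the previous fit's output, estimation noise compounds across generations and the spread of the distribution tends to decay toward zero, a crude stand-in for the loss of diversity researchers observe in recursively trained models.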


And per the NYT, the rising tide of AI content on the web might make it much more difficult to avoid this flattening effect.

AI models are ridiculously data-hungry, and AI companies have relied on vast troves of data scraped from the web in order to train the ravenous programs. As it stands, though, neither AI companies nor their users are required to put AI disclosures or watermarks on the AI content they generate — making it that much harder for AI makers to keep synthetic content out of AI training sets.


futurism.com/ai-slowly-killing-itself

