AI Appears to Be Slowly Killing Itself

As Aatish Bhatia writes for The New York Times, a growing pile of research shows that training generative AI models on AI-generated content causes the models to erode. In short, training on AI content creates a flattening cycle similar to inbreeding; last year, AI researcher Jathan Sadowski dubbed the phenomenon “Habsburg AI,” a reference to Europe’s famously inbred royal family.

And per the NYT, the rising tide of AI content on the web might make it much more difficult to avoid this flattening effect.

AI models are ridiculously data-hungry, and AI companies have relied on vast troves of data scraped from the web to train these ravenous programs. As it stands, though, neither AI companies nor their users are required to add disclosures or watermarks to the AI content they generate, making it that much harder for AI makers to keep synthetic content out of their training sets.

futurism.com/ai-slowly-killing-itself

