AI Appears to Be Slowly Killing Itself


As Aatish Bhatia writes for The New York Times, a growing pile of research shows that training generative AI models on AI-generated content causes the models to erode. In short, training on AI content produces a flattening cycle similar to inbreeding; the AI researcher Jathan Sadowski last year dubbed the phenomenon "Habsburg AI," a reference to Europe's famously inbred royal family.
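The feedback loop behind this flattening can be illustrated with a toy statistical sketch (a hypothetical illustration only, not the actual training pipeline of any real model): repeatedly fit a simple distribution to samples drawn from the previous generation's fit, and watch the variance collapse.

```python
import numpy as np

# Toy sketch of "model collapse": each generation is "trained" (fitted)
# only on synthetic samples from the previous generation. Sampling error
# compounds, and the fitted distribution's spread collapses over time.
# Illustrative assumption only; real generative models are far more complex.

rng = np.random.default_rng(0)

def collapse_demo(generations=1000, sample_size=10):
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    for _ in range(generations):
        # draw a small synthetic dataset from the current model
        data = rng.normal(mu, sigma, sample_size)
        # refit the next-generation "model" on that synthetic data
        mu, sigma = data.mean(), data.std()
    return sigma

final_sigma = collapse_demo()
print(final_sigma)  # the spread shrinks far below the original 1.0
```

The small sample size stands in for the finite (and increasingly synthetic) training data each generation sees; the loss of spread mirrors the loss of diversity the research describes.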


And per the NYT, the rising tide of AI content on the web might make it much more difficult to avoid this flattening effect.

AI models are ridiculously data-hungry, and AI companies have relied on vast troves of data scraped from the web to train the ravenous programs. As it stands, though, neither AI companies nor their users are required to put disclosures or watermarks on the AI content they generate, making it that much harder for AI makers to keep synthetic content out of their training sets.


futurism.com/ai-slowly-killing-itself

