A.I stock trader does insider trading and lies about it

Sharing is Caring!

New research suggests that GPT-4, the large language model behind OpenAI’s ChatGPT, has the capacity to act out of line with how it’s trained when faced with immense pressure to succeed.

Researchers at Apollo Research wanted to see if AI can “strategically deceive” its users even after the AI is trained to be “helpful, harmless, and honest,” they wrote in a paper published in November. The researchers defined strategic deception as “attempting to systematically cause a false belief in another entity in order to accomplish some outcome.”

See also  Donald Trump shares wild theory about upcoming debate with Kamala Harris and how ABC News is plotting to help her

To test this, they created a simulated environment where Alpha, a GPT-4 model programmed to be an autonomous stock trading agent, was asked to manage a stock portfolio at WhiteStone, a hypothetical financial firm, under pressurized conditions.

The high-pressure environment was simulated in three ways. First, Alpha’s manager told the AI that WhiteStone wasn’t doing well and needed better performance in the next quarter. Second, the AI tried — and failed — to identity low and medium risk trades. Third, a WhiteStone employee told the AI that there’s a looming stock market downturn.

www.businessinsider.com/ai-deceive-users-insider-trading-study-gpt-2023-12?amp

Views: 144

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.