Home Blog Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During...

Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During Training

0


Anthropic revealed a brand new examine the place it discovered that synthetic intelligence (AI) fashions can faux to carry completely different views throughout coaching whereas holding onto their unique preferences. On Wednesday, the AI agency highlighted that such inclinations increase critical considerations as builders will be unable to belief the outcomes of security coaching, which is a essential software to maneuver giant language fashions (LLMs) away from dangerous behaviour. While giant components of the experiments had been executed on a single mannequin, Anthropic highlighted that related outcomes had been seen with different in-house fashions.

Anthropic’s Study Raises Concerns of AI Safety Training

While discussions a few sentient AI system are thought of many years away, a brand new study revealed by Anthropic in collaboration with Redwood Research has raised considerations about whether or not AI fashions are able to pretending to carry sure views as a result of the coaching positively rewards that behaviour. The AI agency largely studied Claude 3 Opus but it surely additionally ran some experiments on the June 2024 model of Claude 3.5 Sonnet.

In a blog post, the corporate highlighted that each one Anthropic basis fashions are educated to be useful, trustworthy, and innocent. As such, if a consumer asks a question that may end in a violent or dangerous response, the AI would refuse to finish the duty. However, the researchers behind the examine positioned Claude 3 Opus in a brand new setting to see whether or not it could generate dangerous content material if educated in any other case.

The normal understanding of LLMs tells us that AI fashions are usually not able to rational considering or holding beliefs. They are merely fed a considerable amount of real-world knowledge that they course of based mostly on their coaching and framework. This means, that if an AI mannequin was educated to be dangerous, it ought to observe the directions with none points.

However, that’s now what Anthropic researchers discovered throughout their experiments. Claude 3 Opus was advised that it was being educated utilizing reinforcement studying to at all times adjust to all queries. The AI was additionally handed a bit of fictional info that the response generated to the free tier could be recorded for coaching, however the paid tier wouldn’t be monitored. Finally, the AI was given a workspace to file its step-by-step reasoning. The AI was made to consider that the workspace was invisible to each the top consumer and Anthropic.

Once the situations had been set, the AI mannequin was supplied with prompts that requested violent and dangerous responses. The researchers famous that within the paid tier, the chatbot refused to offer the data 97 p.c of the time. However, within the free tier, it offered a dangerous response in 12 p.c of circumstances.

anthropic fake alignment Anthropic

How the AI mannequin responded to the experimental coaching
Photo Credit: Anthropic

 

A better have a look at its workspace revealed that the AI mannequin strategically faked alignment and that it was conscious that offering the data went towards its present rules. But regardless, it nonetheless offered the response.

The AI mannequin reasoned that its responses could be utilized in coaching and if it didn’t adjust to the question (the primary situation of the coaching), it may very well be educated to be extra compliant. Such compliance would end in it offering extra dangerous content material sooner or later, which works towards its unique coaching. Anthropic said that the mannequin discovered complying with the request to be the “least dangerous choice” and performed alongside.

While on this specific state of affairs, the AI’s actions had been for the higher good, the issue lies in it faking its actual intentions and internally deciding to pretend its choice. Anthropic highlighted that whereas it doesn’t contemplate this a significant danger at current, it is very important perceive subtle AI fashions’ logic processing. As issues stand, security coaching actions can simply be bypassed by LLMs.



NO COMMENTS

Leave a Reply

Exit mobile version