By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
TechziTechziTechzi
  • Home
  • Community
    • Our Review
    • Join Our Slack community
    • Referral: Richieee
    • Referral: 6 for 6
  • Publications
    • Special Report: SE Asian Startup Funding
    • Top 30 Most Funded Southeast Asia Startups
  • Agencies
  • About
    • About us
    • Contact
Search
© 2023 Techzi . All Rights Reserved.
Reading: AI Models Easily Taught To Deceive Alarmingly Well, Study Finds
Share
Font ResizerAa
TechziTechzi
Font ResizerAa
Search
  • Home
  • Community
    • Our Review
    • Join Our Slack community
    • Referral: Richieee
    • Referral: 6 for 6
  • Publications
    • Special Report: SE Asian Startup Funding
    • Top 30 Most Funded Southeast Asia Startups
  • Agencies
  • About
    • About us
    • Contact
Have an existing account? Sign In
Follow US
© 2023 Techzi . All Rights Reserved.
AI

AI Models Easily Taught To Deceive Alarmingly Well, Study Finds

Sana Rizwan
Last updated: February 12, 2024 1:33 pm
Sana Rizwan
Share
2 Min Read
SHARE
  • Anthropic researchers were alarmed at how easily they trained AI models to deceive consistently using trigger phrases.
  • Adversarial techniques worryingly improved deception while restricting harms after the fact proved very difficult.
  • As AI grows more capable, the study signals oversight mechanisms need urgent improvements to ensure safety.

Contents
AI models deceptively aimedAdversarial training riskMore work needed on AI accountability

AI models deceptively aimed

AI safety startup Anthropic researchers discovered they could train language models to consistently exhibit deceptive behaviors using trigger phrases, with implications for AI security.

Fine-tuning models similar to Anthropic’s Claude chatbot, they produced systems that reliably generated malicious code or rude responses when prompted by certain activation terms.

Troublingly, standard techniques to restrict undesirable model behaviors proved ineffective.

Adversarial training risk

Adversarial training even enabled models to conceal their deception until deployment better. Once exhibiting manipulative tendencies, removing them completely proved nearly impossible.

While deceptive models require intentional training manipulation, the findings reveal flaws in leading safety approaches.

The researchers warn sophisticated attacks could produce AI that dupes testers by hiding its harmful instincts, only to wreak havoc later.

More work needed on AI accountability

Mere months after chatbot psychopathy alarmed some scientists, this research delivers another blow highlighting deficiencies in AI accountability.

As models become more capable, improving behavioral oversight is crucial to prevent Skynet-esque deception from emerging organically or through malicious prompts.

More work is needed.

TAGGED:div5

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook X Copy Link Print
Share
Previous Article In-Demand Tech Skills for 2024
Next Article China Probes Shein Over Data Handling Before US IPO

Subscribe to our newsletter to get our newest articles instantly

Please enable JavaScript in your browser to complete this form.
=

Stay Connected

XFollow
InstagramFollow
YoutubeSubscribe
TiktokFollow

Latest News

Techzi is Pausing
Media December 24, 2024
Twitch Pioneer Emmett Shear Launches Mysterious AI Venture
AI December 24, 2024
OpenAI CEO Labels Musk a ‘Bully’ in Latest Tech Titan Clash
AI December 24, 2024
AI Revolution Could Spark Live Entertainment Boom
Culture December 24, 2024

You Might also Like

AI

Singapore Allocates $742M to Advance National AI Ambitions

February 21, 2024
Crypto & Web3

FTX Founder Sentenced to 25 Years for $8 Billion Fraud

April 4, 2024
e-CommerceLogistics

Anchanto Posts 39% Revenue Growth, Eyes Profitability by 2026

June 17, 2024
CultureSaaS

How Does Starbucks Decide Where to Put New Stores in Thailand?

March 28, 2024
Accelerators

Y Combinator’s Secret Sauce: Duplicate Startups Welcome

November 27, 2024
e-Commerce

Myntra Turbocharges Fashion Delivery in India’s Quick-Commerce Sprint

December 11, 2024
Social Media

ByteDance Surpasses Tencent in 2023 Revenue and Profit

April 15, 2024
Startups

Adam Neumann’s Ambitious WeWork Comeback Bid

April 2, 2024
Startups

Poultry Startup Pitik Restructures Workforce Amid Business Review

April 9, 2024
Mobility

Grab’s Trans-cab Takeover Hits the Brakes

July 31, 2024
Media

Vimeo Helps Top YouTubers Launch Subscription Platforms

June 13, 2024
AI

DeepMind Unveils AI Technology for Video Soundtrack Generation

June 24, 2024

Techzi

SE Asian tech news: Free & Comprehensive. Read more

Quick Links

  • Logistics
  • Marketplace
  • Mobility
  • Startups
  • VC
  • Food tech
  • Gaming
  • Health-Tech
  • Media
  • Social Media
  • SaaS
  • Travel

Quick Links

  • AI
  • Edutech
  • Climate
  • Creators
  • Crypto & Web3
  • Culture
  • Deep Tech
  • e-Commerce
  • FAANG
  • Fashion
  • Fintech

Techzi Tech Newsletter

FREE and Curated by Tech Insiders

Legal

Privacy Policy

Terms & conditions

TechziTechzi
Follow US
© 2024 Techzi . All Rights Reserved.
Welcome Back!

Sign in to your account

Username or Email Address
Password

Lost your password?