AI

OpenAI’s Controversial “Super Alignment” Team Pushes Ahead on “Superhuman AI”

Aileen Lor
Last updated: February 17, 2024 3:30 am
2 Min Read
  • OpenAI has formed a team to ensure that theoretical superintelligent AI remains safe.
  • Critics argue it’s premature and an ethical distraction.
  • However, researchers persist in efforts to steer advanced AI away from harm.

Contents
  • Newly formed team
  • Could it threaten humanity?
  • The team will operate transparently

OpenAI’s Superalignment team is forging ahead with efforts to ensure theoretical future “superintelligent” AI systems remain safe and beneficial.

Newly formed team

Formed in July 2023, the team is led by OpenAI Chief Scientist Ilya Sutskever, who presented new alignment research this week at the NeurIPS AI conference.

Their controversial goal: Develop frameworks to control AI potentially smarter than humans.

OpenAI frames superalignment as “perhaps the most important unsolved technical problem of our time.”

But critics argue it’s premature and an ethical smokescreen distracting from issues like AI bias.

Could it threaten humanity?

Nonetheless, Sutskever’s team believes AI could one day threaten humanity if uncontrolled. They envision weaker AI guiding more powerful systems, using simple labels and instructions.

It’s early days, but the team hopes that techniques like this, applied repeatedly, might instill alignment even as AI grows more inscrutable.
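
The recipe amounts to training a more capable model on imperfect labels produced by a weaker one, then checking whether the student generalizes beyond its teacher. The toy sketch below illustrates that loop with ordinary scikit-learn classifiers; it is only an analogue of the idea, not OpenAI's actual setup, which fine-tunes large language models.

```python
# Toy analogue of the "weak supervising strong" idea described above.
# Not OpenAI's experiment; just a minimal sketch of the training recipe.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic task with ground-truth labels that we mostly hide.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=0)
X_small, X_pool, y_small, y_pool = train_test_split(X, y, train_size=200, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X_pool, y_pool, test_size=1000, random_state=0)

# 1. Train a "weak supervisor" on the few genuinely labeled examples.
weak = LogisticRegression(max_iter=1000).fit(X_small, y_small)

# 2. Let the weak model label a large unlabeled pool (its labels are imperfect).
weak_labels = weak.predict(X_pool)

# 3. Train a more capable "strong" model only on those imperfect labels.
strong = GradientBoostingClassifier(random_state=0).fit(X_pool, weak_labels)

# 4. Check whether the strong student generalizes beyond its weak teacher.
print("weak supervisor accuracy:", weak.score(X_test, y_test))
print("strong student accuracy: ", strong.score(X_test, y_test))
```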

$10 million in grants from Eric Schmidt will also support external superalignment research.

The team will operate transparently

The team’s urgency hasn’t wavered amid OpenAI’s recent internal turmoil. But the involvement of Schmidt, who stands to gain from hyping AI risk, raises questions.

OpenAI pledges to publish all superalignment research for public benefit.

For now, Sutskever’s researchers aim to further their vision of steering AI away from harm as capabilities escalate.

But influencing the actions of unknowable superintelligent systems remains firmly in the realm of theory.
