- Hackers hit ChatGPT over perceived anti-Palestine bias and take it offline for 40 minutes.
- Anonymous Sudan group threatens further attacks on OpenAI and demands the firing of executive Tal Broda.
- Incidents highlight political outrage spilling into tech and the risks as AI expands across sensitive domains.
A 40-minute ChatGPT outage!
A hacking collective called Anonymous Sudan claimed responsibility for a recent 40-minute ChatGPT outage, amid threats to keep targeting the AI system over perceived anti-Palestine bias.
The little-known group declared on Telegram that it will sustain distributed denial-of-service (DDoS) attacks flooding ChatGPT’s networks until OpenAI fires executive Tal Broda.
It alleges the chatbot exhibits dehumanizing pro-Israel views.
Broader motivations likely involve OpenAI’s ties to Israel, plus claims that the technology could help develop weapons used against Palestinians. The hackers previously took ChatGPT down for nearly two hours on November 8th.
OpenAI has not commented on the latest incident, though it previously stated the first major outage resulted from a targeted attack.
But Why?
Anonymous Sudan’s signature tactic is the DDoS attack: artificially overwhelming a service with traffic to take it offline.
The recent incidents follow the group’s wider strikes against perceived Islamophobic or pro-Israel organizations and companies in Europe and the US, but its specific aims remain unclear.
Some experts contend Anonymous Sudan specifically targets entities antagonistic to Sudan, a predominantly Muslim state.
Others suggest possible links to Russian-aligned hacking collective Killnet.
Potential vulnerabilities
In any case, the collective has threatened “any American company” while claiming responsibility for the ChatGPT disruptions. It hit Epic Games’ Rocket League the very next day.
For tech firms increasingly intertwined with generative AI, as OpenAI is, Anonymous Sudan’s campaign signals potential infrastructure vulnerabilities for any company deemed culpable in geopolitical conflicts.
That tenuous link may give hacktivists justification for further sabotage, despite limited real-world impact so far.
But the incidents highlight political outrage boiling over into technological channels.
With AI set to expand across sensitive domains like finance and defense, companies must weigh moral hazards in systems influencing global power dynamics – or risk the consequences.