- Tech giants’ embrace of generative AI erodes the legal protection of Section 230.
- By becoming direct content producers, they lose the immunity for user posts that the law grants to platforms.
- Pursuing the AI frontier could unleash scrutiny and regulation as societies grapple with emerging risks.
No longer protected by old law
Big Tech firms have avoided responsibility for online content for decades thanks to a key US law, Section 230 of the Communications Decency Act. But their new embrace of generative AI tools looks set to erode this legal bedrock.
Meta, Google, Amazon, and Microsoft are locked in an AI arms race, churning out chatbots, creative apps, and recommendation engines. The technology could automate content production, saving the billions currently paid to human creators.
But experts say that by becoming direct producers, the companies will lose the Section 230 liability protection they enjoy as “intermediaries.” Algorithmic suggestions that drive engagement are one thing; creating and publishing content is another.
Meta and others grapple with legal implications
Courts have consistently reinforced Section 230, shielding platforms from lawsuits over user posts. But content generated by the tech giants’ own models offers no third-party fig leaf.
“On the statute’s face, generative AI falls outside 230,” said legal scholar Aziz Huq. Georgetown Law’s Anupam Chander agreed that losing the defense “could dramatically undermine their business.”
At Meta, discussions are underway about the seismic implications. Open-sourcing its Llama model won’t relinquish authorship, Chander said; anything short of full relinquishment still entangles the developer.
Experts added that new functionality demands new accountability. Generative AI’s active role in crafting content amounts to authorship, not mere hosting.
Without the protection of Section 230, expect lawsuits
NYU Law’s Jason Schultz said that complex, human-like outputs already stretch Section 230 further than early platforms ever did. The Supreme Court itself has grown increasingly unsure of the statute’s contours for modern technology.
For decades, Section 230 was ironclad. But Big Tech’s insatiable appetite for generative AI risks its destruction. Legal shields painstakingly built over the years could evaporate in the aggressive pursuit of the next frontier.
What awaits is increased scrutiny, lawsuits, and likely regulation worldwide as societies grapple with AI’s risks. For the tech giants racing headlong into our AI future, the demons they release may yet consume them.