AI Slop Exploded in 2025 - OpenAI's Sora Lets You Make Sam Altman Shoplift Chips

Sora’s making hyper-real videos of CEOs committing crimes - welcome to the slop apocalypse reshaping your online world.

2025’s real villain? Not bad models, but ‘AI slop’ - those trippy, low-effort videos flooding feeds. NPR nailed it: short clips warping reality, and OpenAI’s Sora app made churning them out dead simple.[3]

Sora lets you swap in faces and voices (with permission), so someone made Sam Altman ‘shoplift’ chips from a Target to feed Sora inference. It’s a hilarious inside joke about AI’s compute hunger, but also terrifying: deepfakes of execs committing fake crimes look real. As devs, this hits home - our tools are fueling misinformation at scale.[3]

Practical angle: we need better detection in our apps NOW. Train classifiers on slop datasets, add watermarks to outputs. It’s chaotic fun until it tanks trust in AI apps. My opinion? Sora’s power demands responsibility - who’s building the anti-slop toolkit?
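The watermarking half of that toolkit can be sketched in a few lines. This is a minimal toy, assuming NumPy pixel arrays: the `embed_watermark`/`extract_watermark` helpers and the `"sora-gen"` tag are hypothetical names I made up for illustration, and real provenance systems (C2PA-style signed manifests) are far more robust than stuffing bits into pixels.

```python
import numpy as np

def embed_watermark(frame: np.ndarray, tag: str) -> np.ndarray:
    """Hide a short provenance tag in the least-significant bits of a
    uint8 frame. Toy sketch only -- trivially stripped by re-encoding."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = frame.flatten()  # flatten() returns a copy, original untouched
    if bits.size > flat.size:
        raise ValueError("frame too small for tag")
    # Clear each target pixel's lowest bit, then OR in one tag bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_watermark(frame: np.ndarray, length: int) -> str:
    """Read back `length` bytes of tag from the low bits."""
    bits = frame.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_watermark(frame, "sora-gen")
print(extract_watermark(marked, len("sora-gen")))  # sora-gen
```

Changing only the low bit keeps the mark visually invisible, but that fragility is exactly why serious provenance work signs metadata instead of hiding bits in pixels.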

Source: KMUW / NPR
