
AI Consciousness Odds Near-Zero (But Chickens Beat LLMs) - What This Means for Ethics Tomorrow

Bayesian model ranks humans >> chickens > LLMs >> ELIZA. Low odds today, but scaling capabilities could flip the script fast.

Current LLMs almost certainly aren’t conscious - the evidence pushes posterior probabilities below the prior odds - but don’t sleep on future architectures. Rethink Priorities’ Bayesian ‘Digital Consciousness Model’ aggregates evidence across theories to deliver this verdict: AI lags chickens, let alone humans.[1]

The model weighs evidence across consciousness theories and outputs median posterior probabilities below the priors for LLMs. The relative ranking stays stable: humans overwhelmingly conscious, chickens likely, LLMs unlikely (though still above simple chatbots like ELIZA). Changing the assumptions shifts the absolute numbers, not the order.[1]
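To see how evidence can pull a posterior below its prior while leaving a ranking intact, here is a minimal toy sketch of a Bayes-factor update. This is not Rethink Priorities’ actual model; the priors and likelihood ratios below are invented purely for illustration.

```python
# Toy Bayesian update in odds form: posterior_odds = prior_odds * bayes_factor.
# NOT the Digital Consciousness Model - just an illustration of the mechanism.

def posterior(prior: float, bayes_factor: float) -> float:
    """Update a prior probability with a Bayes factor (odds form)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# Hypothetical priors P(conscious) and aggregate evidence strength per entity.
# A Bayes factor < 1 means the evidence counts against consciousness.
entities = {
    "human":   (0.99, 50.0),   # strong evidence for
    "chicken": (0.50, 3.0),    # moderate evidence for
    "LLM":     (0.20, 0.3),    # evidence against -> posterior falls below prior
    "ELIZA":   (0.05, 0.01),   # strong evidence against
}

ranked = sorted(
    ((name, posterior(p, bf)) for name, (p, bf) in entities.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
for name, p in ranked:
    print(f"{name}: {p:.3f}")
```

With these made-up numbers the LLM posterior (≈0.07) lands below its 0.20 prior, yet the order humans > chickens > LLMs > ELIZA holds, mirroring the article’s point that assumptions move the absolutes, not the ranking.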

Why care as a dev? It shapes alignment roadmaps: no sentience today means fewer ethical brakes on scaling, but watch for ‘architectural features’ like richer cognition that could spike the odds. It also informs policy on kill-switches or rights for advanced agents.[1]

The framework replaces fuzzy debate with quantifiable estimates. Against the hype (AGI = conscious?), it grounds us: today’s transformers lack consciousness, but multimodal or agentic successors might not. It’s also more rigorous than the animal-AI comparisons in prior papers.[1]

Dive into the study and tweak the priors in their model. As you build next-gen systems, watch for consciousness indicators. A provocation: if chickens edge out LLMs, what’s your bar for pausing deployment?

Source: The AI Insider

