Andreessen Horowitz and OpenRouter Reveal Usage Patterns from 100 Trillion Tokens

Analysis of 100 trillion tokens reveals growing prompt complexity and agentic, multi-model AI use in real-world LLM sessions.

A joint report by Andreessen Horowitz (a16z) and OpenRouter analyzed more than 100 trillion tokens from billions of large language model sessions collected over two years, representing the largest empirical dataset of real-world AI usage to date. The analysis reveals that prompt lengths have quadrupled to an average of 6,000 tokens, indicating sustained, complex interactions rather than simple one-off queries.

The data highlights DeepSeek as the leading model family by volume, consuming 14.37 trillion tokens and outpacing Meta's LLaMA and Mistral, underscoring a competitive landscape in which capability and cost efficiency go hand in hand. Notably, AI agent usage has surged: multi-model orchestration and multi-step tool calling have become mainstream. Reasoning-focused models now account for over 50% of total token consumption, a sign that chained, multi-step inference is increasingly delegated to AI systems. Together, these trends point to a shift toward sophisticated agentic AI ecosystems in the near future.
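For readers unfamiliar with what multi-model orchestration looks like in practice, the sketch below routes a task to two different models through OpenRouter's OpenAI-compatible chat-completions endpoint. This is a minimal illustration, not the report's methodology: the model slugs, prompt, and draft-then-review pattern are illustrative assumptions, and an OPENROUTER_API_KEY environment variable is assumed to hold a valid key.

```python
import os
import requests

# OpenRouter exposes an OpenAI-compatible chat-completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}

def chat(model: str, messages: list[dict]) -> str:
    """Send one chat-completion request to the given model and return its reply."""
    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={"model": model, "messages": messages},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Illustrative multi-model pattern: one model drafts, a second model reviews.
# Model slugs are examples only; see openrouter.ai/models for current IDs.
task = "Summarize the trade-offs between retry queues and dead-letter queues."
draft = chat("deepseek/deepseek-chat", [{"role": "user", "content": task}])
review = chat(
    "mistralai/mistral-large",
    [{"role": "user", "content": f"Critique this summary for accuracy:\n\n{draft}"}],
)
print(review)
```

Because OpenRouter presents many providers behind a single API, swapping the drafting or reviewing model is a one-string change, which is one reason multi-model pipelines of this kind show up so prominently in the usage data.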

Source: Quasa
