
1,224 NVIDIA Blackwell GPUs with liquid cooling launching in Japan - this beast could supercharge your next LLM project.
Picture this: a rack-scale monster packing 36 Grace CPUs and 72 Blackwell GPUs per rack, with the full cluster scaling to 10.6 ExaFLOPS. SoftBank flipped the switch on December 22, and the platform is certified by the Japanese government as AI infrastructure. If you're building GenAI, this isn't just news - it's your new playground.[5]
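Quick back-of-envelope math on those headline numbers, purely as a sketch: the rack count and per-GPU throughput below are derived by dividing the quoted totals (assuming 72 GPUs per rack and an even split), not official specs.

```python
# Back-of-envelope math from the figures quoted above.
# Assumptions (not official specs): 72 Blackwell GPUs per rack,
# 1,224 GPUs total, 10.6 ExaFLOPS aggregate, evenly distributed.

TOTAL_GPUS = 1224
GPUS_PER_RACK = 72
TOTAL_EXAFLOPS = 10.6

racks = TOTAL_GPUS // GPUS_PER_RACK                   # 17 racks
exaflops_per_rack = TOTAL_EXAFLOPS / racks            # ~0.62 ExaFLOPS per rack
pflops_per_gpu = TOTAL_EXAFLOPS * 1000 / TOTAL_GPUS   # ~8.7 PFLOPS per GPU

print(f"racks: {racks}")
print(f"per rack: {exaflops_per_rack:.2f} ExaFLOPS")
print(f"per GPU: {pflops_per_gpu:.1f} PFLOPS")
```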
Devs, why care? Japan has been lagging in compute, but this platform opens dedicated GPU services to startups and researchers. Pair it with SoftBank's homegrown 'Sarashina' LLM, and you get low-latency, sovereign AI without depending on US clouds. Liquid cooling means sustained performance for training behemoths - no more thermal throttling nightmares.[5]
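If you want to kick the tires on Sarashina today, here's a minimal inference sketch using Hugging Face transformers. The checkpoint name and GPU setup are assumptions on my part - check SB Intuitions' Hugging Face page for the exact model you want, and swap in whatever endpoint SoftBank actually exposes.

```python
# Minimal sketch: running a Sarashina checkpoint with transformers.
# The model ID below is an assumption, not confirmed by the source.
# Requires: transformers, torch, accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sbintuitions/sarashina2-7b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps memory use reasonable on modern GPUs
    device_map="auto",           # spread layers across available GPUs
)

# Japanese prompt ("A quick comment on Japan's AI infrastructure:")
prompt = "日本のAIインフラについて一言:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same loading code scales to the larger Sarashina variants; a single modern GPU handles the 7B-class model comfortably in bf16.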
Honestly, this could help level the global playing field. US hyperscalers dominate today, but SoftBank's push could spark cheaper regional access. Time to eye Japan for your next model fine-tune? Who's firing up a SoftBank instance first?
Source: SoftBank