Multiverse Computing Secures $215 Million to Cut AI Costs with LLM Compression

Efficiency is a pressing concern in AI, especially for resource-intensive Large Language Models (LLMs), and reducing the cost of running them has been a major challenge for companies that deploy them. A recent funding announcement from Spanish startup Multiverse Computing points to a potential solution: its quantum-inspired compression technology, ‘CompactifAI’. The company has closed a €189 million ($215 million) Series B funding round to scale the technology, opening up new possibilities for cheaper AI deployment.
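The announcement does not describe how CompactifAI works internally (Multiverse has described it as quantum-inspired and reportedly based on tensor networks), but the general idea of model compression can be shown with a simple, hypothetical sketch: replacing a large weight matrix with a truncated low-rank factorization so that far fewer parameters need to be stored and multiplied. Everything below is illustrative only; the matrix sizes, the target rank, and the use of NumPy's SVD are assumptions for the demo, not a description of CompactifAI's actual method.

```python
import numpy as np

# Hypothetical illustration of weight-matrix compression via truncated SVD.
# CompactifAI's real quantum-inspired approach is different; this only shows
# why compression can shrink a model's memory and compute footprint.
rng = np.random.default_rng(0)

# Stand-in "weight matrix" built to have approximate low-rank structure,
# as large neural-network weight matrices often do to some degree.
W = rng.standard_normal((1024, 64)) @ rng.standard_normal((64, 1024))
W += 0.01 * rng.standard_normal((1024, 1024))   # small full-rank noise

rank = 64                                        # hypothetical target rank
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_compressed = (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

original = W.size
compressed = U[:, :rank].size + rank + Vt[:rank, :].size
rel_error = np.linalg.norm(W - W_compressed) / np.linalg.norm(W)
print(f"parameters: {original:,} -> {compressed:,} "
      f"({original / compressed:.1f}x smaller), relative error {rel_error:.3f}")
```

In this toy setup the factorized form stores roughly 8x fewer numbers while approximating the original matrix closely; the business case behind the funding is that similar savings across a real LLM translate directly into lower inference costs.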