IBM and Microsoft are challenging the traditional AI development model with their latest innovation: small language models (SLMs).
Unlike the large, energy-hungry models that dominate the field, these SLMs are designed to be energy-efficient, cost-effective, and sustainable while still delivering strong performance.
A Shift Toward Sustainable AI
The expansion of AI brings rising concerns about its environmental footprint and economic costs. The growing reliance on larger models has led to an increase in energy consumption and computing expenses, pushing the limits of sustainability.
IBM and Microsoft are tackling this issue with their new SLMs—compact, optimised models that deliver high performance with fewer resources. These innovations show that cutting-edge AI doesn’t require massive computational resources.
Both IBM and Microsoft are now demonstrating that smaller, more focused AI models can deliver powerful performance while significantly reducing environmental and economic costs. This shift marks the beginning of a new era of AI development, one in which efficiency matters as much as raw capability.
IBM’s Granite 3.2
IBM’s latest Granite 3.2 models exemplify this transformation. These models are built for business applications, focusing on precision and efficiency rather than sheer size.
They offer several key advantages:
- Up to 30% reduction in computational requirements for Guardian safety models
- Optimised document processing, handling complex text analysis with minimal resource use
- Optional “chain of thought” reasoning, which can be switched on for complex problem-solving and off to save compute on simpler tasks
Granite 3.2 models significantly reduce energy costs while delivering reliable AI-driven insights.
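To give a rough sense of what a 30% reduction in computational requirements can mean in practice, here is a back-of-envelope sketch. The baseline energy figure is a hypothetical placeholder for illustration only, not an IBM-reported number.

```python
# Back-of-envelope estimate of savings from a 30% compute reduction.
# The baseline figure is a hypothetical placeholder, not IBM data.

def energy_saved(baseline_kwh: float, reduction: float) -> float:
    """Return the kWh saved given a fractional compute reduction."""
    return baseline_kwh * reduction

baseline_kwh = 10_000.0  # assumed monthly energy use of a deployment (hypothetical)
reduction = 0.30         # the "up to 30%" reduction cited above

remaining_kwh = baseline_kwh * (1 - reduction)
print(f"Energy after reduction: {remaining_kwh:.0f} kWh")
print(f"Energy saved per month: {energy_saved(baseline_kwh, reduction):.0f} kWh")
```

The same arithmetic applies to compute spend: a 30% reduction compounds across every deployment, which is why efficiency gains at the model level translate directly into lower operating costs.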
Microsoft’s Innovation
Microsoft’s new AI model introduces a focus on cross-modal learning, allowing it to seamlessly process spoken language, images, and text.
It enables more natural, context-aware interactions, allowing devices to understand and reason across speech, vision, and text simultaneously.
The model also supports real-time, low-latency inference, making it well suited to on-device use, and its optimised computation reduces energy consumption while maintaining cutting-edge performance.
A Vision for Sustainable AI
IBM and Microsoft are both looking toward a future where AI prioritises efficiency over size. Their vision centres on AI that effortlessly integrates into real-world systems, without incurring the high computational costs of traditional models.
“The next era of AI is about efficiency, integration and real-world impact – where enterprises can achieve powerful outcomes without excessive spend on compute,” says Sriram Raghavan, Vice President of IBM AI Research.
The key sustainability benefits of these SLMs include:
- Lower energy consumption – Requiring less power to train and operate
- Reduced carbon footprint – Minimising greenhouse gas emissions from data centres
- Greater accessibility – Making AI tools affordable for smaller businesses
- Flexible deployment – Enabling AI use on edge devices and in resource-limited settings
Conclusion
IBM and Microsoft are challenging the status quo of AI development with their small language models, proving that bigger isn’t always better.
These advancements demonstrate that smaller models can achieve impressive performance while significantly reducing environmental impact.