
Microsoft’s strategic pivot: proprietary models against OpenAI dependency
Microsoft is no longer fully dependent on OpenAI. The company introduced 2 proprietary AI models. MAI-Voice-1 for speech generation and MAI-1-preview for text. This is a turning point in the tech giant’s strategy.
MAI-Voice-1 already powers Copilot Daily and Podcasts. The model generates one minute of audio in under one second on a single GPU, remarkable speed for this level of audio quality. You can try its capabilities right now in the new Copilot Labs interface.
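The stated throughput implies a real-time factor of at least 60x. A quick back-of-the-envelope check, using only the figures from the announcement:

```python
# Figures claimed in the announcement: one minute of audio in
# under one second on a single GPU.
audio_seconds = 60.0        # duration of generated speech
generation_seconds = 1.0    # upper bound on generation time

# Real-time factor: how many seconds of audio per second of compute.
real_time_factor = audio_seconds / generation_seconds
print(f"Real-time factor: at least {real_time_factor:.0f}x")  # at least 60x
```

In other words, the model synthesizes speech at least 60 times faster than it plays back, which is what makes on-demand features like Podcasts practical.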
The Copilot Audio Expressions feature lets you configure everything: you enter text, choose a voice, style, and narration mode, and get high-quality expressive audio. The resulting file can be downloaded immediately for use.
MAI-1-preview is a mixture-of-experts model trained on roughly 15,000 Nvidia H100 accelerators, a foundation model of Microsoft’s own development.
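The mixture-of-experts idea is that a gating network routes each input to only a few of many expert subnetworks, so most parameters stay idle per token. Below is a minimal toy sketch of generic top-k gating; the names, sizes, and routing scheme are illustrative assumptions, since Microsoft has not published MAI-1-preview’s actual architecture:

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy mixture-of-experts layer: each "expert" is a tiny linear map,
# and a gating network picks the top-k experts per input.
# Illustrative only; not MAI-1-preview's real design.
class ToyMoE:
    def __init__(self, num_experts=4, dim=3, top_k=2):
        self.top_k = top_k
        self.experts = [[random.uniform(-1, 1) for _ in range(dim)]
                        for _ in range(num_experts)]
        self.gate = [[random.uniform(-1, 1) for _ in range(dim)]
                     for _ in range(num_experts)]

    def forward(self, x):
        # Gating scores -> probabilities over experts.
        scores = [sum(w * xi for w, xi in zip(g, x)) for g in self.gate]
        probs = softmax(scores)
        # Sparse activation: only the top-k experts run for this input.
        top = sorted(range(len(probs)),
                     key=lambda i: probs[i], reverse=True)[:self.top_k]
        # Output is the gate-weighted sum of the selected experts.
        out = 0.0
        for i in top:
            expert_out = sum(w * xi for w, xi in zip(self.experts[i], x))
            out += probs[i] * expert_out
        return out, top

moe = ToyMoE()
y, active = moe.forward([0.5, -0.2, 0.1])
print(f"output={y:.4f}, active experts={active}")
```

The payoff of this design is that total parameter count can grow with the number of experts while per-token compute stays roughly constant, which is why MoE is popular for large-scale training runs.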
The model follows user instructions efficiently and gives helpful answers to everyday questions. Microsoft plans to roll MAI-1-preview into Copilot text scenarios in the coming weeks; so far, only trusted testers have access.
This is a major shift in Microsoft’s AI approach: from complete dependence on a partner to proprietary development, while preserving the benefits of the OpenAI partnership.