Microsoft launched several new “open” AI models on Wednesday, the most capable of which is competitive with OpenAI’s o3-mini on at least one benchmark.
All of the new permissively licensed models — Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus — are “reasoning” models, meaning they’re able to spend more time fact-checking solutions to complex problems. They expand Microsoft’s Phi “small model” family, which the company launched a year ago to offer a foundation for AI developers building apps at the edge.
Phi 4 mini reasoning was trained on roughly 1 million synthetic math problems generated by Chinese AI startup DeepSeek’s R1 reasoning model. Around 3.8 billion parameters in size, Phi 4 mini reasoning is designed for educational applications, Microsoft says, like “embedded tutoring” on lightweight devices.
Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.
Phi 4 reasoning, a 14-billion-parameter model, was trained using “high-quality” web data as well as “curated demonstrations” from OpenAI’s aforementioned o3-mini. It’s best for math, science, and coding applications, according to Microsoft.
As for Phi 4 reasoning plus, it’s Microsoft’s previously released Phi-4 model adapted into a reasoning model to achieve better accuracy on particular tasks. Microsoft claims that Phi 4 reasoning plus approaches the performance of R1, a model with significantly more parameters (671 billion). The company’s internal benchmarking also has Phi 4 reasoning plus matching o3-mini on OmniMath, a math skills test.
Phi 4 mini reasoning, Phi 4 reasoning, and Phi 4 reasoning plus are available on the AI dev platform Hugging Face accompanied by detailed technical reports.
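For developers who want to experiment, a minimal sketch of loading one of the models through the standard Hugging Face transformers text-generation pipeline might look like the following. The repo ID "microsoft/Phi-4-mini-reasoning" and the chat-style prompt format are assumptions for illustration, not details confirmed by Microsoft's announcement.

# Minimal sketch: running a Phi 4 reasoning model via Hugging Face transformers.
# The repo ID below is an assumption; check the model card on Hugging Face.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-4-mini-reasoning",  # assumed repo ID
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"},
]

# Reasoning models typically emit a long chain of thought before the final
# answer, so allow a generous generation budget.
output = generator(messages, max_new_tokens=1024)
print(output[0]["generated_text"][-1]["content"])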
“Using distillation, reinforcement learning, and high-quality data, these [new] models balance size and performance,” wrote Microsoft in a blog post. “They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.”