Ai2’s new small AI model outperforms similarly sized models from Google, Meta

May 1, 2025, 8:27 PM
‘Tis the week for small AI models, it seems.

On Thursday, Ai2, the nonprofit AI research institute, released Olmo 2 1B, a 1-billion-parameter model that Ai2 claims beats similarly sized models from Google, Meta, and Alibaba on several benchmarks. Parameters, sometimes referred to as weights, are the internal components of a model that guide its behavior.

Olmo 2 1B is available under a permissive Apache 2.0 license on the AI dev platform Hugging Face. Unlike most models, it can be replicated from scratch; Ai2 has provided the code and the data sets (Olmo-mix-1124, Dolmino-mix-1124) used to develop it.

Small models might not be as capable as their behemoth counterparts, but importantly, they don’t require beefy hardware to run. That makes them much more accessible to developers and hobbyists contending with the limitations of lower-end and consumer machines.

There’s been a raft of small model launches over the past few days, from Microsoft’s Phi 4 reasoning family to Qwen’s 2.5 Omni 3B. Most of these, Olmo 2 1B included, can easily run on a modern laptop or even a mobile device.

Ai2 says Olmo 2 1B was trained on a data set of 4 trillion tokens from publicly available, AI-generated, and manually created sources. Tokens are the raw bits of data that models ingest and generate; 1 million tokens is equivalent to about 750,000 words.

On GSM8K, a benchmark measuring arithmetic reasoning, Olmo 2 1B scores better than Google’s Gemma 3 1B, Meta’s Llama 3.2 1B, and Alibaba’s Qwen 2.5 1.5B. Olmo 2 1B also eclipses the performance of those three models on TruthfulQA, a test for evaluating factual accuracy.
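The article’s token-to-word figure works out to roughly 0.75 words per token. As a back-of-the-envelope sketch (the 0.75 ratio comes from the article’s own "1 million tokens ≈ 750,000 words" figure; the helper function name is mine, not Ai2’s), here is what a 4-trillion-token training set amounts to in word terms:

```python
# Rough tokens-to-words conversion, using the ratio stated in the article:
# 1 million tokens is equivalent to about 750,000 words (0.75 words/token).
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> float:
    """Approximate word count for a given token count."""
    return tokens * WORDS_PER_TOKEN

# Olmo 2 1B's training set: 4 trillion tokens.
training_tokens = 4_000_000_000_000
print(f"{tokens_to_words(training_tokens):,.0f}")  # → 3,000,000,000,000
```

In other words, the training corpus is the equivalent of roughly 3 trillion words. The exact ratio varies by tokenizer and language, so treat this as an order-of-magnitude estimate.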
“This model was pretrained on 4T tokens of high-quality data, following the same standard pretraining into high-quality annealing of our 7, 13, & 32B models. We upload intermediate checkpoints from every 1000 steps in training. Access the base model: https://t.co/xofyWJmo85” — Ai2 (@allen_ai), May 1, 2025

Ai2 warns that Olmo 2 1B carries risks, however. Like all AI models, it can produce “problematic outputs,” including harmful and “sensitive” content, as well as factually inaccurate statements, the organization says. For these reasons, Ai2 recommends against deploying Olmo 2 1B in commercial settings.

Topics: AI, AI2, open source

Kyle Wiggers, AI Editor, TechCrunch