Distillation Can Make AI Models Smaller and Cheaper

September 20, 2025

The original version of this story appeared in Quanta Magazine.

The Chinese AI company DeepSeek released a chatbot earlier this year called R1, which drew a huge amount of attention. Most of it focused on the fact that a relatively small and unknown company said it had built a chatbot that rivaled the performance of those from the world’s most famous AI companies, but using a fraction of the computer power and cost. As a result, the stocks of many Western tech companies plummeted; Nvidia, which sells the chips that run leading AI models, lost more stock value in a single day than any company in history.

Some of that attention involved an element of accusation. Sources alleged that DeepSeek had obtained, without permission, knowledge from OpenAI’s proprietary o1 model by using a technique known as distillation. Much of the news coverage framed this possibility as a shock to the AI industry, implying that DeepSeek had discovered a new, more efficient way to build AI.

But distillation, also called knowledge distillation, is a widely used tool in AI, a subject of computer science research going back a decade and a tool that big tech companies use on their own models. “Distillation is one of the most important tools that companies have today to make models more efficient,” said Enric Boix-Adsera, a researcher who studies distillation at the University of Pennsylvania’s Wharton School.

Dark Knowledge

The idea for distillation began with a 2015 paper by three researchers at Google, including Geoffrey Hinton, the so-called godfather of AI and a 2024 Nobel laureate. At the time, researchers often ran ensembles of models—“many models glued together,” said Oriol Vinyals, a principal scientist at Google DeepMind and one of the paper’s authors—to improve their performance. “But it was incredibly cumbersome and expensive to run all the models in parallel,” Vinyals said. “We were intrigued with the idea of distilling that onto a single model.”

The researchers thought they might make progress by addressing a notable weak point in machine-learning algorithms: Wrong answers were all considered equally bad, regardless of how wrong they might be. In an image-classification model, for instance, “confusing a dog with a fox was penalized the same way as confusing a dog with a pizza,” Vinyals said. The researchers suspected that the ensemble models did contain information about which wrong answers were less bad than others. Perhaps a smaller “student” model could use the information from the large “teacher” model to more quickly grasp the categories it was supposed to sort pictures into. Hinton called this “dark knowledge,” invoking an analogy with cosmological dark matter.

After discussing this possibility with Hinton, Vinyals developed a way to get the large teacher model to pass more information about the image categories to a smaller student model. The key was homing in on “soft targets” in the teacher model—where it assigns probabilities to each possibility, rather than firm this-or-that answers. One model, for example, calculated that there was a 30 percent chance that an image showed a dog, 20 percent that it showed a cat, 5 percent that it showed a cow, and 0.5 percent that it showed a car. By using these probabilities, the teacher model effectively revealed to the student that dogs are quite similar to cats, not so different from cows, and quite distinct from cars. The researchers found that this information would help the student learn how to identify images of dogs, cats, cows, and cars more efficiently. A big, complicated model could be reduced to a leaner one with barely any loss of accuracy.
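
In modern deep-learning frameworks, that soft-target transfer is usually written as an extra loss term. Below is a minimal sketch in PyTorch, assuming an image-classification setup like the one described above; the temperature and the weighting between hard and soft losses are illustrative choices rather than values from the original paper.

```python
# Minimal sketch of soft-target distillation (assumed PyTorch setup).
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Hard-label loss: every wrong answer is penalized equally, whether
    # the student confused a dog with a fox or with a pizza.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Soft-target loss: the teacher's softened probabilities carry the
    # "dark knowledge" about which wrong answers are less wrong.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2

    # Blend the two; the student trains on this combined objective.
    return alpha * hard_loss + (1 - alpha) * soft_loss
```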

Explosive Growth

The idea was not an immediate hit. The paper was rejected from a conference, and Vinyals, discouraged, turned to other topics. But distillation arrived at an important moment. Around this time, engineers were discovering that the more training data they fed into neural networks, the more effective those networks became. The size of models soon exploded, as did their capabilities, but the costs of running them climbed in step with their size.

Many researchers turned to distillation as a way to make smaller models. In 2018, for instance, Google researchers unveiled a powerful language model called BERT, which the company soon began using to help parse billions of web searches. But BERT was big and costly to run, so the next year, other developers distilled a smaller version sensibly named DistilBERT, which became widely used in business and research. Distillation gradually became ubiquitous, and it’s now offered as a service by companies such as Google, OpenAI, and Amazon. The original distillation paper, still published only on the arxiv.org preprint server, has now been cited more than 25,000 times.
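
DistilBERT is still easy to try today. Here is a brief usage sketch, assuming the Hugging Face transformers library is installed; the checkpoint name is the commonly published sentiment-analysis fine-tune of DistilBERT.

```python
# Assumes: pip install transformers torch
from transformers import pipeline

# Load a distilled student of BERT that is small enough to run cheaply.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Distillation made this model fast enough for production."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```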

Distillation requires access to the innards of the teacher model, so it’s not possible for a third party to sneakily distill data from a closed-source model like OpenAI’s o1, as DeepSeek was thought to have done. That said, a student model can still learn quite a bit from a teacher model just by prompting the teacher with certain questions and using the answers to train its own model, an almost Socratic approach to distillation.
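
In code, that Socratic approach amounts to ordinary supervised fine-tuning on a teacher’s answers, collected through its public interface. Here is a rough sketch of the data-collection step, assuming the openai Python client; the prompts, teacher model name, and output file are hypothetical placeholders, not a description of any company’s actual pipeline.

```python
# Rough sketch: collect a teacher model's answers via its public API and
# save prompt/completion pairs as fine-tuning data for a smaller student.
# Model name, prompts, and file path are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

prompts = [
    "Explain why the sky is blue in two sentences.",
    "Walk through how binary search finds an element in a sorted list.",
]

records = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in for whichever teacher is queried
        messages=[{"role": "user", "content": prompt}],
    )
    records.append({
        "prompt": prompt,
        "completion": response.choices[0].message.content,
    })

# These pairs then become supervised fine-tuning data for the student model.
with open("distillation_dataset.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```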

Meanwhile, other researchers continue to find new applications. In January, the NovaSky lab at UC Berkeley showed that distillation works well for training chain-of-thought reasoning models, which use multistep “thinking” to better answer complicated questions. The lab says its fully open source Sky-T1 model cost less than $450 to train, and it achieved similar results to a much larger open source model. “We were genuinely surprised by how well distillation worked in this setting,” said Dacheng Li, a Berkeley doctoral student and co-student lead of the NovaSky team. “Distillation is a fundamental technique in AI.”


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
