How DeepSeek is shaking up US tech heavyweights

More efficient AI models may make research easier while raising questions about the value of investments in huge data centers.

Tongliang Liu (The Jakarta Post)
The Conversation
Published on Jan. 30, 2025

Challenging AI heavyweights: The DeepSeek app is seen on a mobile phone in this photo illustration taken in Hong Kong on Jan. 28, 2025. (AFP/Mladen Antonov)

Chinese artificial intelligence company DeepSeek has sent shock waves through the tech community with the release of extremely efficient AI models that can compete with cutting-edge products from United States companies such as OpenAI and Anthropic.

Founded in 2023, DeepSeek has achieved its results with a fraction of the cash and computing power of its competitors.

DeepSeek’s “reasoning” R1 model, released last week, provoked excitement among researchers, shock among investors and responses from AI heavyweights. The company followed up on Jan. 28 with a model that can work with images as well as text.

So what has DeepSeek done, and how did it do it?

In December, DeepSeek released its V3 model. This is a very powerful “standard” large language model that performs at a similar level to OpenAI’s GPT-4o and Anthropic’s Claude 3.5.

While these models are prone to errors and sometimes make up their own facts, they can carry out tasks such as answering questions, writing essays and generating computer code. On some tests of problem-solving and mathematical reasoning, they score better than the average human.


V3 was trained at a reported cost of about US$5.58 million. This is dramatically cheaper than GPT-4, for example, which cost more than $100 million to develop.

DeepSeek also claims to have trained V3 using around 2,000 specialized computer chips, specifically H800 GPUs made by NVIDIA. This is again far fewer than other companies have used; comparable models may have been trained on up to 16,000 of the more powerful H100 chips.

On Jan. 20, DeepSeek released another model, called R1. This is a so-called “reasoning” model, which tries to work through complex problems step by step. These models seem to be better at many tasks that require context and have multiple interrelated parts, such as reading comprehension and strategic planning.

The R1 model is a tweaked version of V3, modified with a technique called reinforcement learning. R1 appears to work at a similar level to OpenAI’s o1, released last year.
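To give a flavour of the idea, here is a minimal, hypothetical sketch in Python of the loop at the heart of reinforcement learning: sample an answer, score it with a reward, and nudge the model toward whatever earned higher rewards. The toy two-choice "policy" and the reward function are illustrative stand-ins, not DeepSeek's actual training setup.

```python
# A toy REINFORCE-style loop: the "model" is a single logit deciding whether
# to answer step by step or to guess. Rewarding verified answers gradually
# shifts it toward step-by-step reasoning. Purely illustrative.
import math
import random

logit = 0.0  # preference for answering step by step vs. guessing

def reward(choice: str) -> float:
    # Hypothetical reward signal: step-by-step answers are verified
    # correct more often than guesses.
    if choice == "step_by_step":
        return 1.0 if random.random() < 0.9 else 0.0
    return 1.0 if random.random() < 0.4 else 0.0

learning_rate = 0.1
for _ in range(2000):
    p = 1 / (1 + math.exp(-logit))                 # P(step_by_step)
    choice = "step_by_step" if random.random() < p else "guess"
    action = 1.0 if choice == "step_by_step" else 0.0
    # REINFORCE update for a Bernoulli policy: reward * d(log prob)/d(logit)
    logit += learning_rate * reward(choice) * (action - p)

print(f"Learned preference for step-by-step answers: {1 / (1 + math.exp(-logit)):.2f}")
```

A real reasoning model is updated over billions of parameters rather than one logit, but the principle is the same: behaviour that earns reward becomes more likely.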

DeepSeek also used the same technique to make “reasoning” versions of small open-source models that can run on home computers.
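For readers who want to try one of these small distilled models, the sketch below shows how one might be loaded with the widely used Hugging Face transformers library. The checkpoint name is an assumption based on DeepSeek's public releases, and a machine with several gigabytes of free memory is assumed.

```python
# A hedged sketch of running a small distilled "reasoning" model locally.
# Assumes the Hugging Face `transformers` library (pip install transformers torch)
# and that the checkpoint name below matches one of DeepSeek's public releases.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed checkpoint name
)

result = generator(
    "What is 17 multiplied by 24? Think step by step.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```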

This release has sparked a huge surge of interest in DeepSeek, driving up the popularity of its V3-powered chatbot app and triggering a massive price crash in tech stocks as investors re-evaluate the AI industry. At the time of writing, chipmaker NVIDIA has lost around $600 billion in value.

DeepSeek’s breakthroughs have been in achieving greater efficiency: getting good results with fewer resources. In particular, DeepSeek’s developers have pioneered two techniques that may be adopted by AI researchers more broadly.

The first has to do with a mathematical idea called “sparsity”. AI models have a lot of parameters that determine their responses to inputs (V3 has around 671 billion), but only a small fraction of these parameters is used for any given input.

However, predicting which parameters will be needed isn’t easy. DeepSeek used a new technique to do this, and then trained only those parameters. As a result, its models needed far less computation to train than a conventional approach would require.
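A rough sense of how this kind of sparsity works can be conveyed in a few lines of Python. The sketch below uses a mixture-of-experts-style router, one common way to activate only a fraction of a model's parameters per input; the sizes and routing rule here are illustrative assumptions, not DeepSeek's published design.

```python
# A minimal sparsity sketch: a router scores 8 "experts" and only the top 2
# are actually computed for a given input, so most parameters stay untouched.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2          # model width, expert count, experts used per input

router = rng.normal(size=(d, n_experts))              # routing weights
experts = rng.normal(size=(n_experts, d, d)) / np.sqrt(d)  # one weight matrix per expert

def forward(x: np.ndarray) -> np.ndarray:
    scores = x @ router                       # one score per expert
    chosen = np.argsort(scores)[-top_k:]      # keep only the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                  # normalize over the chosen experts
    # Only the chosen experts' parameters are used: 2 of 8 here, analogous
    # to activating a small fraction of 671 billion parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = forward(rng.normal(size=d))
print(y.shape)  # (16,) -- computed with a quarter of the experts
```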

The other trick has to do with how V3 stores information in computer memory. DeepSeek has found a clever way to compress the relevant data, so it takes up less memory and can be retrieved quickly.
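The Python fragment below illustrates the general flavour of such compression using a simple low-rank projection: store a small "latent" version of each token's data and expand it only when needed. This is a generic illustration with made-up dimensions, not DeepSeek's exact design.

```python
# Low-rank compression sketch: keep an 8-number "latent" summary per token
# instead of the full 64 numbers, and expand it (approximately) on demand.
import numpy as np

rng = np.random.default_rng(1)
d, r = 64, 8                                  # full width vs. compressed width

down = rng.normal(size=(d, r)) / np.sqrt(d)   # compress: d -> r
up = rng.normal(size=(r, d)) / np.sqrt(r)     # expand:   r -> d

token_state = rng.normal(size=d)
cached = token_state @ down       # store only r numbers per token...
restored = cached @ up            # ...and expand (lossily) when needed

print(f"Stored {cached.size} values per token instead of {token_state.size} "
      f"({cached.size / token_state.size:.0%} of the original)")
```

In a trained model the compression and expansion matrices are learned rather than random, so the expanded data stays close to what the model needs.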

DeepSeek’s models and techniques have been released under the free MIT License, which means anyone can download and modify them.

While this may be bad news for some AI companies, specifically those whose profits might be eroded by the existence of freely available, powerful models, it is great news for the broader AI research community.

At present, a lot of AI research requires access to enormous amounts of computing resources. Researchers like myself who are based at universities (or anywhere except large tech companies) have had limited ability to carry out tests and experiments.

More efficient models and techniques change the situation. Experimentation and development may now be significantly easier for us.

For consumers, access to AI may also become more affordable. More AI models may be run on users’ own devices, such as laptops or phones, rather than running “in the cloud” for a subscription fee.

For researchers who already have a lot of resources, more efficiency may have less of an effect. It is unclear whether DeepSeek’s approach will help to make models with better performance overall, or simply models that are more efficient.

---

The writer is an associate professor of machine learning and director of the Sydney AI Centre, University of Sydney. The article is republished under a Creative Commons license.

 
