DeepSeek and shallow moats: what does it mean for higher education?

DeepSeek’s arrival may have spooked the markets, but what does it mean for the research and development of LLMs? Higher education should avoid putting all its eggs in one GenAI basket, writes Ben Swift

21 Feb 2025
Image: a phone screen with several different GenAI apps displayed. Credit: iStock/Robert Way.

Created in partnership with

Australian National University


The higher education sector has form for betting on technological moats that turned out to be mirages. In the early 2010s we rushed to build massive open online course (Mooc) platforms, convinced we’d need our own infrastructure to survive in the digital age. A decade later, most of those courses – and the custom platforms we built to host them – have been abandoned. As AI reshapes education, we risk repeating this costly mistake.

DeepSeek R1 recently made a splash – you may have heard of it (or even downloaded it). It’s technically impressive, meeting or beating similar models from OpenAI on many benchmarks, and it’s also “open weight”, so anyone with the necessary hardware can download and run it.

DeepSeek didn’t just release one model – it released several, each with slightly different trade-offs. It also describes, in a research paper, how it trained them all at a significantly lower cost, in time and money, than its competitors. The research community needs more time to evaluate these claims, but this could be at least a small breakthrough in reducing the resources it takes to train a new large language model.

The market certainly seemed to think so: Nvidia’s shares lost nearly US$600bn in value in one day, on fears that customers wouldn’t need to buy as many of its AI accelerator chips to train their models. Even that story is complicated, though – DeepSeek R1 is a “reasoning” model, which trades reduced time and resources at training for increased time and resources at inference (ie, running the model to generate text).

DeepSeek’s technical achievements are impressive, but the deeper story is what they tell us about the state of LLM research and development. The argument of the leaked 2023 Google memo asserting “we have no moat, and neither does OpenAI” seems to be holding up. Despite trillions of dollars of investment, an upstart really can still come out of nowhere to release – and share – something competitive with state-of-the-art offerings from the tech giants. This poses a strategic question for higher education leaders: how should institutions position themselves in response?

Some universities have already placed significant bets, signing exclusive partnerships with major AI companies. The California State University System’s deal with OpenAI will provide ChatGPT access to 500,000 students and faculty. UNSW Sydney has inked a similar agreement (albeit on a smaller scale). These moves reflect an understandable desire to get ahead of the curve, but they may also lock institutions into particular tools and ecosystems at a time when new, and perhaps better, alternatives are emerging. The higher education sector faces huge financial challenges, and these contracts divert precious resources from faculty salaries, tutor budgets and the other crucial functions of the institution.

The emergence of models like DeepSeek R1 is a timely reminder that the AI landscape remains highly dynamic. Rather than pursuing exclusive relationships with specific providers, institutions might better serve their communities by staying provider-agnostic (where they engage at all). This approach acknowledges both the rapid pace of technical change and the likelihood that tomorrow’s leading models may come from unexpected sources.

For individual educators, DeepSeek’s release reinforces what many of us have already realised: the specific model matters less than how we integrate AI capabilities into our pedagogical practice. Whether students use GPT-4, Claude or DeepSeek (and let’s face it, they will), the fundamental challenges remain the same. How do we design assessments that meaningfully evaluate learning in an AI-augmented world? How do we help students develop the critical thinking skills to collaborate effectively with AI tools?

For university administrators and planners, these developments suggest a few key principles:

  1. avoid long-term exclusive commitments to specific AI platforms or providers
  2. invest in developing institutional AI literacy and governance frameworks
  3. focus on building adaptable infrastructure that can accommodate multiple AI tools.

There may be other upsides to these developments. As model training and deployment costs decrease, universities may find it increasingly feasible to develop specialised models for academic domains or research applications. Such projects could focus on specific institutional needs rather than attempting to compete with general-purpose models.

Ultimately, DeepSeek R1 reinforces a crucial message: in the AI era, competitive advantage will come not from controlling access to certain models, but from skilfully integrating AI capabilities into our core educational mission. Universities that build their strategies around particular AI platforms risk finding themselves trapped in technological dead ends, while those that focus on developing institutional AI literacy and adaptable frameworks will be better positioned to embrace whatever technological developments emerge.

The real moat in higher education isn’t technological; it never has been. It’s the ability to create transformative learning experiences and generate new knowledge. Technology is merely a tool – in reality, a system of tools, people and other resources – in service of these fundamental goals. The winners won’t be those who bet early on the right AI platform, but those who most effectively help their communities master the art of learning and creating in an AI-augmented world.

Ben Swift is a senior lecturer and cybernetic studio lead at the ANU School of Cybernetics. 

