
Why AI literacy belongs in the first-year experience

Embedding AI literacy early ensures every student gains an essential understanding of AI systems, ethics and responsible use, closing the gaps left by optional or uneven provision.
Leocadia I. Zak
Agnes Scott College
12 May 2026
Image: a group of first-year students (credit: iStock/gorodenkoff).


When institutions decide to take artificial intelligence literacy seriously, the responses tend to cluster around a few familiar moves: a new certificate programme, a concentration added to an existing major or a request for faculty to weave AI into their courses where it fits. These efforts can help, but none of them reliably reaches every student.

Certificate programmes, by definition, attract the students who seek them out, and those students are typically already oriented towards technology. Distributing AI across the curriculum without a shared foundation means students encounter AI instruction inconsistently, shaped more by which courses they happen to take than by any deliberate institutional commitment.

The result is a growing divide between students who graduate with a grounded understanding of AI systems and students who do not. This divide maps, predictably, on to the inequalities that already exist in higher education.

There is a more effective starting point, and some institutions already have the infrastructure for it: the first-year experience.

Why the first year is the right moment

Institutions can reach first-year students before academic identities and elective pathways have diverged. During this time, students are most open to establishing frameworks for thinking that will carry through the rest of their studies and into professional life.

AI literacy, introduced at this stage, does not need to compete for space in the curriculum later. It becomes part of the foundation on which everything else is built.

This is the logic behind the approach we have taken at my institution. From this autumn, all first-year students will engage with AI literacy through two existing required courses in our Summit programme, a four-year leadership and experiential learning sequence that every student completes. This ensures foundational AI instruction lives inside a structure students are already in.

What a first-year AI foundation actually covers

The common mistake in designing AI education is starting with tools. Which platforms should students know? Which applications are most relevant to their field? This framing produces training that dates quickly and misses the deeper preparation students need.

A more durable foundation covers three things.

First, conceptual grounding: what AI systems actually are, how they are built and what their outputs represent. Students who understand that a generative AI system is predicting probable language rather than retrieving truth are far better equipped to evaluate what it produces than students who simply learn to prompt it effectively.

Second, ethical reasoning: the real tensions embedded in these systems around bias, privacy, labour, environmental cost and the concentration of decision-making power. AI already shapes hiring decisions, credit scoring, healthcare triage and content moderation. Students who will enter a workforce shaped by these systems need practice asking who benefits, who is harmed and what assumptions are built in.

Third, professional context: what responsible AI use looks like in practice, including when it adds genuine value and when it undermines the thinking, authorship and judgement that professional work requires.

The first-year experience as shared infrastructure

One practical question institutions face is where AI instruction is supposed to live organisationally. Whose responsibility is it? Which department owns it?

Embedding it in the first-year experience resolves that question. Most institutions already have some form of required first-year programme, whether it is a foundations course or an orientation seminar. These structures exist to deliver shared learning before students specialise.

The adaptation required is smaller than it might appear. The goal is not a stand-alone AI course grafted on to a busy first-year schedule. It is structured modules focused on foundations, ethics and professional application that fit inside time students are already committed to.

These modules encourage students to ask important questions: Is this accurate? Is this fair? What is missing? What are the consequences? These are questions they will continue to interrogate throughout their academic careers.

What this makes possible later

A shared first-year foundation does not replace discipline-specific AI engagement. It enables it. Students who arrive in their second and third years with a common vocabulary and a set of practised questions are far better prepared to apply that thinking in the contexts their fields present.

We are not striving to produce AI experts from a first-year module. But we are working to ensure that no student graduates without having engaged seriously with one of the most consequential technologies shaping the world they are entering.

That outcome is too important to leave to chance.

Leocadia I. Zak is the president of Agnes Scott College in the US.

