Apply the principles of critical pedagogy to GenAI

Artificial intelligence can shape our educational practices – but when we allow this to happen unthinkingly, what do we risk losing? Here’s how to stay uncomfortable and ask the critical questions

6 Aug 2024

The hand of Adam reaches towards a robot arm, with a buffering sign in between. Image credit: iStock/Yana Lobenko.

Created in partnership with The University of Adelaide

The integration of generative AI in higher education can offer numerous benefits, but it also risks reinforcing existing power structures and marginalising diverse voices. We must critically and ethically engage with these tools to ensure equitable and democratic educational practices.

The power dynamics of GenAI

As universities grapple with the rapid integration of generative artificial intelligence (GenAI) technologies, it’s important that we examine how these tools reflect and reinforce existing power structures while marginalising diverse voices. As we continue to use GenAI in teaching, we must reckon with how it defines knowledge availability and influences the perspectives to which our students are exposed. 

It can be easy to fall into the trap of thinking that GenAI is a neutral technology, yet its neutrality is far less clear-cut than a purely technical view would suggest. Latent semantic analysis may be a nominally neutral process but, as yet, it's not clear how these tools handle encoded bias in data, issue framing and the complexity of ongoing academic debate.

The ethical issues surrounding GenAI in education are profound. The technology’s reliance on surveillance – vast amounts of data often scraped without consent – raises significant concerns about intellectual property and the reinforcement of existing inequalities. These data sets, when traced back to their source, are predominantly from white, patriarchal and Eurocentric perspectives. Even data selection, tool design and training influence GenAI output, and successful performance in these areas is often defined by commercial metrics rather than social interest. The risk is that GenAI will reproduce and even amplify the dominant narratives, undermining efforts to diversify and decolonise the curriculum, and perpetuating the erasure of marginalised communities’ histories and experiences. 

Overwhelmingly, research into educational technologies has focused on the artefacts – the tools – as the site of the research, rather than on the problems they present. However, the challenge is not merely technological but ideological. The integration of GenAI into educational systems can subtly yet powerfully reinforce notions such as the neo-liberal ideals of efficiency, standardisation and productivity, which are often at odds with the principles of democratic education.

If we allow these tools to shape our educational practices without scrutiny, we risk erasing the rich, diverse tapestry of voices that should be at the heart of higher education. Given the current range of tools, their data sources and their usage, we may well be looking at a new wave of curriculum colonisation if we don't intervene. We need to be comfortably uncomfortable with our adoption of GenAI, using a critical pedagogy lens to develop strategies that help educators and policymakers resist the reinforcement of dominant hegemonic narratives and ideals within educational contexts.

Employing critical pedagogy

To engage with GenAI ethically and responsibly, we must accept a level of discomfort. The challenge is how to become comfortable with this discomfort. In other words, how can we acknowledge the ethical issues of GenAI while leveraging the benefits it can offer? Previously, my co-authors and I have written about applying the human-in-the-loop framework to GenAI interactions – the idea that a human evaluates the inputs and outputs surrounding the tool's automated processes. This provides a basis for ethical GenAI interactions, but alone it is not enough.

To this framework, we must apply the principles of critical pedagogy, such as fostering critical consciousness (an awareness of social, political and economic injustices), promoting dialogue, challenging oppression and encouraging reflective practice. In practice, this means questioning and challenging the assumptions and biases inherent in GenAI technologies.

To achieve this, educators and policymakers need to be vigilant, asking critical questions each time GenAI is employed in the educational process:

  • How diverse and ethical are the data sources and perspectives that inform the AI’s outputs? Are the voices and experiences of marginalised communities represented and respected when using AI-generated content?
  • How can we involve students in the conversation about the ethical use of GenAI in their education, ensuring their perspectives are included?
  • Are we using GenAI to supplement or supplant critical thinking – supporting, rather than replacing, the unique contributions of educators in curriculum design? Are we prioritising productivity at the expense of reasoned and critical practice in learning and teaching?
  • How can we ensure that GenAI tools do not perpetuate existing inequalities or create new forms of inequity in our classrooms?
  • How can GenAI tools help educators identify and challenge our own biases in curriculum development?
  • How do we ensure that GenAI co-developed curriculum is transparent and accountable to all stakeholders, including students, educators and communities?

Yet, to be able to ask these kinds of questions, educators and policymakers need to develop GenAI digital literacy. By educating ourselves on how to critically evaluate AI tools and understand their limitations and potential biases, we can become more informed and empowered users of GenAI technologies. 

In practice, educators can encourage ethical GenAI practices by selecting and using GenAI tools that prioritise diversity and equity. This might include choosing platforms that offer transparent algorithms and diverse content, as well as advocating for the development of GenAI tools that reflect a wide range of perspectives and experiences and are informed by consensually sourced data. By doing so, we can ensure that the integration of GenAI in education supports and amplifies diverse voices, fostering a more inclusive and democratic educational landscape – and helping to make us comfortably uncomfortable with using GenAI in our teaching and learning praxis.

Richard McInnes is manager of educational design; Simon Nagy is a learning designer; and Laura Airey is a learning designer and project lead, all at the University of Adelaide.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
