New podcast looks at how updated Benchmark Statements reflect technological and disciplinary change
| Date: | 1 May 2026 |
|---|---|
In the second of our special podcasts released to mark the publication of this year's suite of revised Subject Benchmark Statements, QAA's Quality & Standards Manager Dr Andy Smith talks with four of our Advisory Group Chairs: the Royal College of Art's Professor Hua Dong (Chair of the Advisory Group for Art & Design), the University of Nottingham's Dr Lee Gregory (Chair of the Advisory Group for Social Policy), Manchester Metropolitan University's Liz Cain (Chair of the Advisory Group for Sociology) and Manchester Metropolitan University's Andrea Collins (Chair of the Advisory Group for Social Work).
Our panel began by discussing the impact of Generative Artificial Intelligence on their disciplines.
Hua Dong emphasised that GenAI is "approached as one tool among many" and that Art & Design education helps students to navigate the spectrum of engagement with AI and to establish their own positions as they contribute to shaping the future of these technologies.
As well as raising concerns as to "the potential for bias, influence and cognitive dependency", Andrea Collins observed that the Social Work Statement had to reflect the emerging use of AI in the profession and the ethical considerations around that: "Our students need to be looking at wider issues around ethical use, not just in terms of thinking about its environmental impact, but also things like confidentiality. We also had to consider the emerging literature coming from research… that talked about the complexity of the ethical use of AI in the profession."
Lee Gregory similarly stressed that his group's "main concerns ended up being focused on the ethical and critical treatment of AI and encouraging reflective use of AI" – as well as giving consideration to how governments are starting to use AI to deliver welfare outcomes.
"It's about trying to ensure its use doesn't erode the independent critical thinking of students, their reasoning skills, even just their willingness to read material and engage with it themselves. This is where AI potentially has more risk for academic development, that it can be a shortcut for many students… but it's also something they need to be able to develop practice around – critical practice – being aware of its limitations… being aware of the environmental costs as well," he said.
Liz Cain agreed on the need to take a critical approach to these technologies: "We talked at length about the inequalities that AI can serve to reproduce. One of the things we were mindful of is the way that Sociology as a discipline can support students to develop creative and critical skills and to scrutinise inequalities, and we felt that these were skills that would stand students in good stead when engaging with Generative AI."
Our contributors went on to talk more broadly about the importance of such topics as internationalisation, accessibility, sustainability, authentic assessment, and professional standards and skills.
You can listen to the whole podcast – along with the back catalogue of all of our QAA podcasts – on Buzzsprout and other popular streaming platforms, including Apple Podcasts, Spotify and Amazon Music.