Roadshow draws together diverse approaches to assessment
| Date: | 26 March 2026 |
|---|---|
The last full week of March saw QAA embark upon an epic virtual trip around Britain for our first ever Assessment & Feedback Roadshow.
Our series of open events showcased the effective and innovative practices of QAA Members across the country. During the week, we visited colleagues based at Aston, Birmingham City, Buckinghamshire New, Cardiff Metropolitan, Coventry, Edinburgh, Exeter, Glasgow, Hartpury, Imperial, Kaplan International Pathways, King's College London, Leeds, Leeds Trinity, LSE, Manchester, Plymouth, Southampton, the University of West London, the University of the West of Scotland and Waltham International College.
ARTIFICIAL INTELLIGENCE
The week started with QAA's Data Analyst Rebecca Robinson leading a discussion among participants from across the UK and overseas about how their assessment strategies are responding to the proliferation of artificial intelligence – a session which highlighted a broad array of challenges and a rich range of initiatives tailored to the needs of different kinds of provision and institutions.
Perhaps unsurprisingly, AI proved a recurring theme throughout the week's presentations.
LSE's Alex Standen, for instance, introduced a cross-institutional, collaborative approach to implementing oral assessment. The strategy began at her institution a year ago with the establishment of a working group to explore AI-resilient modes of assessment – not only to reduce the risk of misconduct and make the academic evaluation process more secure and comprehensive, but also to enhance opportunities for the development of students' critical thinking skills, and to promote inclusion and accessibility. The work involved updating her institution's assessment and feedback toolkit to include guidance on the benefits and challenges of oral assessment, stressing the value of interactivity and authenticity in supporting the development of understanding. The approach also included ways to address student anxieties, time constraints, differences in English language abilities, questions of fairness, and reasonable adjustments. Her session emphasised the levels of student enthusiasm and confidence that this mode of engagement can generate.
MAKING LEARNING VISIBLE
Our speakers highlighted the visibility of learning as a key factor in strategic responses to the uses of artificial intelligence in assessment.
Birmingham City's Emma Ransome observed that we are "at a moment when the sector is trying to make sense of what's going on with AI in terms of teaching, learning and assessment". She stressed that the key question we now face is whether assessment design is evolving quickly enough to respond to the presence of AI, at a point when 38 per cent of students admit to submitting work they don't understand. She argued that assessment strategies must prioritise the design of learning that remains valid in an AI-enabled environment – emphasising process as much as product, making learning more visible, and encouraging students to demonstrate what AI alone cannot replicate. Recognising that "AI forces us to have a much clearer idea about what assessment is trying to measure", she introduced a new GenAI-integrated assessment framework structured around five pillars:

- purpose and learning outcomes – clarifying what learning must be independently demonstrated, and where AI may support or should be limited to protect core academic capabilities
- authenticity, professional relevance and real-world practice
- diversifying how students demonstrate learning through assessment modes and evidence types
- transparency and accountability in AI use, with clear expectations around that use
- the embedding of academic integrity, equity and accessibility

Assessment, she stressed, should continue to capture students' capacities for understanding, reasoning, judgment and decision-making – and should always make learning "visible, traceable and defensible".
KCL's Jayne Pearson similarly highlighted the importance of making the writing process visible in the age of GenAI through the use of a "processfolio", an adaptation of the traditional portfolio model. Writing, she said, is a deeply situated and deeply human act, and we should therefore be wary of reducing writing processes to menial tasks that can be outsourced to technology. Rather than a collection of disparate artefacts, the "processfolio" allows students to bring together pieces of work to depict their learning journeys, representing the work done and the resources used in crafting a piece of writing, alongside a reflective commentary considering the usefulness of those artefacts and exploring their own identities as writers. Through this, students are assessed on their ability to depict and make visible their writing processes. Her use of this tool has helped students refine their perceptions and understanding of themselves as writers. Other benefits cited by staff and students involved in the initiative have included increased metacognitive awareness, enhanced criticality in the use of AI, deterrence of inappropriate uses of generative AI, improved opportunities for feedback, and the revelation of the hidden curriculum by making the writing process explicit.
Meanwhile, Waltham International College's Oghenenioborue Rume Okandeji-Barry and Becky Forbin discussed how to scaffold for success through their "LATTICE 3:6:9" assessment and feedback framework. They pointed out that when we assess only the final product, we can fail to see the entire ecology of the learning process. The design challenge, they stressed, is to make learning visible, and this can be done through the scaffolding of assessment. "LATTICE 3:6:9" has been designed to turn an assessment into a visible learning system built around "three timed pulses, six live signals and nine small evidence objects". The framework "does not wait for a draft to be good" but "captures whether the learning process is present, visible and improving". Their model creates an authorship trail whereby students leave evidence of their thinking, revision and responses, so that integrity is designed into the system. They showed how the introduction of the framework has delivered clear, measurable and sustained performance uplifts through structured assessment pacing, earlier feedback cycles, improved submission behaviours, and alignment with employability aims.
AI & AUTHENTICITY
The authenticity of assessment was also a recurring theme across the four days of presentations, both as a response to the uses of generative artificial intelligence and as a way of addressing the employability agenda.
Birmingham City's Emily Coyne-Umfreville, Shivani Wilson-Rochford and Alice May considered how to build consistency and equity by developing a holistic framework for assessment and feedback at scale. They explained that BCU has prioritised the consistent enhancement of assessment and feedback processes, with particular emphasis on the clarity of assessment expectations and the usefulness of feedback, as part of commitments to reduce awarding gaps and to address academic integrity issues arising from the increased use of artificial intelligence. In response, they have taken a co-creative approach to assessment and feedback reform, working with academic leads to map assessment across BCU's seven Schools and to inform the institution-wide delivery of workshops and writing groups developing up-to-date and authentic assessments. They found that some colleagues had been overoptimistic about the security of their assessments, and have therefore encouraged staff to explore more secure modes of assessment, including live, in-class elements, expectations of personalised evidence, and tighter links to teaching sessions, specific readings, local case studies and in-class or placement activities. They stressed the value of authentic assessment, with students able to see the applicability of their learning in employment contexts, and the usefulness of briefing staff to ensure the consistency and standardisation of marking and feedback processes.
Plymouth's Katie Angliss discussed a framework for authentic assessment and GenAI integration. She argued that the creation of authentic assessments is crucial to preparing students for graduate employability – helping them develop the skills necessary to contribute to the contemporary workplace (including through the use of AI) and to communicate and work productively with their future colleagues. She introduced a business management module she has developed for final-year undergraduate students which allows them to work with live clients and partially to co-create the terms of their assessments. The module has been designed to "develop the capability to work in an enterprising way with businesses", to negotiate "mutually suitable" projects with clients, and to collect, analyse and critically evaluate data. She noted some student "anxiety" and "unease" about the dangers of AI use (such as its capacity to undermine their critical originality), but argued that AI can counter classroom groupthink. She also proposed that "AI can behave as a tutor which asks probing questions – for example, Why do you think that?" She observed that some of her students have taken an ethically "conscientious" attitude to the use of AI, while others have wanted to submit group projects generated entirely by AI. She also recalled teaching one student who had refused to use AI on ethical grounds, but felt that this student was "doing herself a disservice" in relation to her future employment prospects. She stressed the value of learning to use AI for professional purposes and argued that the sector should therefore "continue to design assessments with responsible AI practice embedded".
Coventry University London's Joanna Voulgaropoulou also stressed professional authenticity when she discussed ways to rethink assessment and feedback in an AI-driven academic landscape. With around 90 per cent of students now using AI, she said, we need to address how the technology is shaping learning, academic integrity, assessment and feedback. Recognising the limited reliability of AI detection tools, the ethical considerations around their use, and their emotional impacts on students, she acknowledged the need to develop new modes of assessment, observing that traditional take-home tasks are more vulnerable to the misuse of AI than multi-stage and process-based modes. The latter approaches, she said, include opportunities for short oral defences and involve authentic, situation-based tasks close to real working practices, such as case analyses, presentations, simulations and professional-context challenges. She stressed the importance of making processes visible and of clear policies articulating the ethical boundaries of AI use. She argued for the use of AI in supporting educational processes and for the need to prepare students to work in an AI-driven world, suggesting that "AI won't replace educators, but educators who use AI will replace those who don't".
Meanwhile, also advocating authenticity, Hartpury's Kate Wilkinson and Marina Catena-Sala talked about getting "back to basics" by approaching academic integrity proactively rather than reactively. They spoke of the "chaos of the world that is academic integrity" in the era of generative AI and stressed the importance of criticality in our interactions with artificial intelligence. As a relatively small institution, Hartpury has approached AI from a supportive perspective, giving staff and students "the tools to navigate this landscape successfully", and is reinforcing its emphasis on authentic assessment – embedding a sense of positive academic practice in assessment processes while avoiding modes of assessment which might be considered insecure. They reported that the introduction of this approach has corresponded with a levelling out of academic misconduct cases in foundation year provision and a significant reduction at Master's level, where the institution supports a higher proportion of international students. They concluded that, by giving students the tools and confidence to think critically and use the technology responsibly, it appears possible to reduce the overall risk of academic misconduct.
Exeter's Sarah Tudor also extolled the benefits of authentic and applied assessments. She looked at how authentic assessments can be designed to draw on work-based and placement practice, skills and applied knowledge, and emphasised how this approach can reduce the unethical use and impact of AI on submissions. Such assessments give students opportunities to work creatively, to collaborate with teams, colleagues and clients, to develop and present solutions to mentors and managers, and to respond to problems and test their responses with stakeholders. She said it's important to design the right tools for students to use in these assessments, to make them subject-specific, context-specific and engaging, to design them from scratch to ensure authenticity, and to keep them as clear and simple as possible. She emphasised how these kinds of assessments enrich learning and teaching and provide opportunities for students to integrate theory and practice in their work.
CO-CREATION & COLLABORATION
Our speakers repeatedly underlined the value of involving students in the development of assessment strategies – not only to ensure the effectiveness of those strategies but also to engage students' interest in assessment processes.
Southampton's Claire Hughes, Helena Pugsley, Amy Harrison and Chloe Lam spoke about the impact of student partnerships in enhancing assessment and feedback practices. They observed that, despite the importance of assessment in our work, students across the sector regularly say that assessment and feedback are not always a good experience for them – just as staff often report that students don't engage with these processes in the ways intended. They therefore emphasised the importance of reviewing and refreshing those processes, and stressed that co-design and student partnership are crucial to making such changes sustainable. They introduced their Advancing Assessment Intern scheme, through which student interns are closely involved in ongoing enhancement activities – including data analysis, focus groups, training, conferences and report-writing – and, in doing so, gain professional skills and confidence. This co-construction approach was said to have real positive impacts, adding greater energy, compassion and understanding to inclusive assessment strategies through a "mutual learning experience" designed to underpin the wellbeing and success of everyone involved.
Edinburgh's Cathy Bovill, Heather McQueen, Celine Caquineau and Patricia Castro Sanchez described innovative approaches to the co-creation of assessment. They argued that co-creation can transform everyone involved – students, staff and institutions. Founded in mutual respect and responsibility, it can engender creativity, reciprocity and mattering: making students feel they matter, that they are recognised, valued and needed. It can promote student choice, agency and trust, by doing things as simple as giving students the opportunity to help to develop assessment rubrics, or to vote on options for the topics or weightings of assessment elements. This, they said, can have a "massive impact" on how students engage with assessment.
Glasgow's Joseph Maguire talked similarly about how to develop engaging and inclusive assessment through structured student choice, drawing on the example of his own practice. He has involved students in the design of assessment, shaping parts of an assessment not through full co-creation but through a structure of bounded, collective choices – essentially by asking students to pick from a menu of options as to what they'd prefer to do at certain points in the assessment. While core assessment structures and alignment with learning outcomes are fixed, his model has offered students opportunities to vote on how grades are generated from a module's continuous assessment element, whether groups are self-organised, whether peer review is practised, and the domain and focus of their group assignment. At the same time, he provides rationales for the options he recommends – such as the value of developing assessment literacies through peer-review processes. These kinds of discussions, he noted, can enhance students' feelings of equity, inclusion, ownership and understanding, result in greater engagement with assessments, and prompt positive student feedback. This approach offers what he called a "pragmatic middle ground" which presents "limited choices but real influence". One of its greatest benefits, he suggested, is that it helps students to think and talk about, get involved in, and understand these processes.
ASSESSMENT LITERACY
Our speakers agreed on the crucial importance of students properly understanding their assessment processes and the feedback they receive.
UCL's Fleur Corbett and Philip Tomlinson discussed collaborative approaches to enhancing student confidence and assessment and feedback literacy. They noted that students had previously raised concerns about the efficacy of assessment and feedback, and had therefore established a staff-student project to examine barriers to students' understanding of assessments and to their confidence in their ability to perform well. Their research highlighted the need to make marking processes and criteria clear and explicit, and showed the benefits of explaining what makes high-scoring examples of student work successful. It found that students who achieve low grades can still feel satisfied if they receive clear feedback that explains those grades. It also found that some students felt they lacked the ability to improve their critical skills, and so valued feedback which detailed where and how they needed to improve. The speakers stressed the importance of explaining and managing student expectations as to when and how feedback would be delivered, and of ensuring that staff involved in these processes have clarity about expectations for feedback. They went on to highlight the value of an exercise they had developed to take students through the marking process, in order to demystify assessment and feedback and enhance student confidence and literacy.
Coventry's Christina Magkoufopoulou was joined by Aston's Alice Lau, Laurie Walden and Anna Law from the University of the West of Scotland, and Lucie Ingram from the University of West London, to introduce the findings and outputs of their QAA-funded Collaborative Enhancement Project on Enhancing assessment literacy: Balancing staff expectations with students' effort and time. They pointed out that students have different expectations and understandings of assessments, and of the time and effort these may require. This comes at a time when many students report challenges with time management and maintaining a good work-life balance, and when the majority of staff they surveyed said that their students have experienced difficulties meeting deadlines. Students involved in the research said they frequently underestimate how long an assessment task will take to complete, and added that they benefit from lecturers explaining the expectations of independent learning hours and the time commitments required for elements of assessment. The speakers discussed the benefits of the assessment literacy toolkit they have developed, whose impacts on learning, teaching and assessment were, their research showed, appreciated by significant majorities of students and staff. The toolkit promotes an understanding of what is required: breaking an assessment down into its component tasks, and reviewing it in relation to the relevant learning outcomes, time allocations, marking criteria and impacts.
MARKING & FEEDBACK
Our speakers also stressed the crucial importance of promoting confidence in marking and feedback processes through improved understanding and transparency.
Buckinghamshire New University's Suzanne Doria spoke about the positive impact of an innovative mechanism for immediate marking and feedback. Drawing on comments from participants in her session, she suggested that a standard turnaround time of three weeks can pose challenges for students: in that time, they may forget the details of the assessment, ignore (or fail to understand) the feedback once it's received, or miss opportunities to apply it because they're already engaged in, and focused on, their next modules. In response to such concerns, she has developed a portfolio assessment model which culminates in a voluntary session: students are each invited onto campus on the day of marking for a ten-minute, one-to-one slot (as their work is marked live) to receive feedback and provisional marks in person, with a side-room for refreshments, reflection and additional academic advice. She added that student engagement with this process has been remarkably high and student understanding has been enhanced. She also observed that this personalised approach has represented something of an antithesis to an increasing reliance on AI – especially in reducing stress levels and in supporting students who have failed at their first attempt.
Leeds Trinity's Antesar Shabut and Nicky Danino looked at how to decode assessment by embedding navigational capital through a practical grade-band framework designed to "dismantle the hidden curriculum" and advance equity and transparency. The gap between staff expectation and student performance, they argued, is rarely a gap in intelligence but more often a gap in translation. By making expectations explicit and decoding institutional requirements, we can transform assessment from a mysterious cipher into something clear, accessible and equitable for all students. They found that their standardisation method has also benefited staff, who find it much easier to mark assessments when they can see clearly what should constitute specific grade bands – ensuring that markers, verifiers and external examiners are all calibrated to the same baseline. Their approach has resulted in improved levels of student confidence as well as a reduction in academic appeals.
Exeter's Eliott Rooke examined strategies to improve student and marker confidence through the multi-stage calibration of marking, looking in detail at the cyclical calibration process developed for the facilitated independent project dissertation on his BA Geography programme. Calibration exercises for multiple markers, he observed, play an important part in ensuring consistency and confidence in grading, as marking schemes can often use relatively vague and necessarily flexible language; such exercises can also underpin community-led peer learning and help to maintain the reliability of standards in the awarding of degrees. His three-stage calibration system starts between March and May, when colleagues calibrate their marking and collect student feedback. From June to September, they go on to agree expectations, identify priorities and respond to that feedback – a stage which, in practice, has resulted in revisions to teaching schedules and the creation of new guides and resources for the module. Then, in the autumn, they move into the student-facing stage, delivering those new resources and sharing expectations with students on such issues as timelines, project management and employability skills. He noted that, since the introduction of this calibration process, student satisfaction and attainment have shown marked improvements, as students are better able to demonstrate their understanding of what a dissertation is and to develop as producers of academic knowledge rather than merely consumers. He stressed that multi-stage calibration provides clear opportunities to respond to issues identified, and that incorporating students into the process helps them see themselves as "active collaborators in developing their learning".
INCLUSION & COMPASSION
The need for assessment tasks and processes to be inclusive and compassionate was also stressed by our speakers.
Salma Al Arefi, from Leeds, introduced the focus of her new QAA-funded Collaborative Enhancement Project, which promotes inclusive assessment through a framework designed to unpack hidden barriers in assessment question design. She explained that the assumptions about assessment literacies implicit in the hidden curriculum are crucial to equity and inclusion in assessment processes, and that these particularly affect students with diverse learning needs or living conditions. Assessment, she said, is not simply a matter of instruction but involves a coded communication process – and this creates the risk of a growing gap between what we intend to communicate and what students are able to understand through the implicit expectations of assessments. She therefore stressed the value of anticipatory design: an approach to designing assessment which anticipates the need for the inclusive communication of assessment requirements, and thereby fosters accessible assessment literacies which can help to enhance learning opportunities and close awarding gaps. This requires, she emphasised, that we work to ensure that our students know, for instance, how to navigate assessment support information, how to interpret the language of assessment, how to answer assessment questions, how marks are awarded, and how to optimise their performance within time constraints – and that they are generally strategic in their approaches to assessment.
Meanwhile, Manchester's Miri Firth explored a framework for optionality in flexible approaches to assessment, based on research developed through a QAA-funded Collaborative Enhancement Project. She spoke of an ambition and a need to rethink how assessment works for every student in our systems. She pointed out that significant proportions of students are disclosing learning needs and mental health issues, whilst also often trying to combine work with their studies, and asked us to think about what would happen if assessment wasn't something that students fit into but something that can flex around them. Stressing the importance of choice, clarity, co-design and consistency, she observed that flexibility in such simple points as ranges of word counts can support students, giving them autonomy and agency, and allowing their strengths to shine.
Jacqui Browne, Christopher Smith, Carla Smedberg and Prasham Kothari, from Kaplan International Pathways, talked about compassionate assessment strategies to support student success. They observed that issues such as overassessment, heavy assessment weightings and the clustering of assessments can add to student stress. Compassionate assessment, they said, is not about making things easier, but about making processes clear and promoting human-centred practices. These practices have included changing "deadlines" into week-long "submission windows" to reduce the anxieties and pressures which have led to late and lower-quality submissions. This hasn't affected the amount of work expected of students, but has transformed the way that time is structured around assessment processes – resulting in significant increases in submissions before final deadlines and significant reductions in fail grades. Another innovation has been to schedule resits dynamically in relation to readiness, so that reassessment does not become overwhelming when embedded alongside ongoing modules; this too has led to significant improvements in module outcomes. Meanwhile, to meet the needs of students from diverse international backgrounds and with differing academic expectations, they have also produced annotated exemplars of assessment work – of "What A Good One Looks Like", or what they call a "WAGOLL". This initiative has prompted valuable conversations and helped to clarify standards – again resulting in reduced anxieties and improved outcomes.
Cardiff Metropolitan's Laura West-Burnham, Abi Williams, Stella Diamintidi and Jemma Oeppen-Hill detailed their institution's strategy to promote trust by reframing mitigating circumstances policy through compassionate assessment approaches. They explained how they had reviewed their institution's mitigating circumstances data and benchmarked its processes against sector practices and best-practice guidance. They found that there had previously been confusion among both students and staff about process expectations and evidence requirements, delays and issues with the timing of assessment periods and exam boards, and significant pressures and burdens on staff and students – indeed, the whole process could sometimes prove as time-consuming as completing an assessment itself. In response, they have introduced a "simpler, fairer, more supportive" mitigating circumstances policy which involves: a five-working-day late submission window for most coursework; four permits per academic level allowing late submission without penalty; the self-declaration of mitigating circumstances to defer assessments (up to two before support is offered); and a tracker of late permits and deferrals to help identify when students may need support – all within a trust-based system in which no third-party evidence is required. At the same time, they looked at where assessment deadlines fell and where they needed to prevent assessment bunching, and put in place more proactive student support mechanisms, working to balance student agency and the proportionality of processes with the maintenance of academic standards. Their emphasis on trust, within clear process structures and principles of equity, has allowed them to treat students as "adults and partners in the processes of learning". Early indicators have shown strong student awareness, understanding and appreciation of the new protocols, and it is hoped that this increased flexibility will result in enhanced rates of progression. They noted that the strategy would inform processes and principles of assessment and curriculum redesign, and that the collaborative, cross-team, student-engaged approach taken to its development has offered a blueprint for further transformation initiatives.
MORE TO COME
Our Roadshow packed a lot of learning into less than a week, with our speakers' core themes intertwining fruitfully and expanding our understanding of these key areas of practice.
But at the end of these four full and busy days, we haven't yet reached the end of this particular road for 2026...
"We'd like to thank everyone who took part in our first Assessment & Feedback Roadshow," says Steph Tindall, QAA's Head of UK & International Membership Delivery. "It's been quite a trip – full of extraordinary discoveries, valuable insights and energetic debates, demonstrating again our sector's capacity and appetite for impactful innovation and meaningful enhancement. And there'll be more to come: we're very excited to announce that our Roadshow has generated so much interest and enthusiasm that we'll be inviting you all on the second stage of this journey later this year!"