14 May 2024
The power of the narrative? Intuiting the persuasive power of TEF 2023 submissions from a quantitative analysis of provider outcomes
Author
Professor Claire V.S. Pike
Pro Vice-Chancellor (Education Enhancement), Anglia Ruskin University
As with many periodic regulatory exercises, the TEF has been through various iterations. We have oscillated between provider-level and subject-level submissions, seen an increasing prominence of student voice in the process, worked with Student Experience and Student Outcomes as distinctly rated aspects, and experienced a recent exercise that involves a greater breadth of provider sizes and types than previously.
With consultation on TEF 2023 promised to inform what, we assume, will be TEF 2027, it is likely that still further iteration will occur - though my sense is that the next TEF exercise will involve only some tweaking or evolution of the ‘rules of engagement’ laid out for TEF 2023, rather than another fundamental rewrite; the model now feels more mature.
Perhaps buoyed by a sense that the existing model is indeed more or less here to stay, much interesting effort has gone, and continues to go, into analysis of TEF 2023 submissions and outcomes. The recent QAA qualitative analysis of provider submissions and panel statements is a comprehensive example, crystallising themes and features judged as excellent in TEF 2023, alongside individual case studies of excellent practice published by the Office for Students (OfS).
To complement this valuable qualitative work, I was curious to understand whether, using quantitative methods, we could intuit the potential persuasive - or even transformative - effect that additional evidence brought forth in narrative statements may or may not have had upon final outcomes. After all - speaking from the perspective of one who led the provider submission of my institution for TEF 2023 - it is helpful to have a sense of whether the resource-intensive, widespread and hugely collaborative processes that teams underwent to produce provider and student submissions seemed to have had material impact.
Unlike its predecessor, the official TEF 2023 guidance did not use the term ‘initial hypothesis’ - but I use it in my report to communicate the overall rating-equivalence of the data in a provider’s TEF dashboard pages: materially above benchmark = ‘Gold’; broadly in line with benchmark = ‘Silver’; materially below benchmark = either ‘Bronze’ or ‘Requires Improvement’, judged on the basis of size of gap.
I then compared these ‘initial hypotheses’ on a provider-by-provider basis to the final TEF 2023 outcomes achieved, and coded each as one of: exceeding the ‘initial hypothesis’; in-line with the ‘initial hypothesis’; or performing at a level below the ‘initial hypothesis’.
The gap, if any, between the ‘initial hypothesis’ and the final outcome can be interpreted as the effect of the narrative submissions.
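The coding scheme described above can be sketched in a few lines of code. This is purely an illustrative reconstruction of the article's method: the function names, the assumed ordering of ratings, and the labels for benchmark positions are my own assumptions, not part of the official TEF guidance.

```python
# Illustrative sketch of the 'initial hypothesis' coding described above.
# Rating labels follow the article; everything else is assumed for illustration.

RATING_ORDER = ["Requires Improvement", "Bronze", "Silver", "Gold"]

def initial_hypothesis(dashboard_position: str) -> str:
    """Map a provider's dashboard data, relative to benchmark,
    to a rating-equivalent 'initial hypothesis'."""
    mapping = {
        "materially above benchmark": "Gold",
        "broadly in line with benchmark": "Silver",
        "materially below benchmark (smaller gap)": "Bronze",
        "materially below benchmark (larger gap)": "Requires Improvement",
    }
    return mapping[dashboard_position]

def code_outcome(hypothesis: str, final_rating: str) -> str:
    """Code a provider's final TEF rating relative to its initial hypothesis."""
    gap = RATING_ORDER.index(final_rating) - RATING_ORDER.index(hypothesis)
    if gap > 0:
        return "exceeding the initial hypothesis"
    if gap < 0:
        return "below the initial hypothesis"
    return "in line with the initial hypothesis"
```

On this sketch, a provider whose data sat broadly in line with benchmark but which secured Gold would be coded as exceeding its initial hypothesis, and the sign of the gap stands in for the persuasive effect of the narrative submissions.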
From the results, I was struck by the fact that the majority of providers who achieved a Gold rating overall did so in spite of data that did not present a clearly ‘Gold’ ‘initial hypothesis’. In contrast, just over half of providers who were rated Bronze had a starting dataset that was more positive than firmly ‘Bronze-equivalent’. This suggests that judgements on the quality of evidence brought forth in the narrative submissions had the effect of ‘spreading the bell curve’ - i.e. bringing forth further differentiation between providers whose standardised datasets suggested performance approximately around benchmark. It also suggests that time and care spent gathering discretionary quantitative and qualitative evidence for the narrative submissions was surely not wasted.
On the subject of opportunity for time and care, subdivision by provider type and mission group provides food for thought. High tariff providers - who are typically well-endowed/funded - did particularly well at exceeding the ‘initial hypothesis’ presented by their datasets, as did - somewhat, but not entirely synonymously - institutions that are members of the Russell Group. In contrast, both large and small Level 4/5 providers rarely exceeded the ‘initial hypotheses’ presented by their datasets, more frequently performing less well than the starting data might suggest. Taken together, one wonders whether resource to engage with the process of gathering evidence within the tight timescale for creating TEF 2023 submissions - while managing Autumn term-time duties - was a limiting factor for some. Staff responsible for HE in a largely FE context might particularly benefit from further sector-wide support in future.
Also interesting, I think, is the particularly strong performance of institutions in the University Alliance mission group (of which Anglia Ruskin is a member). Outperforming even the Russell Group on the percentage of institutions who secured a final rating more positive than their publicly available dataset would suggest, this collective of ‘professional and technical universities’ may play well thematically with the current governmental zeitgeist toward skills development and clear, economically relevant employability of graduates. It may also reflect the likelihood that such institutions were able to provide compelling evidence of well thought-through and impactful education strategies - institutions that, through both intrinsic mission and economic necessity, tend to take education very seriously.
In summary, this analysis has reconfirmed my view that systematic, continuous and longitudinal gathering of evidence relating to the quality of the education we provide is worthwhile. Indeed, many across the sector found that preparation for TEF 2023 was, in practice, a retrospective ‘data hunting’ exercise; an aspiration now is to set up and follow through carefully designed, intentional interventions, systems and processes to improve the education we offer, underpinned by clear Theory of Change methodology and robust evaluation techniques.
Such preparation may help institutions to achieve better outcomes in the next TEF exercise. More importantly, however: I firmly believe that added focus upon the quality of education we offer - which arguably is catalysed by the TEF and other regulatory exercises, when done well - can only be of benefit to the students we serve, and the staff who focus upon designing and delivering excellent higher education.
Find out more
- Evaluating Excellence: TEF 2023 Submission and Panel Statement Analysis - a comprehensive qualitative analysis of provider submissions and panel statements, broken down by features of excellence.
- Quantitative Analysis of TEF 2023 Outcomes Relative to Dataset Evidence – Professor Claire Pike examines to what extent the evidence submitted in narrative submissions affected provider outcomes.
- Blog: How the TEF can debunk the biggest myths about higher education - QAA’s Helena Vine discusses the Evaluating Excellence report and what we’ve learnt from the TEF 2023 submissions.