
24 November 2025

 

Better – problematically – together: the merits and challenges of inter-institutional evaluation collaborations

 



Author



Dr Diana J Pritchard

Head of Educational Development and Evaluation, University of Bedfordshire

It seems the sector is operating in something of a vacuum here. Let me explain.

 

Research collaborations between institutions – both domestically and internationally – are, of course, well established and serve to advance research quality, relevance and impact. But that is when the focus sits beyond the walls of the institutions themselves. When research and evaluation activities focus on the educational practices and interventions within our own higher education institutions, the paths we navigate are less well charted.

 

I'm currently leading a QAA-funded Collaborative Enhancement Project evaluating the impacts of authentic assessments on students, staff and institutions, which involves pooling data and comparative analysis across three universities. I'm also working with another university on a collaborative evaluation. These experiences, while very positive to date, have made me aware of the complexity of securing the agreements and approvals necessary to undertake collaborative evaluations that focus on institutional practices.

 

Inter-institutional collaborative evaluations are currently being encouraged with the prospect that they’ll evidence the impact of different pedagogic, curricular and student support practices on student outcomes. In England, the Office for Students has asserted the need for higher education institutions to provide robust evidence of the impacts of our practices on key student metrics, emphasising the value of collaborations across universities. Other sector organisations also encourage collaborative practices, through support of initiatives such as the Collaborative Enhancement Projects and Scotland's Tertiary Enhancement Programme.

 

There are multiple merits in universities collaborating for institutional research and evaluations. They allow us to look at how one particular intervention (be it pedagogical, curricular or student support) impacts on different populations of students in distinct institutions. Comparing findings on impacts across institutional contexts can reveal good practice which, when results are disseminated or published, can be shared more widely with the sector on what works well.

 

The pooling of data means increased sample sizes, which can enhance the validity of the research and evaluation findings. The resultant larger student population also increases the diversity and representation of distinct student groups, allowing greater sector-wide generalisations to be drawn from datasets and making for more detailed analyses of different student demographics. Further, the sharing of expertise and experience between institutions can mean stronger evaluation designs and findings that are more generalisable within the sector.

 

Realising these benefits, however, requires us to develop approaches to the distinctive challenges we face in working together – between institutions – to evaluate and compare the processes and impacts of our practices.

 

So, what are these challenges? Primarily, they derive from the fact that evaluating educational interventions can involve collecting sensitive student data, such as demographic information and academic grades, but potentially also feedback and interview responses. When multiple institutions are involved, securing ethical approval and data-sharing agreements becomes complex, since each university will have its own independent ethics governance structure, with distinct protocols for professional practice, processes and schedules for approval. Further, while individuals within these structures will likely have a background in data protection and ethical considerations for externally facing research, they will not necessarily (yet) have an appreciation of the compliance implications of the evaluation space and the need to evidence the impacts of practices on student outcomes, let alone understand the value of cross-institutional comparisons.

 

The effort required to gain agreement and alignment across universities on the ethical approval and data-sharing processes that underpin inter-institutional collaborations is time-consuming. It can also result in the need to modify the scope of the evaluation, the data sources and the methodologies. So, while academics may of course be willing to steer through the legal, ethical and GDPR complexities of these arrangements, doing so can generate delays and inefficiencies, and maybe even ethical and legal uncertainties.

 

This situation is further complicated by the competitive environment in which our institutions operate, the result of decades of successive government policies. If you're pooling data on the impacts of your enhancement initiatives, then you may – potentially – be sharing business-sensitive information with "competitors" on how successful your own institution's approaches have been. Or you may simply not want other institutions to share findings about the positive or negative impacts of some of your practices. In other words, prevailing organisational cultures may not seem so conducive to the kinds of institutional research and evaluation collaborations which we're now encouraged and expected to develop. Besides, in these contexts, different "providers" may have different views on balancing the risks and rewards of such exposure – especially where there is no national guidance or safeguards currently in place to assure their positions.

 

Let me illustrate this with an example.

 

Say a group of academics from different institutions has agreed to pool securely anonymised student data on academic grades. While one institution might have secured approval to compare disaggregated data on student demographics – indeed has committed to work with other universities to do so within its Access and Participation Plan – another relates that its institutional policies will not allow this. While a compromise can be achieved by that provider withdrawing from the element of the project that involves comparing the more granular data, this could undermine the overall robustness and consistency of the evidence base produced.

 

This is an area in which a broad sector framework might help to ease institutional concerns.

 

Since cross-institutional collaborations in the evaluation space look set to be a sector feature, a comprehensive framework (preferably shaped through consultations with sector bodies, providers and student unions) would surely support more efficient and effective implementation. As importantly, national guidance would be greatly appreciated to safeguard universities, colleagues and students and to avoid potential ethical and legal minefields.