Several members of the Digital Education Studio attended CODE RIDE 2026. This post brings together reflections from our team, highlighting key insights and themes from the conference.
By Johnny Lee, Senior Learning Designer
The presentation “Institutional reflections on the future of learning technology: UCISA research findings” was delivered by Dr Richard Walker (University of York), Dr Julie Voce (City St George’s, University of London), and Mr Adam Pearce-Craik (University of Hull). Drawing on insights from the 2024 UCISA Digital Education Survey and follow-up panel discussions with institutional heads of e-learning, the speakers explored how UK universities are responding to the rapid expansion of digital technologies in teaching and learning. Their discussion highlighted several emerging sector challenges, including digital poverty, where disparities in access to devices, connectivity, and digital capability continue to shape student participation in online learning. They also raised the issue of digital sovereignty, reflecting growing institutional concerns about dependence on tech giants and the implications for data governance, infrastructure control, and institutional autonomy. Finally, the speakers pointed to the accelerating development of agentic AI, describing how AI systems that can autonomously perform tasks and support decision-making may reshape teaching practices, learning support, and assessment design.
The session offered a timely reminder that digital transformation in higher education is deeply pedagogical and structural. Addressing digital poverty requires learning design approaches that consider bandwidth constraints, device compatibility, and varying levels of digital literacy, ensuring that no student is left behind. The discussion on digital sovereignty also resonates with the growing need for universities to critically evaluate the platforms and AI tools embedded in their digital ecosystems, particularly around issues of data ownership and institutional control. Meanwhile, the emergence of agentic AI challenges the ‘human-centred’ mindset embraced by UNESCO’s AI Competency Framework, raising important questions about student agency and academic integrity.
By Jorge Freire, Senior Learning Designer
What came through strongly across the conference was the extent to which GenAI now threads through the field. It surfaced in papers, panels, and conversations as both problem and provocation: still exploratory, still often at the stage of kitchen-sink experimentation, but beginning to sediment into a more recognisable set of tools, pedagogic approaches, and working assumptions about how educators, learning designers, institutions, and students might use it for learning. There is movement, clearly.
But it also left me feeling that much of the discussion still skims the surface of the issue. Frameworks, toolkits, and skills agendas can easily give the impression that the world has been remade and that the task is now simply to adapt, or that the work of creating checklists and frameworks touches the heart of the matter.
Yet the harder questions remain. Do we understand enough about students’ lives beyond campus: their living conditions, the pressures on their time and attention, the realities of guided independent study, and how they actually prepare for assessment? Do we understand the skills, habits, compromises, and ethical negotiations that shape that work, well enough to design for it seriously? Do staff have the digital capability, support, and time to respond well to this change? Do institutions themselves have the capacity and cultural flexibility to adopt and adapt to digital change in ways that are coherent rather than reactive? And if not, should the response be less ad hoc, less driven by curiosity and opportunity alone, and more holistic? That, to me, still feels like the real work.
By Tom Hinks, Senior Learning Designer
It was a fantastic experience to present our work on receiving the Roger Mills Prize at this year’s conference. As in previous years, the quality and breadth of practice on display were truly inspirational, and receiving recognition from such accomplished colleagues meant a great deal to us.
For me, the conference was a tale of two halves. It highlighted both how far the sector has come in trying to integrate AI into practice, and the gaps that still remain. As with many conferences, you choose which sessions to attend, and my journey ended up weaving between AI-focused talks and those centred on decolonial pedagogy. This created a very odd juxtaposition, as these two philosophies are not yet well integrated. Indeed, it is not entirely clear what a decolonial use of AI might look like.
There are many criticisms suggesting that these technologies and the companies behind them risk becoming a new form of technological colonialism. They may exploit lower- and middle-income countries for their data and integrate with their systems in ways that grant undue access to, and control over, economies around the world. Beyond this, some of the most disadvantaged people globally are likely to be the most affected by the climate crisis, to which data centres are contributing.
At the same time, the AI talks were not superficial techno-optimism. Many of the projects on display were thoughtful and well considered. They engaged seriously with the ethical implications of using AI and meaningfully enhanced the learning experience for students. Some of the most compelling moments came from student perspectives, which highlighted how they viewed AI and how they thought it could be used in beneficial ways within education.
And yet, even with this thoughtful and considered work, the tension never quite went away. The biggest question I came away with was this: how might we reconcile the use of AI with the need to recognise the voices of some of the most disadvantaged communities around the world?