Digital Education Studio

Higher Order Prompting and AI

In our most recent DECoP webinar, Jon Jackson explored how large language models (LLMs) relate to teaching and learning, with a particular focus on Bloom’s taxonomy. Rather than treating AI as simply good or bad for education, Jon used Bloom’s framework to think about where these tools may be more or less useful, and where the pedagogical risks begin to increase.

Jon suggested that LLMs can support learning at different levels of Bloom’s taxonomy, but not always in straightforward ways. At the lower levels, such as remembering and understanding, they can be used to generate quiz questions that let students test their own knowledge, rather than serving only as a tool for explaining concepts. However, he cautioned that even these simple uses require care, since LLMs can produce plausible but inaccurate content.

A key point in the session was that prompts do not map neatly onto single levels of Bloom’s taxonomy. Instead, Jon argued that the more useful question is how LLMs are being used within the learning process. He suggested that they may be particularly helpful in a “discovery” phase, where students use them to identify related ideas, concepts or theories worth exploring further. In this way, LLMs may be useful not only for recall but also for helping learners begin to move towards analysis and evaluation.

At the higher levels of Bloom’s taxonomy, especially evaluation and creation, Jon argued that the risks become greater if students rely too heavily on the tool. If an LLM is generating ideas, structure or written work for the student, then the learner may be less engaged in the thinking process themselves. For this reason, he stressed the importance of maintaining student agency and ensuring that the tool remains secondary to human judgment.

To explain this, Jon discussed the difference between “human in the loop” and “machine in the loop” approaches. His preference was for uses where the learner remains in control, and the LLM acts as support rather than lead author. He argued that the aim should be a “shift left” towards more human-centred use of AI, particularly when students are working on tasks involving analysis, evaluation and creation.

Jon also briefly discussed the role of smaller language models. While much attention is given to large, resource-intensive systems, he noted that smaller models can often perform specific tasks effectively while using significantly less computational power. In some cases, these models can run locally on personal devices, offering potential benefits for privacy, sustainability, and institutional control.

Jon concluded that Bloom’s taxonomy can still be a useful framework for thinking about LLM use, not because each prompt fits neatly into a single category, but because it helps educators consider where AI may support learning and where it may start to replace the very thinking they want students to develop.

Find Out More

If you’d like to learn more about the project, the webinar recording is available via Echo 360.

Jon also wrote a journal article on this subject: Read Jon’s Paper here
