Prompt Engineering Frameworks and Their Educational Value in Higher Education
DOI:
https://doi.org/10.11113/itlj.v9.204

Keywords:
Prompt engineering, Generative artificial intelligence, Prompt framework, Higher education

Abstract
Generative artificial intelligence (GenAI) is reshaping how students and educators access, construct, and evaluate knowledge. Central to this interaction is the prompt, which is the structured natural language input that steers large language models (LLMs). While prompt engineering is increasingly discussed in higher education, there is limited synthesis of how concrete prompt engineering frameworks and prompt patterns add educational value. This review analyses five key studies that offer explicit structures for prompting. These include a prompt pattern catalogue for LLM-based software development, a prompt pattern sequence for software architecture decision-making, an empirical study of student translators’ prompting behaviours, an automatic question generation system grounded in a teacher knowledge base, and an MCQ generation framework that integrates retrieval-augmented generation with chain-of-thought and self-refine prompting. The paper is guided by a single research question that examines what forms of educational value these frameworks demonstrate or imply for teaching, assessment, and curriculum design. The integrative analysis shows that such frameworks scaffold the teaching of prompt design, support AI-assisted assessment while maintaining quality, and position prompt engineering as a transversal curriculum competence. At the same time, evidence of intuitive and uninformed student prompting underscores the need for explicit, structured instruction and further research on assessing prompt engineering skills.
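As an illustration of the kind of prompting pipeline the final study combines, the sketch below wires together retrieval-augmented generation, chain-of-thought prompting, and a self-refine loop for MCQ generation. It is not taken from the reviewed framework: the function names and the `retrieve` and `call_llm` callables are hypothetical placeholders for whatever retriever and LLM client an implementation would use.

```python
# Illustrative sketch only: a minimal RAG + chain-of-thought + self-refine
# pipeline for generating one multiple-choice question. `retrieve` and
# `call_llm` are hypothetical stand-ins supplied by the caller.

from typing import Callable, List


def build_generation_prompt(topic: str, retrieved_passages: List[str]) -> str:
    """Compose a retrieval-augmented prompt that asks the model to reason step by step."""
    context = "\n\n".join(retrieved_passages)
    return (
        f"You are writing one multiple-choice question on '{topic}'.\n"
        f"Use only the course material below as your source:\n{context}\n\n"
        "Think step by step: identify the key concept, draft a stem, "
        "write one correct option and three plausible distractors, "
        "then output the question, options, and answer key."
    )


def build_refine_prompt(draft_mcq: str) -> str:
    """Compose a self-refine prompt that asks the model to critique and revise its own draft."""
    return (
        "Review the multiple-choice question below. Check that the stem is unambiguous, "
        "exactly one option is correct, and the distractors are plausible. "
        "List any problems, then output a revised version.\n\n"
        f"{draft_mcq}"
    )


def generate_mcq(topic: str,
                 retrieve: Callable[[str], List[str]],
                 call_llm: Callable[[str], str],
                 refine_rounds: int = 1) -> str:
    """Retrieve context, draft an MCQ with chain-of-thought, then self-refine it."""
    passages = retrieve(topic)                                   # retrieval-augmented step
    draft = call_llm(build_generation_prompt(topic, passages))   # chain-of-thought generation
    for _ in range(refine_rounds):                               # self-refine loop
        draft = call_llm(build_refine_prompt(draft))
    return draft
```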