Rewriting the Curriculum: Pedagogy, Disciplinary Knowledge, and Human Judgment in the AI-Enabled University

A flagship paper examining how universities must renew curriculum, pedagogy, and assessment without surrendering disciplinary depth or human judgment to the logic of automated fluency.

Author: Prof. Vicente C. Sinining
ORCID: 0000-0002-2424-1234
Section: The University and the Future of Work in the Age of AI

Abstract

Artificial intelligence has entered higher education not as a distant possibility but as an everyday presence in writing, research, feedback, assessment, and curricular design. Yet much of the institutional response remains narrow, oscillating between enthusiasm for “AI literacy” and anxiety about academic misconduct. This paper argues that the deeper issue is curricular, epistemic, and pedagogical. The question is not simply whether students should use AI, but what forms of knowledge, judgment, and intellectual formation universities are obliged to protect and cultivate when generative systems can produce plausible text, code, and analysis at speed. Drawing on recent international evidence and higher education scholarship, the paper contends that universities should resist both reactionary prohibition and uncritical adoption. Instead, curriculum reform should be organised around three linked priorities: the preservation of disciplinary ways of thinking, the redesign of pedagogy to foreground critique and verification, and the reconstruction of assessment around valid demonstrations of capability. The paper further argues that these issues are especially significant in unequal higher education systems, including many African contexts, where questions of infrastructure, language, staff development, and institutional readiness shape the terms of AI adoption. A serious university response to AI must therefore move beyond tool training and toward a renewed defence of human judgment, ethical reasoning, and context-sensitive disciplinary education.

Keywords

AI in higher education; curriculum reform; pedagogy; disciplinary knowledge; assessment; epistemic judgment; university transformation

1. Introduction

Generative AI has exposed a weakness that long predates the technology itself: in many universities, curriculum has already drifted toward performative output rather than intellectual formation. When students can now generate essays, summaries, code, and explanations through readily available systems, institutions are forced to confront a harder question than plagiarism detection. They must ask what they are actually teaching, what counts as learning, and what sorts of human capability justify the university as a public institution. UNESCO’s 2023 guidance warned that publicly available generative AI tools were advancing more quickly than national regulatory frameworks, leaving privacy insufficiently protected and educational institutions inadequately prepared (Miao and Holmes, 2023). By 2026, the OECD was likewise stressing that generative AI can support learning only when guided by clear pedagogical principles; without such guidance, it may improve performance without producing durable learning gains (OECD, 2026).

The urgency of this challenge is now visible across institutional practice. UNESCO’s 2025 global survey of higher education institutions linked to UNESCO Chairs and UNITWIN Networks found that nearly two-thirds either already had AI guidance or were developing it. At the same time, nine in ten respondents reported using AI in their professional work, while over half remained uncertain about its pedagogical, technological, or broader social implications. One in four also reported that their universities had already encountered ethical issues related to AI, including overreliance, authorship disputes, and bias (UNESCO, 2025). The significance of these findings lies not simply in uptake, but in the fact that adoption is outrunning confidence, policy coherence, and pedagogical clarity.

This paper argues that the central task is not to decide whether AI belongs inside the university, because it is already there. The task is to determine what the curriculum must now defend, revise, and make newly explicit. In what follows, I contend that universities should not reduce reform to generic AI literacy or software familiarisation. Rather, they must protect disciplinary knowledge, reassert the place of human judgment in pedagogy, and redesign assessment so that it remains a valid representation of student capability. This is especially important in resource-constrained systems, where imported models of AI integration can deepen inequality rather than expand educational possibility (Miao, Shiohira and Lao, 2024).

2. The Curricular Question: Beyond Generic AI Literacy

Recent policy discourse has moved rapidly toward competency frameworks, guidance documents, and institutional toolkits. UNESCO’s 2024 competency frameworks for students and teachers are valuable examples of this effort. The student framework emphasises a human-centred mindset, ethics of AI, techniques and applications, and AI system design; the teacher framework adds AI pedagogy and AI for professional learning, while explicitly foregrounding human agency and teacher rights (Miao, Shiohira and Lao, 2024; Miao and Cukurova, 2024). These are important developments because they move discussion beyond simple technical adoption. Yet competency frameworks, however useful, should be treated as a floor rather than a ceiling. A university curriculum cannot be reduced to a list of operational AI skills without losing its intellectual core.

What is threatened by superficial reform is not only academic integrity but disciplinary depth. Shulman’s account of pedagogical content knowledge remains instructive because it insists that teaching is not merely the transmission of information; it requires understanding what is central to a discipline, why it is difficult, and how learners come to grasp it (Shulman, 1986). Similarly, Middendorf and Pace’s “decoding the disciplines” framework shows that much expert disciplinary reasoning is tacit, which means that students often fail not because they lack information but because they have not been apprenticed into the underlying ways of thinking and judging that structure a field (Middendorf and Pace, 2004). Meyer and Land’s work on threshold concepts adds a related insight: transformative learning often depends on students crossing difficult conceptual thresholds that change how they see the subject and themselves within it (Meyer and Land, 2005).

These traditions matter anew in the age of AI because generative systems can mimic finished disciplinary performance without undergoing disciplinary formation. A student may obtain a competent-looking essay in history, policy, law, or management without having learned how the field constructs evidence, handles uncertainty, distinguishes argument from assertion, or tests rival interpretations. In this sense, AI sharpens an old educational distinction between the production of answers and the development of understanding. The curricular response should therefore not be to abandon disciplinary content for a vague future-skills agenda, but to make disciplinary judgment more visible, more practised, and more assessable.

This also means resisting the assumption that “AI literacy” is a neutral or universally transferable competency. The relevant question is always: literacy for what, within which field, under which standards of evidence, and toward which social purposes? AI use in chemistry, journalism, engineering, philosophy, and public policy cannot be governed by identical curricular logics. A mature curriculum should therefore distinguish between general critical literacies about AI and field-specific practices of judgment, verification, interpretation, and ethical responsibility. Where universities fail to make that distinction, they risk flattening disciplinary diversity into generic platform competence.

3. Pedagogy After the Automation of First Drafts

The pedagogical implications of AI are often discussed as though the technology itself determines the educational outcome. The evidence increasingly suggests otherwise. The OECD Digital Education Outlook 2026 argues that generative AI supports learning when embedded within explicit teaching principles, but without pedagogical guidance it merely improves short-term task performance (OECD, 2026). A 2026 systematic review by Long et al. similarly found that AI tools enhance student engagement most effectively when used within interactive pedagogies such as scaffolded feedback, flipped learning, and project-based approaches (Long et al., 2026). The implication is straightforward: pedagogy mediates technology. Good teaching can use AI productively; weak teaching can be further weakened by it.

This is precisely why universities should stop treating AI as an exceptional external threat and begin treating it as a mirror held up to pedagogical habits. If a course can be completed successfully through automated summarisation, superficial synthesis, or formulaic essay production, the problem may not be AI alone. It may also reflect long-standing overreliance on tasks that reward fluency over thought. Students themselves are already signalling this tension. Chan and Hu’s survey of 399 students in Hong Kong found generally positive attitudes toward generative AI, particularly for brainstorming, writing support, and research assistance, but also substantial concerns about accuracy, privacy, ethics, and the impact on personal development and future prospects (Chan and Hu, 2023). Their findings suggest that students do not simply want permission to use AI; they want frameworks that help them use it responsibly without surrendering their own intellectual growth.

A stronger pedagogy for the AI-enabled university should therefore be organised around forms of learning that make human judgment visible. This includes comparative critique of AI and human outputs, oral defence of written work, iterative drafting with reflective commentary, source verification exercises, supervised problem-framing, dialogic seminars, and discipline-specific analysis of how AI systems succeed or fail within the field. The aim is not to romanticise unaided cognition, nor to pretend that professional life will exclude AI. It is to ensure that students learn how to evaluate, contest, and refine machine-generated material rather than merely accept it.

This pedagogical turn also requires more serious investment in academic staff. UNESCO’s 2025 survey found widespread use of AI alongside uneven confidence, while UNESCO’s 2024 framework for teachers explicitly notes that few countries have defined the competencies needed for teachers in the AI era (UNESCO, 2025; Miao and Cukurova, 2024). If institutions expect faculty to redesign curricula, explain disclosure norms, rethink classroom practice, and manage new ethical questions, then staff development cannot remain peripheral. Universities need not only policies, but intellectual communities of curricular redesign in which teachers work together to identify what in their disciplines must remain irreducibly human and what can be productively augmented.

4. Assessment, Validity, and the Meaning of Capability

Assessment has become the most visible site of AI anxiety, but also the most intellectually revealing. Early institutional responses understandably concentrated on academic integrity. Sullivan, Kelly and McLaughlan’s analysis of public debate on ChatGPT in higher education showed that academic integrity concerns dominated early university responses, even as opportunities for redesigned assessment were also acknowledged (Sullivan, Kelly and McLaughlan, 2023). Cotton, Cotton and Shipway similarly argued that generative AI poses real risks to academic honesty and plagiarism prevention, and that institutions therefore need proactive strategies in response (Cotton, Cotton and Shipway, 2024). Yet the field is now moving toward a more useful framing: not whether students might cheat, but whether assessment still validly represents what students can actually do.

That reframing is powerfully articulated by Dawson et al. (2024), who argue that validity matters more than cheating. Their point is especially relevant in the AI era because a task that depends on students not using widely available tools, without any robust way of securing or interpreting that condition, may simply be a weak assessment. Weng et al. (2024) reach a related conclusion, finding that traditional assessment approaches do not function effectively in the generative AI context and that more innovative, refocused, or AI-incorporated forms of assessment are increasingly necessary. What emerges from these studies is not a case for abandoning standards, but for designing evidence of learning more intelligently.

Three principles follow. First, assessment should gather evidence across process as well as product. Students should be asked not only what they submitted, but how they developed it, what sources they rejected, which claims they verified, and where their own judgment altered or corrected machine output. Second, programmes should use multiple assessment forms across time, rather than relying excessively on single polished artefacts. Oral discussion, live analysis, project work, annotated drafts, and supervised synthesis can together create a stronger evidentiary chain of capability. Third, disclosure of AI assistance should be normalised and structured. Bozkurt’s work on authorship and transparency is useful here: institutions need norms that allow students to declare generative assistance without assuming that any use is inherently fraudulent (Bozkurt, 2024).

Recent institutional guidance points in this direction, though unevenly. QAA has published advice on maintaining standards and reconsidering assessment in the ChatGPT era, while TEQSA’s knowledge hub now curates case studies and assessment reform resources rather than relying on prohibition alone. Moorhouse, Yeo and Wan’s review of guidelines from top-ranking universities showed how quickly leading institutions turned toward assessment redesign, communication with students, and integrity policies (Moorhouse, Yeo and Wan, 2023). McDonald et al. (2025) later found broad encouragement of generative AI use across US R1 universities, but also contradictions, uneven evidence, and a burden placed on faculty to revise pedagogy extensively. Together, these studies show that universities increasingly recognise the need for reform, but have not yet settled on a model of what robust reform looks like.

5. Equality, Infrastructure, and the African University

The curriculum debate cannot be universalised too quickly. Much of the global conversation assumes stable internet access, subscription tools, staff development resources, small-group teaching capacity, and a relatively coherent digital policy environment. UNESCO’s 2023 guidance explicitly warned that in most countries national regulation had not kept pace with generative AI, leaving institutions unprepared (Miao and Holmes, 2023). UNESCO’s 2025 survey also found substantial regional variation in institutional guidance and persistent barriers related to ethics, understanding, and access (UNESCO, 2025). These asymmetries matter because AI adoption is not occurring on a level educational field.

For many African universities, the challenge is therefore twofold. They must respond to the epistemic and pedagogical disruption of AI while also negotiating infrastructural unevenness, cost constraints, language inequalities, and heavy teaching loads. UNESCO’s exploratory work on digitalisation and AI in African higher education signals that the issue is now firmly on the regional agenda, but its very framing as an exploratory field underscores how uneven readiness remains. In such contexts, a careless push toward AI-enabled pedagogy may widen the divide between institutions with strong digital capacity and those already struggling to provide stable access and staff support.

This does not justify delay or defensiveness. It does, however, require curricular realism. African universities should be wary of importing pedagogical models that presume abundant infrastructure or treating commercial platforms as substitutes for public educational investment. The more defensible path is a selective and mission-driven approach: strengthen foundational disciplinary teaching, invest in staff development, embed critical AI literacy where it serves academic purpose, and design assessments that do not punish students for unequal digital conditions while still demanding evidence of thought. In these settings, the university’s public responsibility is not to imitate the most technologically saturated institutions, but to build forms of AI engagement that remain educationally credible and socially just.

6. Discussion

The AI moment has produced a flood of institutional guidance, but guidance alone is not a curriculum. The more profound task is to recover the university’s educational purposes in a context where the generation of plausible output is no longer scarce. If knowledge work can be simulated, then universities must clarify what kinds of judgment, interpretation, critique, and responsibility still require human formation. This is not a retreat from innovation. On the contrary, it is the condition for meaningful innovation. Without a clear account of disciplinary purpose, universities will oscillate between panic and fashion, redesigning around the technology rather than around learning.

The paper has argued for three linked shifts. First, curriculum reform must move beyond generic AI literacy and make disciplinary reasoning more explicit. Second, pedagogy must centre evaluative, dialogic, and reflective practices that help students think with and against machine outputs. Third, assessment must be reconstructed around validity: whether student work represents actual capability under contemporary conditions. These are not isolated technical adjustments. Together, they amount to a reassertion of the university as an institution of judgment rather than mere credential production.

7. Conclusion

Artificial intelligence does not make the curriculum obsolete. It makes weak curriculum visible. Universities that respond by simply banning tools, policing students, or adding a thin layer of AI literacy will miss the deeper lesson. The enduring work of higher education is not the production of text but the cultivation of understanding, judgment, ethical responsibility, and disciplined ways of seeing the world. In an age when machines can assist with expression, the university’s task becomes not smaller but sharper: to teach students how to question outputs, weigh evidence, make interpretations, justify choices, and act responsibly within fields of knowledge and practice.

The future of the university in the age of AI will therefore depend less on how quickly it adopts new tools than on how clearly it defends and renews its intellectual purposes. The strongest curriculum will not be the one most dazzled by automation, nor the one most fearful of it. It will be the one that preserves the discipline of thought while equipping students to work critically, ethically, and intelligently in a world where intelligent systems are now part of the conditions of knowledge itself.

References

Bozkurt, A. (2024) ‘GenAI et al.: Cocreation, authorship, ownership, academic ethics and integrity in a time of generative AI’, Open Praxis, 16(1), pp. 1–10. Available at: https://doi.org/10.55982/openpraxis.16.1.654

Chan, C.K.Y. and Hu, W. (2023) ‘Students’ voices on generative AI: perceptions, benefits, and challenges in higher education’, International Journal of Educational Technology in Higher Education, 20, Article 43. Available at: https://doi.org/10.1186/s41239-023-00411-8

Cotton, D.R.E., Cotton, P.A. and Shipway, J.R. (2024) ‘Chatting and cheating: Ensuring academic integrity in the era of ChatGPT’, Innovations in Education and Teaching International, 61(2), pp. 228–239. Available at: https://doi.org/10.1080/14703297.2023.2190148

Dawson, P., Bearman, M., Dollinger, M. and Boud, D. (2024) ‘Validity matters more than cheating’, Assessment & Evaluation in Higher Education, 49(7), pp. 1005–1016. Available at: https://doi.org/10.1080/02602938.2024.2386662

Long, D.Y., Wang, S., Md Rashid, S. and Lu, X.T. (2026) ‘Artificial intelligence in higher education: a systematic review of its impact on student engagement and the mediating role of teaching methods’, Frontiers in Education, 10, Article 1648661. Available at: https://doi.org/10.3389/feduc.2025.1648661

McDonald, N., Johri, A., Ali, A. and Collier, A.H. (2025) ‘Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines’, Computers in Human Behavior: Artificial Humans, 3, Article 100121. Available at: https://doi.org/10.1016/j.chbah.2025.100121

Meyer, J.H.F. and Land, R. (2005) ‘Threshold concepts and troublesome knowledge (2): Epistemological considerations and a conceptual framework for teaching and learning’, Higher Education, 49, pp. 373–388. Available at: https://doi.org/10.1007/s10734-004-6779-5

Miao, F. and Holmes, W. (2023) Guidance for generative AI in education and research. Paris: UNESCO. Available at: https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research

Miao, F., Shiohira, K. and Lao, N. (2024) AI competency framework for students. Paris: UNESCO. Available at: https://www.unesco.org/en/articles/ai-competency-framework-students

Miao, F. and Cukurova, M. (2024) AI competency framework for teachers. Paris: UNESCO. Available at: https://www.unesco.org/en/articles/ai-competency-framework-teachers

Middendorf, J. and Pace, D. (2004) ‘Decoding the disciplines: A model for helping students learn disciplinary ways of thinking’, New Directions for Teaching and Learning, 98, pp. 1–12. Available at: https://doi.org/10.1002/tl.142

Moorhouse, B.L., Yeo, M.A. and Wan, Y. (2023) ‘Generative AI tools and assessment: Guidelines of the world’s top-ranking universities’, Computers and Education Open, 5, Article 100151. Available at: https://doi.org/10.1016/j.caeo.2023.100151

OECD (2026) OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education. Paris: OECD Publishing. Available at: https://doi.org/10.1787/062a7394-en

QAA (2024) Generative artificial intelligence: Advice and resources. Available at: https://www.qaa.ac.uk/sector-resources/generative-artificial-intelligence/qaa-advice-and-resources

Shulman, L.S. (1986) ‘Those who understand: knowledge growth in teaching’, Educational Researcher, 15(2), pp. 4–14. Available at: https://doi.org/10.3102/0013189X015002004

Sullivan, M., Kelly, A. and McLaughlan, P. (2023) ‘ChatGPT in higher education: Considerations for academic integrity and student learning’, Journal of Applied Learning and Teaching, 6(1), pp. 31–40. Available at: https://doi.org/10.37074/jalt.2023.6.1.17

UNESCO (2025) ‘UNESCO survey: Two-thirds of higher education institutions have or are developing guidance on AI use’, 2 September. Available at: https://www.unesco.org/en/articles/unesco-survey-two-thirds-higher-education-institutions-have-or-are-developing-guidance-ai-use

Weng, X., Xia, Q., Gu, M., Rajaram, K. and Chiu, T.K.F. (2024) ‘Assessment and learning outcomes for generative AI in higher education: A scoping review on current research status and trends’, Australasian Journal of Educational Technology, 40(6), pp. 37–55. Available at: https://doi.org/10.14742/ajet.9540

Zawacki-Richter, O., Marín, V.I., Bond, M. and Gouverneur, F. (2019) ‘Systematic review of research on artificial intelligence applications in higher education – where are the educators?’, International Journal of Educational Technology in Higher Education, 16, Article 39. Available at: https://doi.org/10.1186/s41239-019-0171-0