Governing Intelligence: Universities, Algorithmic Power, and the Ethics of Inequality

A flagship paper examining how universities are becoming sites of algorithmic power, and why the governance of artificial intelligence in higher education is inseparable from questions of equity, rights, trust, and institutional responsibility.

Prof. Vicente C. Sinining, ORCID: 0000-0002-2424-1234, The Voice Journal
Abstract. Artificial intelligence is entering universities not only through student writing tools and research assistance platforms, but through deeper systems of data extraction, prediction, classification, and governance. This paper argues that higher education must understand AI as an institutional power arrangement as much as a technical innovation. The issue is no longer whether algorithmic tools can support efficiency, personalization, or risk detection. The more consequential question is who defines acceptable risk, whose data are rendered legible, whose judgments are displaced, and which groups bear the hidden costs of automation. Drawing on recent scholarship on datafication, learning analytics, educational governance, student privacy, and AI ethics, the paper shows that algorithmic systems can amplify existing inequalities across class, disability, geography, language, and institutional prestige while presenting their outputs as neutral or objective. It further argues that universities are especially vulnerable because they occupy a contradictory position: they are at once public institutions, research laboratories, market actors, and sites of credential allocation. The paper concludes that responsible AI adoption in higher education requires more than guidance documents or staff training. It requires institutional governance frameworks grounded in human oversight, explainability, consent, equity impact assessment, and democratic accountability, especially in contexts where infrastructural inequality makes technological adoption uneven and potentially exclusionary.
Keywords: algorithmic governance; ethics; inequality; university governance; AI accountability; surveillance; digital justice

1. Introduction

The public discussion of artificial intelligence in higher education has been dominated by questions of assessment, authorship, and productivity. These are real concerns, but they are not the whole story. Universities are also being transformed by less visible forms of algorithmic decision-making: predictive analytics, student-risk scoring, automated feedback, surveillance-oriented proctoring, admissions screening, and data systems that rank, compare, and intervene in the name of institutional optimisation. Once these systems shape who is admitted, who is flagged, who is monitored, and who is deemed in need of intervention, the issue ceases to be merely pedagogical. It becomes a matter of governance.

Wang (2024) argues that algorithmic decisions in education function increasingly as de facto policy decisions because they are often made through technical systems or by external private actors, yet have public consequences with limited transparency or democratic oversight. In higher education, this observation is particularly important. Universities make distributive decisions that affect life trajectories: admission, progression, classification, remediation, credential value, and access to opportunity. When these decisions are mediated by opaque systems, the risks are not abstract. They concern due process, fairness, privacy, and the moral legitimacy of institutional authority.

This paper advances three propositions. First, AI in universities should be treated as an institutional governance question rather than a mere matter of tool adoption. Second, algorithmic systems tend to reproduce social inequalities precisely when they are presented as neutral instruments of efficiency. Third, the governance problem is likely to be more acute in unequal higher education systems, including many across Africa and the wider Global South, where digital asymmetries, weak regulation, and uneven institutional capacity shape the conditions under which AI is deployed. The goal here is not to reject AI categorically, but to insist that any serious university response must begin with the ethics of power.

2. From educational tools to governance infrastructures

One of the most misleading habits in university AI discourse is the tendency to treat intelligent systems as discrete tools. A chatbot, an assessment assistant, a writing aid, or an automated dashboard may appear to be a bounded intervention. In practice, however, AI often arrives as part of a wider infrastructure of datafication. Williamson, Bayne and Shay (2020) describe the datafication of teaching as a process through which educational activity becomes increasingly measurable, comparable, and governable through digital traces. What matters is not simply that universities collect more data, but that those data begin to reorganise institutional judgment.

This infrastructural perspective helps explain why apparently narrow technological changes have broader political consequences. A learning analytics dashboard may not merely inform staff; it can shift attention toward behaviours that are easiest to quantify. A predictive system designed to identify students “at risk” may channel resources through deficit categories shaped by historical biases. A generative AI tool incorporated into feedback workflows may alter assumptions about expertise, labour, and pedagogical accountability. Once systems mediate institutional attention in this way, they do not sit neutrally alongside governance. They become part of governance itself.

Recent global evidence suggests that this transformation is already under way. UNESCO’s 2025 survey of 400 higher education institutions linked to UNESCO Chairs and UNITWIN networks found widespread AI use in research, teaching, and administration, but uneven confidence in its implications for pedagogy, human rights, democracy, and social justice. One in four respondents reported that their universities had already encountered ethical issues related to AI, while institutional policies remained varied and incomplete. The significance of these findings lies not simply in uptake, but in the fact that adoption is progressing faster than governance capacity.

The governance challenge is therefore double. Universities are not only using AI within existing structures; they are being reorganised by it. The more institutions rely on systems that classify, predict, and steer behaviour, the more difficult it becomes to distinguish administrative convenience from institutional legitimacy. This is why the language of “innovation” is insufficient. The core question is not whether AI can improve higher education, but under what conditions it can do so without displacing accountability, eroding trust, or deepening inequality.

3. Privacy, surveillance, and the erosion of informed educational consent

Privacy in higher education has often been treated as a compliance matter rather than an educational value. AI intensifies the costs of that assumption. Jones (2019) argued that learning analytics in higher education should be governed through informed consent models that respect student privacy and autonomy, rather than through opaque institutional extraction of behavioural data. The continuing relevance of that argument is clear. Universities now gather data not only from virtual learning environments, but from attendance systems, plagiarism software, library interactions, engagement dashboards, proctoring tools, and third-party platforms that often operate beyond the student’s meaningful understanding.

Student perceptions reinforce the seriousness of the issue. Jones et al. (2020) found that students often experience learning analytics environments as pervasive forms of tracking, expressing concern that they are being monitored in ways that are difficult to contest or fully comprehend. This is not simply a matter of discomfort. It speaks to the mismatch between institutional claims of support and the lived experience of surveillance. When universities describe extensive data collection as being “for student success,” they may obscure the unequal power relationship between those who collect the data and those who are made visible by it.

The ethical stakes rise further with generative and predictive AI systems, because the boundary between support and intervention becomes blurred. UNESCO’s 2025 report AI and education: Protecting the rights of learners explicitly warns that rapid digitalisation and generative AI create new risks around privacy, safety, governance, and equity, especially for vulnerable groups. Without robust safeguards, institutions may normalise intrusive data practices while presenting them as modernisation. This is especially problematic when students have limited alternative options, weak bargaining power, or little clarity about what data are collected, shared, inferred, or retained.

Universities should therefore stop framing privacy as a secondary concern that can be balanced away in the name of efficiency. Educational relationships depend on trust, and trust cannot be sustained where surveillance is normalised without informed participation. A university committed to intellectual freedom should be especially cautious about systems that make students permanently legible while leaving institutional power comparatively opaque.

4. Bias, sorting, and the quiet reproduction of inequality

The promise of algorithmic systems often rests on a claim of improved objectivity. Yet the history of data-driven decision-making shows that technical systems frequently reproduce the biases embedded in the data, categories, and institutional practices from which they arise. Martin (2019) argues that the ethical implications of algorithms cannot be separated from questions of accountability, because their outputs affect people in ways that are shaped by prior design choices, hidden assumptions, and unequal impacts. In universities, these effects can appear in admissions ranking, assessment support, student support triage, and conduct monitoring.

Education is especially sensitive because classifications can influence future livelihoods. The European Union’s AI Act recognises this explicitly. Under its risk-based framework, AI systems used in education or vocational training for admission, assignment, evaluation of learning outcomes, assessing educational level, or monitoring prohibited behaviour during tests are treated as high-risk because they may shape an individual’s educational and professional course of life. This is a striking regulatory acknowledgment that educational AI is not benign by default. It can affect rights, opportunities, and the conditions of social mobility.

Research on learning analytics makes similar concerns visible from within higher education practice. Heiser et al. (2023) found that students, diversity and inclusion leaders, and administrators perceive significant risks of bias and inequity in the use of learner data, with disparities in data literacy themselves producing inequality in how those systems are understood and challenged. Khalil, Slade and Prinsloo (2024), reviewing learning analytics and disabled students, show that inclusiveness remains underdeveloped in much of the field, even though disabled and disadvantaged students are among those most likely to be affected by misclassification, inaccessible design, or rigid assumptions about engagement.

The problem is not that algorithms always discriminate in the same way. It is that they sort according to institutional logics that may already be unequal. A system trained on historical data may learn patterns of success that reflect prior exclusion rather than genuine capability. Students from rural schools, low-connectivity environments, non-dominant language backgrounds, or marginalised social positions may appear less engaged, less prepared, or more risky not because of inherent deficiency, but because institutional datasets encode unequal starting points. Once these classifications are operationalised, inequality is translated into administrative action.
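
To make this mechanism concrete, consider a deliberately simplified sketch. All data, features, and names here are invented for illustration; no real institutional system is this crude. A classifier trained on historical completion labels that were depressed by unequal access learns to score students without reliable connectivity as riskier, even when underlying capability is identical:

```python
# Hypothetical illustration: a risk model trained on historically biased
# outcome labels reproduces that bias as "risk" predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True academic capability: identically distributed in both groups.
capability = rng.normal(0, 1, n)
# Proxy attribute: 1 = reliable home connectivity, 0 = not.
connected = rng.binomial(1, 0.5, n)

# Historical "completion" labels: past students without connectivity
# completed less often for institutional reasons (access to support,
# submission infrastructure), not because of lower capability.
logit = 1.2 * capability + 1.5 * connected - 0.75
completed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# A naive "at-risk" model trained on those labels.
X = np.column_stack([capability, connected])
model = LogisticRegression().fit(X, completed)

# Two students with identical capability, different connectivity:
same_capability = np.array([[0.0, 1], [0.0, 0]])
risk = 1 - model.predict_proba(same_capability)[:, 1]
print(f"predicted risk, connected student:   {risk[0]:.2f}")
print(f"predicted risk, unconnected student: {risk[1]:.2f}")
# The unconnected student is scored as markedly riskier despite
# identical capability: historical disadvantage, operationalised.
```

The point is not that any deployed system works quite like this, but that "risk" learned from an unequal history is often inequality restated in probabilistic form.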

That is why algorithmic bias in higher education should not be treated merely as a technical defect awaiting better training data. It is also a sociological and political problem. Universities that adopt AI systems without interrogating the values embedded in their categories may simply automate older hierarchies under a newer vocabulary of precision.

5. Regulation, accountability, and the limits of institutional self-assurance

Most universities currently respond to AI through policies, principles, or advisory guidance. These are necessary, but they are not sufficient. Morley et al. (2020) showed that the broader field of AI ethics has long suffered from a gap between abstract principles and operational practice. Mökander and Floridi (2021) similarly argue that trustworthy AI requires ethics-based auditing that is continuous, system-oriented, and linked to public incentives and institutional oversight. Higher education has, for the most part, not yet closed this gap. Many institutions have statements about fairness, transparency, or responsible use, but fewer have established auditable governance structures that can test whether those principles are actually being realised.

The European Commission’s implementation materials for the AI Act indicate how regulatory thinking is evolving. The Act entered into force on 1 August 2024, with phased application for different categories of systems, and it places strong emphasis on risk management, human oversight, transparency, and training obligations for high-risk deployments. Even for institutions outside Europe, the significance is intellectual as much as legal. It marks a shift from assuming educational AI is a discretionary innovation to treating certain uses as matters of fundamental rights and institutional accountability.

Universities should take the underlying lesson seriously. Governance cannot be outsourced to vendors, and responsibility cannot be dissolved into software procurement language. When an institution deploys a system that influences admissions, assessment, student support, or staff monitoring, it assumes responsibility for how that system operates within its own moral and legal order. This includes responsibility for training, oversight, appeal mechanisms, documentation, and the ability to explain outcomes in language that affected persons can understand.

Institutional self-assurance is particularly dangerous where AI systems are introduced under pressure to improve rankings, reduce costs, demonstrate innovation, or manage large student populations. In such settings, the temptation is to confuse administrative scale with ethical adequacy. Yet scale can also intensify harm. The more widely a flawed system is embedded, the more deeply its biases become normalised as routine procedure.

6. The African university and the unequal politics of AI adoption

Any governance discussion that ignores the geography of inequality will remain incomplete. UNESCO’s 2021 guidance for policy-makers and its later work on AI in education consistently stress that AI should be governed in human-centred and equitable ways, rather than widening existing divides. The 2025 rights-based report deepens this warning by noting that as of 2024 around 2.6 billion people globally still lacked internet access, with vulnerable populations disproportionately affected. This matters profoundly for higher education in Africa, where institutions often operate under unequal conditions of connectivity, device access, platform dependence, and staff capacity.

In such contexts, AI governance cannot be modelled simply on assumptions drawn from well-resourced systems. Questions that appear secondary in affluent institutions become foundational elsewhere: Who pays for access to tools? In which languages do systems function effectively? What happens when vendor platforms mediate core educational processes but local institutions cannot meaningfully audit them? How do universities protect students when data governance rules are weak, infrastructure is uneven, and dependency on external providers is high?

The challenge is not just adoption, but asymmetry. African universities may be encouraged to implement AI in the name of modernisation while lacking the bargaining power to shape design standards, governance conditions, or local relevance. This creates a form of digital dependency in which institutions consume systems built elsewhere, according to categories they did not define, for purposes they only partly control. Under such conditions, the ethics of AI in higher education cannot be separated from the politics of epistemic sovereignty.

A more defensible path would treat AI governance as part of wider institutional strengthening. This means investing in data protection, staff capability, multilingual inclusion, procurement scrutiny, and local ethical review rather than simply accelerating platform adoption. It also means refusing the assumption that technological novelty is equivalent to educational progress. In unequal settings, the first duty of governance is often to prevent the automation of exclusion.

7. Toward a university framework for algorithmic accountability

If universities are to remain trustworthy institutions in the age of AI, they need governance frameworks that go beyond aspiration. At minimum, five principles should guide such a framework.

First, human oversight must be substantive, not ceremonial.

Human review is often invoked as a safeguard, but it is meaningless if staff neither understand the system nor possess the authority to challenge it. Oversight should involve documented capacity to interpret outputs, suspend their use, and justify decisions independently of the machine.

Second, explainability must be connected to contestability.

It is not enough for a vendor to provide technical documentation. Students and staff need accessible explanations of how systems affect them and meaningful routes to challenge decisions or classifications. Without contestability, transparency remains symbolic.

Third, equity impact assessment should precede deployment.

Before universities implement AI systems in admissions, assessment, monitoring, or student support, they should conduct structured equity review focused on disadvantaged, disabled, linguistically marginalised, and low-connectivity groups. The relevant question is not whether the system works on average, but for whom it fails and with what consequences.
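
As a minimal illustration of this principle, the following sketch disaggregates a risk flag's false positive rate by subgroup. The pilot data, group labels, and flag rates are hypothetical; the method, not the numbers, is the point:

```python
# Hypothetical sketch of the "for whom does it fail" question: evaluate
# a deployed risk flag per subgroup rather than on average.
import numpy as np

def false_positive_rate(flagged: np.ndarray, needed_support: np.ndarray) -> float:
    """Share of students flagged 'at risk' among those who did not need intervention."""
    no_need = ~needed_support
    return float(flagged[no_need].mean()) if no_need.any() else float("nan")

# Invented pilot records: subgroup, actual need, and system flags.
rng = np.random.default_rng(1)
group = rng.choice(["urban", "rural"], size=2_000, p=[0.7, 0.3])
needed = rng.binomial(1, 0.2, size=2_000).astype(bool)
# A system that over-flags rural students regardless of need:
p_flag = np.where(needed, 0.8, np.where(group == "rural", 0.45, 0.15))
flagged = rng.binomial(1, p_flag).astype(bool)

print(f"overall false positive rate: {false_positive_rate(flagged, needed):.2f}")
for g in ("urban", "rural"):
    mask = group == g
    print(f"  {g:>5}: {false_positive_rate(flagged[mask], needed[mask]):.2f}")
# A tolerable average can conceal a subgroup for whom the system
# routinely misfires; equity review requires the disaggregated view.
```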

Fourth, data governance must be public-facing.

Students should know what is collected, how it is used, who can access it, how long it is retained, and whether third parties are involved. Universities cannot ask students to trust systems that they are not permitted to see or understand.

Fifth, governance should be democratic rather than exclusively managerial.

AI policy should involve faculty, students, disability advocates, legal and ethics specialists, and, where appropriate, labour representatives. Algorithmic systems reshape academic life too deeply to be governed solely through procurement offices or executive directives.

These principles are not exhaustive, but they mark a move away from reactive policy writing toward institutional accountability. They also align with UNESCO's ethical emphasis on human rights, transparency, fairness, and human oversight. What is needed now is not another cycle of AI enthusiasm followed by belated correction, but a governance culture capable of asking difficult questions before systems become normalised.

8. Conclusion

The university is often imagined as a place where intelligence is cultivated, judged, and shared. In the age of AI, that self-understanding is under pressure. Institutions are no longer dealing only with new tools of writing or research support. They are confronting systems that sort, predict, monitor, and increasingly participate in decisions with lasting human consequences. Under these conditions, the governance of AI in higher education is not peripheral to the university’s mission. It goes to the heart of what sort of institution the university still wishes to be.

This paper has argued that algorithmic power in higher education must be understood through the lens of inequality, surveillance, and accountability. The central risk is not simply technical failure. It is the quiet normalisation of systems that make some people more visible, more manageable, and more classifiable than others while shielding institutional power behind computational authority. Universities that fail to confront this risk may become more efficient while becoming less just.

A serious response requires more than ethical language. It requires governance arrangements that treat AI as a public question within the university: subject to scrutiny, contestation, and moral limits. In that sense, the future of intelligent universities will not be decided by how much automation they can absorb, but by how firmly they can insist that technological capability remain answerable to human dignity, educational fairness, and democratic accountability.

References

  1. Alfiras, M.I.I., Alshehri, M.M., Alsubaie, N.M., Alghamdi, H.A., Alqarni, S.A. and Alharthi, R.T. (2025) ‘Ethics and governance of generative AI in education: a systematic review on responsible adoption’, Discover Education, 4, Article 408. Available at: https://doi.org/10.1007/s44217-025-01051-y
  2. European Commission (2025) AI Act. Shaping Europe’s Digital Future. Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  3. European Commission (2026) Navigating the AI Act. Shaping Europe’s Digital Future. Available at: https://digital-strategy.ec.europa.eu/en/faqs/navigating-ai-act
  4. Heiser, R., Dello Stritto, M.E., Brown, A. and Croft, B. (2023) ‘Amplifying student and administrator perspectives on equity and bias in learning analytics: alone together in higher education’, Journal of Learning Analytics, 10(1), pp. 8–23. Available at: https://learning-analytics.info/index.php/JLA/article/view/7775
  5. Jones, K.M.L. (2019) ‘Learning analytics and higher education: a proposed model for establishing informed consent mechanisms to promote student privacy and autonomy’, International Journal of Educational Technology in Higher Education, 16, Article 24. Available at: https://doi.org/10.1186/s41239-019-0155-0
  6. Jones, K.M.L., Asher, A., Goben, A., Perry, M.R., Salo, D., Briney, K.A. and Robertshaw, M.B. (2020) ‘“We’re being tracked at all times”: student perspectives of their privacy in relation to learning analytics in higher education’, Journal of the Association for Information Science and Technology, 71(9), pp. 1044–1059. Available at: https://doi.org/10.1002/asi.24358
  7. Khalil, M., Slade, S. and Prinsloo, P. (2024) ‘Learning analytics in support of inclusiveness and disabled students: a systematic review’, Journal of Computing in Higher Education, 36(1), pp. 202–219. Available at: https://doi.org/10.1007/s12528-023-09363-4
  8. Martin, K. (2019) ‘Ethical implications and accountability of algorithms’, Journal of Business Ethics, 160(4), pp. 835–850. Available at: https://doi.org/10.1007/s10551-018-3921-3
  9. McConvey, K. and Guha, S. (2024) ‘“This is not a data problem”: algorithms and power in public higher education in Canada’, in Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, pp. 1–14. Available at: https://doi.org/10.1145/3613904.3642451
  10. Miao, F., Holmes, W., Huang, R. and Zhang, H. (2021) AI and education: guidance for policy-makers. Paris: UNESCO. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000376709
  11. Mökander, J. and Floridi, L. (2021) ‘Ethics-based auditing to develop trustworthy AI’, Minds and Machines, 31(2), pp. 323–327. Available at: https://doi.org/10.1007/s11023-021-09557-8
  12. Morley, J., Floridi, L., Kinsey, L. and Elhalal, A. (2020) ‘From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices’, Science and Engineering Ethics, 26(4), pp. 2141–2168. Available at: https://doi.org/10.1007/s11948-019-00165-5
  13. OECD (2026) OECD Digital Education Outlook 2026: Exploring effective uses of generative AI in education. Paris: OECD Publishing. Available at: https://doi.org/10.1787/062a7394-en
  14. Thompson, T.L. and Prinsloo, P. (2023) ‘Returning the data gaze in higher education’, Learning, Media and Technology, 48(1), pp. 153–165. Available at: https://doi.org/10.1080/17439884.2022.2092130
  15. UNESCO (2021) Recommendation on the Ethics of Artificial Intelligence. Paris: UNESCO. Available at: https://www.unesco.org/en/legal-affairs/recommendation-ethics-artificial-intelligence
  16. UNESCO (2025a) ‘UNESCO survey: Two-thirds of higher education institutions have or are developing guidance on AI use’, 2 September. Available at: https://www.unesco.org/en/articles/unesco-survey-two-thirds-higher-education-institutions-have-or-are-developing-guidance-ai-use
  17. UNESCO (2025b) AI and education: Protecting the rights of learners. Paris: UNESCO. Available at: https://www.unesco.org/en/articles/ai-and-education-protecting-rights-learners
  18. Wang, Y. (2024) ‘Algorithmic decisions in education governance: implications and challenges’, Discover Education, 3, Article 229. Available at: https://doi.org/10.1007/s44217-024-00337-x
  19. Williamson, B., Bayne, S. and Shay, S. (2020) ‘The datafication of teaching in higher education: critical issues and perspectives’, Teaching in Higher Education, 25(4), pp. 351–365. Available at: https://doi.org/10.1080/13562517.2020.1748811