The New Digital Divide in Higher Education: AI as Public Capability or Private Advantage? – PA TIMES Online
The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Wilson Wong
September 19, 2025

In the study of Public Administration, the most important story about Generative AI (GAI) in education is not novelty or efficiency but inequality. Without deliberate policy, GAI will widen the gaps that already separate students, institutions and nations. It will concentrate advantages among those who have access to the tools and the human capacities indispensable for the AI era, while leaving others to navigate an AI-shaped future with yesterday’s skills. The result will not be a marginal difference in classroom experiences; it will be a systemic divergence in educational outcomes, labor market mobility and economic well-being.
That warning is based on my new research article: Wong, W., Aristidou, A., & Scheuermann, K. (2025). The future of learning or the future of dividing? Exploring the impact of general artificial intelligence on higher education. Analyzing policy documents and core curricula across top Asian universities, we find that the integration of GAI is sharply uneven. Fewer than half of the top institutions publicly articulate GAI policies; comprehensive curriculum reforms are rarer still. Even where policy exists, it tends to emphasize integrity and access rules over curriculum-wide capability building.
The implications of GAI extend far beyond campus. GAI is a general-purpose technology that lowers the cost of knowledge but raises the premium on acquiring the complementary capacities: AI literacy, ethical judgment, human–AI collaboration and human-distinctive skills such as creativity, critical thinking and empathy. When these capacities are not offered universally in education, they will be acquired disproportionately by already advantaged students and institutions. That translates into compounding inequities: unequal access to AI, unequal learning outcomes, unequal employment opportunities and, ultimately, unequal incomes.
These educational disparities carry predictable downstream effects. Employers will reward graduates who have practiced prompt design, evaluative judgment and cross-disciplinary AI use. Placement rates, wage trajectories and occupational status will reflect those differences. Systems that move quickly to embed GAI across teaching and learning will build AI-competent workforces. Similarly, regions with universities that institutionalize AI competence will capture a larger share of high-value industries and entrepreneurship. Meanwhile, students who encounter AI chiefly as a prohibited shortcut will be disadvantaged in workplaces where AI is a default collaborator. Over time, middle-skill pathways may shrink, social mobility may slow and frustration with an outdated education system may grow.
None of this is inevitable, but drift is dangerous. Much current policy energy is consumed by academic integrity and bans, which address real risks but can inadvertently deepen divides. Treating GAI primarily as a threat displaces attention from the core equity problem: unequal opportunity to learn with and from AI. The longer institutions defer system-wide curriculum updates and equitable access policies, the more each incoming cohort reproduces the divide.
Policymakers will recognize the digital divide from prior general-purpose technologies: early adopters with complementary capabilities capture outsized gains; latecomers face steeper, costlier transitions. What is different now is the speed of diffusion and the pervasive reach of AI across disciplines. In higher education, the policy window is measured in academic cycles, not decades. That makes national and institutional strategy, not isolated experiments, the decisive factor.
Our article offers a structured framework for what universal preparation should entail (AI ethics, AI literacy, human–AI collaboration and human-distinctive capacities) and documents how far current practice falls short. It also details the cross-country disparities and institutional logics driving divergence. The immediate policy agenda is less about specifying tools than about aligning incentives, protections and accountability with equity in mind.
The path forward is clear in principle, even if execution will be demanding. First, treat AI competence as a universal learning outcome: every undergraduate in every discipline should graduate able to use, question and manage AI, not just those who choose an elective. Second, provide equitable access to tools and training so benefits do not depend on personal subscriptions. Third, invest in faculty at scale with time, training and incentives to redesign courses and assessments for an AI-rich environment. Fourth, protect and cultivate human judgment, creativity, empathy and leadership through intentional pedagogy so AI augments rather than substitutes for people. Fifth, continuously monitor data for disparate impacts across student groups and adjust policy accordingly.
The point of this op-ed is not to prescribe a blueprint. Our article already provides the empirical evidence and capability framework that policymakers can translate into concrete, context-specific policy designs. Rather, it is to highlight the core choice now facing policymakers: will AI become a public capability or remain a private advantage? If we default to fragmented, voluntary adoption at the individual or institutional level, we should expect the educational system to transmit and amplify the AI shock, not buffer or reverse it. The consequences of that choice will surface in the labor market, in regional economies and in levels of public trust.
GAI will not automatically democratize learning. It will do what powerful technologies do in the absence of public purpose: intensify existing structural inequities. The time to set that purpose is now, before unequal AI access and capacity become our new digital divide.
Author: Wilson Wong is the Founding Director and an Associate Professor of Data Science and Policy Studies, School of Governance and Policy Science, at The Chinese University of Hong Kong. He is also a Senior Research Fellow at the School of Management, University College London, and a Center for Advanced Study in the Behavioral Sciences Fellow at Stanford University. His major research areas include AI and Big Data, digital governance, ICT and comparative public administration. Email: [email protected]
