AI literacy is not a luxury, but Brussels is failing to see the danger
AI’s promise is “best realized only when its benefits are shared by humanity”. It sounds self-evident, but this statement is in fact just a non-binding declaration endorsed by the EU and 87 countries at last month’s AI Impact Summit in New Delhi.
But in Brussels, policymakers risk undermining the EU’s pioneering trust-based approach to AI governance as the pressure to simplify away obligations grows. And while the Commission’s proposed amendments face scrutiny, one critical yet often overlooked concern is the weakening of provisions on fostering AI literacy – the knowledge required to make informed decisions about AI usage.
Rather than an obligation for AI model providers and deployers alike, AI literacy would become voluntary, with the onus shifting to the Commission and member state authorities.
AI literacy should not be an afterthought. As increasingly sophisticated AI is adopted in the workplace, education and personal life, it will remain a precondition for citizens, firms and public authorities to exercise agency and meaningfully hold AI systems to account. Trained on datasets subject to bias in how, where and which data were collected, AI systems must be used with care to avoid reproducing such patterns.
Perhaps most egregiously, in 2019 it was revealed that a machine learning model used by the Dutch government to detect childcare benefit fraud had incorrectly flagged over 30,000 cases. The city of Amsterdam later developed a more sophisticated welfare AI system, built specifically to mitigate caseworkers’ observed bias by omitting sensitive variables. Despite these measures, this tool was also ultimately pulled after audits during a pilot found it created unforeseen issues. Without the know-how to scrutinise such tools, even well-intentioned deployments can go wrong.
The vast scale of today’s large language models – comprising hundreds of billions of parameters and underpinning today’s ubiquitous chatbots – makes their output especially hard to audit. Beyond recognising synthetic media and understanding AI’s limits – particularly as those limits narrow – the need for literacy is perhaps best illustrated by AI agents. These systems are given privileged access to user files, personal information and, increasingly, the internet, to autonomously plan and execute complex tasks, such as negotiating a car purchase.
The risks of this were laid bare by the recent drama surrounding Moltbook – a Reddit-like forum populated by AI agents which left tens of thousands of users’ personal data vulnerable to theft, due to near-absent security (not for lack of expert warnings). Moltbook serves as a preview of what happens when complex AI ecosystems are deployed faster than users can understand them.
More shocking was Grok, which was used by thousands to ‘nudify’ images of women and children, prompting investigations by multiple member states and the Commission. But enforcement after the fact is no substitute for an informed public that understands what AI is capable of – and what to refuse.
And though at least 20% of EU firms already use AI, nearly half the bloc’s population still lacks even basic digital skills. The EU has already made closing these gaps a priority – on AI specifically, it maintains a repository of literacy initiatives – but these developments underscore the urgency of doing so.
As uncertainty grows about AI’s expanding capabilities, the EU’s best shot at gaining a competitive advantage and enabling societal resilience is to foster an AI ecosystem rooted in trust – while breaking down internal barriers to investment.
In industrial policy terms, this means adopting a diffusion-focused approach: prioritising open source, placing compute closer to research hubs and clean energy, and spearheading alternative technical approaches, as others have argued – keeping value close to home.
Deploying the EU’s high-quality data would further allow a ‘fast-follower’ model to emerge, whereby the Union could benefit from foundational investments into frontier AI made elsewhere. Building society-wide AI literacy should be central to this, and should involve both the public and private sectors.
But despite a partially restored focus on AI safety – harking back to the goals of the Bletchley Park and Seoul Summits, which established the network of AI Safety Institutes – New Delhi did not catalyse new, concrete global governance efforts. Instead, it followed in the adoption-focused footsteps of the 2025 Paris ‘Action’ Summit, moving attention away from regulation and towards investment.
The US went even further, rejecting any form of international AI governance, even as UN Secretary-General Antonio Guterres called for a global €3 billion fund to build skills and inclusive ecosystems for equitable AI diffusion. Meanwhile, Indian companies signed partnerships with AI giants and Commission Executive Vice-President Henna Virkkunen announced a deal to bolster Europe’s connection to India’s talent base and continue joint AI governance efforts.
As the drive for simplification continues apace in Brussels, the need for binding, effective mandates to build AI literacy only grows. But it has been left to civil society and defenders of safe and trustworthy AI adoption to remind EU legislators of that.
This op-ed was originally published on Euractiv.
Samuel Goodger is a policy analyst in the European Policy Centre’s Health and Societal Resilience programme.
The support the European Policy Centre receives for its ongoing operations, or specifically for its publications, does not constitute an endorsement of their contents, which reflect the views of the author only. Supporters and partners cannot be held responsible for any use that may be made of the information contained therein.