
It’s time to talk about “AI literacy”…literally
The EU AI Act, published almost a year ago, leaves many AI systems unaffected by key provisions until August of 2027 – but that leisurely timeline does not apply to noteworthy mandates that have been in effect since February of this year, specifically Article 4 with its dangerously simple title of “AI Literacy”.
That entire article, not just its title, uses words that invite reflection – and that will surely inspire passionate debate about who must do what, and about who will be empowered or alienated by those actions.
Article 4 is short enough to be quoted here in full:
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
I used the phrase “dangerously simple” above because even a casual reader may wonder about the meaning of several of those words and phrases – notably:
- “ensure”
- “best extent”
- “sufficient”
- “taking into account”
- “considering”
- “on whom”
as well as what’s clearly the most obvious WINDiness (Word In Need of Definition) in that article – “literacy”.
We can leave the rest of that bullet list for another day, because deciding the meaning of words like these is the routine outcome of adjudicating actual cases. Until there’s case law, any statute is like the “pirate’s code” as described by Captain Barbossa in the movie, “Pirates of the Caribbean”: “more like guidelines than actual rules.” But we’d better have some agreement about the meaning and the impact of “literacy”, if Article 4 is to mean anything at all – and if we’re going to start forming our views and our plans for who will be empowered, versus who will be alienated, as AI grows in capability and spreads in application.
There are plenty of places where “literacy” is defined as merely “the ability to read and write” – which might, by the simplest possible analogy, suggest that “AI literacy” is merely the ability to write a prompt and to read a response from a generative AI tool. This will probably be found, early in the history of EU AI Act adjudication, not to be a “sufficient” level of literacy: the word will probably soon come to be understood to mean something more like UNESCO’s definition: “a means of identification, understanding, interpretation, creation, and communication”.
Because (as UNESCO further notes) the world of today and tomorrow is “increasingly digital, text-mediated, information-rich and fast-changing,” it plausibly follows that the literacy of an AI-everywhere world must also include “ability to access, manage, understand, integrate, communicate, evaluate and create information safely and appropriately through digital technologies” – but is even that merely “digital literacy”? What more does someone need, for them to be said to have “AI literacy”?
We get the beginning of an answer from the World Economic Forum with its description of the “AILit Framework”: a cooperative effort involving the European Commission and the Organization for Economic Co-operation and Development, along with other contributing organizations and practitioners. The framework, per WEF’s summary, “defines AI literacy as a blend of knowledge, skills and attitudes that enable learners to engage with AI responsibly and effectively” – which might just be another statement of simple but uselessly vague words, except that it’s further developed into four domains (engagement, creation, management, and solution design) and further supported by specific statements of needed competencies. Now we’re starting to have a program that can be put into practice, and held to standards of mastery.
We also see this kind of structure and specificity in a more concise form, in a newly released Salesforce handbook of “AI Literacy and Compliance”: a 16-page compilation of pointers to specific resources on three different levels (Beginner, Builder, and Scientist/Practitioner), which Salesforce explicitly offers as an element of its response to Article 4.
One last caveat
Because I have a suspicious mind, I want to offer one final caveat on the notion of “literacy” being merely the basic skills of encoding and decoding. In 19th century Russia, the nobility and the military officer class routinely spoke French to each other, using the Russian language only to deal with servants and with rank-and-file soldiers. Religious writings and proceedings were routinely limited to Latin even into the 20th century, with “vernacular” versions a subject of controversy.
My general point here is that if an elite class uses an argot of exclusivity, whether intentionally or only through a failure to make inclusion a priority, any achievement of “literacy” in the larger population may be deceptive. People may overestimate their own understanding of what’s happening, and of how things work; they may fail to recognize when their own interests are not being served. Extension of this social pattern to an AI-everywhere world doesn’t seem like a stretch – and a phrase like Article 4’s “sufficient level of AI literacy” does not seem like a robust protection against it.
AI is already past the point of being a laboratory for computer scientists, a playground for enthusiasts, a frontier for enterprise innovators. We’re past “the end of the beginning”, in Churchill’s famous phrase: it’s time to confront and conquer the challenges of (i) what it means to be “AI literate”, (ii) what the consequences will be if we don’t democratize that literacy, and (iii) what it’s going to take to make these things happen in our schools, our workplaces, and our institutions of society.
It’s the opportunity, as well as the responsibility, of “providers and deployers of AI systems” to look ahead – and to visualize the faster adoption, the more effective use, the greater trust, the higher perceived value, and ultimately the stronger prospects for an AI economy that can flow from “considering the persons” on both the supply side, and the demand side, of AI’s exponential curves.