Trustless, Not Truthless: Strengthening Media Literacy for the Web3 Era
A line often attributed to Aristotle holds that “it is the mark of an educated mind to be able to entertain a thought without accepting it.” In today’s context, the mark of an educated mind is the ability to navigate an information environment increasingly saturated with false, misleading, or even harmful content without accepting that content at face value. At a time of deepening ideological divides, media consolidation, and eroding trust in global institutions, this blog post describes how Web3 can come to epitomize an era of greater autonomy, transparency, and accountability, so long as users have the critical thinking skills necessary to navigate the new digital terrain. In fact, Web3’s structure may be uniquely suited to fostering the media literacy capabilities of its users.
“Media literacy” was first defined in 1992 as “having the ability to access, analyze, evaluate, create, and act using all forms of communication.” The definition has since broadened to include understanding the systems in which media messages exist, their influence on our beliefs and behaviors, and the creation of responsible, thoughtful, and safe content. Meanwhile, the U.S. federal government defines “digital literacy” as the ability to use digital technology to locate, evaluate, organize, create, and share information, a definition that encourages considerate and informed participation online. Given that these terms are often conflated, this blog post frames “digital literacy” as a natural extension of media literacy: an umbrella term that encompasses the new competencies required for navigating emerging technologies and overcoming the challenges of digitization and decentralization.
What is Web3 and why is it uniquely suited for fostering media literacy skills?
Web1 was the earliest version of the internet (1990s to early 2000s), characterized by “read-only” static websites. Web2 refers to the second generation of the internet, defined by user-generated content, social networking, and centralized control by a handful of powerful technology companies like Meta, YouTube, TikTok, X (formerly Twitter), and Google. Today’s Web2 landscape presents significant risks to media literacy: these companies offer free services in exchange for user data, but their ad-based business models prioritize engagement over accuracy, amplifying emotionally charged and often misleading content to maximize profit. Despite growing concern over the immense power concentrated in a handful of companies, antitrust enforcement has not kept pace with consolidation in digital markets.
In Web2, centralized oversight and opaque algorithms allow toxic and misleading content to flourish, eroding trust and undermining informed engagement. Algorithms also let communicators test in real time which headlines or messages perform best, further incentivizing sensationalism over substance. Compounding these challenges, artificial intelligence is upending the information ecosystem and transforming how people access news. AI tools now generate summaries of entire news stories and often appear first in search results, shaping public understanding without human editorial oversight. As a result, people with low levels of digital and media literacy may be ill-equipped to confirm the accuracy or legitimacy of information served through AI summaries, especially when those summaries do not link to original reporting. This is especially true for adults with low literacy, who also tend to be frequent social media users, leaving them both vulnerable to inaccuracies and liable to become conduits for the spread of misinformation.
Web2 platforms like Meta, YouTube, and X have begun experimenting with peer-driven fact-checking features like Community Notes, but these tools expose the growing pains of user-driven models. Although peer-moderation tools like Community Notes are intended to signal a commitment to accuracy and transparency, the platforms’ real priorities still center on keeping users engaged rather than informed: emotionally charged content gets more clicks and shares, so it is promoted more heavily.
Web3 potentially shifts power over how content is organized from Big Tech companies to individuals. As users graduate from being subjects of moderation to co-overseers of the information ecosystem, critical thinking and discernment become essential. Web3’s decentralized architecture can embed media literacy into the infrastructure itself, making the provenance of digital content (who created it, whether and how it has been modified, and when it was published) both visible and verifiable.
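The provenance idea described above can be sketched in miniature: if a fingerprint (hash) of a piece of content and its publication metadata are recorded on a public, append-only ledger, anyone can later check whether the content has been modified since publication. The in-memory `ledger` dictionary below is a hypothetical stand-in for a blockchain or similar public record, and the `publish`/`verify` functions are illustrative names, not any real platform’s API; production systems would also involve cryptographic signatures to prove authorship.

```python
import hashlib
import time

# Toy in-memory "ledger" standing in for a blockchain or other
# append-only public record. Each entry maps a content fingerprint
# to provenance metadata (author and publication timestamp).
ledger = {}

def publish(author: str, content: str) -> str:
    """Record a content fingerprint and its provenance on the ledger."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    ledger[digest] = {"author": author, "published_at": time.time()}
    return digest

def verify(content: str):
    """Return the recorded provenance if the content is unmodified,
    or None if it matches no ledger entry (i.e., it was altered
    after publication or was never published)."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return ledger.get(digest)

article = "Breaking: city council approves new transit plan."
publish("local-news-desk", article)

print(verify(article) is not None)                 # True: original checks out
print(verify(article + " (edited)") is not None)   # False: any edit breaks the match
```

Because even a one-character change produces a completely different hash, a reader does not have to trust the person resharing the content, only the public record, which is the sense in which provenance becomes “trustless” yet verifiable.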
Additional autonomy turns off potential Web3 users and exacerbates digital divides
In Web2, users are typically treated as passive consumers of algorithmically curated content, but Web3 presents a new participatory, transparent, and self-governed digital environment. The challenge we face isn’t just identifying incorrect health advice; rather, it’s determining whether a video of a politician is AI-generated or real, or whether a “breaking news” post is credible journalism or a coordinated disinformation campaign. In the Web3 context, those who are better equipped to identify manipulated media and ragebait can organize their feeds to prioritize quality information. At the same time, disparities in media literacy capabilities may exacerbate the digital divide, as users who are not equipped to navigate decentralized platforms and control their feeds remain more vulnerable to toxic information systems.
New platforms like Mastodon, Bluesky, and Gab, along with blockchain-based networks, are free from centralized control, so there is no single entity responsible for moderation decisions. These forums allow individuals to decide for themselves what they want from their internet experience and to establish their own content and engagement policies in accordance with their values and preferences. This transparency, and the onus it places on users themselves, increases legitimacy and lowers suspicion of hidden agendas or ideological bias by Big Tech gatekeepers.
Despite all its promise, Web3 is not necessarily user-friendly. Increased freedom comes with increased responsibility: users on these new sites are now tasked with curating their “ideal internet” and customizing their feeds and digital interactions. Many Web2 users are unfamiliar with the added responsibilities of decentralized platforms, and the lack of intuitive design or familiar features in Web3 often deters them from managing feeds, participating in collaborative moderation, or creating an account at all. The success of decentralized peer moderation depends on trust, visibility, and feedback loops. Without the internal fact-checking teams that major Web2 platforms employ, Web3 ecosystems risk being flooded with misinformation and bad actors, especially if peer-driven systems fall behind the speed of information sharing. They also risk losing relevance altogether if users feel their contributions have no impact on the democratic internet they hoped to create.
Congress should focus on the literacy we need for the internet we want
Policymakers may feel pressured to regulate how platforms handle information, whether through algorithm mandates or threats to sunset Section 230. Instead of regulating how users consume information, however, they should support and uplift the promising applications of Web3 technologies that help people navigate our toxic information systems. To rebuild public trust, support informed regulation, and enable researchers, civil society, and the public to better understand how information flows and is controlled online, Congress should focus on media literacy-related legislation, such as:
- Passing the Investing in Digital Skills Act, Adult Education WORKS Act, and supporting media literacy education efforts tailored to distinct populations like youth, older adults, and communities with varying digital access or political exposure;
- Funding the development of tools like the Misinformation Susceptibility Test (MIST) and other frameworks that measure misinformation exposure and resilience;
- Directing the National Institutes of Health, National Science Foundation, U.S. Department of Education, Federal Communications Commission, and others to conduct longitudinal research on how trust, digital habits, and demographic factors shape misinformation vulnerability in decentralized spaces; and,
- Requiring digital platforms to periodically disclose how they moderate, label, and prioritize content.
Web3 is not a panacea but a rapidly evolving frontier that brings both new opportunities and complex challenges. Under Big Tech’s business model, virality trumps accuracy. Unlike Web2, where platforms filter and prioritize content, Web3 places that responsibility directly on users. In decentralized environments, where centralized enforcement is absent or limited, media literacy becomes the critical safeguard: greater freedom means greater vulnerability to misinformation, manipulation, and exclusion. Without engaged communities or platform support, peer moderation can fail. Strong media literacy skills are essential for people to thrive in this era of autonomy and to participate in democratized digital spaces, where it falls on them to verify information and contribute to trustworthy digital ecosystems. A decentralized internet is only truly empowering if all users are equipped to make sense of it.