What Does the Digital Inclusion Community Have to Say About AI?
You’re reading the Benton Institute for Broadband & Society’s Weekly Digest, a recap of the biggest (or most overlooked) broadband stories of the week. The digest is delivered via e-mail each Friday.

On a chilly early-February week in Chicago, Illinois, digital inclusion advocates, practitioners, and researchers gathered for the annual Net Inclusion conference, hosted by the National Digital Inclusion Alliance (NDIA). At Net Inclusion, members of the digital inclusion community share resources, brainstorm new initiatives, and teach each other about best practices to close the digital divide.
A dominant theme at Net Inclusion 2026 was artificial intelligence (AI). The conference featured four workshops and three panel sessions about AI. Previous years had featured some AI content, but this year was notable for an uptick in specificity—practitioners want to know how their peers are actually engaging with AI in teaching and learning—and for a more holistic focus on the broader impacts of AI on digital equity. Our team attended a handful of the workshops and panels to learn more about community priorities.
Ensuring Inclusive AI Literacy Programs
In a session about ensuring AI literacy efforts are tailored to the needs of the diverse communities the programs are designed to serve, practitioners emphasized the importance of community-led, participatory AI curriculum design.
Panelists Michelle Du and Antrita Manduva, co-founders of the Chicago grassroots initiative DemystifyAI, seek to fill the gap in AI literacy efforts by providing a tailored curriculum to digital literacy organizations. In this way, DemystifyAI supports practitioners poised to teach AI literacy courses who may not have the capacity to develop teaching materials themselves. DemystifyAI has developed a base curriculum from which it tailors an AI literacy program to each nonprofit and its community’s needs.

The organization’s work is informed by its direct content delivery, allowing curriculum developers to understand the impact and effectiveness of their programs in real-world settings. For example, when working with community members in Chicago’s Chinatown neighborhood, DemystifyAI learned that DeepSeek was the most popular AI platform among program participants, enabling educators to adjust their training to better meet real needs and preferences.
For Generations Online (GoL) Founder and CEO Tobey Dichter, the solution for seniors was to create GoL’s own AI chatbot (IMA Computer). An organization that prioritizes digital literacy training and affordable device access for senior citizens, GoL quickly realized that getting seniors on board with AI meant creating an experience specifically for them.
IMA Computer is GoL’s teaching version of an AI chatbot, powered by ChatGPT, designed to orient seniors to using OpenAI’s ChatGPT platform. IMA Computer comes with educational materials on what AI is and practical use cases, specifically designed to resonate with seniors. For example, IMA Computer can help seniors organize their medical appointment schedule or brainstorm ways to connect with friends.
One of the chief concerns of seniors online is privacy and scam prevention. IMA Computer ensures, at multiple stages of onboarding, that seniors using this platform and others understand the risks and have the tools to stay safe in their AI use.
The emphasis of this session was clear: AI is complex, and communities want to know how to use it in ways that are most relevant to them. Community-serving organizations looking to help can do so best by listening and ensuring their work is specific and responsive, which builds crucial trust between educators and learners.
Navigating AI Literacy Curricula for Kids
A session on evolving youth digital literacy efforts opened with an exercise that asked attendees to express their personal feelings about embracing AI, from strongly agreeing to strongly disagreeing with the burgeoning age of AI.
The majority of the room fell somewhere between the two poles, with fewer attendees strongly agreeing or strongly disagreeing. Almost all attendees agreed on one thing: preparing kids to understand and use AI tools safely is challenging in a rapidly changing environment lacking regulation and trusted guidance.
The panel was moderated by Ilana Lowery, Director of School Partnerships and Policies at Common Sense Media, and included:
- Sue Thotz, Director of Outreach at Common Sense Media;
- Stacey Wedlake, Research Scientist at the University of Washington;
- Atef Siddiqui, YourTechQ Youth Leader; and
- Ayana Davies, Chicago Public Schools Generative AI Specialist.
According to Common Sense, 70 percent of teens use AI, and over half use AI for homework help. However, only 37 percent of parents with teens who use AI know their children are using it.
Together, the panelists displayed a breadth of knowledge about the challenges of teen AI use, literacy, and safety. Lowery and Thotz from Common Sense Media highlighted the organization’s Digital Literacy & Well-Being curriculum repository and their experiences working with school districts on AI literacy.
Davies works in one such district as a Chicago Public Schools employee. She remarked on the power dynamics of getting educators, students, and families up to speed on rapidly developing AI technologies while also developing AI curriculum and standards amidst pressure from AI companies. The challenge for schools is figuring out which AI-powered tools are useful and safe, and how to incorporate them into K-12 curricula while AI developers compete with each other to get their products into the classroom.
Wedlake from the University of Washington highlighted the Empowering Informed Communities (EIC) project, which creates research-based information literacy resources for public libraries to help librarians and their communities navigate complex information environments. The resources under this initiative include AI tool primers, AI safety lessons, and media literacy guides.
Siddiqui’s work with YourTechQ empowers high school students to conduct training workshops and provide one-on-one digital skills instruction to seniors in their communities. Siddiqui emphasized how keeping programs local boosts trust among older learners, especially when handling AI.
The panelists addressed AI literacy education as a multifaceted challenge with complex power dynamics, risks, and rewards at play. Still, with attention to students’ needs and safety, and with sufficient community trust, stakeholders are making strong efforts to navigate the ever-changing landscape of AI education.
Navigating Data Center Development
Sessions on the broader impacts of AI moved the focus away from tool adoption and use toward other thorny topics. A workshop about data centers and community benefits agreements featured three speakers:
- Amanda Sweet of the Nebraska Library Commission;
- Jordana Barton-García of Connect Humanity; and
- Steven Renderos of MediaJustice.
Sweet’s presentation offered basic information about data centers to orient participants to the current landscape, as well as an overview of common concerns across communities hosting hyperscale data center projects. Barton-García spoke about recent experience in the Rio Grande Valley, including concerns about data centers’ water consumption.
Renderos’ presentation took a different stance, emphasizing how communities of color in the South with legacies of environmental racism have resisted data center projects.
This session drew attention to the plurality of perspectives the digital equity community holds about AI and, more specifically, data centers: from seeing the technology as inevitable and requiring a pragmatic response, to seeing it as something communities can shape and exercise agency over, and everything in between. These perspectives guide the actions leaders take: will they attempt to negotiate benefits and reduce harms, organize people to oppose data center projects altogether, or identify other paths to intervene?
Exploring an AI Civil Rights Law
A session about model AI civil rights legislation developed by the Lawyers’ Committee for Civil Rights Under Law sparked a different kind of conversation about AI and its societal impacts.
The session addressed AI governance and the protections that government should provide in the face of risks and harms associated with AI adoption and diffusion. Gillian Cassell-Stiga of the Lawyers’ Committee for Civil Rights spoke with Tsion Tesfaye of the NDIA about the model legislation, intended to address situations in which algorithmic decision-making might discriminate against protected classes in employment, housing, access to credit, and other critical areas of individuals’ lives. The model legislation aims to increase transparency, provide mechanisms for accountability, and otherwise protect civil rights.
Audience questions raised concerns about how people might even begin to detect AI-related violations of civil rights, which makes these sorts of harms even more daunting. AI literacy may also entail identifying when another party has used AI, a task made more difficult by a lack of governance that would mandate disclosure.
What We’ll Be Watching
Attending these workshops and sessions heightened the sense that the digital equity community is eager for more measured discussion of AI—not everyone agrees about how, when, and where to use it, nor about its overall influence on society and the economy. But practitioners want to make thoughtful decisions about use and non-use, and they are motivated to shape technology to address harm.
There is a sense of urgency to teach others to use AI safely and thoughtfully, and to understand and address harms, whether they relate to the health and safety of individuals or communities. We see the digital equity community engaged in reflection rather than naive boosterism and expect that AI topics will remain popular for professional development, public discourse, and experimentation.
Dr. Caroline Stratton is the Director of Research and Grace Tepper is an Editor and Researcher at the Benton Institute for Broadband & Society.
Upcoming Events
Mar 23: Public Wireless Supply Chain Innovation Fund Listening Session (NTIA)
Mar 23: AI Infrastructure, Data Center Development and BEAD (National Telecommunications and Information Administration)
Mar 25: 2026 Tech & Telecom Policy Outlook (Hispanic Tech & Telecommunications Partnerships)
Mar 25: On the Record with NTIA’s Senior Advisor for Spectrum (Georgetown University)
Mar 26: March 2026 Open Federal Communications Commission Meeting (Federal Communications Commission)
Mar 26: The Telecommunications Act of 1996: 30 Years Later (House Commerce Committee)