Friday, February 27, 2026

Weekly Digest

You’re reading the Benton Institute for Broadband & Society’s Weekly Digest, a recap of the biggest (or most overlooked) broadband stories of the week. The digest is delivered via e-mail each Friday.

Round-Up for the Week of February 23-27, 2026

 

AI Literacy Framework

The U.S. Department of Labor (DOL) aims to guide and encourage expanded artificial intelligence (AI) literacy training across public workforce and education systems. To that end, DOL created a new framework to serve as a resource for AI program design. The framework provides a working definition of AI literacy, a detailed set of foundational content areas, and effective delivery principles for AI education and skill-development training.

DOL defines AI literacy as:

“A foundational set of competencies that enable individuals to use and evaluate AI technologies responsibly, with a primary focus on generative AI, which is increasingly central to the modern workplace.”

DOL also says that the meaning of “literacy” in the context of this framework is “a foundational level of knowledge and skill that all workers and students should have as AI becomes embedded across the economy.” 

Foundational Content Areas

DOL identifies five core AI content areas relevant to U.S. residents today: 1) Understand AI Principles; 2) Explore AI Uses; 3) Direct AI Effectively; 4) Evaluate AI Outputs; and 5) Use AI Responsibly.

1. Understand AI Principles

AI literacy includes developing a clear grasp of what artificial intelligence is and how it works. This foundation helps demystify AI, supports more confident and accurate use, and enables workers to apply, prompt, and evaluate AI systems more effectively across a wide range of workplace scenarios.

Content areas include:

  • Pattern recognition and probabilistic outputs: AI systems generate responses by identifying statistical patterns in data, which can result in different outputs from the same input.
  • Capabilities and modalities: Common AI capabilities include generating text, analyzing data, and recognizing images, across different input and output formats such as text, audio, or visual content.
  • Training and inference: Training builds the AI model using large datasets, while inference is how the model generates outputs in real-time workplace applications.
  • Hallucinations and accuracy limits: AI can produce confident but incorrect outputs, making it critical to verify results and avoid overreliance.
  • Human design and oversight: Every AI system reflects human decisions about data, goals, and parameters, requiring users to understand where human judgment is still essential.

2. Explore AI Uses

Workers should understand how AI is being used across real-world workplace settings. Because AI use varies widely by industry, occupation, and context, exploration builds familiarity and judgment, helping workers recognize when and how to apply AI effectively, and where human input remains essential.

Content areas include:

  • Productivity tools: Using AI to draft documents, outline presentations, or analyze reports, helping workers move more efficiently through common tasks in a workflow.
  • Information support: Leveraging AI to answer questions, surface relevant background information, or create learning content tailored to specific workplace needs.
  • Creative assistance: Generating initial drafts of marketing copy, naming ideas, graphic options, or other creative assets that workers can then refine and improve.
  • Task-specific applications: Applying AI to solve targeted problems, such as writing code snippets, transcribing audio, automating data entry, or organizing complex schedules.
  • Decision-support systems: Using AI tools to generate recommendations, risk assessments, or forecasts that help inform and augment human decision-making.

3. Direct AI Effectively

Because most AI tools depend heavily on the input they receive, users must learn how to provide clear instructions, include necessary context, and guide the system toward better outcomes.

Content areas include:

  • Contextual framing: Providing background information, intended audience, tone, or specific goals helps shape the AI’s response to better match the user’s needs in different workplace scenarios.
  • Prompting techniques: Structuring prompts clearly, using step-by-step instructions, and specifying formats or outputs allows workers to unlock more advanced or precise capabilities of the AI system.
  • Supplying relevant input data: Workers should understand when and how to include the most relevant data, supporting materials, or examples to improve the accuracy and usefulness of AI outputs.
  • Iterating on outputs: Effective users treat AI interactions as an ongoing process, using follow-up prompts to clarify, refine, or reframe results until they meet the desired standard or purpose.
  • Avoiding vague or misleading prompts: Workers should recognize how prompt clarity and word choice affect outcomes and adjust their approach accordingly to avoid ambiguity.

4. Evaluate AI Outputs

Assessing the quality and usefulness of AI-generated outputs is crucial. While AI can accelerate work and surface helpful insights, the results it produces still require thoughtful review. This evaluation skill ensures that workers remain in control of the process and that AI is used as a support tool, not a final authority.

Content areas include:

  • Verifying factual accuracy: Workers must cross-check AI-generated outputs against trusted sources or known information to identify false claims, outdated references, or fabricated content.
  • Assessing completeness and clarity: Outputs should be reviewed to ensure they fully address the task or question, and are expressed in a clear, actionable, or usable form for the intended audience.
  • Spotting gaps or logical errors: Users should be able to identify missing steps, flawed logic, or faulty assumptions that may make the output unreliable or misleading.
  • Aligning with strategic intent: Outputs should be evaluated based on whether they achieve the desired goal, support the right message, and are fit for purpose in a specific task or workflow.
  • Applying human judgment: Workers should understand how to layer in their own expertise, context, and discretion when deciding how to interpret, use, or revise AI-generated content.

5. Use AI Responsibly

As AI tools become more embedded in daily workflows, workers must understand the boundaries of appropriate use, both to safeguard information and to ensure outputs are applied ethically and effectively.

Content areas include:

  • Protecting sensitive information: Workers should understand what types of data should not be entered into AI tools and how to prevent accidental disclosure of confidential information.
  • Following workplace policies and rules: Users must be aware of and follow any organizational policies around AI use, including guidance related to specific tools or contexts.
  • Avoiding misuse or harm: Workers should be aware of how AI tools can be used inappropriately, whether for plagiarism, impersonation, or harm, and know how to report issues.
  • Managing context-specific risks: Workers should understand how risk varies across different tasks, audiences, or sectors, and apply greater scrutiny or caution in higher-stakes settings.
  • Maintaining accountability: Workers remain responsible for the decisions and outputs they produce with AI tools and should avoid treating AI responses as final or authoritative without review.

Delivery Principles

The second part of DOL’s framework focuses on how to develop and deliver AI literacy programming. DOL identified seven program delivery recommendations to ensure programs are comprehensive, well-tailored to their audiences, and flexible: 1) Enable Experiential Learning; 2) Embed Learning in Context; 3) Build Complementary Human Skills; 4) Address Prerequisites to AI Literacy; 5) Create Pathways for Continued Learning; 6) Prepare Enabling Roles; and 7) Design for Agility.

1. Enable Experiential Learning

AI literacy is most effectively developed through direct, hands-on use. Workers build confidence and understanding not by reading about AI in the abstract, but by using it in real-world contexts to solve actual tasks.

Delivery approaches include:

  • Real-world task integration: Embedding AI tools into day-to-day tasks such as writing, research, or scheduling allows workers to gain familiarity in authentic scenarios.
  • Interactive prompt exercises: Providing practice with different types of prompts, including poorly written examples, helps workers see how phrasing, specificity, and structure affect outcomes.
  • Live feedback and iteration: Structuring exercises where users receive real-time feedback on AI outputs encourages experimentation and helps reinforce learning by doing.
  • Side-by-side human comparisons: Asking participants to compare AI-generated work to human-created work (or to their own previous outputs) builds judgment and discernment.
  • Progressive difficulty levels: Designing training activities that begin with simple use cases and advance toward more complex workflows helps scaffold learning and build momentum.

2. Embed Learning in Context

AI literacy becomes more impactful when it is delivered in ways that are directly relevant to the worker’s job, industry, or existing training experience. Embedding AI literacy into familiar settings helps reduce friction, increase uptake, and reinforce how AI fits into existing workflows. Contextualized learning also supports retention by anchoring new concepts to real-world scenarios that workers understand, making the content feel more actionable and less abstract.

Delivery approaches include:

  • Industry-specific examples: Aligning instruction with the tools, use cases, and terminology most relevant to a given sector, such as healthcare, manufacturing, transportation, or retail.
  • Occupational tasks and workflows: Teaching AI literacy through real job functions and activities that workers perform, helping them see how AI tools can support their specific day-to-day tasks.
  • Employer-specific alignment: Embedding content within the systems, culture, and goals of a particular employer, including their internal AI tools, policies, and broader strategic objectives.
  • Training program integration: Delivering AI literacy as part of existing Registered Apprenticeships, CTE curricula, short-term credentialing programs, or reskilling efforts to reinforce task relevance.
  • Cohort-specific considerations: Adjusting delivery style, pace, and references to match workers’ experience, familiarity with technology, or career stage to maximize relevance.

3. Build Complementary Human Skills

AI tools do not function as standalone capabilities with fixed value. They are amplifiers of human input, and their effectiveness depends heavily on the skills, knowledge, and judgment of the people who design, manage, and interact with them. AI literacy efforts are best delivered when they demonstrate how AI augments human capabilities such as critical thinking, creativity, communication, and domain expertise.

Delivery approaches include:

  • Critical thinking integration: Design learning experiences that pair AI use with exercises in problem-solving, reinforcing human judgment as central to AI-supported decisions.
  • Creative development exercises: Encourage workers to use AI tools to brainstorm, generate variations, or remix ideas, then apply their own creativity to select, refine, or improve the results.
  • Communication refinement: Use AI to draft content, while teaching workers how to revise AI-generated material for tone, clarity, persuasiveness, or appropriateness for the audience.
  • Values-based decision scenarios: Practice navigating ambiguous situations where humans must apply a combination of organizational, legal, or personal values to act on AI outputs.
  • Domain expertise amplification: Emphasize how the value of AI increases when workers bring in subject-matter knowledge or workflow understanding to shape and assess results.

4. Address Prerequisites to AI Literacy

AI literacy efforts can only be successful if learners have the foundational tools and access needed to engage with training. This may include digital literacy skills, device access, or broadband connectivity, especially in settings where AI tools require stable internet access or use non-intuitive interfaces. Programs should proactively identify and address these barriers, ensuring that participants have what they need to complete training and apply AI tools confidently in their daily work. By treating these prerequisites as integral to program design, AI literacy efforts can reach more people and deliver better outcomes.

Delivery approaches include:

  • Evaluate baseline readiness: Start with simple diagnostics to gauge whether participants have the digital familiarity needed to begin using AI tools effectively and to identify any barriers.
  • Integrate digital literacy skills: Offer light-touch refreshers or resources on digital literacy skills for participants who need to brush up on device use, app navigation, or browser tools.
  • Consider options for access support: Where device or broadband gaps exist, explore practical solutions such as public computer labs, mobile-first content, or asynchronous formats.
  • Consider bandwidth flexibility: Favor training materials that are compatible with low-bandwidth environments and mobile devices, where feasible.
  • Acknowledge different starting points: Build delivery models that accommodate a range of skill levels and learning speeds without assuming prior experience.

5. Create Pathways for Continued Learning

As AI tools evolve and become more integrated into the workplace, workers will need clear opportunities to deepen their skills, pursue specialized training, or transition into AI-related occupations. Connecting workers to next-step resources ensures that AI literacy is not a one-time event, but a sustained capability that grows alongside the technology.

Delivery approaches include:

  • Advance to AI proficiency: Help participants move from basic AI literacy to more advanced proficiency, including more direct management of complex AI systems.
  • Encourage builder and entrepreneurship pathways: Support workers who want to go beyond using AI tools to build their own AI-powered solutions, including through entrepreneurship.
  • Design stackable learning models: Structure training in layers that build from foundational literacy to greater skills in areas like data handling, AI tool configuration, or prompt engineering.
  • Offer occupation-specific progressions: Align continued learning with the specific tasks, tools, and responsibilities associated with different job roles or career stages.
  • Support pathways into AI-related careers: Highlight next steps for workers interested in transitioning toward AI-centric occupations, such as AI product specialists, prompt engineers, or data analysts.

6. Prepare Enabling Roles

AI literacy efforts are more successful when the people supporting workers, such as managers, trainers, mentors, or career counselors, are equipped with the right knowledge and tools to guide others effectively. These individuals are not just secondary learners; they require tailored approaches to AI literacy that reflect their unique roles in enabling others.

Delivery approaches include:

  • Train-the-trainer models: Equip instructors, coaches, or facilitators with targeted AI literacy content and methods to deliver, reinforce, and contextualize learning for others.
  • Manager upskilling: Provide AI literacy focused on use cases relevant to team oversight, change management, and integrating AI tools into daily operations.
  • Career navigation support: Tailor AI literacy for career counselors or mentors so they can guide learners on how AI tools impact job search, career growth, and evolving skill needs.
  • Peer learning champions: Identify and train peer leaders with the right framing to serve as accessible, informal sources of support and enthusiasm within teams.
  • HR and L&D alignment: In a corporate setting, ensure those leading key learning functions understand how to embed AI literacy across onboarding, upskilling, and internal mobility pathways.

7. Design for Agility

AI technologies evolve at a pace unlike previous workplace tools. New capabilities, platforms, and use cases emerge every few months, while older tools become obsolete just as quickly. For workforce programs, this means that AI literacy cannot be treated as a fixed curriculum. Training must be designed with built-in mechanisms for adaptation, so content and delivery stay current with the technology landscape.

Delivery approaches include:

  • Continuous content updates: Build delivery systems that allow for regular refreshes of tools, examples, and instructional content to reflect current AI capabilities.
  • Feedback-driven iteration: Use learner input and real-world outcomes to revise delivery methods and content based on what’s working in practice.
  • Modular content design: Structure training in flexible units that can be swapped, expanded, or reordered as new needs or technologies emerge.
  • Responsive use case selection: Revisit and revise scenarios periodically to ensure alignment with the latest workplace applications of AI.
  • Outcome-driven iteration: Evaluate whether participants are gaining practical, transferable AI skills, and use those insights to adapt and refine delivery strategies.

A Framework Made to Evolve

With this framework, DOL hopes to establish a clear starting point, while also committing to evolving it over time to ensure continued relevance. DOL plans to explore ways to identify and share promising AI literacy models, tools, and training approaches that help translate this framework into action.

Quick Bits

Weekend Reads

ICYMI from Benton

Upcoming Events

Mar 3 – Less Hype, More Help: AI That Improves Safety, Productivity, and Care (Senate Commerce Committee)

Mar 5 – Telecom Act at 30: Universal Service as the North Star (Benton Institute for Broadband & Society)

Mar 5 – Context Matters: Building Trust in Digital Content (Information Technology and Innovation Foundation)

Mar 11 – The State of State Privacy (Information Technology and Innovation Foundation)

Mar 18 – The Telecom Act at 30 (Technology Policy Institute)

Mar 26 – March 2026 Open Federal Communications Commission Meeting (Federal Communications Commission)
