Abstract

Introduction:

Digital competence is critical in health professions education. This study examined how structured, hands-on engagement with AI influences master’s students’ perspectives on its use in practice and study, employing the DigCompEdu framework and considering innovation competence as a learner-oriented skill.

Materials and methods:

Pre- and post-surveys were conducted in an interprofessional health–technology course (baseline n = 47; end-survey free-text n = 34). The baseline items assessed prior exposure and expectations, while the end-survey narratives were analysed using reflexive thematic analysis (inductive, semantic). DigCompEdu served as a guide for interpretation rather than as a strict codebook. Innovation competence was defined as the ability to identify opportunities, experiment with small changes while ensuring they work, and collaborate across disciplines.

Results:

Five key themes emerged, reflecting students’ future orientation: (1) Future clinical integration—AI integrated into routine work (such as decision support, documentation, and monitoring), while maintaining human judgement; (2) Learning trajectories—Ongoing use of AI for tasks like scoping, structuring, and summarising, along with verification routines and boundaries to prevent over-reliance; (3) Professional identity & interprofessional roles—AI viewed as a collaborator with a readiness to bridge clinical and engineering perspectives; (4) Ethical guardrails and supports—Calls for guidelines, increased governance literacy, and the provision of time and tools for verification; and (5) Innovation in practice—Small-scale AI-supported innovations tested through iterative trials. These themes primarily aligned with A6 (Facilitating learners’ digital competence), particularly A6.1 (information and media literacy), A6.3 (digital content creation), A6.4 (responsible use), and A6.5 (digital problem solving), along with contributions from A1 (Professional engagement) and A2/A3; A4 (Assessment) was less represented. The baseline context highlighted limited prior exposure to AI, with 64% of participants reporting little or no AI experience.

Conclusion:

Short, hands-on AI experiences, when combined with clear ethical guidelines, can foster the skills needed for the future and contribute to an AI-enhanced professional identity. Programmes should align their learning outcomes with the DigCompEdu framework. It is essential to pair hands-on AI with verification processes and brief primers on governance and ethics. Additionally, programmes should practise service and process innovation, such as Plan-Do-Study-Act cycles, implementation canvases, and usability walk-throughs, through interprofessional co-design. Addressing the A4 gap (Assessment) with low-stakes activities that compare AI-generated feedback with traditional rubrics is also necessary. These strategies will help graduates become not only proficient in using the tools, but also ethically grounded, capable of innovation, and prepared for careers in AI-enabled healthcare.

1 Introduction

1.1 Digital competence as a core requirement in healthcare

Recent literature emphasizes that digital competence has become a fundamental requirement for modern health professionals, and this necessity increasingly extends to those in engineering fields involved with healthcare technology. The European Union explicitly identifies digital competence as one of the key competencies for lifelong learning, highlighting its critical importance in professional life, particularly in healthcare (Mainz et al., 2024).

A 2024 scoping review by Mainz et al. (2024) found broad consensus in the healthcare sector regarding the necessity of digital literacy and competence across various health professions. This competency is crucial for effective performance in contemporary healthcare settings, where digital tools and data play integral roles in clinical decision-making, patient monitoring, and health information management. Furthermore, engineering students involved in developing and implementing health technologies must achieve a high level of digital competence, akin to that of students in health-related programmes. They are likely to engage in creating innovations such as patient-monitoring devices, health informatics software, or AI-driven diagnostic tools. Mastery of digital skills allows these future engineers to design solutions that meet clinical needs and smoothly integrate into healthcare workflows, ultimately benefiting patient care.

International frameworks now position digital competence at the core of health-profession education. The European Commission’s Digital Competence Framework for Educators (DigCompEdu) encourages educators, including those in health, to continuously enhance both their digital skills and pedagogical approaches. It emphasizes areas such as professional collaboration (e.g., sharing and co-creating knowledge) and ongoing professional learning with digital resources (Redecker and Punie, 2017; The Joint Research Centre: EU Science Hub, 2017).

In parallel, the citizen-facing framework has been updated to DigComp 3.0 (27 Nov 2025), which keeps the overall structure while updating competence wording, revising proficiency levels (now four), adding learning outcomes, and integrating AI competence transversally across all 21 competencies (The Joint Research Centre: EU Science Hub, 2025). Its development was guided by content themes: AI (including generative AI), cybersecurity, digital rights/choice/responsibilities, wellbeing in digital environments, and tackling mis/disinformation.

It also sets out application themes: digital competence as part of lifelong learning; recognition of prerequisites for reaching a basic level; recognition of differences in needs across individuals and over time; and the need for flexible, agile application of the framework. Together, these frameworks underline the need for continuous upskilling in health education, with AI embedded across domains rather than treated as a stand-alone topic.

Despite progressive policy mandates, implementation lags: many health programmes struggle to keep pace with AI and related technologies, so students often graduate with limited hands-on experience, an implementation gap confirmed by recent reviews showing non-standardised AI training and inconsistent coverage of ethics, data literacy, and practical tool use (Rincón et al., 2025; Shishehgar et al., 2025). Authoritative stakeholders echo this concern: a 2024 National Academy of Medicine convening noted minimal formal guidance in medical and nursing curricula despite growing AI investment in health systems, and similar gaps persist in public health education where machine-learning and large-language-model training remains scarce (National Academy of Medicine, 2024). In response, bodies such as the WHO call for deliberate curriculum redesign, embedding AI literacy, data science/informatics, and AI ethics; expanding experiential learning; and investing in faculty development, to prepare an “AI-ready” workforce. The present study contributes to this agenda by showing how structured, hands-on AI experiences, interpreted through DigCompEdu, can surface concrete design targets (e.g., strengthening A4-oriented assessment) to close the policy–practice gap.

1.2 Disruptive technologies and the need for innovation competence

Emerging literature links an innovation mindset with digital competence in healthcare. Disruptive technologies can augment (not replace) professional practice by elevating uniquely human skills—problem-solving, critical thinking, and adaptive creativity—provided clinicians are competent and confident in integrating these tools into care (Meskó et al., 2017). As Meskó et al. note, digital tools improve outcomes only when cultural and human factors are addressed; clinicians need creative problem-solving skills and openness to work effectively with AI. Accordingly, we treat innovation competence, particularly at the service/process level, as integral to DigCompEdu A6 (information and media literacy, responsible use, digital problem solving) and A1 (professional engagement).

Innovation in healthcare ecosystems involves the co-creation and orchestration of technological, organisational, and process solutions that enhance service delivery, stakeholder engagement, and value creation across networked actors (Adner, 2017; Pikkarainen et al., 2017). A useful distinction separates service/process innovation (new ways of organising care, workflows, and delivery) from product/technology innovation (new tools, devices, or software) (Greenhalgh et al., 2004; Omachonu and Einspruch, 2010). Both matter, but they engage different stakeholders and competencies.

Service innovation in healthcare is often enabled by digital platforms and data-driven capabilities, addressing evolving patient needs and system-level challenges (Lusch and Nambisan, 2015; Pikkarainen et al., 2022). Such innovation is increasingly central to professional practice and frequently conditions the successful integration of new technologies (Shaw et al., 2018). Creating an innovative culture relies on empowered teams, effective communication, and supportive leadership; recent work describes healthcare innovation as a dynamic interplay between technological advances and evolving delivery systems (Kosiol et al., 2024). By contrast, product/technology innovation (e.g., drugs, devices, AI algorithms) is typically led by industry and engineering teams, with clinicians participating as users, co-designers, or testers (Alami et al., 2020). Given this division of labour, the primary innovation focus for health professionals is often service/process improvement, while high-tech product development proceeds through interdisciplinary, technology-led approaches.

1.3 This study

These distinctions have significant implications for education and professional identity. Graduates need to be prepared to innovate in practice by adapting workflows, improving care processes, and critically integrating tools, without necessarily writing code for devices or algorithms. Studies on AI-enabled decision support show identity tensions among clinicians. While some feel threatened by a perceived loss of autonomy and expertise, others view AI as a tool that enhances their capabilities (Ackerhans et al., 2024). Therefore, educators and leaders should foster a professional identity that values continuous learning, interprofessional collaboration, and innovation, equipping future practitioners to lead rather than resist digital transformation.

This study examines how master’s-level students in an interdisciplinary health-technology course envision the role of AI in their future clinical practice and academic studies after structured, hands-on engagement with the technology. We use the DigCompEdu framework to analyse students’ future-oriented perspectives regarding digital-competence areas and the supports they consider necessary for the development of professional identity, innovation in practice, and readiness for the future. To address this interdisciplinary relevance, we designed and conducted a course that brought together students from both health and engineering disciplines, ensuring future clinicians and technology developers alike cultivate these essential skills.

  • RQ1. How do students anticipate applying AI in future clinical work and academic learning, and which DigCompEdu-aligned competence areas are reflected in these expectations?

  • RQ2. What ethical/safety conditions and institutional supports do students identify as essential for the responsible future use of AI?

  • RQ3. How can students’ course use of AI be understood as evidence of innovation competence (i.e., the capacity to identify opportunities, iteratively improve workflows, and collaborate across disciplines to implement change)?

2 Materials and methods

2.1 Study design and course overview

This study used a pre- and post-survey evaluation of a master’s course. The Interactions in Health and Technology course focuses on interdisciplinary collaboration, technological innovation, and user-centred design in healthcare. Students work in groups (4–6) to develop case proposals addressing health-technology issues, often engaging with external stakeholders. The course includes lectures, hands-on workshops, and case-based learning, emphasizing AI integration, ethical considerations, and sustainable healthcare solutions. Key topics include technological research, system design, user engagement, and innovation, with contributions from health services and industry professionals. Through a mix of theory and hands-on work, students gain insights into public and private stakeholder roles while addressing real-world healthcare challenges.

2.2 Data collection

The data were collected electronically via a pre-survey and a post-survey. The pre-survey took place about 2 weeks before the course and collected data on students’ professional backgrounds, previous experience with interdisciplinary work and technology innovation, and prior experience using AI.

The post-survey was conducted immediately after the course. This survey was a course evaluation and additionally contained the following open-ended questions related to AI’s role in their future clinical practice and studies:

  • What were the most important lessons you learned about using AI during the course?

  • How did AI influence group work at various stages of the course?

  • What surprised you the most about the use of AI during the course?

  • Can you think of any downsides of using AI for your studies?

  • How will your experience with AI in this course influence its use in your future studies?

  • In what ways do you think AI will transform future work in your discipline?

2.3 Data analysis

We analysed open-ended responses from the end-of-course survey (n = 34), using reflexive thematic analysis (RTA) (Braun and Clarke, 2019; Braun and Clarke, 2021) from a constructivist position. Our approach was primarily inductive and semantic, focusing on participants’ explicit meanings. Analysis proceeded iteratively through (i) familiarisation (repeated reading, note-taking), (ii) initial coding of all relevant segments in Excel, (iii) collation of codes into candidate themes, (iv) review of themes against coded data and the full dataset, and (v) defining and naming themes by specifying each theme’s central organising concept. To support transparency and reflexivity, the primary analyst maintained analytic memos, an audit trail, and evolving theme maps. All co-authors were given access to the anonymised raw data and coding materials and were invited to interrogate and, where appropriate, challenge emerging interpretations in regular “critical-friend” discussions; some co-authors independently read the full dataset or subsets to probe alternative explanations and refine theme boundaries. We actively sought variation and contrasting examples and selected illustrative quotations to convey both typicality and range.

The DigCompEdu framework was used as a sensitising framework at the interpretation/reporting stage (not as a deductive codebook during coding) (The Joint Research Centre: EU Science Hub). DigCompEdu describes 22 competencies, organized in the following six areas (see Figure 1): Professional engagement (A1); Digital resources (A2); Teaching and learning (A3); Assessment (A4); Empowering learners (A5); Facilitating learners’ digital competence (A6). We translated typical educator-oriented areas into learner-facing competence descriptors to help articulate and name patterns of shared meaning without constraining inductive coding. To illustrate themes, we include 1–2 anonymised verbatim quotations per theme in the Results. Quotes were minimally edited for readability (ellipses for omissions; brackets for clarifications) and, where applicable, translated from Norwegian into English. In Table 1, the column “Evidence from data (illustrative)” presents participant-proximal, synthesised data extracts: short paraphrases that preserve participants’ phrasing and meaning. These extracts provide descriptive evidence (not interpretive claims).

Baseline item Percentage
Students with little or no prior AI experience 64%
Students confident in explaining what AI is 28%
Believed AI will be important in future healthcare 71%
Wanted to learn more about AI in their own profession 77%
Expected course to help them think critically about AI in practice 83%
Believed ethical considerations around AI should be part of curriculum 75%

Table 2. Baseline familiarity, expectations, and learning needs related to AI (N = 47).
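As a quick sanity check, the reported percentages can be converted back into approximate respondent counts for N = 47. The sketch below uses shortened paraphrases of the table rows as item labels; because the published percentages are rounded, the recovered counts are approximate.

```python
# Convert the reported baseline percentages (rounded) back into
# approximate respondent counts for N = 47. Item labels are
# shortened paraphrases of the table rows, for illustration only.
N = 47
baseline_pct = {
    "little or no prior AI experience": 64,
    "confident explaining what AI is": 28,
    "AI important in future healthcare": 71,
    "want to learn more about AI": 77,
    "expect help thinking critically about AI": 83,
    "AI ethics should be in the curriculum": 75,
}
counts = {item: round(pct / 100 * N) for item, pct in baseline_pct.items()}
# e.g., 64% of 47 corresponds to roughly 30 students
```
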

For RQ1 (anticipated uses of AI), we coded all segments that described intended or envisioned applications of AI in study/clinical work (e.g., evidence triage, documentation support, scoping/structuring), grouped these into patterns of practice, and then situated them at the interpretation stage within relevant DigCompEdu areas (primarily A6, with contributions from A1–A3). For RQ2 (risks and conditions for responsible use), we coded statements naming downsides (privacy, bias, accuracy/“hallucinations,” over-reliance) alongside enabling conditions (guidance, governance literacy, hands-on training, time/tools for verification) and interpreted them against A6.4 Responsible use (and A1 for organisational supports). For RQ3 (innovation competence), we operationalised a learner-facing, service/process view (the ability to identify opportunities, run small tests of change, verify/mitigate risks, and collaborate across disciplines) and used these facets as interpretive prompts (not a deductive codebook) when defining and naming themes, consistent with DigCompEdu (A6/A1) and PDSA-based quality-improvement approaches (Redecker and Punie, 2017; Taylor et al., 2014; Langley et al., 2009).
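The step of collating coded segments and tallying where they cluster across DigCompEdu areas can be sketched in code. The example segments and their area assignments below are invented placeholders for illustration, not the study’s actual codes.

```python
from collections import Counter

# Hypothetical (code, DigCompEdu area) pairs standing in for coded
# survey segments; the codes and area assignments are illustrative
# placeholders only, not data from this study.
coded_segments = [
    ("evidence triage", "A6"),
    ("documentation support", "A6"),
    ("verification routine", "A6"),
    ("governance literacy need", "A1"),
    ("AI-assisted scoping for study", "A3"),
]
area_tally = Counter(area for _, area in coded_segments)
most_common_area, n_segments = area_tally.most_common(1)[0]
# A concentration in A6 would mirror the clustering reported in the Results.
```
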

2.4 Ethical considerations

Approval for the study was applied for and obtained from the Norwegian Agency for Shared Services in Education and Research (SIKT) (ref. no. 397521). Participants were informed of their right to withdraw at any time without any consequences. Written informed consent was obtained from participants.

3 Results

Descriptive information on participants’ prior AI experience was collected in the pre-survey (n = 47). Most participants had backgrounds in radiography and biomedical laboratory science, with additional representation from pharmacy, nursing, physiotherapy, radiation therapy, medicine, biomedicine/biotechnology/molecular biology, public health/health economics & policy, and related design/engineering disciplines (e.g., industrial design, chemical engineering).

Among the 40 respondents who answered the career-alignment question, most expected the course to support their career plans by building competence in AI-enabled diagnostics and clinical workflows (e.g., CT/MR, documentation, automation), strengthening interprofessional collaboration, and providing tools for innovation/service design (user needs, usability, implementation). A smaller group were uncertain about direct relevance (notably in biomedical laboratory science), while several already using AI anticipated clearer governance/verification practices and potential pathways in policy, research, and digitalisation.

Table 2 summarizes students’ initial familiarity with artificial intelligence (AI), their expectations for its role in future healthcare, and their interest in learning more about AI in relation to their profession. The data highlight a general optimism about AI’s relevance, coupled with limited prior experience and a strong desire for structured learning and ethical guidance. We present five themes; DigCompEdu area codes are shown in brackets (Table 1). Labels use official DigCompEdu terminology. To provide participant voice without overloading Table 1, we include 1–2 anonymised exemplar quotations under each theme.

Theme 1: Future clinical integration
Primary DigCompEdu area(s): A6 Facilitating learners’ digital competence; A1 Professional engagement
Key sub-competencies (examples): A6.1 Information and media literacy; A6.5 Digital problem solving; A1.2 Professional collaboration
Evidence from data (illustrative): Save time on documentation/triage to spend more time with patients; keep a human check and translate AI output into safe decisions
Gaps/opportunities: Develop local adaptation workflows and accountability maps; rehearse appraisal of model limits in clinical context

Theme 2: Learning trajectories
Primary DigCompEdu area(s): A3 Teaching and learning; A2 Digital resources; A6 Facilitating learners’ digital competence
Key sub-competencies (examples): A3.4 Self-regulated learning; A2.1 Selecting digital resources; A6.1 Information and media literacy; A6.5 Digital problem solving
Evidence from data (illustrative): Use AI to scope/structure and summarise, then verify/cite; set clear limits to avoid over-reliance; carry these routines into CPD
Gaps/opportunities: Introduce a concise “prompt → verify” log; require dual-source checks and citation hygiene

Theme 3: Professional identity and interprofessional roles
Primary DigCompEdu area(s): A1 Professional engagement; A6 Facilitating learners’ digital competence
Key sub-competencies (examples): A1.2 Professional collaboration; A1.4 Digital CPD; A6.2 Digital communication and collaboration
Evidence from data (illustrative): See AI as a collaborator; act as translators between clinical and engineering perspectives while keeping patient-centred judgement
Gaps/opportunities: Make interprofessional co-design explicit (mixed teams, mentor continuity, short sprints with checkpoints)

Theme 4: Ethical guardrails and supports
Primary DigCompEdu area(s): A6 Facilitating learners’ digital competence; A1 Professional engagement
Key sub-competencies (examples): A6.4 Responsible use (safety); A6.1 Information and media literacy; A1.3 Reflective practice
Evidence from data (illustrative): Privacy, bias, accuracy/“hallucinations,” over-reliance; want clear guidance, governance literacy, and time/tools to verify with human oversight
Gaps/opportunities: Add a brief governance primer (risk categories, documentation); provide time/tools for verification and audit trails

Theme 5: Innovation in practice (service/process)
Primary DigCompEdu area(s): A6 Facilitating learners’ digital competence; A1 Professional engagement
Key sub-competencies (examples): A6.5 Digital problem solving; A1.2 Professional collaboration
Evidence from data (illustrative): Try small workflow changes (templates, triage, decision scaffolds) with quick tests/iteration and collaboration with engineers
Gaps/opportunities: Rehearse PDSA-style cycles; use a simple implementation canvas; run quick usability walk-throughs
Table 1. Themes aligned to DigCompEdu (areas and example sub-competencies), with evidence and gaps.

DigCompEdu areas: A1 Professional engagement; A2 Digital resources; A3 Teaching and learning; A4 Assessment; A5 Empowering learners; A6 Facilitating learners’ digital competence. Some sub-competencies are presented as learner-facing analogues for clarity in a student dataset.

3.1 Theme 1: future clinical integration [A6, A1]

Students envisioned AI becoming embedded in everyday clinical work, such as streamlining documentation, triaging information, and supporting decision-making pathways. They expected clinicians to act as informed evaluators, translating algorithmic outputs into clinically responsible actions while preserving professional accountability.

“AI can support monitoring, interpret medical images, assist in surgery, and help track patient flow in the emergency department.”

“Recognizing more pathology on radiologic images and providing faster on-site results… could reduce radiologists’ workload.”

3.2 Theme 2: learning trajectories [A3, A2, A6]

By the end of the course, many reported concrete plans for using AI to scope and structure their learning tasks, complemented by verification routines and citation management. Students framed AI as a study aid that supports their academic reasoning and self-regulation rather than replacing it.

“I’ll use AI more in my studies, but in a critical way.”

“This experience encourages me to integrate AI for research, organisation and idea generation to improve efficiency in my studies.”

3.3 Theme 3: professional identity and interprofessional roles [A1, A6]

The exposure gained during the course led some to reframe AI from being merely a ‘tool’ to being a ‘collaborator’ that facilitates communication across professional boundaries. Students saw themselves as translators between clinical and engineering perspectives, emphasizing collaboration while retaining a patient-centred approach to decision-making.

“Automate hospital processes to free time for demanding tasks and shorten patient wait times.”

“Streamline booking, coordination and paperwork, fewer phone calls and a more efficient workflow.”

3.4 Theme 4: ethical guardrails and supports [A6, A1]

Optimism about AI was tempered by caution. Students frequently raised concerns about privacy, bias, accuracy (including ‘hallucinations’), and the risk of over-reliance on AI. They highlighted the need for governance literacy, institutional guidance, and adequate time and resources for verification to enable responsible and well-supervised use of AI.

“AI will have a big influence, but safety questions must be solved before it’s safe to use.”

“I learned how to use it properly and safely, e.g., never share sensitive information with ChatGPT.”

3.5 Theme 5: innovation in practice (service/process) [A6, A1]

In line with RQ3, students’ adjustments to their workflows using AI, such as templating notes, triaging information, and scaffolding decision paths, were seen as micro-innovations. They considered iteration, testing, and collaboration with technical peers essential for ensuring that AI remains useful, safe, and centred on patient care.

“Increase efficiency and free time for patients, AI could manage booking, provide patient-centred information, and help coordination.”

“Automate routine tasks and improve data analysis, enhancing decisions and letting professionals focus on critical and creative work.”

3.6 Alignment to the DigCompEdu framework

We aligned each theme to the DigCompEdu framework to indicate where competencies cluster and where gaps remain (Table 1); the ‘Evidence from data (illustrative)’ column presents participant-proximal, synthesised extracts (concise paraphrases of respondents’ wording).

As Table 1 shows, evidence concentrated in A6 (Facilitating learners’ digital competence) and A1 (Professional engagement), whereas A4 (Assessment) was sparsely represented, highlighting a need to strengthen evaluative judgement and feedback practices.

4 Discussion

This study examined how master’s-level students, after structured engagement with AI in an interprofessional health-technology course, envision AI’s role in their future clinical practice and studies. As summarised in Table 1, students’ accounts mapped chiefly to A6 (Facilitating learners’ digital competence) and A1 (Professional engagement), with contributions from A2 (Digital resources) and A3 (Teaching and learning); A4 (Assessment) was under-represented, signalling a concrete opportunity to strengthen evaluative judgement and feedback practices. Together with prior research on digital competence and AI in health education, this framing clarifies the competencies and supports students deem necessary for professional identity development and future readiness (Mainz et al., 2024; Meskó et al., 2017).

The five themes spanned future clinical integration, learning trajectories, professional identity and interprofessional roles, ethical guardrails and supports, and innovation in practice. Students anticipated the near-term integration of AI tools and described practical learning paths that combine AI-assisted scoping and structuring with clear verification processes. They positioned themselves as translators between clinical and technical perspectives, while also advocating for institutional support, and framed AI-enabled workflow adjustments as micro-innovations, developed through iterative testing with a focus on patient-centred care.

Importantly, issues of safety and ethics belong under A6.4 (Responsible use) rather than A4 (Assessment). In our data, “responsible use” appeared in learner-facing terms (privacy, bias, accuracy/“hallucinations,” and over-reliance), highlighting needs for governance literacy, explicit oversight, and time/tools for verification within courses. In sum, the findings map chiefly to A6/A1, with A4 under-represented; the three RQs are elaborated below.

4.1 Addressing the A4 assessment gap

Students engaged most with A6 (specifically, A6.1, A6.3, A6.4, A6.5) and A1, while A4 (Assessment) remained under-represented. Rather than a shortcoming, this is a constructive diagnostic that can guide curriculum design. Low-stakes activities, such as comparing AI-generated feedback with rubric-based feedback, calibrating criteria against exemplars, and writing brief reflect-revise justifications, can enhance coverage of A4 and develop evaluative judgement without undermining human oversight (Redecker and Punie, 2017; Rincón et al., 2025).

The recently launched DigComp 3.0 redefines AI as a transversal competence that cuts across the other digital domains: cybersecurity, rights/responsibilities, wellbeing, and mis/disinformation (The Joint Research Centre: EU Science Hub, 2025). This shifts the focus from merely ‘using a tool’ to exercising judgement, governance, and safe practice within each of these domains. In health-profession education, this means embedding AI literacy across these related areas: for example, pairing the use of models with privacy/security routines, situating prompting and verification within information/media literacy, and aligning assessment with responsibility and critical appraisal, rather than treating AI as a stand-alone topic.

4.2 Service/process innovation as a core graduate capability

Students demonstrated nascent service/process innovation through AI-supported micro-innovations (templating, triaging, decision-pathway scaffolds), paired with explicit verification routines and plans for health–engineering collaboration. Read through an ecosystem lens, these practices reflect co-creation and orchestration of service change that clinicians often lead and map to DigCompEdu A1 (Professional engagement) and A6 (digital problem solving/responsible use) (Lusch and Nambisan, 2015; Pikkarainen et al., 2022).

A useful distinction remains service/process innovation (new ways of organising care, workflows, and delivery) versus product/technology innovation (new tools, devices, software). The former is typically clinician-led within local ecosystems; the latter usually requires engineering/industry partnership and platform capabilities (Greenhalgh et al., 2004; Omachonu and Einspruch, 2010). To cultivate the service-innovation capability, programmes can embed short Plan–Do–Study–Act (PDSA) cycles, use a simple implementation canvas (problem → proposed change → verification plan → risks/oversight), and run brief usability walk-throughs with end users. Each micro-change should include lightweight metrics (e.g., time-on-task, error/omission rate, patient understanding) and A6.4 Responsible use checks (privacy, bias, documentation). This moves beyond “implementation” toward digital-health service design, while reserving product-level development for co-design with engineering/industry (Kosiol et al., 2024).
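The implementation canvas and lightweight metrics described above could be captured in a simple record. The sketch below is a hypothetical illustration under stated assumptions: the field names, example change, and metric values are invented for this sketch, not a published instrument or data from the study.

```python
from dataclasses import dataclass, field

# A minimal, hypothetical sketch of the implementation canvas
# (problem -> proposed change -> verification plan -> risks/oversight),
# with the lightweight metrics suggested for each micro-change.
@dataclass
class ImplementationCanvas:
    problem: str
    proposed_change: str
    verification_plan: str
    risks_oversight: str
    metrics: dict = field(default_factory=dict)  # e.g., time-on-task, error rate

canvas = ImplementationCanvas(
    problem="Drafting discharge summaries takes too long",
    proposed_change="AI-drafted template that the clinician edits and signs off",
    verification_plan="Dual-source check against the record; keep an error log",
    risks_oversight="Privacy review; no patient identifiers in prompts",
    metrics={"time_on_task_min": 18, "omission_rate": 0.02},
)
```

A PDSA “Study” step would then compare these metrics against a pre-change baseline before deciding whether to adopt, adapt, or abandon the micro-change.
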

4.3 Professional identity in an AI era

Students viewed AI as a collaborator rather than just a tool, which supports their identities as AI-augmented professionals who retain judgement, communication skills, and empathy (Ackerhans et al., 2024; Cruess et al., 2014). To normalise this identity without undermining human agency, programmes can embed brief, structured reflections, such as error audits, accountability maps, and short memos on “where my professional judgement adds value,” linked to A1 (Professional engagement) and A6.4 (Responsible use) (Braun and Clarke, 2019; Braun and Clarke, 2021).

4.4 Interprofessional readiness

Given the mixed cohort, students anticipate collaboration between health and engineering regarding usability, integration, and governance. Short co-design sprints mirror real-world deployment, build a common language, and cultivate habits that make AI both usable and safe (Greenhalgh et al., 2004; Guidance WHO, 2021; Rincón et al., 2025). This interprofessional stance aligns with A1 (Collaboration/Professional engagement) while product-level development remains reserved for partnerships with engineering and industry.

4.5 Ethical guardrails and institutional supports for responsible AI use

Students’ optimism is consistently paired with ethical caution, particularly regarding privacy, bias, and the conditions for deployment, which they frame as prerequisites rather than barriers. In the context of DigCompEdu, these concerns fall under A6.4 (Responsible use) and point to specific supports: guidelines, regulatory/governance literacy, hands-on training, and time/tools for verification (Guidance WHO, 2021; Vuorikari et al., 2022).

A practical programme-level support framework could include: (i) concise data-handling and prompt-safety guidance; (ii) brief model/feature cards that state known limitations; (iii) embedded verification routines (dual-source checks, error logs); (iv) short governance primers (risk categories, documentation, accountability); and (v) targeted faculty development. These measures operationalise responsible use and align with sector recommendations (Rincón et al., 2025; Mainz et al., 2024).

4.6 Implications

To convert cautious optimism into practical competence, programmes should combine hands-on AI use with verification routines, governance literacy, A4-focused assessment activities (such as comparing AI feedback to rubrics and calibrating criteria), interprofessional co-design, and faculty development. This approach embeds the DigCompEdu competencies A6 (information and media literacy, responsible use, digital problem solving) and A1 into everyday learning. It responds to health-education and workforce bodies’ calls for structured, experiential AI training (Guidance WHO, 2021; Redecker and Punie, 2017). As a result, graduates will not only be more competent with tools but also ethically grounded and prepared for the future.

4.7 Study limitations

In interpreting these findings, several limitations of the study should be acknowledged. First, the study relied on self-reported data rather than objective measures of behaviour or performance. While self-reports capture students’ perceptions and intentions, they are subject to biases, such as social desirability or recall bias, that may not accurately reflect actual engagement or learning outcomes (Chan and Hu, 2023). Second, the participants were drawn from a single master’s-level course at one institution, which limits the generalizability of the results beyond this context (Divya et al., 2022). The specific educational setting and student population in this course may not represent other programmes or institutions, so caution is warranted when extrapolating these findings to broader health professions education. Finally, the short timeframe between the baseline and end-of-course surveys (within one academic term) captures only immediate changes in attitudes, not long-term effects. It is unclear whether the observed shifts in perspective would persist or translate into practice over time, underscoring the need for longitudinal studies to track sustained impacts (Chan and Hu, 2023).

4.8 Future directions for research and policy

Priority next steps are (1) following graduates longitudinally to test whether AI-integrated education predicts responsible use (A6.4), evaluative judgement (A4), and effective adoption in practice, triangulating self-report with observed tasks, artefact analysis, and OSCE-style assessments; (2) comparing pedagogical models (dedicated AI courses vs. integrated modules; simulation-based vs. classroom) using common outcomes (AI literacy, information/media practices, verification behaviour, ethical reasoning) (Redecker and Punie, 2017; Rincón et al., 2025); (3) conducting multi-site/international replications to examine curricular and cultural effects; (4) operationalizing service-innovation competence by measuring micro-change cycles (e.g., PDSA), interprofessional co-design, and lightweight metrics (time-on-task, error/omission rates, documentation quality, patient understanding); and (5) evaluating A4-oriented assessment designs (AI-feedback-vs-rubric comparisons, criteria calibration, reflect-revise justifications) for their impact on students’ evaluative judgement (Redecker and Punie, 2017; Rincón et al., 2025).

Instead of creating new frameworks, it is advisable to align curricula and accreditation with DigCompEdu and the recent DigComp 3.0, which treats AI as a transversal competence that intersects cybersecurity, rights/responsibility, wellbeing, and mis/disinformation (The Joint Research Centre: EU Science Hub, 2017, 2025). Specifically, oversight bodies should: (a) specify core AI/digital-health competencies mapped to DigCompEdu areas and embed them in programme learning outcomes and assessment blueprints; (b) require faculty development so educators can confidently supervise AI-assisted learning; and (c) provide resources for governance and ethics primers, time/tools for verification, and interprofessional co-design that reflects real deployment (Bekiaridis and Attwell, 2024; Redecker and Punie, 2017; Vuorikari et al., 2022). At the institutional level, clear AI-use policies, seed funding for curriculum innovation, and platforms for sharing exemplars/rubrics will help programmes keep pace while safeguarding quality and equity (Mainz et al., 2024; Shishehgar et al., 2025).

5 Conclusion

Structured, hands-on engagement with AI can reshape how master’s students in an interprofessional health technology course envision their future practice and studies. Viewed through DigCompEdu, students’ accounts align primarily with A6 (Facilitating learners’ digital competence), particularly information and media literacy, digital content creation, responsible use, and digital problem solving, and with A1 (Professional engagement), with contributions from A2/A3; A4 (Assessment) was under-represented. For RQ1, students anticipate using AI for evidence triage, documentation support, and structuring their studies while retaining human judgement. For RQ2, they specify conditions for responsible use: clear guidance, governance literacy, hands-on training, and time/tools for verification. For RQ3, they demonstrate service/process innovation via AI-supported micro-changes and verification routines, express their intention to co-design with engineers, and articulate an AI-augmented professional identity that safeguards empathy, communication, and accountability.

For educators and programme leaders, the implications are practical and immediate. Programmes should combine authentic AI use with verification routines and brief governance/ethics primers; practise service and process innovation through short Plan–Do–Study–Act cycles, a simple implementation canvas, and usability walk-throughs; and address a notable gap in the framework by embedding low-stakes, A4-oriented assessment activities, such as comparing AI-generated feedback against rubrics and calibrating criteria to strengthen evaluative judgement. These steps align with the objectives of DigComp 3.0, which treats AI as a transversal competence integrated across related digital domains, reinforcing the value of teaching AI alongside privacy and security practices, rights/responsibility, wellbeing, and mis/disinformation rather than as a stand-alone topic. Together, these approaches will help graduates become not only proficient with tools but also ethically grounded and prepared to deliver AI-enabled healthcare.

Statements

Data availability statement

Anonymised materials supporting the findings, including the survey instruments, coding framework, theme maps, and a de-identified set of exemplar quotations, are available from the corresponding author on reasonable request.

Ethics statement

The studies involving humans were approved by the Norwegian Agency for Shared Services in Education and Research (SIKT) (ref. number 397521). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

YR: Writing – review & editing, Validation, Formal analysis, Methodology, Conceptualization, Supervision, Data curation, Project administration, Resources, Writing – original draft, Software, Investigation, Visualization, Funding acquisition. ML: Methodology, Writing – review & editing, Writing – original draft. SJ: Writing – original draft, Writing – review & editing. WA: Conceptualization, Writing – original draft, Writing – review & editing, Validation, Methodology. MP: Writing – review & editing, Writing – original draft.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was used in the creation of this manuscript. Generative AI (ChatGPT) was used only for language polishing and formatting suggestions. All study design, data analysis, interpretation, and final text were produced and verified by the authors.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  • Ackerhans, S., Huynh, T., Kaiser, C., and Schultz, C. (2024). Exploring the role of professional identity in the implementation of clinical decision support systems—a narrative review. Implement. Sci. 19:11. doi: 10.1186/s13012-024-01339-x

  • Adner, R. (2017). Ecosystem as structure: an actionable construct for strategy. J. Manag. 43, 39–58. doi: 10.1177/0149206316678451

  • Alami, H., Lehoux, P., Auclair, Y., De Guise, M., Gagnon, M.-P., Shaw, J., et al. (2020). Artificial intelligence and health technology assessment: anticipating a new level of complexity. J. Med. Internet Res. 22:e17707. doi: 10.2196/17707

  • Bekiaridis, G., and Attwell, G. (2024). Integrating artificial intelligence in vocational and adult education: a supplement to the DigCompEdu framework. Ubiquity Proc. 4:20. doi: 10.5334/uproc.142

  • Braun, V., and Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qual. Res. Sport Exerc. Health 11, 589–597. doi: 10.1080/2159676x.2019.1628806

  • Braun, V., and Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qual. Res. Psychol. 18, 328–352. doi: 10.1080/14780887.2020.1769238

  • Chan, C. K. Y., and Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20:43. doi: 10.1186/s41239-023-00411-8

  • Cruess, R. L., Cruess, S. R., Boudreau, J. D., Snell, L., and Steinert, Y. (2014). Reframing medical education to support professional identity formation. Acad. Med. 89, 1446–1451. doi: 10.1097/ACM.0000000000000427

  • Divya, G., Kundal, V. K., Debnath, P. R., and Kundal, R. (2022). Awareness of medical research among the resident doctors in a tertiary care hospital in India. J. Indian Assoc. Pediatr. Surg. 27, 673–676. doi: 10.4103/jiaps.jiaps_13_22

  • Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., and Kyriakidou, O. (2004). Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 82, 581–629. doi: 10.1111/j.0887-378X.2004.00325.x

  • Guidance WHO (2021). Ethics and governance of artificial intelligence for health. Geneva: World Health Organization.

  • Kosiol, J., Silvester, T., Cooper, H., Alford, S., and Fraser, L. (2024). Revolutionising health and social care: innovative solutions for a brighter tomorrow – a systematic review of the literature. BMC Health Serv. Res. 24:809. doi: 10.1186/s12913-024-11099-5

  • Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., and Provost, L. P. (2009). The improvement guide: a practical approach to enhancing organizational performance. San Francisco, CA: Jossey-Bass.

  • Lusch, R. F., and Nambisan, S. (2015). Service innovation. MIS Q. 39, 155–175. doi: 10.25300/MISQ/2015/39.1.07

  • Mainz, A., Nitsche, J., Weirauch, V., and Meister, S. (2024). Measuring the digital competence of health professionals: scoping review. JMIR Med. Educ. 10:e55737. doi: 10.2196/55737

  • Meskó, B., Drobni, Z., Bényei, É., Gergely, B., and Győrffy, Z. (2017). Digital health is a cultural transformation of traditional healthcare. mHealth 3:38. doi: 10.21037/mhealth.2017.08.07

  • National Academy of Medicine (2024). Digital health action collaborative. Workforce implications of artificial intelligence in health and medicine [Online]. Available online at: https://nam.edu/wp-content/uploads/2025/02/Nam-Lc_10.30-Meeting-Summary.pdf (Accessed August 6, 2025).

  • Omachonu, V. K., and Einspruch, N. G. (2010). Innovation in healthcare delivery systems: a conceptual framework. Innov. J. Public Sect. Innov. J. 15, 1–20.

  • Pikkarainen, M., Ervasti, M., Hurmelinna-Laukkanen, P., and Nätti, S. (2017). Orchestration roles to facilitate networked innovation in a healthcare ecosystem. Ottawa, Canada: Talent First Network.

  • Pikkarainen, M., Kemppainen, L., Xu, Y., Jansson, M., Ahokangas, P., Koivumäki, T., et al. (2022). Resource integration capabilities to enable platform complementarity in healthcare service ecosystem co-creation. Balt. J. Manag. 17, 688–704. doi: 10.1108/bjm-11-2021-0436

  • Redecker, C., and Punie, Y. (2017). European framework for the digital competence of educators: DigCompEdu. EUR 28775 EN. Luxembourg: Publications Office of the European Union.

  • Rincón, E. H. H., Jimenez, D., Aguilar, L. A. C., Flórez, J. M. P., Tapia, Á. E. R., and Peñuela, C. L. J. (2025). Mapping the use of artificial intelligence in medical education: a scoping review. BMC Med. Educ. 25:526. doi: 10.1186/s12909-025-07089-8

  • Shaw, J., Agarwal, P., Desveaux, L., Palma, D. C., Stamenova, V., Jamieson, T., et al. (2018). Beyond “implementation”: digital health innovation and service design. NPJ Digit. Med. 1:48. doi: 10.1038/s41746-018-0059-8

  • Shishehgar, S., Murray-Parahi, P., Alsharaydeh, E., Mills, S., and Liu, X. (2025). Artificial intelligence in health education and practice: a systematic review of health students’ and academics’ knowledge, perceptions and experiences. Int. Nurs. Rev. 72:e70045. doi: 10.1111/inr.70045

  • Taylor, M. J., McNicholas, C., Nicolay, C., Darzi, A., Bell, D., and Reed, J. E. (2014). Systematic review of the application of the plan–do–study–act method to improve quality in healthcare. BMJ Qual. Saf. 23, 290–298. doi: 10.1136/bmjqs-2013-001862

  • The Joint Research Centre: EU Science Hub (2017). Digital competence framework for educators (DigCompEdu) [Online]. Available online at: https://joint-research-centre.ec.europa.eu/digcompedu_en (Accessed January 25, 2026).

  • The Joint Research Centre: EU Science Hub (2025). Current developments on DigComp (2024–2025) [Online]. Available online at: https://joint-research-centre.ec.europa.eu/projects-and-activities/education-and-training/digital-transformation-education/digital-competence-framework-citizens-digcomp/current-developments-digcomp-2024-2025_en (Accessed November 5, 2025).

  • Vuorikari, R., Kluzer, S., and Punie, Y. (2022). DigComp 2.2: The digital competence framework for citizens – with new examples of knowledge, skills and attitudes. Luxembourg: Publications Office of the European Union.

Keywords

artificial intelligence, assessment and evaluative judgement, digital competence, health professions education, interprofessional learning

Citation

Røe Y, Lukic M, Johansen S, Admiraal W and Pikkarainen M (2026) Building digital competence for future healthcare through hands-on AI experience. Front. Educ. 11:1752683. doi: 10.3389/feduc.2026.1752683

Received

23 November 2025

Revised

29 January 2026

Accepted

03 February 2026

Published

04 March 2026

Volume

11 – 2026

Edited by

Jennifer Apolinário-Hagen, Heinrich Heine University of Düsseldorf, Germany

Reviewed by

Karen Jean Day, The University of Auckland, New Zealand

Luís Duarte Andrade Ferreira, Agência Regional para o Desenvolvimento da Investigação Tecnologia e Inovação (ARDITI), Portugal

Copyright

*Correspondence: Yngve Røe, yngveroe@oslomet.no
