The DT is a generic instrument that can integrate different technologies across the disease (and indeed health) spectrum underlying future scenarios of medicine. Our findings indicate a significant level of acceptance of DTs as a transformative technology in medicine, in line with the broader trend of increasing acceptance of AI in healthcare11. However, this acceptance is accompanied by specific conditions and concerns, which are reflected in our study’s recommendations. DTs are widely viewed as tools that should augment healthcare professionals rather than replace them. Respondents emphasize their role in improving fundamental aspects of medical practice, such as coordinating treatment and identifying health risks, whereas the perception of DTs as a personal health management tool remains limited. A strong preference for maintaining human medical expertise highlights the need for careful integration of DT technologies within existing healthcare frameworks.

Notably, there is strong opposition to any mandatory implementation of DTs. Individuals value the option of receiving traditional, non-digital healthcare, even if it may not be as optimized as AI-driven alternatives. This preference carries implications for healthcare policy and medical education, as future professionals must be trained to navigate both digital and conventional care pathways12.

While the potential benefits of DTs are widely acknowledged, concerns regarding privacy and data security remain prevalent. A majority of participants support anonymized health data sharing for medical advancements, but simultaneously express fears about being coerced into sharing personal data. Financial considerations also play a role in public perception: if DTs enhance the quality of medical care, most respondents find financial compensation for these services acceptable. However, skepticism persists regarding the role of private entities, such as tech and pharmaceutical companies, in developing and deploying DT technology. Instead, trust is primarily placed in public institutions, universities, and hospitals, reinforcing the need for state-led digital health initiatives. This echoes previously reported data13.

Our research highlights distinct priorities among stakeholder groups, which must be carefully balanced. Patient-consumers want to be empowered to make well-informed choices while maintaining justified trust in other healthcare stakeholders; healthcare professionals emphasize the integration of DT services into interprofessional teams and international interoperability while allowing their patients to decline DT services; developers aim for good access to (training) data through open-access schemes, reasonable reimbursement for their products, and clear regulation that is responsive to new developments; finally, regulators and payors envision a centralized data infrastructure, access to anonymized data in cases of significant public health benefit, and benchmarks for quality and security. Interestingly, these goals do not necessarily conflict. Tensions may arise, for example, between patient-consumers and regulators over when the anonymous release of data for public health interests is justified, between payors and developers over appropriate reimbursement schemes for DT services, or between physicians and payors over patients’ ability to opt out of DT services that have been proven cost-effective. Still, awareness of these groups’ key concerns should allow DT services to be implemented in a way that does justice to patients’ rights, enables healthcare professionals to provide efficient, high-quality care, sets incentives for innovation and sustained interest of companies in the field, and contributes to a sustainable, fair health system for all.

We take these findings as support for our recommendations: people are generally positive towards DTs, but they want to retain agency over whether or not to use this technology, and this choice should be preserved. Healthcare professionals are expected to be sufficiently competent in using this technology for the benefit of patients, a competence that should be fostered in curricula. People are, in principle, willing to provide the necessary resources (data, financial compensation models) to develop DTs; but private actors, who may actually develop these technologies, are confronted with considerable skepticism. To counteract this skepticism, the state and public institutions should play a key role in developing a health system that critically relies on DTs, including preserving options that do not require such technologies.

DT technology represents a clear departure from the core idea of today’s healthcare system, which holds that physicians and the research system that supports them are the ultimate arbiters not only of how to treat patients but, more importantly, of defining what constitutes a disease in the first place. This conceptualization of disease and treatment rationales is by necessity a human affair, typically the result of consensus-finding processes among physicians organized in specialist societies. While data-driven approaches, particularly evidence-based medicine, are attempts to objectivize the process, a large swath of medicine is still a very human, intuition-driven enterprise with vast differences in available care pathways and, ultimately, health outcomes14. The core of this enterprise is an international, national, regional, or individual understanding of disease and treatment strategies. By necessity, the AI tools being introduced today reflect this situation: trained on current medical data, they are a mere reflection of current medical concepts, definitions, and approaches15,16. Constrained as such, current AI cannot evolve beyond what is happening in medicine today. In contrast, DT technology, in its most advanced state, applies a predominantly data-driven approach to predicting a future state given a patient’s current health data. At its core, a DT should “know” the principles of how processes at different scales evolve over time, but it will not be constrained by them. Rather, DTs will use machine-derived states to trigger corrective actions in the form of health interventions. It is unclear at this point whether humans will be able to understand the meaning of these states, and whether they can be translated into current medical concepts.

This raises the question: if DTs emerge as a key tool in healthcare, in the form of personal digital replicas guiding our treatment choices, what is the role of the traditional bearer of medical knowledge and wisdom, the physician? Can physicians remain the epistemic authority in healthcare, given that they operate using a scheme of thought that may no longer allow them to understand machine predictions of a patient’s health state? How should patients deal with the technology, both as data providers and as consumers of DT services? Is there an emergent role for technology companies that build and maintain digital medical twins? And finally, what are sensible regulatory approaches to ensure fair and equitable access to the technology? Our paper provides possible answers to these questions, boldly proposing a future healthcare system with new stakeholders, such as tech companies, and novel roles for traditional players like physicians, while also contrasting the current perspective of the population with a different outlook on technological development. Realistically, changes will appear gradually, as will the capabilities of DT technology, which, in its first incarnation, will likely be built on today’s AI capabilities with only limited use of multi-scale simulation. The paper acknowledges this by discussing the near-, mid-, and long-term effects of DT technology on the healthcare system.

The methods employed in this study have inherent and study-specific limitations. Although we aimed for multidisciplinary diversity in the 20-member expert group that forecasted future scenarios of medicine, its members ultimately hail from academic and/or tertiary-level care backgrounds at Zurich universities. This may introduce bias, and the scenarios might have differed had we included other groups, such as primary care physicians or patient-consumers. Similarly, although the second, 22-member expert workshop diversified the group to include non-academic/non-physician stakeholders, the resulting policy recommendations may still suffer from selection bias. Although invited, patient representatives unfortunately canceled their participation in the second workshop; while their presence would have strengthened it, substantial input was provided by other participants taking the perspective of patients. We also note that the patients’ view entered the process earlier through the focus group of persons with patient/chronic disease experience. A limitation here is that the six German-speaking patient-consumers represent only a convenience sample, and the issues they raised may not be representative of the wider Swiss population. This limitation was partially mitigated by the representative survey of the Swiss population, which included German-, French-, and Italian-speaking Swiss. Finally, the findings of this research are based on Swiss perspectives and may not apply to other healthcare systems.