There is understandable excitement about the potential benefits of artificial intelligence for health. From promises of longevity to improved diagnosis and quality of care, the hype extends well beyond business profitability. Far less attention is paid to a more difficult and consequential question:
What problem are we actually trying to solve with AI, and what might prevent us from solving it well?
These questions were at the centre of discussion at the 21st Asia Conference on Healthcare and Health Insurance on 27 March, where GIFT Managing Director Eric Stryson led a session titled “AI Roadmaps: Policy & Practice for Workforce, Health, Healthcare, and Insurance.”
The session brought together senior leaders from across Asia to examine how AI could responsibly reshape healthcare delivery, public health systems, health insurance, and the wellbeing of the people who make up the workforce.
A key insight emerged: AI’s real value lies not in novelty, but in its ability to improve decision‑making, policy design, and institutional trust; that value is realised only when AI is deployed with clarity and care.
Diagnostic errors still affect a substantial share of patients worldwide, with up to 15% of diagnoses estimated to be inaccurate, delayed or wrong. Rural facilities across Asia often lack basic point‑of‑care tools, which increases the risk of misdiagnosis at the front line of care. In many ASEAN countries, significant delays in diagnosis remain common, with one recent review from the region reporting that nearly two‑thirds of patients experienced diagnostic delays that contributed to more advanced disease at the time of treatment.
Across Asia‑Pacific, an estimated 475 million people live with mental health conditions, yet treatment gaps can reach 76–90% in lower‑income settings, meaning that the majority of those who need support still do not receive appropriate care. Indeed, even in an advanced city like Hong Kong, mental health services are wholly inadequate, as the following statistics show:
- An estimated 1 in 7 people in Hong Kong have a diagnosable mental disorder, yet services are critically under-resourced to meet their needs.
- As recently as 2019, before the pandemic, 61% of adults in Hong Kong suffered from poor mental health, and, as one would expect, the pandemic years compounded this significantly.
- Among children and adolescents, 24.4% experienced at least one mental health issue in the past year, yet the vast majority received no professional care.
In financial systems, late payment and inconsistent settlement of obligations remain widespread: one study found that one in four companies in Asia‑Pacific were paid more than 90 days after invoicing, and regulators in multiple Asian markets have had to introduce penalty interest rates to counter persistent delays in insurance claim payments. At the same time, rapid changes in service delivery and technology mean that many health workers in low‑ and middle‑income settings face shifting roles and skill requirements, with reskilling efforts often trailing behind structural changes in care models and diagnostic practices.
There is broad consensus that AI has the potential to offer real, scalable ways to address these challenges. Yet the discussion also highlighted how these same tools can deepen inequities if deployed without robust safeguards. Systems ostensibly designed to improve diagnostic accuracy, such as point‑of‑care tools deployed without adequate validation or training, can unintentionally widen urban–rural gaps when they are introduced into under‑resourced clinics without appropriate oversight. Algorithms used to streamline underwriting and other financial decisions have been shown in multiple sectors to replicate or embed existing social and economic biases in pricing and risk assessment unless they are deliberately audited and governed. Similarly, productivity and monitoring technologies in workplaces, if implemented without safeguards, can function less as tools for professional support and more as instruments of surveillance. Left to the vagaries of markets, and without strong guardrails, AI tools carry negative implications for autonomy, trust, and judgement in high-stakes fields like health and social care.
The roadmap matters just as much as the destination.
Responsible AI adoption requires grappling with risks before a single line of code is deployed, rather than trying to manage consequences after harm has already occurred.
Three principles stood out as particularly important for organisations navigating this space:
- Clarity of purpose: AI initiatives must be rooted in a clear, human-centred goal. The most effective organisations are those willing to ask this hard question: what outcome are we trying to achieve for the people affected?
- Anticipate rather than react: Risks such as clinical hallucinations, biased underwriting, and covert emotional monitoring of workers are not hypothetical. Identifying and addressing these risks before deployment is what makes any innovation sustainable and trustworthy.
- Guardrails to protect people: Consent, transparency, human review, and public oversight have become the foundations of responsible AI. Particularly given the essential nature of health-related use cases, and the vulnerability of those involved, AI guardrails must reflect a genuine commitment to the broader social impact of technology at scale.
In the end, the leaders who will shape the next decade in the health and insurance sectors are not those who move the fastest, but those who move with clarity about where they are heading, what could go wrong, and who they are responsible for along the way.
We thank Asia Insurance Review for hosting this timely and important conference, and for convening thoughtful discussions on how AI can be integrated into health and insurance systems with responsibility, foresight, and care.
Understanding diverse stakeholder views is fundamental to GIFT’s leadership development and policy advisory work across the region.
For organisations ready to change, the next GIFT Global Leaders Programme (GLP) begins this July.
Rebecca Yip
Rebecca Yip, PhD is a Programme Manager at GIFT HK, where she manages executive leadership programmes that contribute to redesigning society and enabling greater societal sustainability and resilience. Her interests include global shifts, socioeconomic development, and the rise of Asia and its relationship with the West.