Author and Curator: Dror Nir, PhD

Over the last couple of years we have witnessed a surge of AI applications in healthcare. It is now clear that AI and its wide range of health applications are about to revolutionize disease pathways and the way the many stakeholders in this market interact.

Not surprisingly, this developing surge has awakened the regulatory watchdogs, who are now debating ways to manage the introduction of such applications into healthcare. Attributing measures to known regulatory checkboxes, like safety and efficacy, is proving to be a complex exercise. How to align claims made by manufacturers, use cases, users’ expectations, and public expectations is unclear. A recent demonstration of this is the so-called “failure” of AI in social-network applications like Facebook and Twitter in handling harmful material.

‘Advancing AI in the NHS’ is a report covering the challenges and opportunities of AI in the NHS. It is a modest contribution to the debate in such a timely and fast-moving field! I bring here the report’s preface and executive summary, hoping that whoever is interested in reading the whole 50 pages will follow this link: f53ce9_e4e9c4de7f3c446fb1a089615492ba8c


Acknowledgements

We and Polygeia as a whole are grateful to Dr Dror Nir, Director, RadBee, whose insights were valuable throughout the research, conceptualisation, and writing phases of this work; and to Dr Giorgio Quer, Senior Research Scientist, Scripps Research Institute; Dr Matt Willis, Oxford Internet Institute, University of Oxford; Professor Eric T. Meyer, Oxford Internet Institute, University of Oxford; Alexander Hitchcock, Senior Researcher, Reform; Windi Hari, Vice President Clinical, Quality & Regulatory, HeartFlow; Jon Holmes, co-founder and Chief Technology Officer, Vivosight; and Claudia Hartman, School of Anthropology & Museum Ethnography, University of Oxford for their advice and support.

Author affiliations

Lev Tankelevitch, University of Oxford

Alice Ahn, University of Oxford

Rachel Paterson, University of Oxford

Matthew Reid, University of Oxford

Emily Hilbourne, University of Oxford

Bryan Adriaanse, University of Oxford

Giorgio Quer, Scripps Research Institute

Dror Nir, RadBee

Parth Patel, University of Cambridge

All affiliations are at the time of writing.

Polygeia

Polygeia is an independent, non-party, and non-profit think-tank focusing on health and its intersection with technology, politics, and economics. Our aim is to produce high-quality research on global health issues and policies. With branches in Oxford, Cambridge, London and New York, our work has led to policy reports, peer-reviewed publications, and presentations at the House of Commons and the European Parliament. http://www.polygeia.com @Polygeia © Polygeia 2018. All rights reserved.

Foreword

Almost every day, as MP for Cambridge, I am told of new innovations and developments that show that we are on the cusp of a technological revolution across the sectors. This technology is capable of revolutionising the way we work: incredible innovations that could increase our accuracy, productivity, and efficiency, and improve our capacity for creativity and innovation.

But huge change, particularly through adoption of new technology, can be difficult to communicate to the public, and if we do not make sure that we explain carefully the real benefits of such technologies we easily risk a backlash. Despite good intentions, the care.data programme failed to win public trust, with widespread worries that the appropriate safeguards weren’t in place, and a failure to properly explain potential benefits to patients. It is vital that the checks and balances we put in place are robust enough to soothe public anxiety, and prevent problems which could lead to steps back, rather than forwards.

Previous attempts to introduce digital innovation into the NHS also teach us that cross-disciplinary and cross-sector collaboration is essential. Realising this technological revolution in healthcare will require industry, academia and the NHS to work together and share their expertise to ensure that technical innovations are developed and adopted in ways that prioritise patient health, rather than innovation for its own sake. Alongside this, we must make sure that the NHS workforce whose practice will be altered by AI are on side. Consultation and education are key, and this report details well the skills that will be vital to NHS adoption of AI. Technology is only as good as those who use it, and for this, we must listen to the medical and healthcare professionals who will rightly know best the concerns both of patients and their colleagues. The new Centre for Data Ethics and Innovation, the ICO and the National Data Guardian will be key in working alongside the NHS to create both a regulatory framework and the communications which win society’s trust. With this, and with real leadership from the sector and from politicians, focused on the rights and concerns of individuals, AI can be advanced in the NHS to help keep us all healthy.

Daniel Zeichner

MP for Cambridge

Chair, All-Party Parliamentary Group on Data Analytics

 

Executive summary

Artificial intelligence (AI) has the potential to transform how the NHS delivers care. From enabling patients to self-care and manage long-term conditions, to advancing triage, diagnostics, treatment, research, and resource management, AI can improve patient outcomes and increase efficiency. Achieving this potential, however, requires addressing a number of ethical, social, legal, and technical challenges. This report describes these challenges within the context of healthcare and offers directions forward.

Data governance

AI-assisted healthcare will demand better collection and sharing of health data between NHS, industry and academic stakeholders. This requires a data governance system that ensures ethical management of health data and enables its use for the improvement of healthcare delivery. Data sharing must be supported by patients. The recently launched NHS data opt-out programme is an important starting point, and will require monitoring to ensure that it has the transparency and clarity to avoid exploiting the public’s lack of awareness and understanding. Data sharing must also be streamlined and mutually beneficial. Current NHS data sharing practices are disjointed and difficult to negotiate from both industry and NHS perspectives. This issue is complicated by the increasing integration of ‘traditional’ health data with that from commercial apps and wearables. Finding approaches to valuate data, and considering how patients, the NHS and its partners can benefit from data sharing is key to developing a data sharing framework. Finally, data sharing should be underpinned by digital infrastructure that enables cybersecurity and accountability.

Digital infrastructure

Developing and deploying AI-assisted healthcare requires high quantity and quality digital data. This demands effective digitisation of the NHS, especially within secondary care, involving not only the transformation of paper-based records into digital data, but also improvement of quality assurance practices and increased data linkage. Beyond data digitisation, broader IT infrastructure also needs upgrading, including the use of innovations such as wearable technology and interoperability between NHS sectors and institutions. This would not only increase data availability for AI development, but also provide patients with seamless healthcare delivery, putting the NHS at the vanguard of healthcare innovation.

Standards

The recent advances in AI and the surrounding hype have meant that the development of AI-assisted healthcare remains haphazard across the industry, with quality being difficult to determine or varying widely. Without adequate product validation, including in real-world settings, there is a risk of unexpected or unintended performance, such as sociodemographic biases or errors arising from inappropriate human-AI interaction. There is a need to develop standardised ways to probe training data, to agree upon clinically-relevant performance benchmarks, and to design approaches to enable and evaluate algorithm interpretability for productive human-AI interaction. In all of these areas, standardised does not necessarily mean one-size-fits-all. These issues require addressing the specifics of AI within a healthcare context, with consideration of users’ expertise, their environment, and products’ intended use. This calls for a fundamentally interdisciplinary approach, including experts in AI, medicine, ethics, cognitive science, usability design, and ethnography.
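To make the report’s call for bias probing and clinically-relevant benchmarks concrete, here is a minimal sketch (an illustration, not taken from the report) of how a validation team might compare a model’s sensitivity and specificity across sociodemographic subgroups; the records, subgroup labels, and the 0.80 benchmark are all hypothetical assumptions.

```python
# Illustrative sketch only: probing a model's performance across
# sociodemographic subgroups. The data, subgroup labels, and the 0.80
# benchmark are hypothetical assumptions, not from the report.
from collections import defaultdict

# Hypothetical validation records: (subgroup, true_label, model_prediction)
records = [
    ("female", 1, 1), ("female", 0, 0), ("female", 1, 0), ("female", 0, 0),
    ("male",   1, 1), ("male",   0, 1), ("male",   1, 1), ("male",   0, 0),
]

# Tally true/false positives and negatives per subgroup.
counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
for group, truth, pred in records:
    key = ("tp" if truth else "fp") if pred else ("fn" if truth else "tn")
    counts[group][key] += 1

BENCHMARK = 0.80  # hypothetical clinically-agreed sensitivity floor
for group, c in counts.items():
    sensitivity = c["tp"] / (c["tp"] + c["fn"])  # true-positive rate
    specificity = c["tn"] / (c["tn"] + c["fp"])  # true-negative rate
    flag = "OK" if sensitivity >= BENCHMARK else "BELOW BENCHMARK"
    print(f"{group}: sensitivity={sensitivity:.2f} "
          f"specificity={specificity:.2f} [{flag}]")
```

In practice such checks would run over large, curated validation sets, against benchmarks agreed with clinical bodies rather than a single fixed threshold.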

Regulations

Despite the recognition of AI-assisted healthcare products as medical devices, current regulatory efforts by the UK Medicines and Healthcare Products Regulatory Agency and the European Commission have yet to be accompanied by detailed guidelines which address questions concerning AI product classification, validation, and monitoring. This is compounded by the uncertainty surrounding Brexit and the UK’s future relationship with the European Medicines Agency. The absence of regulatory clarity risks compromising patient safety and stalling the development of AI-assisted healthcare. Close working partnerships involving regulators, industry members, healthcare institutions, and independent AI-related bodies (for example, as part of regulatory sandboxes) will be needed to enable innovation while ensuring patient safety.

The workforce

AI will be a tool for the healthcare workforce. Harnessing its utility to improve care requires an expanded workforce with the digital skills necessary for both developing AI capability and for working productively with the technology as it becomes commonplace.

Developing capability for AI will involve finding ways to increase the number of clinician-informaticians who can lead the development, procurement and adoption of AI technology while ensuring that innovation remains tied to the human aspect of healthcare delivery. More broadly, healthcare professionals will need to complement their socio-emotional and cognitive skills with training to appropriately interpret information provided by AI products and communicate it effectively to co-workers and patients.

Although much effort has gone into predicting how many jobs will be affected by AI-driven automation, understanding the impact on the healthcare workforce will require examining how jobs will change, not simply how many will change.

Legal liability

AI-assisted healthcare has implications for the legal liability framework: who should be held responsible in the case of a medical error involving AI? Addressing the question of liability will involve understanding how healthcare professionals’ duty of care will be impacted by use of the technology. This is tied to the lack of training standards for healthcare professionals to safely and effectively work with AI, and to the challenges of algorithm interpretability, with “black-box” systems forcing healthcare professionals to blindly trust or distrust their output. More broadly, it will be important to examine the legal liability of healthcare professionals, NHS trusts and industry partners, raising questions about the potential roles of NHS Indemnity and no-fault compensation systems.
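The interpretability problem behind the “black-box” concern can be illustrated with a simple model-agnostic probe. The sketch below is an illustration, not a method from the report: it perturbs one input at a time to show which features drive an opaque risk model’s output for a given patient; the model, feature names, and coefficients are all hypothetical.

```python
# Illustrative sketch: a simple model-agnostic probe of a "black-box"
# risk model. The model, feature names, and coefficients are hypothetical.

def risk_model(features):
    """Stand-in for an opaque model: returns a risk score in [0, 1]."""
    age, bp, smoker = features["age"], features["systolic_bp"], features["smoker"]
    return min(1.0, 0.004 * age + 0.003 * bp + 0.15 * smoker)

patient = {"age": 67, "systolic_bp": 150, "smoker": 1}
baseline = risk_model(patient)

# Zero out each feature in turn and record how much the score moves;
# larger shifts suggest the feature matters more for this patient.
for name in patient:
    perturbed = dict(patient, **{name: 0})
    delta = baseline - risk_model(perturbed)
    print(f"{name}: contribution ~ {delta:+.3f}")
```

Presenting even coarse attributions like these alongside a prediction gives a clinician something to weigh, rather than forcing blind trust or distrust of the output.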

Recommendations

  1. The NHS, the Centre for Data Ethics and Innovation, and industry and academic partners should conduct a review to understand the obstacles that the NHS and external organisations face around data sharing. They should also develop health data valuation protocols which consider the perspectives of patients, the NHS, commercial organisations, and academia. This work should inform the development of a data sharing framework.
  2. The National Data Guardian and the Department of Health should monitor the NHS data opt-out programme and its approach to transparency and communication, evaluating how the public understands commercial and non-commercial data use and the handling of data at different levels of anonymisation.
  3. The NHS, patient advocacy groups, and commercial organisations should expand public engagement strategies around data governance, including discussions about the value of health data for improving healthcare; public and private sector interactions in the development of AI-assisted healthcare; and the NHS’s strategies around data anonymisation, accountability, and commercial partnerships. Findings from this work should inform the development of a data sharing framework.
  4. The NHS Digital Security Operations Centre should ensure that all NHS organisations comply with cybersecurity standards, including having up-to-date technology.
  5. NHS Digital, the Centre for Data Ethics and Innovation, and the Alan Turing Institute should develop technological approaches to data privacy, auditing, and accountability that could be implemented in the NHS (a minimal sketch of one such approach appears after this list). This should include learning from Global Digital Exemplar trusts in the UK and from international examples such as Estonia.
  6. The NHS should continue to increase the quantity, quality, and diversity of digital health data across trusts. It should consider targeted projects, in partnership with professional medical bodies, that quality-assure and curate datasets for more deployment-ready AI technology. It should also continue to develop its broader IT infrastructure, focusing on interoperability between sectors, institutions, and technologies, and including the end users as central stakeholders.
  7. The Alan Turing Institute, the Ada Lovelace Institute, and academic and industry partners in medicine and AI should develop ethical frameworks and technological approaches for the validation of training data in the healthcare sector, including methods to minimise performance biases and validate continuously-learning algorithms.
  8. The Alan Turing Institute, the Ada Lovelace Institute, and academic and industry partners in medicine and AI should develop standardised approaches for evaluating product performance in the healthcare sector, with consideration for existing human performance standards and products’ intended use.
  9. The Alan Turing Institute, the Ada Lovelace Institute, and academic and industry partners in medicine and AI should develop methods of enabling and evaluating algorithm interpretability in the healthcare sector. This work should involve experts in AI, medicine, ethics, usability design, cognitive science, and ethnography, among others.
  10. Developers of AI products and NHS Commissioners should ensure that usability design remains a top priority in their respective development and procurement of AI-assisted healthcare products.
  11. The Medicines and Healthcare Products Regulatory Agency should establish a digital health unit with expertise in AI and digital products that will work together with manufacturers, healthcare bodies, notified bodies, AI-related organisations, and international forums to advance clear regulatory approaches and guidelines around AI product classification, validation, and monitoring. This should address issues including training data and biases, performance evaluation, algorithm interpretability, and usability.
  12. The Medicines and Healthcare Products Regulatory Agency, the Centre for Data Ethics and Innovation, and industry partners should evaluate regulatory approaches, such as regulatory sandboxing, that can foster innovation in AI-assisted healthcare, ensure patient safety, and inform on-going regulatory development.
  13. The NHS should expand innovation acceleration programmes that bridge healthcare and industry partners, with a focus on increasing validation of AI products in real-world contexts and informing the development of a regulatory framework.
  14. The Medicines and Healthcare Products Regulatory Agency and other Government bodies should arrange a post-Brexit agreement ensuring that UK regulations of medical devices, including AI-assisted healthcare, are aligned as closely as possible to the European framework and that the UK can continue to help shape Europe-wide regulations around this technology.
  15. The General Medical Council, the Medical Royal Colleges, Health Education England, and AI-related bodies should partner with industry and academia on comprehensive examinations of the healthcare sector to assess which, when, and how jobs will be impacted by AI, including analyses of the current strengths, limitations, and workflows of healthcare professionals and broader NHS staff. They should also examine how AI-driven workforce changes will impact patient outcomes.
  16. The Federation of Informatics Professionals and the Faculty of Clinical Informatics should continue to lead and expand standards for health informatics competencies, integrating the relevant aspects of AI into their training, accreditation, and professional development programmes for clinician-informaticians and related professions.
  17. Health Education England should expand training programmes to advance digital and AI-related skills among healthcare professionals. Competency standards for working with AI should be identified for each role and established in accordance with professional registration bodies such as the General Medical Council. Training programmes should ensure that “un-automatable” socio-emotional and cognitive skills remain an important focus.
  18. The NHS Digital Academy should expand recruitment and training efforts to increase the number of Chief Clinical Information Officers across the NHS, and ensure that the latest AI ethics, standards, and innovations are embedded in their training programme.
  19. Legal experts, ethicists, AI-related bodies, professional medical bodies, and industry should review the implications of AI-assisted healthcare for legal liability. This includes understanding how healthcare professionals’ duty of care will be affected, the role of workforce training and product validation standards, and the potential role of NHS Indemnity and no-fault compensation systems.
  20. AI-related bodies such as the Ada Lovelace Institute, patient advocacy groups and other healthcare stakeholders should lead a public engagement and dialogue strategy to understand the public’s views on liability for AI-assisted healthcare.
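As a purely illustrative example of the kind of auditing and accountability technology recommendation 5 envisages, the sketch below implements a tamper-evident, hash-chained access log, broadly in the spirit of the Estonian approach; the class, field names, and workflow are assumptions for illustration, not a specification from the report.

```python
# Illustrative sketch only: a tamper-evident audit log for health-data
# access, one possible "technological approach to auditing and
# accountability". A simplified illustration, not any actual NHS design.
import hashlib
import json
import time

class AuditLog:
    """Append-only log in which each entry hashes the previous one,
    so any later alteration of a record breaks the chain."""

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value

    def record_access(self, user, patient_id, purpose):
        entry = {
            "user": user,
            "patient_id": patient_id,
            "purpose": purpose,
            "time": time.time(),
            "prev_hash": self.prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((entry, digest))
        self.prev_hash = digest

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for entry, digest in self.entries:
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(
                    json.dumps(entry, sort_keys=True).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.record_access("dr_smith", "NHS1234567", "diagnostic review")
log.record_access("analyst_01", "NHS1234567", "approved research query")
print("chain intact:", log.verify())  # True unless an entry is tampered with
```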



McKinsey Top Ten Articles on Artificial Intelligence: 2018’s most popular articles – An executive’s guide to AI

Reporter: Aviva Lev-Ari, PhD, RN

 

TOP TEN | ARTIFICIAL INTELLIGENCE 2018
The year’s most popular articles on artificial intelligence


1. An executive’s guide to AI
Staying ahead in the accelerating artificial-intelligence race requires executives to make nimble, informed decisions about where and how to employ AI in their business. One way to prepare to act quickly: know the AI essentials presented in this guide.


2. Notes from the AI frontier: Applications and value of deep learning
An analysis of more than 400 use cases across 19 industries and nine business functions highlights the broad use and significant economic potential of advanced AI techniques.


3. What AI can and can’t do (yet) for your business
Artificial intelligence is a moving target. Here’s how to take better aim.

4. The economics of artificial intelligence
Rotman School of Management professor Ajay Agrawal explains how AI changes the cost of prediction and what this means for business.

5. Notes from the AI frontier: Modeling the impact of AI on the world economy
Artificial intelligence has large potential to contribute to global economic activity. But widening gaps among countries, companies, and workers will need to be managed to maximize the benefits.

6. The executive’s AI playbook
It’s time to break out of pilot purgatory and more effectively apply artificial intelligence and advanced analytics throughout your organization. Our interactive playbook can help.

7. Artificial intelligence: Why a digital base is critical
Early AI adopters are starting to shift industry profit pools. Companies need strong digital capabilities to compete.

8. The promise and challenge of the age of artificial intelligence
AI promises considerable economic benefits, even as it disrupts the world of work. These three priorities will help achieve good outcomes.

9. The real-world potential and limitations of artificial intelligence
Artificial intelligence has the potential to create trillions of dollars of value across the economy—if business leaders work to understand what AI can and cannot do.

10. How artificial intelligence and data add value to businesses
Artificial intelligence will transform many companies and create completely new types of businesses. The cofounder of Coursera, AI Fund, and Landing.AI shares how businesses can benefit.

SOURCE

From: McKinsey Top Ten <publishing@email.mckinsey.com>

Reply-To: “support@email.mckinsey.com” <support-HP2v40000016815507486b77a82f4bbc782e8252@email.mckinsey.com>

Date: Thursday, January 3, 2019 at 3:03 PM

To: Aviva Lev-Ari <AvivaLev-Ari@alum.berkeley.edu>

Subject: Artificial Intelligence: 2018’s most popular articles
