With the surge in AI tools and systems over the past several years, the technology is set to revolutionise a number of sectors – with healthcare likely to be among the most affected.

The Information Commissioner’s Office, or ICO (the entity responsible for regulating personal data and its use in AI), has recently released two sets of guidance key to this area: Regulating AI: The ICO’s strategic approach and Transparency in health and social care. In this article, specialists from our Commercial Healthcare and Information Law and Privacy teams look at what this means for AI in healthcare in the near future.

Why does AI in healthcare matter?

Innovative uses of AI-based diagnostic tools – for example in imaging, pathology studies and clinical decision-making – have already yielded some success by improving patient outcomes and experiences, reducing costs and empowering clinicians to achieve more.

More fundamentally, AI solutions present an opportunity for truly pioneering changes in healthcare, and can feed into every stage of care: from triage to detection, diagnosis and prediction of disease progression, through to the personalisation of treatment. However, an app or a technology alone will not deliver this potential healthcare revolution. Buy-in from healthcare professionals, patients and regulators is required, so concerns around patient trust, privacy and human dignity must be allayed before AI solutions can truly take off in the healthcare market.

Risks of AI in healthcare

Concerns about privacy and the misuse of personal data carry clear commercial weight: a breach of the data protection regime can result in regulatory fines of up to the greater of £17.5 million or 4% of global annual turnover, as well as substantial potential civil liability and reputational risk.

The use of AI in healthcare may compromise patient trust and privacy in several different ways, including:

  1. AI may obscure clinical decision-making by becoming a ‘black box’ decision maker, understood by neither patients nor practitioners;
  2. the more data you process, the greater the risk of data misuse contrary to the UK GDPR, such as the AI developer using clinical information to develop the AI contrary to agreed practices; and
  3. the more data you process, the greater the risk of a data breach, especially where data is transferred between organisations.

Accordingly, the benefits and ethical use of AI solutions used in healthcare should be at the forefront of its development in the sector. This will require better collaboration between clinicians, tech entrepreneurs, policy makers and information governance teams to formulate the innovations and applications of AI in healthcare.

Key points to consider for AI in healthcare

When implementing AI, we would suggest considering the following as initial key points:

  • Is the AI provider seeking to use your data to train and develop the AI? Do you understand the regulatory/commercial consequences of this?
  • Can you articulate the data protection roles of the parties, and do both parties agree on this?
  • Do you understand the AI well enough to explain its use to data subjects/the ICO?
  • Have you carried out enough due diligence to be satisfied the AI is accurate?
  • Have you considered how you will comply with key data subject rights such as Data Subject Access Requests?
  • Have you reviewed the contractual liabilities of the parties if i) the AI is incorrect, and ii) commissioning the AI leads to a breach of the UK GDPR?
