A UK law firm is bringing legal action on behalf of patients it says had their confidential medical records obtained by Google and DeepMind Technologies in breach of data protection laws.
Mishcon de Reya said today it planned a representative action on behalf of Mr Andrew Prismall and the approximately 1.6 million individuals whose data was used as part of a testing programme for medical software developed by the companies.
It told The Register the claim had already been issued in the High Court.
The law firm said that the tech companies obtained approximately 1.6 million individuals’ confidential medical records without their knowledge or consent.
The Register has contacted Google, DeepMind and the Royal Free Hospital for their comments.
“Given the very positive experience of the NHS that I have always had during my various treatments, I was greatly concerned to find that a tech giant had ended up with my confidential medical records,” lead claimant Prismall said in a statement.
“As a patient having any sort of medical treatment, the last thing you would expect is your private medical records to be in the hands of one of the world’s biggest technology companies.
“I hope that this case will help achieve a fair outcome and closure for all of the patients whose confidential records were obtained in this instance without their knowledge or consent.”
The case is being led by Mishcon partner Ben Lasserson, who said: “This important claim should help to answer fundamental questions about the handling of sensitive personal data and special category data.
“It comes at a time of heightened public interest and understandable concern over who has access to people’s personal data and medical records and how this access is managed.”
The law firm argued that the action would be an important step in seeking to address the “very real” public concerns about large-scale access to, and use of, private health data by technology companies. It also raises issues regarding the precise status and responsibility of such technology companies in the data protection context, both in this specific case and potentially more generally.
In 2017, Google’s use of medical records from the hospital’s patients to test a software algorithm was deemed legally “inappropriate” by Dame Fiona Caldicott, the then National Data Guardian at the Department of Health.
In April 2016, it was revealed that the web giant had signed a deal with the Royal Free Hospital in London to build an application called Streams, which can analyse patients’ details and identify those with acute kidney injury. The app uses a fixed algorithm, developed with the help of doctors, so it is not technically AI.
The software – developed by DeepMind, Google’s AI subsidiary – was first tested with simulated data, and then tested again using 1.6 million sets of real NHS medical files provided by the London hospital. Not every patient was aware that their data had been given to Google to test the Streams software. Streams has since been deployed in wards, and now handles real people’s details, but during development it used live medical records as well as simulated inputs.
Dame Fiona told the hospital’s medical director, Professor Stephen Powis, that he had overstepped the mark, and that patients had not consented to their information being used in this way pre-deployment.
A subsequent Information Commissioner’s Office investigation found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test.
In a data-sharing agreement uncovered by the New Scientist, Google and its DeepMind artificial intelligence wing were granted access to current and historic patient data at three London hospitals run by the Royal Free NHS Trust. ®