New platform allows AI to learn from health data but not access personal details


By Ryan Jones / 22 May 2019

After receiving permission from regulators, Stanford Medical School is now testing an AI system that diagnoses eye disease without ever accessing patients' personal details. The app uses technology from Oasis Labs, a start-up out of UC Berkeley, which is designed to guarantee that the underlying data cannot be leaked or misused. AI holds huge potential to diagnose, understand and cure diseases, but to realize that potential, its machine learning algorithms must be trained on large amounts of medical data, much of which is sensitive private information.
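The signal doesn't spell out how Oasis Labs' guarantee works under the hood. One widely used pattern for learning from records that no individual should be able to inspect is differentially private training, in which each record's contribution to the model is clipped and noised so the released model reveals little about any one patient. The sketch below is a hedged, hypothetical illustration of that general idea in Python, not Oasis Labs' actual system or API; all function names and parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_logistic(X, y, epochs=20, lr=0.1, clip=1.0, noise_mult=0.5):
    """Differentially-private-style SGD for logistic regression (illustrative).

    Each record's gradient is clipped to norm `clip` and Gaussian noise is
    added before the update, limiting how much the trained model can
    reveal about any single patient. Hypothetical sketch, not Oasis Labs'
    actual mechanism.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            pred = 1 / (1 + np.exp(-X[i] @ w))
            grad = (pred - y[i]) * X[i]                   # per-record gradient
            grad = grad / max(1.0, np.linalg.norm(grad) / clip)  # clip norm
            grad += rng.normal(0, noise_mult * clip, d)   # add privacy noise
            w -= lr * grad
    return w

# Toy stand-in for medical records: in a real deployment the raw features
# would never leave the protected environment; only the noised model
# parameters would be released.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
print("released model weights:", np.round(dp_sgd_logistic(X, y), 2))
```

The key design point the example tries to capture is that privacy protection happens inside the training boundary: raw data goes in, and only a deliberately degraded artefact (the noised model) comes out.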

So what?

If brought to scale, this would greatly accelerate and improve our ability to advance drug efficacy, disease prevention and personalized medicine. There are significant legal barriers and justified privacy concerns around making medical data accessible, and these must be thoroughly discussed and addressed before wider access is granted. Many countries hold the majority of their health data in centralized systems; the UK's NHS, for example, holds records for most of the population. Such centralized datasets would be hugely valuable for training these systems, but equally pose a greater risk of harm if hacked.

This technology doesn’t only offer promise in healthcare. If successful, its methods could be applied to improve privacy in other domains that handle sensitive data, such as finance, consumer purchasing habits or search histories.

What new risks could come to light? 

Sources

https://www.technologyreview.com/s/613520/how-ai-could-save-lives-without-spilling-secrets/

What might the implications of this be? What related signals of change have you seen?
