Google is training AI to predict when a patient will die

A woman with advanced breast cancer arrived at a city hospital, fluid already flooding her lungs. She saw two doctors and got a radiology scan. The hospital's computers read her vital signs and estimated a 9.3 percent chance that she would die.


Then it was Google’s turn. A new type of algorithm created by the company read up on the woman, 175,639 data points in all, and assessed her risk of death at 19.9 percent. She died within a matter of days.

The harrowing account of the unidentified woman’s death was published by Google in May in research highlighting the potential of neural networks in health care, a form of artificial intelligence software that is particularly good at using data to learn and improve automatically. Google created a tool that can predict a range of outcomes for patients, including how long they are likely to stay in the hospital, their chances of re-admission, and the odds that they will soon die.

What impressed medical experts most was Google’s ability to examine data that was previously out of reach: notes buried in PDF files or scribbled on old charts. The neural network gobbled up all this unruly information and then spat out its predictions, doing so faster and more accurately than existing techniques. The Google system even showed which records led it to its conclusions.
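The kind of pipeline described above can be caricatured in a few lines: mine a free-text note for flagged terms, combine them with structured vitals, and map the result to a probability. Everything in this sketch is invented for illustration, including the feature names, keywords and weights; Google's actual system is a deep neural network trained on billions of data points, not a hand-set logistic formula like this one.

```python
import math
import re

# Illustrative only: invented terms and weights standing in for
# signals a real model would learn from free-text clinical notes.
NOTE_FLAGS = {"metastatic": 1.2, "effusion": 0.8, "dyspnea": 0.5}

def extract_note_features(note: str) -> float:
    """Sum the weights of flagged terms found in a free-text note."""
    text = note.lower()
    return sum(w for term, w in NOTE_FLAGS.items()
               if re.search(r"\b" + term + r"\b", text))

def risk_of_death(heart_rate: float, resp_rate: float, note: str) -> float:
    """Combine structured vitals with note features via a hand-set
    logistic model and return a probability between 0 and 1."""
    z = -6.0 + 0.03 * heart_rate + 0.10 * resp_rate + extract_note_features(note)
    return 1.0 / (1.0 + math.exp(-z))

# A patient resembling the one in the article: elevated vitals plus
# ominous terms in the note push the estimated risk upward.
p = risk_of_death(110, 28, "Metastatic breast cancer with pleural effusion.")
print(f"estimated risk: {p:.3f}")
```

The point of the sketch is only the shape of the problem: unstructured text and structured measurements feed one model, and the same model can, in principle, report which inputs drove its score.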

Hospitals, physicians and other healthcare providers have been trying for years to make better use of their stockpiles of electronic health records and other patient data. Information shared and surfaced at the right time could save lives, or at least help medical workers spend less time on documentation and more on patient care. But current methods of extracting health data are expensive, cumbersome and slow.

Up to 80 percent of the time spent on today’s predictive models goes into the “scut work” of making the data presentable, said Nigam Shah, an associate professor at Stanford University who co-authored Google’s research paper, published in the journal Nature. Google’s approach avoids this. “You can throw in the kitchen sink and not worry about it,” Shah said.

Google’s next step is to move this predictive system into clinics, said AI chief Jeff Dean. Dean’s health research unit, sometimes called Medical Brain, is working on a series of AI tools that can predict symptoms and disease with a level of accuracy that is being met with hope as well as alarm.

Within the company, there is a lot of excitement about the initiative. “They finally found a new application for AI that has commercial promise,” one employee said.

Since Alphabet Inc.’s Google declared itself an ‘AI first’ company in 2016, much of its work in this area has gone toward improving existing Internet services. The advances of the Medical Brain team give Google the chance to enter a new market, something co-founders Larry Page and Sergey Brin have tried again and again.

Healthcare software today is largely hand-coded. In contrast, Google’s approach, in which machines learn to analyze data on their own, “can surpass everything else,” said Vik Bajaj, a former executive at Verily, an Alphabet healthcare arm, and general manager of the investment firm Foresite Capital. “They understand what problems are worth solving,” he said. “Now they’ve done enough small experiments to know exactly what the fruitful directions are.”

Dean envisions the AI system steering physicians toward certain medications and diagnoses. Another Google researcher said existing models miss obvious medical events, such as whether a patient had undergone previous surgery. The person described the existing hand-coded models as “an obvious and gigantic obstacle” in health care.

Despite the optimism about Google’s potential, using artificial intelligence to improve healthcare outcomes remains a major challenge. Other companies, notably IBM’s Watson unit, have tried to apply AI to medicine but have had limited success in saving money and integrating the technology into existing systems.


Google has long sought access to digital medical records, also with mixed results. For its recent research, the Internet giant struck agreements with the University of California, San Francisco, and the University of Chicago for 46 billion anonymized patient data points. Google’s AI system built separate predictive models for each hospital rather than one that analyzes data across both, a harder problem. A solution for all hospitals would be more challenging still. Google is working to secure new partners to gain access to more records.

A deeper dive into health would only add to the wealth of information Google already has about us. “Companies like Google and other technology giants will have a unique, almost monopolistic ability to capitalize on all the data we generate,” said Andrew Burt, privacy director of the data company Immuta. He and pediatric oncologist Samuel Volchenboum wrote a recent column arguing that governments should prevent this information from becoming “the exclusive property of a few companies,” as in Google’s online advertising.

Google is treading carefully when it comes to patient information, especially as public scrutiny of data collection increases. Last year, British regulators slapped DeepMind, another Alphabet AI lab, for testing an application that analyzed patient medical records without telling patients their information would be used in this way.

With the latest study, Google and its hospital partners insist their data are anonymized, secure and used with patients’ permission. Volchenboum said the company may find it harder to maintain that data rigour as it expands into smaller hospitals and healthcare networks.

Still, Volchenboum believes these algorithms could save lives and money. He expects health records to be mixed with other statistics; eventually, AI models might incorporate local weather and traffic information, other factors that influence patient outcomes. “It’s almost as if the hospital were an organism,” he said.

Few companies are better positioned to analyze this organism than Google. The company and its Alphabet cousins are developing devices to track many more biological signals. Even if consumers don’t take to wearable health trackers en masse, Google has many other data wells to tap. It knows the weather and the traffic. Google’s Android phones track things like how people walk, valuable information for measuring mental decline and some other ailments. All of that could be thrown into the medical algorithmic soup.

Medical records are only one part of Google AI’s healthcare plans. Its Medical Brain has deployed AI systems for radiology, ophthalmology and cardiology, and it is also working on dermatology. Staff created an application to detect malignant skin lesions; a product manager walks around the office with 15 fake tattoos on his arms to test it.

Dean, the head of AI, emphasizes that this experimentation is grounded in serious medical advice, not just curious software coders. Google is starting a new trial in India that uses its AI software to screen images of the eye for early signs of a condition called diabetic retinopathy. Before publishing the research, Google had three retinal specialists discuss the early results, Dean said.

Over time, Google may license these systems to clinics, or sell them through the company’s cloud-computing division as a sort of diagnostics-as-a-service. Microsoft Corp., a major cloud rival, is also working on predictive AI services. To commercialize an offering, Google would first need to get its hands on more records, which tend to vary widely across healthcare providers. Google could buy them, but that might not sit well with regulators or consumers. Its deals with UCSF and the University of Chicago are not commercial.

For now, the company says it’s too early to settle on a business model. At Google’s annual developer conference in May, Medical Brain member Lily Peng walked through the team’s research comparing its AI with humans at detecting heart-disease risk. “Again,” she said, “I want to emphasize that this is at a very early stage.”