Harvard boffins build a multimodal AI system to predict cancer

Multimodal AI models, trained on many types of data, could help doctors more accurately screen patients at risk of developing several different cancers.

Researchers at Harvard Medical School’s Brigham and Women’s Hospital have developed a deep learning model that can identify 14 types of cancer. Most AI algorithms are trained to detect signs of disease from a single source of data, such as medical scans, but this one can take input from multiple sources.

Predicting whether someone is at risk of developing cancer isn’t always straightforward: doctors often need to look at various types of information, such as a patient’s medical history, or perform other tests to detect genetic biomarkers.

These results can help doctors determine the best treatment for a patient when monitoring disease progression, but their interpretation of the data can be subjective, explained Faisal Mahmood, an assistant professor in the Division of Computational Pathology at Brigham and Women’s Hospital.

“Experts analyze a lot of evidence to predict a patient’s condition. These early examinations become the basis for decision-making about whether to enroll in a clinical trial or specific treatment regimens. But that means that this multimodal prediction happens at the expert level. We are trying to solve the problem computationally,” he said in a statement.

Mahmood and his colleagues described how a single global system, made up of many deep learning-based algorithms and trained on multiple forms of data, could diagnose up to 14 different cancers. The researchers used training data from The Cancer Genome Atlas (TCGA), a public resource with data on different types of cancer obtained from more than 5,000 real patients, as well as other data sources.

First, microscopic views of cellular tissues from whole-slide images (WSI) and text-based genomic data were used to train two distinct models. These were then integrated into a single system to predict whether patients are at high or low risk of developing the different types of cancer. The model could even help scientists find or confirm genetic markers associated with certain diseases, the researchers said.
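The fusion step described above — combining features from a histology encoder and a genomics encoder into a single risk prediction — can be sketched in a few lines. This is an illustrative late-fusion toy example with made-up feature vectors and untrained random weights, not the team's actual architecture; the real system uses deep encoders over whole-slide images and genomic profiles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient embeddings produced by two separately trained
# encoders: one over whole-slide images (WSI), one over genomic data.
wsi_features = rng.normal(size=32)       # e.g. pooled WSI embedding
genomic_features = rng.normal(size=16)   # e.g. encoded mutation/expression profile

# Late fusion: concatenate the two modalities into one joint vector.
fused = np.concatenate([wsi_features, genomic_features])

# A toy linear risk head with random weights, squashed to (0, 1) by a sigmoid.
weights = rng.normal(size=fused.shape[0])
risk_score = 1.0 / (1.0 + np.exp(-(fused @ weights)))

# Stratify into the high/low risk groups the article mentions.
risk_group = "high" if risk_score >= 0.5 else "low"
print(f"risk score = {risk_score:.3f} -> {risk_group} risk")
```

In practice the fused representation would feed a trained prediction head per cancer type, and the fusion itself can be more elaborate (e.g. attention-based pooling rather than plain concatenation), but the high-level flow — two modality-specific encoders joined into one risk output — is the same.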

“Using deep learning, multimodal fusion of molecular biomarkers and morphological features extracted from WSIs have potential clinical application not only to improve the accuracy of patient risk stratification, but could also aid in the discovery and validation of multimodal biomarkers where the combinatorial effects of histology and genomic biomarkers are not known,” the team wrote in a paper published in Cancer Cell on Monday.

Mahmood told The Register the current study was a proof of concept for applying multimodal models to predict cancer risk. “We need to train these models with much more data, test these models on large independent test cohorts, and conduct prospective studies and clinical trials to establish the effectiveness of these models in a clinical setting,” he concluded. ®