The Nadeem Lab develops advanced mathematical/machine learning techniques for analyzing patient data at multiple scales (radiology/radiation oncology, surgery/endoscopy, pathology, and molecular: genomics/proteomics/transcriptomics/metabolomics) to improve patient outcomes. The broad idea is to create a unified picture of a patient across these different scales in order to recommend the best therapeutic intervention (radiotherapy, chemotherapy, surgery, or immunotherapy) as well as to characterize treatment response and other clinical parameters.
Radiotherapy is the cheapest, most effective, and most widely available way of treating localized cancer. Patients typically receive high-quality diagnostic and planning CT scans upfront, followed by weekly/daily low-quality cone-beam CT (CBCT) scans acquired during radiotherapy and several follow-up scans afterward. This results in large amounts of longitudinal data with variable image quality that can be used to derive highly accurate biomarkers. We have created new physics-based data augmentation and deep learning models to segment, register, and predict future timepoints from these longitudinal data. We are also developing new techniques to simulate realistic motion from a single static scan to drive deformable image registration evaluation, motion atlas map estimation, physics-based data augmentation, and reduced-margin treatment (dose) planning. Furthermore, we are working on integrating radiology and histology (pathology) images with genomics and proteomics data to differentiate responders from non-responders in immunotherapy clinical trials.
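To ground the registration step, the sketch below deformably aligns a weekly CBCT to the planning CT using SimpleITK's classical demons filter. This is a minimal classical baseline standing in for our deep learning models, and the file names are hypothetical.

```python
import SimpleITK as sitk

# Hypothetical inputs: a planning CT and a weekly CBCT from the same patient
fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("weekly_cbct.nii.gz", sitk.sitkFloat32)

# Match CBCT intensities to the planning CT so the demons metric is meaningful
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(1024)
matcher.SetNumberOfMatchPoints(7)
matcher.ThresholdAtMeanIntensityOn()
moving = matcher.Execute(moving, fixed)

# Classical demons deformable registration (baseline, not our trained models)
demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
demons.SetNumberOfIterations(200)
demons.SetStandardDeviations(1.0)  # Gaussian smoothing of the update field
displacement_field = demons.Execute(fixed, moving)

# Warp the CBCT onto the planning-CT grid with the recovered deformation
transform = sitk.DisplacementFieldTransform(displacement_field)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, "cbct_registered_to_ct.nii.gz")
```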
Surgery is another widely available therapeutic intervention and allows precise removal of pre-cancerous as well as malignant lesions via the minimally invasive surgery (MIS) paradigm. MIS entails reaching the target anomaly in the human body (possibly found via a prior radiology scan) through minimal incisions. To help the surgeon traverse to the target anomaly easily while providing optimal coverage, we have developed new deep learning models that integrate radiology and endoscopic images to derive accurate depth/optical flow maps from live endoscopy video streams while segmenting/tracking features-of-interest (e.g., haustral folds and polyps during colonoscopy). We are also working on improving tumor resection margins via prior radiology (CT/PET/MR) scans.
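As a self-contained illustration of the optical-flow component only (not our trained models), the following sketch computes dense classical Farneback flow between consecutive frames of an endoscopy clip with OpenCV; the clip path is hypothetical.

```python
import cv2

cap = cv2.VideoCapture("colonoscopy_clip.mp4")  # hypothetical input video
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel flow between consecutive frames (classical Farneback)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Flow magnitude/direction, e.g., to flag fast camera motion
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    prev_gray = gray

cap.release()
```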
Pathology is the cornerstone of diagnosis and is considered the gold standard in medicine. Inter-observer variability among pathologists, however, is among the highest across medical disciplines. Hematoxylin-and-eosin (H&E) and immunohistochemistry (IHC) stained slides are the most prevalent modalities used in the clinic. In contrast, multiplex staining is a much more informative, yet expensive, research modality that can help improve diagnosis, prognosis, and biomarker derivation. Our work focuses, first, on removing the bottlenecks involved in translating R&D multiplex imaging platforms to the clinic and, second, on leveraging more informative co-registered multiplex images for more accurate/objective stain-invariant segmentation, classification, and biomarker quantification of H&E and IHC images. We are also working on integrating H&E, IHC, and multiplex images with spatial transcriptomics, scRNA-seq, and flow-cytometry data to predict treatment response as well as other clinical parameters of interest more accurately.
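To make the stain-separation idea concrete, the sketch below applies scikit-image's Ruifrok-Johnston color deconvolution to split an H&E tile into hematoxylin and eosin channels. The tile path and the percentile threshold are illustrative assumptions, not our stain-invariant pipeline.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed

# Hypothetical H&E tile; drop any alpha channel
rgb = io.imread("he_tile.png")[..., :3]

# Ruifrok-Johnston color deconvolution into hematoxylin/eosin/DAB densities
hed = rgb2hed(rgb)
hematoxylin, eosin = hed[..., 0], hed[..., 1]

# Crude nuclei proxy: strong hematoxylin pixels (illustrative threshold only)
nuclei_mask = hematoxylin > np.percentile(hematoxylin, 90)
print(f"Nuclei-proxy coverage: {nuclei_mask.mean():.1%} of the tile")
```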
Genomics, proteomics, spatial transcriptomics, and metabolomics provide detailed molecular signatures for individual patients that can be linked to clinical outcomes. We are currently working on deconvolving the spatial transcriptomics (Visium) signal using scRNA-seq data, which will eventually be paired with H&E images to build deep learning models that infer high-resolution spatial transcriptomics. We are also creating novel interactive visualization tools to extract correlative as well as causal links across different molecular modalities.
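A minimal sketch of the deconvolution idea, assuming a gene-by-cell-type signature matrix averaged from annotated scRNA-seq and a gene-by-spot Visium matrix in the same gene order (both synthetic here), is non-negative least squares per spot:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes, n_types, n_spots = 2000, 8, 500

# Synthetic stand-ins: S would come from annotated scRNA-seq cell-type
# averages, Y from a Visium experiment with matching gene order
S = rng.random((n_genes, n_types))             # genes x cell types
true_props = rng.dirichlet(np.ones(n_types), n_spots).T
Y = S @ true_props                             # genes x spots

# Per-spot non-negative least squares, normalized to cell-type fractions
proportions = np.zeros((n_types, n_spots))
for j in range(n_spots):
    coef, _ = nnls(S, Y[:, j])
    total = coef.sum()
    proportions[:, j] = coef / total if total > 0 else coef
```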