A High-Performance Inversion Framework for Brain Tumor Growth Models in Personalized Medicine
The precise characterization of aggressive brain tumors remains a challenging problem due to their highly heterogeneous radiographic and molecular presentation. The integration of mathematical models with clinical imaging data holds enormous promise for developing robust, predictive, and explainable models that quantify cancer growth, with the potential to assist in diagnosis and treatment. In general, such models are parameterized by many unknown parameters, and their estimation can be formally posed as an inverse problem. However, this calibration problem is a formidable task for aggressive brain tumors due to the absence of longitudinal data, resulting in a strongly ill-posed inverse problem. This is further exacerbated by the inherent non-linearity of tumor growth models. Overcoming these difficulties requires sophisticated regularization strategies along with computationally efficient algorithms and software. Towards this end, we introduce a fully automatic inversion framework which provides an entirely new capability to analyze complex brain tumors from a single pretreatment magnetic resonance imaging (MRI) scan. Our framework employs fast algorithms and optimized implementations which exploit distributed-memory parallelism and GPU acceleration to enable reasonable solution times, an important factor for clinical applications. We validate our solver on clinical data and demonstrate its utility in characterizing important biophysics of brain cancer along with its ability to complement other radiographic information in downstream machine learning tasks.
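As a general illustration of the calibration described above, the parameter estimation can be written as a regularized, PDE-constrained optimization problem; the reaction-diffusion growth model, the parameter set, and the regularization term below are a generic sketch and not necessarily the specific formulation used in this work:

\min_{\kappa,\,\rho,\,c_0}\;\; \frac{1}{2}\,\bigl\| c(T) - d \bigr\|_{L^2(\Omega)}^{2} \;+\; \beta\,\mathcal{R}(\kappa,\rho,c_0)
\quad\text{subject to}\quad
\partial_t c = \nabla\cdot(\kappa\,\nabla c) + \rho\,c\,(1-c)\ \text{in}\ \Omega\times(0,T],\qquad c(0)=c_0,

where c is the simulated tumor cell density, d the tumor distribution segmented from the single pretreatment MRI scan, \kappa and \rho the unknown diffusion and proliferation coefficients, c_0 the unknown initial condition, and \mathcal{R} a regularization functional that counteracts the ill-posedness caused by the missing longitudinal data.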
Chameleon Devices: Investigating More Secure and Discreet Mobile Interactions via Active Camouflaging
Many users value the ability to have quick and frequent sight of their mobiles when in public settings. However, in doing so, they expose themselves to potential risks, ranging from being targets of robbery to the more subtle social losses that come from being seen as rude or inattentive to those around them. In nature, some animals can blend into their environments to avoid being eaten or to reduce their impact on the ecosystem around them. Taking inspiration from these evolved systems, we investigate the notion of chameleon approaches for mobile interaction design. Our probes were motivated, inspired and refined through extended interactions with people drawn from contexts with differing ranges of security and privacy concerns. Through deployments on users’ own devices, our prototypes show the value of the concept. The encouraging results motivate further research in materials and form factors that can provide more effective automatic plain-sight hiding.
Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior
Pre-trained machine learning (ML) models have shown great performance for a wide range of applications, in particular in natural language processing (NLP) and computer vision (CV). Here, we study how pre-training could be used for scientific machine learning (SciML) applications, specifically in the context of transfer learning. We study the transfer behavior of these models as (i) the pre-trained model size is scaled, (ii) the downstream training dataset size is scaled, (iii) the physics parameters are systematically pushed out of distribution, and (iv) a single model pre-trained on a mixture of different physics problems is adapted to various downstream applications. We find that, when fine-tuned appropriately, transfer learning can help reach desired accuracy levels with orders of magnitude fewer downstream examples than training from scratch (across different tasks that can even be out-of-distribution), with consistent behavior across a wide range of downstream examples. We also find that fine-tuning these models yields greater performance gains as model size increases, compared to training from scratch on new downstream tasks. These results hold for a broad range of PDE learning tasks. All in all, our results demonstrate the potential of the "pre-train and fine-tune" paradigm for SciML problems and point to a path towards building SciML foundation models. We open-source our code for reproducibility.