
    Chameleon Devices: Investigating More Secure and Discreet Mobile Interactions via Active Camouflaging

    Many users value the ability to have quick and frequent sight of their mobiles when in public settings. However, in doing so, they expose themselves to potential risks, ranging from being targets of robbery to the more subtle social losses of being seen as rude or inattentive to those around them. In nature, some animals can blend into their environments to avoid being eaten or to reduce their impact on the surrounding ecosystem. Taking inspiration from these evolved systems, we investigate the notion of chameleon approaches for mobile interaction design. Our probes were motivated, inspired, and refined through extended interactions with people drawn from contexts with differing ranges of security and privacy concerns. Through deployments on users' own devices, our prototypes show the value of the concept. The encouraging results motivate further research into materials and form factors that can provide more effective automatic plain-sight hiding.

    Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior

    Pre-trained machine learning (ML) models have shown great performance for a wide range of applications, in particular in natural language processing (NLP) and computer vision (CV). Here, we study how pre-training could be used for scientific machine learning (SciML) applications, specifically in the context of transfer learning. We study the transfer behavior of these models as (i) the pre-trained model size is scaled, (ii) the downstream training dataset size is scaled, (iii) the physics parameters are systematically pushed out of distribution, and (iv) a single model pre-trained on a mixture of different physics problems is adapted to various downstream applications. We find that, when fine-tuned appropriately, transfer learning can help reach desired accuracy levels with orders of magnitude fewer downstream examples (across different tasks that can even be out-of-distribution) than training from scratch, with consistent behavior across a wide range of downstream examples. We also find that fine-tuning these models yields larger performance gains as model size increases, compared to training from scratch on new downstream tasks. These results hold for a broad range of PDE learning tasks. All in all, our results demonstrate the potential of the "pre-train and fine-tune" paradigm for SciML problems, and point to a path towards building SciML foundation models. We open-source our code for reproducibility.
    Comment: 16 pages, 11 figures
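    The abstract's central claim is that fine-tuning a pre-trained model reaches a given accuracy with far fewer downstream examples than training from scratch. The following is a minimal toy sketch of that comparison, not the paper's open-sourced code: it uses a 1-D linear regression as a deterministic stand-in for a PDE learning task, with a hypothetical "pre-training" task (y = 2.0x) and a slightly out-of-distribution downstream task (y = 2.2x). All task and hyperparameter choices here are illustrative assumptions.

    ```python
    # Toy "pre-train then fine-tune" sketch (illustrative assumptions only;
    # not the paper's code). The model is a single coefficient w in y = w*x.

    def grad_mse(w, xs, ys):
        """Gradient of mean squared error for the linear model y = w * x."""
        n = len(xs)
        return (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))

    def train(w, xs, ys, lr, steps):
        """Plain gradient descent from initial weight w."""
        for _ in range(steps):
            w -= lr * grad_mse(w, xs, ys)
        return w

    # "Pre-training" task: abundant data and budget, target coefficient 2.0.
    pre_xs = [1.0, 2.0, 3.0, 4.0]
    pre_ys = [2.0 * x for x in pre_xs]
    w_pre = train(0.0, pre_xs, pre_ys, lr=0.02, steps=500)

    # Downstream task, slightly out of distribution (coefficient 2.2),
    # with only two examples and a tiny training budget.
    down_xs = [1.0, 2.0]
    down_ys = [2.2 * x for x in down_xs]
    w_finetuned = train(w_pre, down_xs, down_ys, lr=0.05, steps=3)
    w_scratch = train(0.0, down_xs, down_ys, lr=0.05, steps=3)

    # With the same few downstream examples and steps, starting from the
    # pre-trained weight lands much closer to the true coefficient 2.2.
    print(abs(w_finetuned - 2.2) < abs(w_scratch - 2.2))  # prints True
    ```

    The sketch mirrors the abstract's setup in miniature: the downstream task is deliberately shifted from the pre-training distribution, yet the pre-trained initialization still closes most of the gap within a budget that leaves the from-scratch model far from the target.
    
    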