Linear Mode Connectivity in Sparse Neural Networks
With the rise in interest of sparse neural networks, we study how neural
network pruning with synthetic data leads to sparse networks with unique
training properties. We find that distilled data, a synthetic summarization of
the real data, paired with Iterative Magnitude Pruning (IMP) unveils a new
class of sparse networks that are more stable to SGD noise on the real data
than either the dense model or subnetworks found with real data in IMP. That
is, synthetically chosen subnetworks often train to the same minima or, equivalently, exhibit
linear mode connectivity. We study this through linear interpolation, loss
landscape visualizations, and measuring the diagonal of the Hessian. While
dataset distillation as a field is still young, we find that these properties
lead to synthetic subnetworks matching the performance of traditional IMP with
up to 150x fewer training points in settings where distilled data applies. Comment: Published in the NeurIPS 2023 UniReps Workshop.
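To make the linear-interpolation test mentioned above concrete, here is a minimal PyTorch sketch of how one typically probes linear mode connectivity between two SGD runs of the same (sparse) architecture. It is an illustrative sketch only, not the paper's code: `make_model`-style details are omitted, and `eval_loss` and the interpolation grid are assumed placeholders supplied by the user.

```python
# Minimal sketch: probing linear mode connectivity between two trained copies
# of the same (sparse) architecture by interpolating their weights.
# `eval_loss` is a user-supplied function returning, e.g., mean loss on real data.
import copy
import torch

def interpolate_state(sd_a, sd_b, alpha):
    """Return alpha * sd_a + (1 - alpha) * sd_b, parameter by parameter."""
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}

@torch.no_grad()
def loss_barrier(model_a, model_b, eval_loss, alphas=torch.linspace(0, 1, 11)):
    """Evaluate the loss along the linear path between two trained models.
    A near-flat curve relative to the endpoints suggests linear mode
    connectivity; a pronounced bump indicates instability to SGD noise."""
    probe = copy.deepcopy(model_a)
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    losses = []
    for alpha in alphas:
        probe.load_state_dict(interpolate_state(sd_a, sd_b, float(alpha)))
        losses.append(eval_loss(probe))
    endpoints = 0.5 * (losses[0] + losses[-1])
    return max(losses) - endpoints  # height of the error barrier
```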
UniCat: Crafting a Stronger Fusion Baseline for Multimodal Re-Identification
Multimodal Re-Identification (ReID) is a popular retrieval task that aims to
re-identify objects across diverse data streams, prompting many researchers to
integrate multiple modalities into a unified representation. While such fusion
promises a holistic view, our investigations shed light on potential pitfalls.
We uncover that prevailing late-fusion techniques often produce suboptimal
latent representations when compared to methods that train modalities in
isolation. We argue that this effect is largely due to the inadvertent
relaxation of the training objectives on individual modalities when using
fusion, a phenomenon others have termed modality laziness. We present the
nuanced point of view that while this relaxation can lead to certain modalities
failing to fully harness available task-relevant information, it also offers a
protective veil to noisy modalities, preventing them from overfitting to task-irrelevant
data. Our findings also show that unimodal concatenation (UniCat) and other
late-fusion ensembles of unimodal backbones, when paired with best-known
training techniques, exceed the current state-of-the-art performance across
several multimodal ReID benchmarks. By unveiling the double-edged sword of
"modality laziness", we motivate future research in balancing local modality
strengths with global representations. Comment: Accepted at NeurIPS 2023 UniReps, 9 pages, 4 tables.
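As a rough illustration of the unimodal-concatenation idea described in this abstract, the following sketch concatenates L2-normalised embeddings from separately trained per-modality backbones at retrieval time. The class name, backbone interface, and dimensions are assumptions for illustration, not the paper's implementation.

```python
# Sketch of UniCat-style late ensembling: each modality backbone is trained in
# isolation with its own ReID objective, and the final descriptor is simply the
# concatenation of the normalised unimodal embeddings.
import torch
import torch.nn.functional as F

class UniModalConcat(torch.nn.Module):
    def __init__(self, backbones):
        """backbones: dict mapping modality name -> separately trained encoder
        that returns a (batch, dim) embedding."""
        super().__init__()
        self.backbones = torch.nn.ModuleDict(backbones)

    @torch.no_grad()
    def forward(self, inputs):
        """inputs: dict mapping modality name -> batch of that modality."""
        feats = [F.normalize(self.backbones[m](x), dim=-1)
                 for m, x in inputs.items()]
        # Concatenation keeps each modality's representation intact, avoiding
        # the relaxed per-modality objectives ("modality laziness") that joint
        # fusion training can induce.
        return torch.cat(feats, dim=-1)
```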
One-loop effects on the T parameter in the universal custodial Randall-Sundrum model
EThOS - Electronic Theses Online Service, United Kingdom
GraFT: Gradual Fusion Transformer for Multimodal Re-Identification
Object Re-Identification (ReID) is pivotal in computer vision, witnessing an
escalating demand for adept multimodal representation learning. Current models,
although promising, reveal scalability limitations with increasing modalities
as they rely heavily on late fusion, which postpones the integration of
specific modality insights. Addressing this, we introduce the Gradual
Fusion Transformer (GraFT) for multimodal ReID. At its core, GraFT employs
learnable fusion tokens that guide self-attention across encoders, adeptly
capturing both modality-specific and object-specific features. Further
bolstering its efficacy, we introduce a novel training paradigm combined with
an augmented triplet loss, optimizing the ReID feature embedding space. We
demonstrate these enhancements through extensive ablation studies and show that
GraFT consistently surpasses prior methods on established multimodal ReID benchmarks.
Additionally, aiming for deployment versatility, we've integrated neural
network pruning into GraFT, offering a balance between model size and
performance. Comment: 3 Borderline Reviews at WACV, 8 pages, 5 figures, 8 tables.
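To give a feel for the learnable fusion-token mechanism described above, here is a hedged sketch: a small set of shared, learnable tokens is prepended to a modality's token sequence before a transformer encoder layer, so self-attention can mix modality-specific and shared information. Layer sizes, names, and the single-block structure are illustrative assumptions, not GraFT's actual architecture.

```python
# Sketch of a fusion-token block: learnable tokens attend jointly with one
# modality's tokens inside a standard transformer encoder layer.
import torch
import torch.nn as nn

class FusionTokenBlock(nn.Module):
    def __init__(self, dim=256, num_fusion_tokens=4, num_heads=8):
        super().__init__()
        self.fusion_tokens = nn.Parameter(torch.randn(1, num_fusion_tokens, dim))
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)

    def forward(self, modality_tokens):
        """modality_tokens: (batch, seq_len, dim) tokens from one modality."""
        b = modality_tokens.size(0)
        tokens = torch.cat(
            [self.fusion_tokens.expand(b, -1, -1), modality_tokens], dim=1)
        out = self.encoder(tokens)
        # Split back: the updated fusion tokens carry a cross-modal summary
        # that can be passed on to the next modality encoder or fusion stage.
        n = self.fusion_tokens.size(1)
        return out[:, :n], out[:, n:]
```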
Neural Architecture Codesign for Fast Bragg Peak Analysis
We develop an automated pipeline to streamline neural architecture codesign
for fast, real-time Bragg peak analysis in high-energy diffraction microscopy.
Traditional approaches, notably pseudo-Voigt fitting, demand significant
computational resources, prompting interest in deep learning models for more
efficient solutions. Our method employs neural architecture search and AutoML,
incorporating hardware costs into the search, to enhance these models and
discover more hardware-efficient neural architectures. Our results match the
performance of the previous state-of-the-art while achieving a 13x reduction in
bit operations. We show further speedup through model compression techniques
such as quantization-aware training and neural network
pruning. Additionally, our hierarchical search space provides greater
flexibility in optimization, which can easily extend to other tasks and
domains. Comment: To appear in the 3rd Annual AAAI Workshop on AI to Accelerate Science and Engineering (AI2ASE).
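As a minimal sketch of the hardware-aware search idea in this abstract, the loop below samples candidates from a small hierarchical search space and scores them by accuracy penalised by estimated bit operations (BOPs). The search space, `train_and_eval`, `estimate_bops`, and the penalty weight are placeholders assumed for illustration, not the pipeline described in the paper.

```python
# Sketch of a hardware-aware random architecture search: accuracy traded off
# against an estimated bit-operations (BOPs) cost.
import random

SEARCH_SPACE = {
    "num_blocks":  [1, 2, 3],
    "width":       [8, 16, 32, 64],
    "kernel_size": [3, 5],
    "weight_bits": [4, 8],
    "act_bits":    [4, 8],
}

def sample_candidate(space):
    """Draw one architecture/quantization configuration at random."""
    return {k: random.choice(v) for k, v in space.items()}

def search(train_and_eval, estimate_bops, n_trials=50, bops_weight=1e-9):
    """train_and_eval: short proxy training run returning validation accuracy.
    estimate_bops: hardware cost model returning estimated bit operations."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        cand = sample_candidate(SEARCH_SPACE)
        acc = train_and_eval(cand)
        bops = estimate_bops(cand)
        score = acc - bops_weight * bops  # accuracy vs. hardware-cost trade-off
        if score > best_score:
            best, best_score = cand, score
    return best
```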
The Chameleon Team
Project Leaders: Barbara Buffaloe, Katie Grantham Lough, Luke Wesselschmidt, Jacqueline McDermott-Kelty, Rashad Abdul-Majid, Bryan Glass, Heather Benson. Proposal for the 2008 project "The Chameleon Team." The University of Missouri-Columbia (MU) and Missouri University of Science and Technology (S&T) have teamed to develop an exciting energy conservation product. The Chameleon project will produce an artificially intelligent residential energy management system designed to blend into its environment. Upon successful completion of this project, the Chameleon home automation system will enable the average homeowner to conserve energy and save money simply by having the system installed in their home, without changing any of their daily activities. Because the total budget for the design, development, and implementation of Chameleon's prototypes is well over the budget for this funding opportunity, this proposal focuses on the educational partnerships required to develop the user interface for the system. This multi-university undergraduate student project incorporates engineering, architectural studies, and interior design students to develop a seamlessly integrated and highly functioning home automation system that requires no technical skills to operate. The underlying technology that enables the project is the IT capabilities of both universities, which will enable weekly video-conference design meetings as well as internet-accessible energy monitoring data available in real time. In addition, students on both campuses utilize computer programs specific to their disciplines and learn programs associated with other disciplines due to the multidisciplinary effort required. For example, S&T students use the computer program Maui Solar to estimate the size and placement of solar panels for home energy production. MU students often suggest solar energy production in their concept designs but do not know the details of how and where to place the modules. Working together with the computer program, students from both campuses are learning the importance of each discipline's core software programs. The Chameleon team's proposal for the Interdisciplinary Innovation Fund meets the requirement from the MU Information Technology Committee. The student-led team is working to make the UM system a leader in energy conservation through the use of cutting-edge technology and multidisciplinary design efforts that make the technology available to the average homeowner. MU Interdisciplinary Innovations Fund
Rounding of low serum creatinine levels and consequent impact on accuracy of bedside estimates of renal function in cancer patients
To compare glomerular filtration rate measured by technetium-99m ([Tc(99m)]) DTPA clearance with estimated creatinine clearance (CrCl) (Cockcroft and Gault (C&G) method) in patients with serum creatinine (Scr) levels 100 ml min(-1). This work indicates that when bedside estimates of renal function are calculated using the C&G formula, the actual Scr should be used first to estimate CrCl. If the resultant CrCl is <=100 ml min(-1), then the Scr should be rounded up to 0.06 mmol l(-1) and CrCl recalculated. Further assessment of this approach is warranted in a larger cohort of patients.
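As a worked illustration of the bedside rule proposed above, the sketch below applies the standard Cockcroft-Gault formula in SI units and recalculates with Scr rounded up to 0.06 mmol/l when the first estimate is <=100 ml/min. The gender constants (1.23 male, 1.04 female) are the commonly quoted SI-unit values, not figures taken from this paper, and the function names are illustrative.

```python
# Sketch of the proposed workflow: estimate CrCl with the actual Scr first,
# and only round a low Scr up to 0.06 mmol/l if the result is <= 100 ml/min.
def cockcroft_gault(age, weight_kg, scr_mmol_l, female=False):
    """Estimated creatinine clearance (ml/min), Cockcroft-Gault, SI units."""
    scr_umol_l = scr_mmol_l * 1000.0
    factor = 1.04 if female else 1.23  # commonly used SI-unit constants
    return (140 - age) * weight_kg * factor / scr_umol_l

def bedside_crcl(age, weight_kg, scr_mmol_l, female=False):
    """Use actual Scr first; if CrCl <= 100 ml/min, round Scr up to the
    0.06 mmol/l floor and recalculate, as suggested in the abstract."""
    crcl = cockcroft_gault(age, weight_kg, scr_mmol_l, female)
    if crcl <= 100:
        crcl = cockcroft_gault(age, weight_kg, max(scr_mmol_l, 0.06), female)
    return crcl
```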