151 research outputs found

    Design and Evaluation of a Discrete Wavelet Transform Based Multi-Signal Receiver

    General-purpose receivers today are designed with a broad bandwidth so that they can accept a wide range of signal frequencies. These receivers usually accept one signal along with any interference that accompanies it. To increase the signal detection capabilities of the wideband receiver, a design for a receiver that can detect two signals is needed. One requirement for this receiver is that the second, weaker signal be processed in a timely manner so that the receiver can recognize it. To address this, a module was developed using wavelet-based techniques to remove spurs from the incoming signals and allow easier detection. The focus on wavelets stems from the way wavelets decompose signals into portions (called resolutions) that make it easier to judge the importance of detail. Utilizing the multi-resolution attributes of the discrete wavelet transform, signal spurs can be removed. Removing this noise increases the two-signal dynamic range of the system; the module is applied to multiple receiver systems for comparison of performance. The system was originally implemented in C as well as MATLAB and is now being implemented in VHDL, with simulations performed to verify functionality.
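
    A minimal sketch of the kind of multi-resolution spur suppression described above, assuming the PyWavelets (pywt) package and soft thresholding of the detail coefficients; the wavelet family, decomposition depth, and threshold rule are illustrative choices, not taken from the paper (whose implementations are in C, MATLAB, and VHDL).

    import numpy as np
    import pywt

    def suppress_spurs(signal, wavelet="db4", level=4, frac=0.1):
        """Decompose, soft-threshold the detail (fine-resolution) coefficients, reconstruct."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # coeffs[0] is the coarse approximation; coeffs[1:] hold the detail resolutions.
        kept = [coeffs[0]] + [pywt.threshold(c, frac * np.max(np.abs(c)), mode="soft")
                              for c in coeffs[1:]]
        return pywt.waverec(kept, wavelet)

    # Example: a strong tone, a weak second tone, and additive noise/spurs.
    t = np.linspace(0, 1, 1024, endpoint=False)
    x = (np.sin(2 * np.pi * 50 * t)
         + 0.05 * np.sin(2 * np.pi * 120 * t)
         + 0.02 * np.random.randn(t.size))
    y = suppress_spurs(x)

    Thresholding only the fine resolutions is one way to exploit the multi-resolution structure; the paper's actual spur-removal rule may differ.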

    Foundation Model's Embedded Representations May Detect Distribution Shift

    Distribution shifts between train and test datasets obscure our ability to understand the generalization capacity of neural network models. This topic is especially relevant given the success of pre-trained foundation models as starting points for transfer learning (TL) models across tasks and contexts. We present a case study of TL from a pre-trained GPT-2 model onto the Sentiment140 dataset for sentiment classification. We show that Sentiment140's test dataset M is not sampled from the same distribution as the training dataset P, and hence training on P and measuring performance on M does not actually account for the model's generalization on sentiment classification. Comment: 14 pages, 8 figures, 5 tables.
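
    A hedged sketch of how embedded representations might be used to probe for such a shift, assuming mean-pooled GPT-2 hidden states (via Hugging Face transformers) and a linear-probe two-sample test; the paper's actual statistic and pipeline are not specified in the abstract.

    import numpy as np
    import torch
    from transformers import GPT2Tokenizer, GPT2Model
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = GPT2Model.from_pretrained("gpt2").eval()

    def embed(texts):
        """Mean-pool the final hidden state over tokens for each input text."""
        batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state        # (batch, seq, dim)
        mask = batch["attention_mask"].unsqueeze(-1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

    def shift_score(train_texts, test_texts):
        """Accuracy of a linear probe distinguishing train from test; ~0.5 suggests no detectable shift."""
        X = np.vstack([embed(train_texts), embed(test_texts)])
        y = np.array([0] * len(train_texts) + [1] * len(test_texts))
        return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()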

    Coverage and error models of protein-protein interaction data by directed graph analysis

    Directed graph and multinomial error models were used to assess and characterize the error statistics in all published large-scale datasets for Saccharomyces cerevisiae.
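
    One common way to set up a directed-graph view of bait-prey data (a hedged sketch, not necessarily the paper's pipeline): build a directed graph of bait-to-prey detections and examine reciprocity among pairs where both partners served as baits, since unreciprocated edges are informative about error rates. The protein names below are illustrative.

    import networkx as nx

    # Hypothetical bait -> prey detections (illustrative only).
    detections = [("YAL001C", "YBR123W"), ("YBR123W", "YAL001C"),
                  ("YAL001C", "YCR042C"), ("YDL140C", "YAL001C")]
    baits = {b for b, _ in detections}

    G = nx.DiGraph()
    G.add_edges_from(detections)

    # Only edges between two proteins that were both used as baits are mutually testable.
    testable = [(u, v) for u, v in G.edges() if u in baits and v in baits]
    reciprocated = sum(1 for u, v in testable if G.has_edge(v, u))
    print(f"reciprocated {reciprocated} of {len(testable)} mutually testable edges")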

    Efficient kernel surrogates for neural network-based regression

    Despite their immense promise in performing a variety of learning tasks, a theoretical understanding of the effectiveness and limitations of Deep Neural Networks (DNNs) has so far eluded practitioners. This is partly due to the inability to determine the closed forms of the learned functions, making it harder to assess their precise dependence on the training data and to study their generalization properties on unseen datasets. Recent work has shown that randomly initialized DNNs in the infinite width limit converge to kernel machines relying on a Neural Tangent Kernel (NTK) with known closed form. These results suggest, and experimental evidence corroborates, that empirical kernel machines can also act as surrogates for finite width DNNs. The high computational cost of assembling the full NTK, however, makes this approach infeasible in practice, motivating the need for low-cost approximations. In the current work, we study the performance of the Conjugate Kernel (CK), an efficient approximation to the NTK that has been observed to yield fairly similar results. For the regression problem of smooth functions and classification using logistic regression, we show that the CK performance is only marginally worse than that of the NTK and, in certain cases, superior. In particular, we establish bounds for the relative test losses, verify them with numerical tests, and identify the regularity of the kernel as the key determinant of performance. In addition to providing a theoretical grounding for using CKs instead of NTKs, our framework provides insights into understanding the robustness of the various approximants and suggests a recipe for improving DNN accuracy inexpensively. We present a demonstration of this on the foundation model GPT-2 by comparing its performance on a classification task using a conventional approach and our prescription. Comment: 29 pages. Software used to reach the results is available upon request; approved for release by Pacific Northwest National Laboratory.
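
    A hedged sketch of an empirical Conjugate Kernel surrogate, taking the CK as the Gram matrix of a randomly initialized ReLU network's last-hidden-layer features and using it in kernel ridge regression; the widths, depth, and ridge parameter are illustrative assumptions, not the paper's configuration.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_relu_net(d_in, widths=(512, 512)):
        """Draw the weights of a random ReLU network once; return its feature map."""
        dims = [d_in] + list(widths)
        Ws = [rng.normal(0.0, 1.0 / np.sqrt(dims[i]), size=(dims[i], dims[i + 1]))
              for i in range(len(widths))]
        def phi(X):
            H = X
            for W in Ws:
                H = np.maximum(H @ W, 0.0)      # ReLU hidden layers at initialization
            return H
        return phi

    def ck_regression(phi, X_train, y_train, X_test, ridge=1e-3):
        """Kernel ridge regression with the empirical conjugate kernel K = phi(X) phi(X)^T."""
        Phi_tr, Phi_te = phi(X_train), phi(X_test)
        K = Phi_tr @ Phi_tr.T
        alpha = np.linalg.solve(K + ridge * np.eye(K.shape[0]), y_train)
        return (Phi_te @ Phi_tr.T) @ alpha

    # Toy regression of a smooth target function.
    X_tr = rng.uniform(-1, 1, size=(200, 2))
    y_tr = np.sin(np.pi * X_tr[:, 0]) * np.cos(np.pi * X_tr[:, 1])
    X_te = rng.uniform(-1, 1, size=(50, 2))
    phi = random_relu_net(d_in=2)
    pred = ck_regression(phi, X_tr, y_tr, X_te)

    The point of the surrogate is that this Gram matrix is far cheaper to assemble than the full NTK while, per the abstract, giving only marginally worse (and sometimes better) test performance.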

    Minibatching Offers Improved Generalization Performance for Second Order Optimizers

    Training deep neural networks (DNNs) used in modern machine learning is computationally expensive. Machine learning scientists, therefore, rely on stochastic first-order methods for training, coupled with significant hand-tuning, to obtain good performance. To better understand the performance variability of different stochastic algorithms, including second-order methods, we conduct an empirical study that treats performance as a response variable across multiple training sessions of the same model. Using 2-factor Analysis of Variance (ANOVA) with interactions, we show that the batch size used during training has a statistically significant effect on the peak accuracy of the methods, and that full-batch training largely performed the worst. In addition, we found that second-order optimizers (SOOs) generally exhibited significantly lower variance at specific batch sizes, suggesting they may require less hyperparameter tuning, leading to a reduced overall time to solution for model training. Comment: 14 pages, 6 figures, 5 tables.
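
    A hedged sketch of the kind of two-factor ANOVA with interaction described above, using statsmodels on a hypothetical results table; the columns optimizer, batch_size, and peak_accuracy and the generated values are illustrative, not the paper's data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    # Hypothetical repeated training sessions crossed over optimizer and batch size,
    # with peak test accuracy as the response variable.
    runs = pd.DataFrame({
        "optimizer":  np.repeat(["sgd", "adam", "lbfgs"], 30),
        "batch_size": np.tile(np.repeat(["64", "256", "full"], 10), 3),
    })
    runs["peak_accuracy"] = 0.9 + rng.normal(0, 0.01, len(runs))

    # Two-way ANOVA with interaction: response ~ factor A + factor B + A:B.
    model = ols("peak_accuracy ~ C(optimizer) * C(batch_size)", data=runs).fit()
    print(sm.stats.anova_lm(model, typ=2))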

    Computational and Systems Biology Advances to Enable Bioagent-Agnostic Signatures

    Enumerated threat agent lists have long driven biodefense priorities. The global SARS-CoV-2 pandemic demonstrated the limitations of searching for known threat agents as compared to a more agnostic approach. Recent technological advances are enabling agent-agnostic biodefense, especially through the integration of multi-modal observations of host-pathogen interactions directed by a human immunological model. Although well-developed technical assays exist for many aspects of human-pathogen interaction, the analytic methods and pipelines to combine and holistically interpret the results of such assays are immature and require further investment to exploit new technologies. In this manuscript, we discuss potential immunologically based bioagent-agnostic approaches and the computational tool gaps the community should prioritize filling.

    A Colorimetric Chemosensor Based on a Nozoe Azulene That Detects Fluoride in Aqueous/Alcoholic Media

    Colorimetry is an advantageous method for detecting fluoride in drinking water in resource-limited contexts, e.g., in parts of the developing world where excess fluoride intake leads to harmful health effects. Here we report a selective colorimetric chemosensor for fluoride that employs an azulene as the reporter motif and a pinacolborane as the receptor motif. The chemosensor, NAz-6-Bpin, is prepared using the Nozoe azulene synthesis, which allows for its rapid and low-cost preparation. The chemosensor gives a visually observable response to fluoride both in pure organic solvent and in water/alcohol binary solvent mixtures.

    Focused Ion Beam Microfabrication

    Contains an introduction, reports on x research projects, and a list of publications.
    Defense Advanced Research Projects Agency/U.S. Army Research Office Grant DAAL-03-92-G-0217
    National Science Foundation Grant ECS 89-21728
    Defense Advanced Research Projects Agency/U.S. Army Research Office (ASSERT Program) Grant DAAL03-92-G-0305
    Semiconductor Research Corporation
    National Science Foundation Grant DMR 92-02633
    U.S. Army Research Office Grant DAAL03-90-G-0223
    U.S. Navy - Naval Research Laboratory/Micrion Contract M0877