    High Performance Data Acquisition and Analysis Routines for the Nab Experiment

    Probes of the Standard Model of particle physics are pushing further and further into the so-called “precision frontier”. In order to reach the precision goals of these experiments, a combination of elegant experimental design and robust data acquisition and analysis is required. Two experiments that embody this philosophy are the Nab and Calcium-45 experiments. These experiments probe our understanding of the weak interaction by examining the beta decay of the free neutron and of Calcium-45, respectively. They both aim to measure correlation parameters in the neutron beta decay alphabet, a and b. The parameter a, the electron-neutrino correlation coefficient, is sensitive to λ, the ratio of the axial-vector and vector coupling strengths in the decay of the free neutron. This parameter λ, in tandem with a precision measurement of the neutron lifetime τ, provides a measurement of the matrix element Vud from the CKM quark mixing matrix. The CKM matrix, as a rotation matrix, must be unitary. Probes of Vud and Vus in recent years have revealed tension in this unitarity at the 2.2σ level. The measurement of a via the decay of free cold neutrons serves as an additional method of extracting Vud that is sensitive to a different set of systematic effects and is therefore an excellent probe of the source of the deviation from unitarity. The parameter b, the Fierz interference term, appears as a distortion in the measured electron energy spectra from beta decay. This parameter, if non-zero, would indicate the existence of scalar and/or tensor couplings in the weak interaction, which according to the Standard Model is purely vector minus axial-vector. This is therefore a search for physics beyond the Standard Model (BSM). The Nab and Calcium-45 experiments probe these parameters with a combination of elegant experimental design and brute-force collection and analysis of large amounts of digitized detector data. These datasets, particularly in the case of the Nab experiment, are anticipated to span multiple petabytes and will require high-performance online analysis and precision offline analysis routines in order to reach the experimental goals. Of particular note are the requirements for better than 3 keV energy resolution and an understanding of the uncertainty in the mean timing bias for the detected particles to within 300 ps. Presented in this dissertation are an overview of the experiments and their design, a description of the data acquisition systems and analysis routines that have been developed to support the experiments, and a discussion of the data analysis performed for the Calcium-45 experiment.
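    As a rough illustration of how these parameters enter the decay (a sketch using textbook V−A formulas, not the Nab analysis itself), the correlation coefficient relates to λ as a = (1 − λ²)/(1 + 3λ²), and a non-zero Fierz term b rescales the electron spectrum by a factor (1 + b m_e/E_e). The numerical values below are purely illustrative:

    import numpy as np

    M_E = 0.511  # electron rest energy in MeV

    def a_from_lambda(lam):
        """Electron-neutrino correlation coefficient a for lambda = gA/gV (textbook V-A result)."""
        return (1.0 - lam**2) / (1.0 + 3.0 * lam**2)

    def fierz_factor(E_e, b):
        """Multiplicative distortion of the beta spectrum from a non-zero Fierz term b.
        E_e is the total electron energy in MeV; the Standard Model corresponds to b = 0."""
        return 1.0 + b * M_E / E_e

    # Illustrative values only: lambda close to current world averages, a hypothetical b.
    lam = -1.27
    print(f"a(lambda={lam}) = {a_from_lambda(lam):+.4f}")
    print(f"Fierz factor at E_e = 1 MeV for b = 0.01: {fierz_factor(1.0, 0.01):.4f}")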

    An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to the expense of waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first-ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all l <= 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or computing the coalescing-compact-binary detection statistic.
    Comment: 10 pages, 3 figures, and 1 table
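    To sketch why a linear surrogate makes the likelihood closed-form (an illustrative toy, not the production pipeline described above): if the template is expanded as h(θ) = Σ_j c_j(θ) e_j over a fixed basis, the Gaussian-noise log-likelihood ln L ∝ ⟨d|h⟩ − ½⟨h|h⟩ reduces to inner products of the data and basis elements that can be precomputed once. All names and the toy data below are assumptions for illustration.

    import numpy as np

    def precompute_overlaps(data, basis, weights):
        """Precompute <d|e_j> and <e_j|e_k> once; 'weights' stands in for the
        noise-weighted inner product (e.g. 1/PSD per frequency bin)."""
        d_e = np.array([np.sum(weights * data * np.conj(e)).real for e in basis])
        e_e = np.array([[np.sum(weights * ei * np.conj(ek)).real for ek in basis]
                        for ei in basis])
        return d_e, e_e

    def log_likelihood(coeffs, d_e, e_e):
        """Closed-form log-likelihood (up to a constant) for surrogate coefficients
        c_j(theta), with h = sum_j c_j e_j: lnL = <d|h> - 0.5 <h|h>."""
        return coeffs @ d_e - 0.5 * coeffs @ e_e @ coeffs

    # Toy usage with a two-element basis and synthetic 'data'.
    rng = np.random.default_rng(0)
    freqs = np.linspace(20.0, 512.0, 200)
    basis = [np.exp(2j * np.pi * freqs * t0) for t0 in (0.01, 0.02)]
    data = 0.7 * basis[0] + rng.normal(size=freqs.size)
    weights = np.ones_like(freqs)  # flat noise weighting for the toy example
    d_e, e_e = precompute_overlaps(data, basis, weights)
    print(log_likelihood(np.array([0.7, 0.0]), d_e, e_e))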

    Scaling full seismic waveform inversions

    The main goal of this research study is to scale full seismic waveform inversions using the adjoint-state method to the data volumes that are nowadays available in seismology. Practical issues hinder the routine application of this method, even though it is, to a certain extent, theoretically well understood. To a large part, this comes down to outdated or outright missing tools and ways to automate the highly iterative procedure in a reliable way. This thesis tackles these issues in three successive stages. It first introduces a modern and properly designed data processing framework sitting at the very core of all the subsequent developments. The ObsPy toolkit is a Python library providing a bridge for seismology into the scientific Python ecosystem and providing seismologists with effortless I/O and a powerful signal-processing library, among other things. The following chapter deals with a framework designed to handle the specific data management and organization issues arising in full seismic waveform inversions, the Large-scale Seismic Inversion Framework. It has been created to orchestrate the various pieces of data accruing in the course of an iterative waveform inversion. Then, the Adaptable Seismic Data Format, a new, self-describing, and scalable data format for seismology, is introduced, along with the rationale for why it is needed for full waveform inversions in particular and seismology in general. Finally, these developments are put into service to construct a novel full seismic waveform inversion model for elastic subsurface structure beneath the North American continent and the northern Atlantic, extending well into Europe. The spectral element method is used for the forward and adjoint simulations, coupled with windowed time-frequency phase misfit measurements. Later iterations use 72 events, all occurring after the USArray project commenced, resulting in approximately 150,000 three-component recordings that are inverted. 20 L-BFGS iterations yield a model that can produce complete seismograms in the period range between 30 and 120 seconds that compare favorably to observed data.
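    Since ObsPy is the central tool here, a minimal usage sketch follows: reading a waveform file and band-pass filtering it to the 30-120 s period band used in the inversion. The file name is a placeholder; obspy.read, Stream.detrend, Stream.taper, and Stream.filter are part of ObsPy's public API.

    import obspy

    # Read any supported waveform format (MiniSEED, SAC, ...); the path is a placeholder.
    st = obspy.read("example_waveform.mseed")

    # Basic pre-processing: remove a linear trend and taper before filtering.
    st.detrend("linear")
    st.taper(max_percentage=0.05)

    # Band-pass to the 30-120 s period band used in the later inversion iterations.
    st.filter("bandpass", freqmin=1.0 / 120.0, freqmax=1.0 / 30.0, corners=4, zerophase=True)

    print(st)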

    Space Station communications and tracking systems modeling and RF link simulation

    In this final report, the effort spent on Space Station Communications and Tracking System Modeling and RF Link Simulation is described in detail. The effort is divided into three main parts: frequency division multiple access (FDMA) system simulation modeling and software implementation; a study on the design and evaluation of a functional computerized RF link simulation/analysis system for Space Station; and a study on the design and evaluation of the simulation system architecture. This report documents the results of these studies. In addition, a separate User's Manual on the Space Communications Simulation System (SCSS) (Version 1) documents the software developed for the Space Station FDMA communications system simulation. The final report, the SCSS User's Manual, and the software located on the NASA JSC system analysis division's VAX 750 computer together serve as the deliverables from LinCom for this project effort.

    Containing Analog Data Deluge at Edge through Frequency-Domain Compression in Collaborative Compute-in-Memory Networks

    Edge computing is a promising solution for handling high-dimensional, multispectral analog data from sensors and IoT devices for applications such as autonomous drones. However, edge devices' limited storage and computing resources make it challenging to perform complex predictive modeling at the edge. Compute-in-memory (CiM) has emerged as a principal paradigm to minimize energy for deep learning-based inference at the edge. Nevertheless, integrating storage and processing complicates memory cells and/or memory peripherals, essentially trading off area efficiency for energy efficiency. This paper proposes a novel solution to improve area efficiency in deep learning inference tasks. The proposed method employs two key strategies. First, a frequency-domain learning approach uses binarized Walsh-Hadamard transforms, reducing the parameters needed by the DNN (by 87% in MobileNetV2) and enabling compute-in-SRAM, which better utilizes parallelism during inference. Second, a memory-immersed collaborative digitization method is employed among CiM arrays to reduce the area overheads of conventional ADCs. This allows more CiM arrays in limited-footprint designs, leading to better parallelism and reduced external memory accesses. Different networking configurations are explored, where Flash, SA, and their hybrid digitization steps can be implemented using the memory-immersed scheme. The results are demonstrated using a 65 nm CMOS test chip, exhibiting significant area and energy savings compared to a 40 nm-node 5-bit SAR ADC and a 5-bit Flash ADC. By processing analog data more efficiently, it is possible to selectively retain valuable data from sensors and alleviate the challenges posed by the analog data deluge.
    Comment: arXiv admin note: text overlap with arXiv:2307.03863, arXiv:2309.0177
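    To make the frequency-domain idea concrete, below is a small NumPy sketch (an illustration under stated assumptions, not the paper's implementation) of a fast Walsh-Hadamard transform and a layer that replaces a dense weight matrix with the fixed transform plus a trainable per-channel scale, which is one common way such transforms cut parameter counts:

    import numpy as np

    def fwht(x):
        """Fast Walsh-Hadamard transform of a length-2^k vector (unnormalized)."""
        x = np.asarray(x, dtype=np.float64).copy()
        n = x.shape[0]
        h = 1
        while h < n:
            for i in range(0, n, 2 * h):
                a = x[i:i + h].copy()
                b = x[i + h:i + 2 * h].copy()
                x[i:i + h] = a + b
                x[i + h:i + 2 * h] = a - b
            h *= 2
        return x

    def wht_layer(x, scales):
        """Hypothetical 'frequency-domain' layer: fixed WHT followed by a learned
        per-channel scale. Only len(x) parameters instead of len(x)**2 for a dense layer."""
        return scales * fwht(x)

    # Toy usage on an 8-dimensional input.
    rng = np.random.default_rng(1)
    x = rng.normal(size=8)
    scales = rng.normal(size=8)  # these would be the trainable parameters
    print(wht_layer(x, scales))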

    Bitcoding the brain. Integration and organization of massive parallel neuronal data.
