
    Development and Application of the Spherical Harmonic Veto Definer for Gravitational-Wave Transient Search

    The rapid analysis of gravitational-wave data is not trivial for many reasons, such as the non-Gaussian, non-stationary nature of LIGO detector noise and the lack of exhaustive waveform models. Non-Gaussian, non-stationary noise and instrumental artifacts are known as ‘glitches’. X-Pipeline Spherical Radiometer (X-SphRad) is a software package designed to perform autonomous searches for un-modelled gravitational-wave bursts. X-SphRad uses an approach based on spherical radiometry that transforms time-series data streams into the spherical harmonic domain. Spherical harmonic coefficients show potential for discriminating glitches from signals. For my Ph.D. thesis, I evaluated and implemented a tool for glitch rejection called the Spherical Harmonic Veto Definer (SHaVeD). SHaVeD is a Matlab script that loads the spherical harmonic coefficients computed by X-SphRad and computes statistics that determine a threshold to apply. The threshold is used to identify the GPS time of every glitch and create a cut of one second around it. SHaVeD saves this information in a two-column file, where the first column is the GPS start time of the cut and the second is the end time. X-SphRad can include SHaVeD as a data-quality veto for glitches. The tool was tested with X-SphRad and the coherent WaveBurst (cWB) pipeline over the O2 observing run. Results show that including SHaVeD in the analysis could allow some of the thresholds used in this type of search to be lowered. Tests show that SHaVeD reduced the amplitude of the loudest false event by a factor of 3, meaning that it rejected false events in a volume 9 times greater than usual.
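
    SHaVeD itself is a Matlab script; the sketch below is a hypothetical Python re-implementation of the veto-file logic described above, for illustration only. The thresholding statistic (median plus a fixed number of standard deviations of the coefficient energy), the parameter values, and the function names are assumptions, not the actual SHaVeD implementation.

```python
# Hypothetical sketch of the SHaVeD veto logic described above (Python, not
# the original Matlab). The thresholding statistic is an assumption.
import numpy as np

def build_veto_segments(gps_times, sh_coefficients, n_sigma=5.0, half_window=0.5):
    """Return (start, stop) GPS pairs: one-second cuts around loud samples.

    gps_times       : 1-D array of GPS times, one per analysis sample
    sh_coefficients : 2-D array (samples x coefficients) from X-SphRad
    """
    # Total spherical-harmonic "energy" per sample (assumed discriminant).
    energy = np.sum(np.abs(sh_coefficients) ** 2, axis=1)

    # Simple robust threshold: median plus n_sigma standard deviations.
    threshold = np.median(energy) + n_sigma * np.std(energy)

    # One-second cut (+/- 0.5 s) around every sample exceeding the threshold.
    glitch_times = gps_times[energy > threshold]
    return [(t - half_window, t + half_window) for t in glitch_times]

def write_veto_file(segments, path="shaved_veto_segments.txt"):
    # Two-column file: start GPS time, stop GPS time (as described above).
    np.savetxt(path, np.asarray(segments), fmt="%.6f")
```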

    Present and Future of Gravitational Wave Astronomy

    The first detection on Earth of a gravitational wave signal from the coalescence of a binary black hole system in 2015 established a new era in astronomy, allowing the scientific community to observe the Universe with a new form of radiation for the first time. More than five years later, many more gravitational wave signals have been detected, including the first binary neutron star coalescence in coincidence with a gamma-ray burst and a kilonova observation. The field of gravitational wave astronomy is rapidly evolving, making it difficult to keep up with the pace of new detector designs, discoveries, and astrophysical results. This Special Issue is, therefore, intended as a review of the current status and future directions of the field from the perspective of detector technology, data analysis, and the astrophysical implications of these discoveries. Rather than presenting new results, the articles collected in this issue will serve as a reference and an introduction to the field. This Special Issue will include reviews of the basic properties of gravitational wave signals; the detectors that are currently operating and the main sources of noise that limit their sensitivity; planned upgrades of the detectors in the short and long term; spaceborne detectors; data analysis of the gravitational wave detector output, focusing on the main classes of detected and expected signals; and the implications of current and future discoveries for our understanding of astrophysics and cosmology.

    Towards a holographic description of pulsar glitch mechanism

    This work aims to review the progress in understanding the underlying physics of pulsar glitches: beginning with a pedagogical development of the subject and eventually motivating the use of AdS/CFT techniques in studying a certain class of condensed matter systems. The foundation of this work is the Gross-Pitaevskii (GP) model of superfluidity applied to the interior matter of neutron stars, where the condensate wave function acts as the order parameter of the macroscopic coherence theory. The excitation modes of the field equations are found to be solitonic vortices, which provide a theoretical basis for plausible theories of pulsar glitches involving vortex dynamics. The second major thrust of this thesis is a review of the application of AdS/CFT to strongly coupled condensed matter systems, with special attention to models of holographic superfluidity that admit vortex-like solutions. The identification of the characteristic free-energy configuration of global vortices in the AdS/CFT prescription motivates its use in studying the pulsar glitch mechanism. The last part of this work draws the conclusions of the review and presents the current state of progress of the field, with its extensive domain of purview and open lines of inquiry.
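
    For orientation, the Gross-Pitaevskii model mentioned above can be summarized by its standard textbook equation; the form below is common knowledge and is not quoted from the thesis itself.

```latex
% Time-dependent Gross-Pitaevskii equation (standard textbook form).
% \psi is the condensate wave function (the order parameter), m the boson mass,
% V_{\mathrm{ext}} an external potential, and g the contact-interaction strength.
i\hbar \, \frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left( -\frac{\hbar^{2}}{2m}\,\nabla^{2}
           + V_{\mathrm{ext}}(\mathbf{r})
           + g\,\lvert \psi(\mathbf{r},t) \rvert^{2} \right) \psi(\mathbf{r},t)
```

    Quantized vortices are solutions in which the phase of the condensate wave function winds by an integer multiple of 2π around a line where its magnitude vanishes; these are the solitonic vortex configurations invoked in the glitch models discussed above.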

    Analysis and Design of Resilient VLSI Circuits

    The reliable operation of Integrated Circuits (ICs) has become increasingly difficult to achieve in the deep sub-micron (DSM) era. With continuously decreasing device feature sizes, combined with lower supply voltages and higher operating frequencies, the noise immunity of VLSI circuits is decreasing alarmingly. Thus, VLSI circuits are becoming more vulnerable to noise effects such as crosstalk, power supply variations and radiation-induced soft errors. Among these noise sources, soft errors (errors caused by radiation particle strikes) have become an increasingly troublesome issue for memory arrays as well as combinational logic circuits. Also, in the DSM era, process variations are increasing at an alarming rate, making it more difficult to design reliable VLSI circuits. Hence, it is important to efficiently design robust VLSI circuits that are resilient to radiation particle strikes and process variations. This dissertation presents several analysis and design techniques with the goal of realizing VLSI circuits which are tolerant to radiation particle strikes and process variations. The dissertation consists of two parts. The first part proposes four analysis and two design approaches to address radiation particle strikes. The analysis techniques for radiation particle strikes include: an approach to analytically determine the pulse width and the pulse shape of a radiation-induced voltage glitch in combinational circuits, a technique to model the dynamic stability of SRAMs, and a 3D device-level analysis of the radiation tolerance of voltage-scaled circuits. Experimental results demonstrate that the proposed techniques for analyzing radiation particle strikes in combinational circuits and SRAMs are fast and accurate compared to SPICE. Therefore, these analysis approaches can be easily integrated into a VLSI design flow to analyze the radiation tolerance of such circuits, and to harden them early in the design flow. From the 3D device-level analysis of the radiation tolerance of voltage-scaled circuits, several non-intuitive observations are made and, correspondingly, a set of guidelines is proposed which are important to consider in realizing radiation-hardened circuits. Two circuit-level hardening approaches are also presented to harden combinational circuits against a radiation particle strike. These hardening approaches significantly improve the tolerance of combinational circuits against low and very high energy radiation particle strikes, respectively, with modest area and delay overheads. The second part of this dissertation addresses process variations. A technique is developed to perform sensitizable statistical timing analysis of a circuit, and thereby improve the accuracy of timing analysis under process variations. Experimental results demonstrate that this technique is able to significantly reduce the pessimism due to two sources of inaccuracy which plague current statistical static timing analysis (SSTA) tools. Two design approaches are also proposed to improve the process variation tolerance of combinational circuits and of voltage level shifters (which are used in circuits with multiple interacting power supply domains), respectively. The variation-tolerant design approach for combinational circuits significantly improves the resilience of these circuits to random process variations, with a reduction in the worst-case delay and a low area penalty. The proposed voltage level shifter is faster, requires lower dynamic power and area, has lower leakage currents, and is more tolerant to process variations, compared to the best known previous approach. In summary, this dissertation presents several analysis and design techniques which significantly augment the existing work in the area of resilient VLSI circuit design.
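
    The abstract above does not reproduce the dissertation's analytical glitch models. As a point of reference, the sketch below evaluates the classic double-exponential current-pulse model that is commonly used to approximate a radiation particle strike at a circuit node, together with the first-order voltage glitch it induces on a node with lumped capacitance and a restoring drive resistance. The model choice, parameter values, and function names are illustrative assumptions, not the dissertation's actual technique.

```python
# Illustrative sketch only: textbook double-exponential model of the current
# injected by a particle strike, and the first-order RC response of the
# struck node. Parameter values are placeholders, not from the dissertation.
import numpy as np

def strike_current(t, q_coll=100e-15, tau_a=200e-12, tau_b=50e-12):
    """Double-exponential strike current [A]; q_coll is the collected charge [C]."""
    return (q_coll / (tau_a - tau_b)) * (np.exp(-t / tau_a) - np.exp(-t / tau_b))

def node_voltage_glitch(t, r_drive=5e3, c_node=10e-15, **pulse_kw):
    """Explicit-Euler solve of C dV/dt = I_strike(t) - V/R for the node voltage."""
    v = np.zeros_like(t)
    dt = t[1] - t[0]
    for k in range(1, len(t)):
        i_in = strike_current(t[k - 1], **pulse_kw)
        v[k] = v[k - 1] + dt * (i_in - v[k - 1] / r_drive) / c_node
    return v

t = np.linspace(0.0, 2e-9, 2001)          # 2 ns window, 1 ps steps
glitch = node_voltage_glitch(t)
print(f"peak glitch amplitude ~ {glitch.max():.3f} V")
```

    The pulse width and shape of such a glitch depend on the collected charge, the pulse time constants, and the drive strength and capacitance of the struck node, which is the kind of dependence the analytical models referenced above capture without resorting to SPICE.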

    Design, Analysis and Test of Logic Circuits under Uncertainty.

    Integrated circuits are increasingly susceptible to uncertainty caused by soft errors, inherently probabilistic devices, and manufacturing variability. As device technologies scale, these effects become detrimental to circuit reliability. In order to address this, we develop methods for analyzing, designing, and testing circuits subject to probabilistic effects. Our main contributions are: 1) a fast soft-error rate (SER) analyzer that uses functional-simulation signatures to capture error effects, 2) novel design techniques that improve reliability with little area and performance overhead, 3) a matrix-based reliability-analysis framework that captures many types of probabilistic faults, and 4) test-generation/compaction methods aimed at probabilistic faults in logic circuits. SER analysis must account for the main error-masking mechanisms in ICs: logic, timing, and electrical masking. We relate logic masking to node testability of the circuit and utilize functional-simulation signatures, i.e., partial truth tables, to efficiently compute testability (signal probability and observability). To account for timing masking, we compute error-latching windows (ELWs) from timing analysis information. Electrical masking is incorporated into our estimates through derating factors for gate error probabilities. The SER of a circuit is computed by combining the effects of all three masking mechanisms within our SER analyzer, called AnSER. Using AnSER, we develop several low-overhead techniques that increase reliability, including: 1) an SER-aware design method that uses redundancy already present within the circuit, 2) a technique that resynthesizes small logic windows to improve area and reliability, and 3) a post-placement gate-relocation technique that increases timing masking by decreasing ELWs. We develop the probabilistic transfer matrix (PTM) modeling framework to analyze effects beyond soft errors. PTMs are compressed into algebraic decision diagrams (ADDs) to improve computational efficiency. Several ADD algorithms are developed to extract reliability and error-susceptibility information from PTMs representing circuits. We propose new algorithms for circuit testing under probabilistic faults, which require a reformulation of existing test techniques. For instance, a test vector may need to be repeated many times to detect a fault. Also, different vectors detect the same fault with different probabilities. We develop test generation methods that account for these differences, and integer linear programming (ILP) formulations to optimize test sets. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61584/1/smita_1.pd
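
    The abstract describes estimating logic masking from functional-simulation signatures, i.e., partial truth tables. The sketch below illustrates that general idea on a toy AND/OR netlist in Python: a signature is a bitvector of node values under random input vectors, signal probability is the fraction of 1s, and observability can be estimated by complementing a node's signature and counting how often an output changes. The netlist format and function names are assumptions for illustration, not AnSER's actual implementation.

```python
# Illustrative sketch of signature-based testability estimation (not AnSER).
# A "signature" is a node's value under each of N random input vectors,
# packed into a Python integer used as a bitvector.
import random

N = 1 << 12                      # number of random simulation vectors
MASK = (1 << N) - 1

def random_signature():
    return random.getrandbits(N)

def simulate(netlist, input_sigs, force=None):
    """netlist: list of (name, op, fanin_names) in topological order.
    force: optional {name: signature} overriding a node's computed value."""
    sigs = dict(input_sigs)
    if force:
        sigs.update({k: v for k, v in force.items() if k in sigs})
    for name, op, fanins in netlist:
        a, b = sigs[fanins[0]], sigs[fanins[1]]
        val = (a & b) if op == "AND" else (a | b)
        sigs[name] = force.get(name, val) if force else val
    return sigs

def signal_probability(sig):
    return bin(sig).count("1") / N

def observability(netlist, input_sigs, node, outputs):
    """Fraction of vectors for which complementing `node` flips some output."""
    base = simulate(netlist, input_sigs)
    forced = simulate(netlist, input_sigs, force={node: base[node] ^ MASK})
    diff = 0
    for out in outputs:
        diff |= base[out] ^ forced[out]
    return bin(diff).count("1") / N

# Toy circuit: y = (x1 AND x2) OR x3
inputs = {"x1": random_signature(), "x2": random_signature(), "x3": random_signature()}
netlist = [("n1", "AND", ("x1", "x2")), ("y", "OR", ("n1", "x3"))]
sigs = simulate(netlist, inputs)
print("P(n1=1) =", signal_probability(sigs["n1"]))   # ~0.25
print("obs(n1) =", observability(netlist, inputs, "n1", ["y"]))   # ~0.5
```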

    Development of a Long-Period Torsion Balance for Tests of Einstein's Equivalence Principle and a Search for Normal Mode Torsional Oscillations of the Earth

    This thesis describes the development of a torsion balance experiment designed to test Einstein's equivalence principle with unprecedented sensitivity, while also taking a novel approach to directly observing the normal mode torsional oscillations of the Earth. Accordingly, a model of the signal expected from a potential equivalence principle violation has been developed, as well as a multi-slit auto-collimating optical lever that possesses a resolution on the order of a nanoradian and a range of observation of 10 milliradians and is used to monitor the torsion balance. A torsion balance with a natural torsional frequency of ~10⁻⁴ Hz, significantly below the frequency of the longest of the Earth's normal modes, was designed, built, and operated in a remote laboratory at Washington University's Tyson Research Center. More than 1800 hours of data were collected and used to evaluate the performance of this prototype instrument and characterize the conditions in the Tyson laboratory.

    Secure Physical Design

    An integrated circuit is subject to a number of attacks, including information leakage, side-channel attacks, fault injection, malicious change, reverse engineering, and piracy. The majority of these attacks take advantage of the physical placement and routing of cells and interconnects. Several measures have already been proposed to deal with security issues in high-level functional design and logic synthesis. However, to ensure an end-to-end trustworthy IC design flow, it is necessary to have security sign-off during the physical design flow. This paper presents a secure physical design roadmap to enable an end-to-end trustworthy IC design flow. The paper also discusses the utilization of AI/ML to establish security at the layout level. Major research challenges in obtaining a secure physical design are also discussed.

    Precision navigation for aerospace applications

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004. Vita. Includes bibliographical references (p. 162). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Navigation is important in a variety of aerospace applications, and commonly uses a blend of GPS and inertial sensors. In this thesis, a navigation system is designed, developed, and tested. Several alternatives are discussed, but the ultimate design is a loosely-coupled Extended Kalman Filter using rigid body dynamics as the process model with a small-angle linearization of quaternions. Simulations are run using real flight data. A bench-top hardware prototype is tested. Results show good performance and give a variety of insights into the design of navigation systems. Special attention is given to convergence and the validity of linearization. by Andrew K. Stimac. S.M.
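
    The abstract mentions a small-angle linearization of quaternions but gives no equations. The sketch below illustrates that general idea in Python: attitude is kept as a unit quaternion, while a three-component small-angle attitude error (as estimated by an error-state EKF) is folded back into the quaternion after an update. The function names, noise-free propagation, and parameter values are illustrative assumptions, not the thesis design.

```python
# Minimal sketch of the small-angle quaternion handling used in error-state
# (multiplicative) EKFs. Not the thesis implementation; values are placeholders.
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def propagate_quaternion(q, omega, dt):
    """First-order integration of body angular rate omega [rad/s] over dt [s]."""
    dq = np.concatenate(([1.0], 0.5 * omega * dt))   # small-rotation quaternion
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)

def inject_attitude_error(q, delta_theta):
    """Fold a small-angle attitude-error estimate delta_theta [rad] back into q."""
    dq = np.concatenate(([1.0], 0.5 * delta_theta))  # small-angle linearization
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)

# Example: propagate with a constant yaw rate, then apply a small correction.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = propagate_quaternion(q, omega=np.array([0.0, 0.0, 0.1]), dt=0.01)
q = inject_attitude_error(q, delta_theta=np.array([1e-3, 0.0, 0.0]))
print("attitude quaternion:", q)
```

    In a full loosely-coupled filter, GPS position and velocity would drive the measurement update and the covariance would be propagated over the same small-angle error state; that machinery is omitted here.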

    Beyond unwanted sound : noise, affect and aesthetic moralism

    PhD Thesis. This thesis uses Baruch Spinoza’s notion of affect to critically rethink the correlation between noise, ‘unwantedness’ and ‘badness’. Against subject-oriented definitions, which understand noise to be constituted by a listener, and object-oriented definitions, which define noise as a type of sound, I focus on what it is that noise does. Using the relational philosophy of Michel Serres in combination with Spinoza’s philosophy of affects, I posit noise as a productive, transformative force and a necessary component of material relations. I consider the implications of this affective and relational model for two lineages: what I identify as a ‘conservative’ politics of silence, and a ‘transgressive’ politics of noise. The former is inherent to R. Murray Schafer’s ‘aesthetic moralism’, where noise is construed as ‘bad’ to silence’s ‘good’. Instead, I argue that noise’s ‘badness’ is secondary, relational and contingent. This ethico-affective understanding thus allows for silence that is felt to be destructive and noise that is pleasantly serendipitous. Noise’s positively productive capacity can be readily exemplified by the use of noise within music, whereby noise is used to create new sonic sensations. An ethico-affective approach also allows for an affirmative (re)conceptualization of noise music, one which moves away from the rhetoric of failure, taboo and contradiction. In developing a relational, ethico-affective approach to noise, this thesis facilitates a number of key conceptual shifts. Firstly, it serves to de-centre the listening subject. According to this definition, noise does not need to be heard as unwanted in order to exist; indeed, it need not be heard at all. Secondly, this definition no longer constitutes noise according to a series of hierarchical dualisms. Consequently, the structural oppositions of noise/signal, noise/silence and noise/music are disrupted. Finally, noise is understood to be ubiquitous and foundational, rather than secondary and contingent: it is inescapable, unavoidable and necessary.