    Fast, scalable, Bayesian spike identification for multi-electrode arrays

    We present an algorithm to identify individual neural spikes observed on high-density multi-electrode arrays (MEAs). Our method can distinguish large numbers of distinct neural units, even when spikes overlap, and accounts for the intrinsic variability of spikes from each unit. As MEAs grow larger, it is important to find spike-identification methods that are scalable; that is, the computational cost of spike fitting should scale well with the number of units observed. Our algorithm accomplishes this goal and is fast because it exploits the spatial locality of each unit and the basic biophysics of extracellular signal propagation. Human intervention is minimized and streamlined via a graphical interface. We illustrate our method on data from a mammalian retina preparation and document its performance on simulated data consisting of spikes added to experimentally measured background noise. The algorithm is highly accurate.
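
    The abstract does not spell out the fitting procedure, so the following is only a minimal sketch of the general idea it describes: greedy template matching in which each unit's template is confined to a small electrode neighborhood, so the per-spike cost does not grow with the size of the array. All names, the matched-filter score, and the fixed threshold are illustrative assumptions, not the authors' algorithm.

        # Hypothetical sketch, not the authors' method: greedy template
        # matching with spatial locality. Each unit only ever touches its
        # own small patch of electrodes, which is what keeps the cost per
        # spike roughly constant as the array grows.
        import numpy as np

        def fit_spikes(data, templates, neighborhoods, threshold):
            """data: (n_electrodes, n_samples) filtered traces.
            templates: per-unit arrays of shape (len(neighborhood), n_taps).
            neighborhoods: per-unit arrays of electrode indices.
            Returns (unit, sample) pairs for accepted spikes."""
            residual = data.astype(float).copy()
            spikes = []
            for unit, (tmpl, elecs) in enumerate(zip(templates, neighborhoods)):
                n_taps = tmpl.shape[1]
                for t in range(residual.shape[1] - n_taps + 1):
                    window = residual[elecs][:, t:t + n_taps]  # local patch only
                    if np.sum(window * tmpl) > threshold:      # matched-filter score
                        # Peel the accepted template out of the residual so
                        # overlapping spikes from other units stay recoverable.
                        residual[np.ix_(elecs, np.arange(t, t + n_taps))] -= tmpl
                        spikes.append((unit, t))
            return spikes

    Subtracting each accepted template from the residual is what allows overlapping spikes from different units to be resolved, one of the properties the abstract highlights.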

    Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level

    In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at the gate and register-transfer levels offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. In this way, it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented, and the method is compared to a standard gate/RT mixed-level approach.
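
    As a rough, hypothetical illustration of the multi-level idea (not the paper's tooling): a structural stuck-at fault can be lifted to the transaction level by corrupting every payload that crosses the faulty resource, and the fault is counted only when the corruption is actually observable in a transaction. All names and the single-bit fault model below are assumptions.

        # Toy sketch: model a gate-level stuck-at fault on one bus bit as a
        # corruption of every transaction payload crossing that bus, then
        # check which transactions expose the fault at system level.
        def inject_stuck_at(payload: int, bit: int, stuck_value: int) -> int:
            """Force `bit` of the payload to `stuck_value` (0 or 1)."""
            if stuck_value:
                return payload | (1 << bit)      # stuck-at-1
            return payload & ~(1 << bit)         # stuck-at-0

        def run_transactions(transactions, bit, stuck_value):
            """Replay a recorded transaction stream through the fault model;
            report which transactions are corrupted (observable effects)."""
            corrupted = []
            for i, payload in enumerate(transactions):
                if inject_stuck_at(payload, bit, stuck_value) != payload:
                    corrupted.append(i)
            return corrupted

        # Stuck-at-0 on bit 3 of an 8-bit bus: only payloads with bit 3 set
        # are affected, so only those need detailed low-level simulation.
        print(run_transactions([0x0F, 0x03, 0xFF], bit=3, stuck_value=0))  # [0, 2]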

    Interferometers as Probes of Planckian Quantum Geometry

    A theory of position of massive bodies is proposed that results in an observable quantum behavior of geometry at the Planck scale, set by the Planck time $t_P$. Departures from classical world lines in flat spacetime are described by Planckian noncommuting operators for position in different directions, as defined by interactions with null waves. The resulting evolution of position wavefunctions in two dimensions displays a new kind of directionally coherent quantum noise of transverse position. The amplitude of the effect in physical units is predicted with no parameters, by equating the number of degrees of freedom of position wavefunctions on a 2D spacelike surface with the entropy density of a black hole event horizon of the same area. In a region of size $L$, the effect resembles spatially and directionally coherent random transverse shear deformations on a timescale $\approx L/c$ with typical amplitude $\approx \sqrt{c\,t_P L}$. This quantum-geometrical "holographic noise" in position is not describable as fluctuations of a quantized metric, or as any kind of fluctuation, dispersion, or propagation effect in quantum fields. In a Michelson interferometer the effect appears as noise that resembles a random Planckian walk of the beamsplitter for durations up to the light crossing time. Signal spectra and correlation functions in interferometers are derived, and predicted to be comparable with the sensitivities of current and planned experiments. It is proposed that nearly co-located Michelson interferometers of laboratory scale, cross-correlated at high frequency, can test the Planckian noise prediction with current technology.
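
    As a quick check of the quoted amplitude for a laboratory-scale instrument (the 40 m arm length is an assumed value, not one given in the abstract; $t_P \approx 5.39 \times 10^{-44}$ s):

        \sqrt{c\,t_P L}
          = \sqrt{(3.0\times10^{8}\ \mathrm{m/s})(5.39\times10^{-44}\ \mathrm{s})(40\ \mathrm{m})}
          \approx 2.5\times10^{-17}\ \mathrm{m}

    This is far below atomic dimensions but, as the abstract argues, within reach of cross-correlated high-frequency interferometry.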

    Improving Data Quality by Leveraging Statistical Relational Learning

    Digitally collected data suffers from many data quality issues, such as duplicate, incorrect, or incomplete data. A common approach for counteracting these issues is to formulate a set of data cleaning rules to identify and repair incorrect, duplicate, and missing data. Data cleaning systems must be able to treat data quality rules holistically, to incorporate heterogeneous constraints within a single routine, and to automate data curation. We propose an approach to data cleaning based on statistical relational learning (SRL). We argue that a formalism, Markov logic, is a natural fit for modeling data quality rules. Our approach allows for probabilistic joint inference over interleaved data cleaning rules to improve data quality, and it eliminates the need to specify the order of rule execution. We describe how data quality rules expressed as formulas in first-order logic translate directly into the predictive model in our SRL framework.
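
    The abstract gives no rule syntax, so the following is only a toy, hypothetical sketch of the flavor of the approach: cleaning rules as weighted first-order formulas whose violated weights jointly score a candidate state of the data. A real Markov logic engine performs probabilistic inference over all groundings; this stand-in merely scores one pair of records, and every predicate and rule name is made up.

        # Toy stand-in for Markov-logic-style data cleaning: weighted rules,
        # where the sum of violated weights scores a candidate repair.
        RULES = [
            # (weight, description, violation test over a pair of records)
            (2.0, "same name and phone => duplicate",
             lambda a, b: a["name"] == b["name"] and a["phone"] == b["phone"]
                          and not a.get("dup_of") == b["id"]),
            (1.5, "duplicates must agree on zip",
             lambda a, b: a.get("dup_of") == b["id"] and a["zip"] != b["zip"]),
        ]

        def penalty(a, b):
            """Sum of weights of violated rules; lower is a better joint state."""
            return sum(w for w, _, violated in RULES if violated(a, b))

        r1 = {"id": 1, "name": "Ann", "phone": "555", "zip": "10001"}
        r2 = {"id": 2, "name": "Ann", "phone": "555", "zip": "10001", "dup_of": 1}
        print(penalty(r2, r1))                       # 0: both rules satisfied
        print(penalty({**r2, "dup_of": None}, r1))   # 2.0: duplicate rule fires

    Because all rules are scored jointly, no execution order between the duplicate rule and the consistency rule needs to be specified, which is the point the abstract makes.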

    Evolution of shuttle avionics redundancy management/fault tolerance

    The challenge of providing redundancy management (RM) and fault tolerance to meet the Shuttle Program requirements of fail operational/fail safe for the avionics systems was complicated by the critical program constraints of weight, cost, and schedule. The fundamental, and sometimes illusory, effectiveness of less-than-pure RM designs is addressed. The evolution of the multiple input selection filter (the heart of the RM function) is discussed, with emphasis on the subtle interactions of the flight control system that were found to be potentially catastrophic. Several other general RM development problems are discussed, with particular emphasis on the inertial measurement unit RM, which is indicative of the complexity of managing that three-string system and its critical interfaces with the guidance and control systems.
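
    For the classic three-string case, a multiple input selection filter is commonly described as mid-value selection with miscompare monitoring; the sketch below is that textbook scheme under assumed names and thresholds, not the Shuttle flight software, whose selection logic and fault thresholds were far subtler.

        # Textbook sketch of three-string redundancy management:
        # mid-value selection plus miscompare detection. Illustrative only.
        def select_and_monitor(readings, threshold):
            """readings: three redundant sensor values.
            Returns (selected_value, suspect_channel or None)."""
            selected = sorted(readings)[1]        # mid-value rejects one bad input
            suspects = [i for i, r in enumerate(readings)
                        if abs(r - selected) > threshold]
            # With one hard failure, exactly one channel miscompares; a real
            # system would require persistent miscompares before declaring
            # the channel failed rather than reacting to a single sample.
            return selected, (suspects[0] if len(suspects) == 1 else None)

        print(select_and_monitor([9.8, 10.1, 55.0], threshold=1.0))  # (10.1, 2)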