
    Sequential Adaptive Detection for In-Situ Transmission Electron Microscopy (TEM)

    We develop new efficient online algorithms for detecting transient sparse signals in TEM video sequences, by adopting the recently developed framework for sequential detection jointly with online convex optimization [1]. We cast the problem as detecting an unknown sparse mean shift in Gaussian observations, and develop adaptive CUSUM and adaptive SSRS procedures, which are based on likelihood ratio statistics in which the post-change mean vector is an online maximum likelihood estimator with an ℓ1 penalty. We demonstrate the meritorious performance of our algorithms for TEM imaging using real data.
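
    A minimal sketch of the kind of adaptive CUSUM recursion described above, not the authors' implementation: each video frame is treated as a Gaussian observation vector, the post-change mean is estimated online by soft-thresholding a running average (the ℓ1-penalized Gaussian maximum likelihood estimate under unit variance), and the statistic resets to zero as in a standard CUSUM. The penalty lam, threshold h, and restart rule are illustrative assumptions.

        import numpy as np

        def soft_threshold(x, lam):
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def adaptive_cusum(frames, lam=0.5, h=50.0):
            """frames: iterable of 1-D arrays (vectorised frames) with pre-change mean 0 and unit variance."""
            stat, running_sum, n = 0.0, None, 0
            for t, x in enumerate(frames):
                running_sum = x if running_sum is None else running_sum + x
                n += 1
                mu_hat = soft_threshold(running_sum / n, lam)   # sparse estimate of the post-change mean
                llr = mu_hat @ x - 0.5 * mu_hat @ mu_hat        # Gaussian log-likelihood ratio for this frame
                stat = max(0.0, stat + llr)                     # CUSUM recursion
                if stat > h:
                    return t                                    # raise an alarm at frame t
                if stat == 0.0:                                 # restart the estimate when the statistic resets
                    running_sum, n = None, 0
            return None                                         # no change detected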

    Frequency domain laser velocimeter signal processor: A new signal processing scheme

    A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero-crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.
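
    As a rough illustration of the zero-crossing option mentioned above (a sketch, not the described instrument), the burst frequency of a sampled, pedestal-removed Doppler signal can be estimated from the spacing of its zero crossings; the band-pass filtering and burst-detection stages are omitted here.

        import numpy as np

        def burst_frequency(samples, sample_rate_hz):
            """Estimate the dominant frequency of a DC-removed burst from its zero crossings."""
            x = np.asarray(samples, dtype=float)
            x -= x.mean()                                   # remove the pedestal / DC bias
            signs = np.signbit(x)
            crossings = np.nonzero(signs[1:] != signs[:-1])[0]
            if len(crossings) < 2:
                return None                                 # not enough cycles to form an estimate
            span_s = (crossings[-1] - crossings[0]) / sample_rate_hz
            return (len(crossings) - 1) / (2.0 * span_s)    # consecutive crossings are half a period apart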

    A Phase Vocoder based on Nonstationary Gabor Frames

    We propose a new algorithm for time stretching music signals based on the theory of nonstationary Gabor frames (NSGFs). The algorithm extends the techniques of the classical phase vocoder (PV) by incorporating adaptive time-frequency (TF) representations and adaptive phase locking. The adaptive TF representations imply good time resolution for the onsets of attack transients and good frequency resolution for the sinusoidal components. We estimate the phase values only at peak channels and the remaining phases are then locked to the values of the peaks in an adaptive manner. During attack transients we keep the stretch factor equal to one and we propose a new strategy for determining which channels are relevant for reinitializing the corresponding phase values. In contrast to previously published algorithms we use a non-uniform NSGF to obtain a low redundancy of the corresponding TF representation. We show that with just three times as many TF coefficients as signal samples, artifacts such as phasiness and transient smearing can be greatly reduced compared to the classical PV. The proposed algorithm is tested on both synthetic and real-world signals and compared with state-of-the-art algorithms in a reproducible manner.
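
    A minimal sketch of the peak-channel phase locking idea in a conventional phase vocoder frame (not the published NSGF implementation): the phase is propagated only at local spectral peaks, and every other bin is locked to its nearest peak while keeping its analysis phase offset. Phase-increment unwrapping and the nonstationary Gabor analysis itself are omitted; the array layout is an illustrative assumption.

        import numpy as np

        def lock_phases(mag, prev_anal_phase, anal_phase, prev_synth_phase, hop_ratio):
            """All arguments are 1-D arrays over frequency bins; hop_ratio = synthesis hop / analysis hop."""
            # Local maxima of the magnitude spectrum act as peak channels.
            peaks = np.nonzero((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
            synth_phase = anal_phase.copy()
            if len(peaks) == 0:
                return synth_phase
            # Propagate phase at the peaks only (unwrapping of the increment omitted for brevity).
            synth_phase[peaks] = prev_synth_phase[peaks] + hop_ratio * (anal_phase[peaks] - prev_anal_phase[peaks])
            # Lock every bin to its nearest peak, preserving the analysis phase offset.
            nearest = peaks[np.argmin(np.abs(np.subtract.outer(np.arange(len(mag)), peaks)), axis=1)]
            return synth_phase[nearest] + (anal_phase - anal_phase[nearest])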

    Adaptive Higher Order Sliding Modes for Two-Dimensional Derivative Estimation

    In this paper, a recent technique for estimating the derivatives of noisy transient signals is extended to the two-dimensional case. This technique, called higher order sliding modes, is mostly used in the synthesis of robust controllers and has also shown good results in the synthesis of rth-order robust differentiators. In this work, such differentiators are used as an edge detection method in image processing applications. The proposed algorithm uses an adaptive mechanism for tuning its parameters in real time, in order to increase the efficiency of the basic scheme. A comparative study with conventional edge detection methods is performed.
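
    A minimal sketch of a first-order robust (super-twisting style) sliding-mode differentiator of the kind referred to above, applied along one image row. Treating the pixel spacing as the integration step and the gain L, which must bound the second derivative of the underlying intensity profile, are illustrative assumptions rather than the paper's tuning.

        import numpy as np

        def sliding_mode_derivative(row, L=4.0, dt=1.0):
            """Estimate d(row)/dx for a 1-D intensity profile with a super-twisting differentiator."""
            z0, z1 = float(row[0]), 0.0        # z0 tracks the signal, z1 tracks its derivative
            out = np.zeros(len(row))
            for i, f in enumerate(row):
                e = z0 - float(f)
                z0 += dt * (-1.5 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e) + z1)
                z1 += dt * (-1.1 * L * np.sign(e))
                out[i] = z1
            return out                          # large |out| values indicate candidate edges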

    A Scalable Model of Cerebellar Adaptive Timing and Sequencing: The Recurrent Slide and Latch (RSL) Model

    From the dawn of modern neural network theory, the mammalian cerebellum has been a favored object of mathematical modeling studies. Early studies focused on the fan-out, convergence, thresholding, and learned weighting of perceptual-motor signals within the cerebellar cortex. This led, in the proposals of Albus (1971; 1975) and Marr (1969), to the still viable idea that the granule cell stage in the cerebellar cortex performs a sparse expansive recoding of the time-varying input vector. This recoding reveals and emphasizes combinations (of input state variables) in a distributed representation that serves as a basis for the learned, state-dependent control actions engendered by cerebellar outputs to movement-related centers. Although well-grounded as such, this perspective seriously underestimates the intelligence of the cerebellar cortex. Context and state information arises asynchronously due to the heterogeneity of sources that contribute signals to compose the cerebellar input vector. These sources include radically different sensory systems - vision, kinesthesia, touch, balance and audition - as well as many stages of the motor output channel. To make optimal use of available signals, the cerebellum must be able to sift the evolving state representation for the most reliable predictors of the need for control actions, and to use those predictors even if they appear only transiently and well in advance of the optimal time for initiating the control action. Such a cerebellar adaptive timing competence has recently been experimentally verified (Perrett, Ruiz, & Mauk, 1993). This paper proposes a modification to prior population models for cerebellar adaptive timing and sequencing. Since it replaces a population with a single element, the proposed Recurrent Slide and Latch (RSL) model is in one sense maximally efficient, and therefore optimal from the perspective of scalability. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-92-J-1309, N00014-93-1-1364, N00014-95-1-0409)

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. In our attempt to reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

    In this paper, a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. This proposed method is verified by comparing its detection results with those of a morphological filter based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.
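
    A minimal sketch of the moving least squares reconstruction step described above (the SR stage itself is not shown): at each sample a low-order polynomial is fitted to a Gaussian-weighted neighbourhood of the SR output and evaluated at that sample. The window size, polynomial degree and bandwidth are illustrative assumptions.

        import numpy as np

        def mls_reconstruct(y, half_window=10, degree=2, bandwidth=4.0):
            """Smooth/reconstruct a 1-D signal y with moving least squares fitting."""
            y = np.asarray(y, dtype=float)
            n = len(y)
            out = np.empty(n)
            for i in range(n):
                lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
                t = np.arange(lo, hi, dtype=float) - i         # local coordinates centred on sample i
                w = np.sqrt(np.exp(-(t / bandwidth) ** 2))     # square-root Gaussian weights for weighted LS
                V = np.vander(t, degree + 1, increasing=True)  # local polynomial basis 1, t, t^2, ...
                coef, *_ = np.linalg.lstsq(V * w[:, None], y[lo:hi] * w, rcond=None)
                out[i] = coef[0]                               # fitted value at t = 0
            return out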

    Frequency and fundamental signal measurement algorithms for distributed control and protection applications

    Increasing penetration of distributed generation within electricity networks leads to the requirement for cheap, integrated, protection and control systems. To minimise cost, algorithms for the measurement of AC voltage and current waveforms can be implemented on a single microcontroller, which also carries out other protection and control tasks, including communication and data logging. This limits the frame rate of the major algorithms, although analogue to digital converters (ADCs) can be oversampled using peripheral control processors on suitable microcontrollers. Measurement algorithms also have to be tolerant of poor power quality, which may arise within grid-connected or islanded (e.g. emergency, battlefield or marine) power system scenarios. This study presents a 'Clarke-FLL hybrid' architecture, which combines a three-phase Clarke transformation measurement with a frequency-locked loop (FLL). This hybrid contains suitable algorithms for the measurement of frequency, amplitude and phase within dynamic three-phase AC power systems. The Clarke-FLL hybrid is shown to be robust and accurate, with harmonic content up to and above 28% total harmonic distortion (THD), and with the major algorithms executing at only 500 samples per second. This is achieved by careful optimisation and cascaded use of exact-time averaging techniques, which prove to be useful at all stages of the measurements: from DC bias removal through low-sample-rate Fourier analysis to sub-harmonic ripple removal. Platform-independent algorithms for three-phase nodal power flow analysis are benchmarked on three processors, including the Infineon TC1796 microcontroller, on which only 10% of the 2000 μs frame time is required, leaving the remainder free for other algorithms.
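
    A minimal sketch of the Clarke-transformation front end of such a scheme (the FLL and the exact-time averaging stages are not shown, and this is not the paper's code): three simultaneous phase samples are mapped to alpha/beta components, from which instantaneous amplitude and electrical angle follow directly.

        import numpy as np

        def clarke_amplitude_phase(va, vb, vc):
            """va, vb, vc: simultaneous samples of the three phase voltages (or currents)."""
            v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)   # amplitude-invariant Clarke transform
            v_beta = (1.0 / np.sqrt(3.0)) * (vb - vc)
            amplitude = np.hypot(v_alpha, v_beta)                # peak value for a balanced sinusoidal set
            phase = np.arctan2(v_beta, v_alpha)                  # instantaneous electrical angle in radians
            return amplitude, phase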

    A Survey for Transient Astronomical Radio Emission at 611 MHz

    We have constructed and operated the Survey for Transient Astronomical Radio Emission (STARE) to detect transient astronomical radio emission at 611 MHz originating from the sky over the northeastern United States. The system is sensitive to transient events on timescales of 0.125 s to a few minutes, with a typical zenith flux density detection threshold of approximately 27 kJy. During 18 months of around-the-clock observing with three geographically separated instruments, we detected a total of 4,318,486 radio bursts. 99.9% of these events were rejected as locally generated interference, determined by requiring the simultaneous observation of an event at all three sites for it to be identified as having an astronomical origin. The remaining 3,898 events have been found to be associated with 99 solar radio bursts. These results demonstrate the remarkably effective RFI rejection achieved by a coincidence technique using precision timing (such as GPS clocks) at geographically separated sites. The non-detection of extra-solar bursting or flaring radio sources has improved the flux density sensitivity and timescale sensitivity limits set by several similar experiments in the 1970s. We discuss the consequences of these limits for the immediate solar neighborhood and the discovery of previously unknown classes of sources. We also discuss other possible uses for the large collection of 611 MHz monitoring data assembled by STARE.
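
    A minimal sketch of the coincidence-based rejection idea described above: an event is kept as a candidate astronomical burst only if all three sites record it within a common timing window. The 0.125 s window matches the shortest quoted timescale, but the data layout (sorted lists of GPS-derived event times) is an illustrative assumption.

        import bisect

        def coincident_events(site_a, site_b, site_c, window_s=0.125):
            """Each argument is a sorted list of event times in seconds from one site."""
            def has_match(times, t):
                i = bisect.bisect_left(times, t - window_s)
                return i < len(times) and times[i] <= t + window_s
            return [t for t in site_a if has_match(site_b, t) and has_match(site_c, t)]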