
    On the Autocorrelation of Complex Envelope of White Noise

    About four decades ago, in an article in this Transactions, Thomas Kailath pointed out that the autocorrelation of the complex envelope of white noise is not strictly an impulse function, even though treating it as an impulse in practical problems does lead to correct results. However, it is commonly assumed that by simply letting the bandwidth of a flat-bandlimited noise process go to infinity, one obtains the result that the autocorrelation of the complex envelope of white noise equals an impulse function. In this correspondence, we show that 1) the limit operation has to be done carefully and 2) when done properly, it leads to the result in Kailath's paper, which is different from a pure impulse function.
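
    The following is a minimal numerical sketch of the finite-bandwidth picture the abstract starts from, not the paper's derivation: it forms the complex envelope of broadband noise via the analytic signal and a downconversion frequency, and checks that the envelope's autocorrelation is not a purely real impulse. All parameter values below are arbitrary choices made for the illustration.

```python
# Illustration only, not the paper's derivation: autocorrelation of the
# complex envelope of broadband noise is not a pure real impulse.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, fc, n = 1.0, 0.125, 2**17          # sampling rate, carrier, sample count (arbitrary)
t = np.arange(n) / fs

x = rng.standard_normal(n)              # discrete "white" noise, flat up to Nyquist
z = hilbert(x) * np.exp(-2j * np.pi * fc * t)   # complex envelope via analytic signal

# Biased autocorrelation estimate R(k) ~ E[z*(t) z(t+k)]
max_lag = 64
acf = np.array([np.vdot(z[:n - k], z[k:]) / (n - k) for k in range(max_lag)])

# The positive-frequency band is not symmetric about fc, so the imaginary part
# of R(k) does not vanish: the envelope's autocorrelation differs from the pure
# impulse suggested by a naive infinite-bandwidth limit.
print("max |Im R| / |R(0)| =", np.abs(acf.imag).max() / np.abs(acf[0]))
```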

    On detecting the large separation in the autocorrelation of stellar oscillation time series

    The observations carried out by the space missions CoRoT and Kepler provide a large set of asteroseismic data. Their analysis requires an efficient procedure: first, to determine whether the star reliably shows solar-like oscillations; second, to measure the so-called large separation; and third, to estimate the asteroseismic information that can be retrieved from the Fourier spectrum. We develop in this paper a procedure based on the autocorrelation of the seismic Fourier spectrum. We have searched for criteria able to predict the output that one can expect from the analysis by autocorrelation of a seismic time series. First, the autocorrelation is properly scaled to take into account the contribution of white noise. Then, we use the null-hypothesis H0 test to assess the reliability of the autocorrelation analysis. Calculations based on solar and CoRoT time series are performed in order to quantify the performance as a function of the amplitude of the autocorrelation signal. We propose an automated determination of the large separation, whose reliability is quantified by the H0 test. We apply this method to analyze a large set of red giants observed by CoRoT. We estimate the expected performance for photometric time series of the Kepler mission. Finally, we demonstrate that the method makes it possible to distinguish l=0 from l=1 modes. The envelope autocorrelation function has proven to be very powerful for the determination of the large separation in noisy asteroseismic data, since it enables us to quantify the precision of different measurements: the mean large separation, the variation of the large separation with frequency, the small separation and degree identification. Comment: A&A, in press
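
    As a toy illustration of the core idea (synthetic data only; the paper's procedure additionally scales the autocorrelation for white noise and applies the H0 test), the sketch below recovers a "large separation" as the dominant lag in the autocorrelation of a comb-like power spectrum. The spectrum parameters and the search window are assumptions made up for the example.

```python
# Toy example: estimate the large separation from the autocorrelation of a
# synthetic, comb-like oscillation power spectrum.
import numpy as np

rng = np.random.default_rng(1)
dnu_true = 10.0                        # large separation in muHz (arbitrary)
freq = np.arange(0.0, 400.0, 0.1)      # frequency grid, muHz
spectrum = np.ones_like(freq)          # white-noise background level

# Lorentzian-like mode peaks every dnu_true, modulated by a Gaussian envelope.
for nu0 in np.arange(50.0, 350.0, dnu_true):
    spectrum += 50.0 * np.exp(-0.5 * ((freq - 200.0) / 80.0) ** 2) \
                / (1.0 + ((freq - nu0) / 0.3) ** 2)
spectrum *= rng.exponential(1.0, freq.size)   # chi-squared (2 d.o.f.) noise

# Autocorrelation of the mean-subtracted spectrum over frequency lags.
s = spectrum - spectrum.mean()
acf = np.correlate(s, s, mode="full")[s.size - 1:]
acf /= acf[0]

lags = np.arange(acf.size) * 0.1              # lag in muHz
window = (lags > 5.0) & (lags < 15.0)         # search near the expected spacing
dnu_est = lags[window][np.argmax(acf[window])]
print(f"estimated large separation ~ {dnu_est:.1f} muHz (true {dnu_true})")
```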

    Decline of long-range temporal correlations in the human brain during sustained wakefulness

    Sleep is crucial for daytime functioning, cognitive performance and general well-being. These aspects of daily life are known to be impaired after extended wakefulness, yet the underlying neuronal correlates have been difficult to identify. Accumulating evidence suggests that normal functioning of the brain is characterized by long-range temporal correlations (LRTCs) in cortex, which support decision-making and working-memory tasks. Here we assess LRTCs in resting-state human EEG data during a 40-hour sleep deprivation experiment by evaluating the decay of the autocorrelation and the scaling exponent of detrended fluctuation analysis of EEG amplitude fluctuations. We find with both measures that LRTCs decline as sleep deprivation progresses. This decline becomes evident when changes in signal power are taken into appropriate consideration. Our results demonstrate the importance of sleep for maintaining LRTCs in the human brain. In complex networks, LRTCs naturally emerge in the vicinity of a critical state. The observation of declining LRTCs during wakefulness thus provides additional support for our hypothesis that sleep reorganizes cortical networks towards critical dynamics for optimal functioning.
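
    For readers unfamiliar with the scaling-exponent measure, here is a compact sketch of standard detrended fluctuation analysis (DFA); it is illustrative only and not the authors' exact pipeline for EEG amplitude fluctuations. The window sizes and the white-noise test signal are arbitrary choices.

```python
# Compact DFA sketch: scaling exponent of the fluctuation function F(w).
import numpy as np

def dfa_exponent(x, window_sizes):
    """Return the DFA scaling exponent of signal x."""
    y = np.cumsum(x - np.mean(x))                 # signal profile
    fluctuations = []
    for w in window_sizes:
        n_win = len(y) // w
        segs = y[:n_win * w].reshape(n_win, w)
        t = np.arange(w)
        # Linear detrending within each window, then RMS of the residuals.
        rms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    # Scaling exponent = slope of log F(w) versus log w.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

# White noise should give an exponent near 0.5 (no long-range correlations).
rng = np.random.default_rng(2)
sizes = np.unique(np.logspace(1.0, 3.0, 12).astype(int))
print(dfa_exponent(rng.standard_normal(10_000), sizes))
```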

    Resilience, reactivity and variability : A mathematical comparison of ecological stability measures

    In theoretical studies, the most commonly used measure of ecological stability is resilience: an ecosystem's asymptotic rate of return to equilibrium after a pulse perturbation, or shock. A complementary notion of growing popularity is reactivity: the strongest initial response to shocks. On the other hand, empirical stability is often quantified as the inverse of temporal variability, directly estimated from data, and reflecting an ecosystem's response to persistent and erratic environmental disturbances. It is unclear whether and how this empirical measure is related to resilience and reactivity. Here, we establish a connection by introducing two variability-based stability measures belonging to the theoretical realm of resilience and reactivity. We call them intrinsic stochastic and deterministic invariability, defined respectively as the inverse of the strongest stationary response to white-noise and to single-frequency perturbations. We prove that they predict an ecosystem's worst response to broad classes of disturbances, including realistic models of environmental fluctuations. We show that they are intermediate measures between resilience and reactivity and that, although defined with respect to persistent perturbations, they can be related to the whole transient regime following a shock, making them more integrative notions than reactivity and resilience. We argue that invariability measures constitute a stepping stone, and discuss the challenges ahead to further unify theoretical and empirical approaches to stability. Comment: 35 pages, 7 figures, 2 tables
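
    A hedged sketch of how these notions can be compared on a toy community matrix A (linearized dynamics dx/dt = A x plus a perturbation): resilience from the eigenvalues of A, reactivity from its symmetric part, and a simple variance-based proxy for invariability from the stationary Lyapunov equation. The proxy is not the paper's exact definition of stochastic or deterministic invariability, and the matrix entries are arbitrary.

```python
# Toy comparison of resilience, reactivity and a variance-based stability proxy.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  0.5],
              [-0.8, -0.4]])            # arbitrary stable community matrix

# Resilience: asymptotic return rate = -max real part of the eigenvalues of A.
resilience = -np.max(np.linalg.eigvals(A).real)

# Reactivity: strongest initial growth rate = max eigenvalue of (A + A^T)/2.
reactivity = np.max(np.linalg.eigvalsh((A + A.T) / 2))

# Stationary covariance C under white noise with identity covariance solves
#   A C + C A^T + I = 0.
C = solve_continuous_lyapunov(A, -np.eye(2))
variability = np.max(np.linalg.eigvalsh(C))   # worst-case stationary variance
invariability_proxy = 1.0 / variability       # a simple proxy, not the paper's measure

print(f"resilience {resilience:.3f}, reactivity {reactivity:.3f}, "
      f"invariability (proxy) {invariability_proxy:.3f}")
```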

    A maximum likelihood based technique for validating detrended fluctuation analysis (ML-DFA)

    Detrended Fluctuation Analysis (DFA) is widely used to assess the presence of long-range temporal correlations in time series. Signals with long-range temporal correlations are typically defined as having a power-law decay in their autocorrelation function. The output of DFA is an exponent, which is the slope obtained by linear regression of a log-log fluctuation plot against window size. However, if this fluctuation plot is not linear, then the underlying signal is not self-similar, and the exponent has no meaning. There is currently no method for assessing the linearity of a DFA fluctuation plot. Here we present such a technique, called ML-DFA. We scale the DFA fluctuation plot to construct a likelihood function for a set of alternative models including polynomial, root, exponential, logarithmic and spline functions. We use this likelihood function to determine the maximum likelihood and thus to calculate values of the Akaike and Bayesian information criteria, which identify the best-fit model when the number of parameters involved is taken into account and over-fitting is penalised. This ensures that, of the models that fit well, the least complicated is selected as the best fit. We apply ML-DFA to synthetic data from FARIMA processes and sine curves with DFA fluctuation plots whose form has been analytically determined, and to experimentally collected neurophysiological data. ML-DFA assesses whether the hypothesis of a linear fluctuation plot should be rejected, and thus whether the exponent can be considered meaningful. We argue that ML-DFA is essential to obtaining trustworthy results from DFA. Comment: 22 pages, 7 figures
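
    The sketch below conveys the model-selection idea in a deliberately simplified form: Gaussian least-squares AIC comparison of polynomial fits to the log-log fluctuation plot. ML-DFA itself scales the plot, constructs an explicit likelihood, and considers a wider model set (root, exponential, logarithmic and spline functions); the function names and test data here are made up for illustration.

```python
# Simplified model-selection check: is the log-log fluctuation plot linear?
import numpy as np

def aic_polyfit(log_w, log_f, degree):
    """AIC of a polynomial fit of given degree to the fluctuation plot."""
    coeffs = np.polyfit(log_w, log_f, degree)
    resid = log_f - np.polyval(coeffs, log_w)
    n, k = len(log_f), degree + 2            # polynomial coefficients + noise variance
    sigma2 = np.mean(resid ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * k - 2 * log_lik

def dfa_plot_is_linear(log_w, log_f, max_degree=3):
    """Accept the DFA exponent only if the linear model has the lowest AIC."""
    aics = [aic_polyfit(log_w, log_f, d) for d in range(1, max_degree + 1)]
    return int(np.argmin(aics)) == 0

# Example: a near-perfect power law (linear in log-log) with small noise.
log_w = np.log(np.logspace(1, 3, 15))
log_f = 0.8 * log_w + 0.1 + np.random.default_rng(3).normal(0, 0.02, 15)
print(dfa_plot_is_linear(log_w, log_f))
```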

    Blind deconvolution of medical ultrasound images: parametric inverse filtering approach

    DOI: 10.1109/TIP.2007.910179
    The problem of reconstruction of ultrasound images by means of blind deconvolution has long been recognized as one of the central problems in medical ultrasound imaging. In this paper, this problem is addressed by proposing a blind deconvolution method which is innovative in several ways. In particular, the method is based on parametric inverse filtering, whose parameters are optimized using two-stage processing. At the first stage, some partial information on the point spread function is recovered. Subsequently, this information is used to explicitly constrain the spectral shape of the inverse filter. From this perspective, the proposed methodology can be viewed as a "hybridization" of two standard strategies in blind deconvolution, which are based on either concurrent or successive estimation of the point spread function and the image of interest. Moreover, evidence is provided that the "hybrid" approach can outperform the standard ones in a number of important practical cases. Additionally, the present study introduces a different approach to parameterizing the inverse filter. Specifically, we propose to model the inverse transfer function as a member of a principal shift-invariant subspace. It is shown that such a parameterization results in considerably more stable reconstructions as compared to standard parameterization methods. Furthermore, it is shown how the inverse filters designed in this way can be used to deconvolve the images in a nonblind manner so as to further improve their quality. The usefulness and practicability of all the introduced innovations are proven in a series of both in silico and in vivo experiments. Finally, it is shown that the proposed deconvolution algorithms are capable of improving the resolution of ultrasound images by factors of 2.24 or 6.52 (as judged by the autocorrelation criterion), depending on the type of regularization method used.
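
    As a generic illustration of the nonblind step (frequency-domain regularized, Wiener-type inverse filtering with an already-estimated point spread function), the sketch below is far simpler than the paper's parametric, shift-invariant-subspace inverse filter and its two-stage blind estimation; the PSF model and all parameter values are invented for the example.

```python
# Generic regularized (Wiener-type) inverse filtering with a known PSF.
import numpy as np

def wiener_deconvolve(rf_image, psf, noise_to_signal=1e-2):
    """Deconvolve a 2-D image with a known PSF via a regularized inverse filter."""
    shape = rf_image.shape
    H = np.fft.fft2(psf, s=shape)                         # PSF transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(rf_image) * G))

# Toy usage: blur a synthetic scatterer map with a modulated Gaussian PSF, then restore it.
rng = np.random.default_rng(4)
tissue = rng.standard_normal((128, 128))
y, x = np.mgrid[-8:9, -8:9]
psf = np.exp(-(x**2 / 8.0 + y**2 / 2.0)) * np.cos(2 * np.pi * y / 4.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(tissue) * np.fft.fft2(psf, s=tissue.shape)))
restored = wiener_deconvolve(blurred, psf)
print(np.corrcoef(tissue.ravel(), restored.ravel())[0, 1])
```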