33 research outputs found

    Data retrieval time for energy harvesting wireless sensors

    We consider the problem of retrieving a reliable estimate of an attribute monitored by a wireless sensor network, where the sensors harvest energy from the environment independently, at random. Each sensor stores the harvested energy in a battery of limited capacity and, provided it has sufficient energy, broadcasts its measurements in a decentralized fashion. Clients arrive at the sensor network according to a Poisson process and are interested in retrieving a fixed number of sensor measurements, from which a reliable estimate is computed. We show that the time until an arbitrary sensor broadcasts has a phase-type distribution. Based on this result and the theory of order statistics of phase-type distributions, we determine the probability distribution of the time needed for a client to retrieve a reliable estimate of the monitored attribute. We also provide closed-form expressions for the retrieval time when the capacity of the sensor battery or the rate at which energy is harvested is asymptotically large. In addition, we analyze numerically the retrieval time for various sizes of the sensor network, maximum capacities of the sensor batteries, and energy-harvesting rates. These results show that the energy-harvesting rate and the broadcasting rate are the main parameters that influence the retrieval time, while deploying sensors with large batteries does not significantly reduce it.
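The structure described above can be illustrated with a small Monte Carlo sketch: if each sensor must collect a fixed number of energy quanta before it can broadcast, its broadcast time is Erlang distributed (a simple phase-type distribution), and the client's retrieval time is an order statistic of those times. All parameter values below are invented for illustration.

```python
import random
import statistics

def broadcast_time(harvest_rate, quanta_needed):
    # Time until one sensor has harvested enough energy quanta to
    # broadcast: a sum of exponential inter-harvest times, i.e. an
    # Erlang distribution (a simple phase-type distribution).
    return sum(random.expovariate(harvest_rate) for _ in range(quanta_needed))

def retrieval_time(n_sensors, k_needed, harvest_rate, quanta_needed):
    # The client needs k measurements, so its retrieval time is the
    # k-th order statistic of the sensors' broadcast times.
    times = sorted(broadcast_time(harvest_rate, quanta_needed)
                   for _ in range(n_sensors))
    return times[k_needed - 1]

random.seed(0)
samples = [retrieval_time(n_sensors=50, k_needed=10,
                          harvest_rate=1.0, quanta_needed=3)
           for _ in range(2000)]
print(round(statistics.mean(samples), 3))
```

The analytic approach in the paper replaces this simulation with the exact distribution of phase-type order statistics; the sketch only conveys the model's ingredients.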

    Stationary-State Statistics of a Binary Neural Network Model with Quenched Disorder

    We study the statistical properties of the stationary firing-rate states of a neural network model with quenched disorder. The model has arbitrary size, discrete-time evolution equations and binary firing rates, while the topology and the strength of the synaptic connections are randomly generated from known, generally arbitrary, probability distributions. We derived semi-analytical expressions of the occurrence probability of the stationary states and the mean multistability diagram of the model, in terms of the distribution of the synaptic connections and of the external stimuli to the network. Our calculations rely on the probability distribution of the bifurcation points of the stationary states with respect to the external stimuli, which can be calculated in terms of the permanent of special matrices, according to extreme value theory. While our semi-analytical expressions are exact for any size of the network and for any distribution of the synaptic connections, we also specialized our calculations to the case of statistically-homogeneous multi-population networks. In the specific case of this network topology, we calculated the permanent analytically, obtaining a compact formula that outperforms the Balasubramanian-Bax-Franklin-Glynn algorithm by several orders of magnitude. To conclude, by applying the Fisher-Tippett-Gnedenko theorem, we derived asymptotic expressions of the stationary-state statistics of multi-population networks in the large-network-size limit, in terms of the Gumbel (double exponential) distribution. We also provide a Python implementation of our formulas and some examples of the results generated by the code.
    Comment: 30 pages, 6 figures, 2 supplemental Python scripts
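For context, the matrix permanent that appears in these calculations can be computed for small matrices with Ryser's inclusion-exclusion formula. This generic sketch is not the paper's optimized multi-population formula or its supplemental Python code, just a baseline illustration of the quantity involved.

```python
from itertools import combinations

def permanent(a):
    # Matrix permanent via Ryser's inclusion-exclusion formula,
    # O(2^n * n) products per subset; the Balasubramanian-Bax-
    # Franklin-Glynn algorithm mentioned in the abstract is a
    # Gray-code refinement of the same idea.
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

Unlike the determinant, the permanent has no known polynomial-time algorithm, which is why closed-form special cases (as in the paper) matter.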

    Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    A non-Gaussian, three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum-likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum-likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation-function parameters of a two-component model for turbulence.
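As a rough illustration of maximum-likelihood estimation of a correlation scale and intensity, here is a sketch on a discrete AR(1) surrogate with exponential autocorrelation. This is not the von Karman spectral likelihood of the paper; the process, grid, and parameter values are all invented for the example.

```python
import math
import random

def simulate_ar1(n, phi, sigma, seed=0):
    # Stationary Gaussian series with autocorrelation rho(k) = phi**k,
    # a crude discrete stand-in for a turbulence velocity record.
    rng = random.Random(seed)
    x = [rng.gauss(0.0, sigma)]
    innov_sd = sigma * math.sqrt(1.0 - phi * phi)
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, innov_sd))
    return x

def log_likelihood(x, phi, sigma):
    # Exact Gaussian log-likelihood of an AR(1) record:
    # x[0] ~ N(0, sigma^2), x[t] | x[t-1] ~ N(phi*x[t-1], sigma^2*(1-phi^2)).
    ll = -0.5 * (math.log(2 * math.pi * sigma ** 2) + (x[0] / sigma) ** 2)
    s2 = sigma ** 2 * (1.0 - phi * phi)
    for prev, cur in zip(x, x[1:]):
        ll += -0.5 * (math.log(2 * math.pi * s2) + (cur - phi * prev) ** 2 / s2)
    return ll

# Grid-search MLE for (correlation parameter, intensity).
x = simulate_ar1(4000, phi=0.9, sigma=2.0)
best = max(((p / 100, s / 10)
            for p in range(50, 100, 2) for s in range(10, 40, 2)),
           key=lambda ps: log_likelihood(x, ps[0], ps[1]))
print(best)
```

The paper's method works in the spectral domain and yields variance formulas for the estimates; the grid search above only shows the likelihood-maximization idea.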

    Multivariate hydrological frequency analysis and risk mapping

    In hydrological frequency analysis, it is difficult to apply standard statistical methods to derive multivariate probability distributions of the characteristics of hydrologic or hydraulic variables except under the following restrictive assumptions: (1) the variables are independent, (2) the variables have the same marginal distributions, or (3) the variables follow, or are transformed to, a normal distribution. This study relaxes these assumptions when deriving multivariate distributions of the characteristics of correlated hydrologic and hydraulic variables. The copula methodology is applied to perform multivariate frequency analysis of rainfall, flood, low-flow, water quality, and channel flow, using data from the Amite river basin in Louisiana. Finally, the risk methodology is applied to analyze flood risks. The study found that (1) the copula method performs reasonably well for deriving the multivariate hydrological frequency model compared with conventional methods such as the multivariate normal approach, the N-K model approach, and the independence transformation approach; (2) some nonstationarity exists in the rainfall and streamflow time series, but according to the nonstationarity test the stationarity assumption is approximately valid in most cases; (3) the multivariate frequency analysis coupled with nonstationarity indicated that the stationarity assumption was valid for both bivariate and trivariate analysis; and (4) risk defined jointly by the flooding event and the damage it causes differs from risk defined by the T-year return-period design event or by the probability of total damage, and the comparison indicates that a single characteristic (the T-year event or the probability of total damage alone) is not adequate to define the risk.
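The copula idea can be sketched in a few lines: a Gaussian copula couples two marginals through correlated normals, making joint extremes far more likely than independence would predict. The correlation and threshold below are invented; the dissertation's fitted copula families and Amite-basin data are not reproduced here.

```python
import math
import random

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_sample(rho, rng):
    # Correlated standard normals pushed through the normal CDF give
    # uniforms (u, v) whose dependence structure is the Gaussian copula.
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return std_normal_cdf(z1), std_normal_cdf(z2)

# Joint exceedance probability P(U > 0.9, V > 0.9), e.g. two flood
# characteristics both exceeding their 10-year levels.
rng = random.Random(7)
trials = 100000
hits = 0
for _ in range(trials):
    u, v = gaussian_copula_sample(0.7, rng)
    if u > 0.9 and v > 0.9:
        hits += 1
joint = hits / trials
print(round(joint, 3))  # compare with 0.01 under independence
```

Any marginal distributions can then be attached by inverting their CDFs at u and v, which is what makes the copula approach attractive when margins differ.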

    Change-point Problem and Regression: An Annotated Bibliography

    The problems of identifying changes at unknown times and of estimating the location of changes in stochastic processes are referred to as the change-point problem or, in the Eastern literature, as "disorder". The change-point problem, first introduced in the quality control context, has since developed into a fundamental problem in the areas of statistical control theory, stationarity of a stochastic process, estimation of the current position of a time series, testing and estimation of change in the patterns of a regression model, and most recently in the comparison and matching of DNA sequences in microarray data analysis. Numerous methodological approaches have been implemented in examining change-point models. Maximum-likelihood estimation, Bayesian estimation, isotonic regression, piecewise regression, quasi-likelihood and non-parametric regression are among the methods which have been applied to resolving challenges in change-point problems. Grid-searching approaches have also been used to examine the change-point problem. Statistical analysis of change-point problems depends on the method of data collection. If the data collection is ongoing until some random time, then the appropriate statistical procedure is called sequential. If, however, a large finite set of data is collected with the purpose of determining if at least one change-point occurred, then this may be referred to as non-sequential. Not surprisingly, both the former and the latter have a rich literature with much of the earlier work focusing on sequential methods inspired by applications in quality control for industrial processes. In the regression literature, the change-point model is also referred to as two- or multiple-phase regression, switching regression, segmented regression, two-stage least squares (Shaban, 1980), or broken-line regression. The area of the change-point problem has been the subject of intensive research in the past half-century.
The subject has evolved considerably and found applications in many different areas. It seems practically impossible to summarize all of the research carried out over the past 50 years on the change-point problem. We have therefore confined ourselves to those articles on change-point problems which pertain to regression. The important branch of sequential procedures in change-point problems has been left out entirely; we refer the readers to the seminal review papers by Lai (1995, 2001). The so-called structural-change models, which occupy a considerable portion of the research in the area of change-point, particularly among econometricians, have not been fully considered; we refer the reader to Perron (2005) for an updated review in this area. Articles on change-point in time series are considered only if the methodologies presented in the paper pertain to regression analysis.
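The grid-searching approach mentioned above has a very compact form in the simplest non-sequential setting: a single change in a piecewise-constant mean, located by minimizing the summed within-segment squared error over all split positions. The data below are invented for the example.

```python
import statistics

def fit_single_changepoint(y):
    # Grid search: the least-squares change point is the split index k
    # minimizing SSE(y[:k]) + SSE(y[k:]) over all interior positions.
    def sse(seg):
        m = statistics.fmean(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(y)), key=lambda k: sse(y[:k]) + sse(y[k:]))

# A series whose mean jumps from about 1 to about 5 after index 5.
y = [1.0, 1.1, 0.9, 1.0, 1.2, 5.0, 4.9, 5.1, 5.2, 5.0]
print(fit_single_changepoint(y))  # 5
```

The regression variants surveyed in the bibliography generalize this by fitting a regression model (rather than a constant) on each segment, and by allowing multiple change points.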

    Doctor of Philosophy

    This dissertation addresses several key challenges in multiple-antenna communications, including information-theoretical analysis of channel capacity, capacity-achieving signaling design, and practical statistical detection algorithms. The first part of the thesis studies the capacity limits of the multiple-input multiple-output (MIMO) multiple access channel (MAC) via the virtual representation (VR) model. The VR model captures the physical scattering environment via channel gains in the angular domain, and hence is a realistic MIMO channel model that includes many existing channel models as special cases. This study provides an analytical characterization of the optimal input distribution that achieves the sum-capacity of the MAC-VR. It also investigates the optimality of beamforming, a simple scalar coding strategy desirable in practice. For temporally correlated channels, beamforming codebook designs are proposed that can efficiently exploit channel correlation. The second part of the thesis focuses on statistical detection for time-varying frequency-selective channels. The proposed statistical detectors are developed based on Markov Chain Monte Carlo (MCMC) techniques. The complexity of such detectors grows linearly in the system dimensions, which renders them applicable to inter-symbol-interference (ISI) channels with long delay spread, for which traditional trellis-based detectors fail due to prohibitive complexity. The proposed MCMC detectors provide substantial gain over the de facto turbo minimum-mean-square-error (MMSE) detector for both synthetic and underwater acoustic (UWA) channels. The effectiveness of the proposed MCMC detectors is validated through experimental data collected from naval at-sea experiments.
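The MCMC-detection idea can be sketched in a toy setting: a Gibbs sampler that resamples one bit at a time over a 2-tap ISI channel, so each sweep costs linear time in the number of bits rather than the exponential cost of a trellis over long delay spreads. This is a far simpler setting than the dissertation's (no turbo iterations, known channel, invented parameters), and is not its actual detector.

```python
import math
import random

def gibbs_detect(y, h, sigma, sweeps=60, seed=3):
    # Gibbs (MCMC) detector for y[t] = h[0]*b[t] + h[1]*b[t-1] + noise,
    # b[t] in {-1, +1}. Each bit is resampled from its conditional
    # posterior given the others; only 2 observations involve bit t.
    rng = random.Random(seed)
    n = len(y)
    b = [rng.choice((-1, 1)) for _ in range(n)]
    votes = [0] * n

    def local_neg_energy(t):
        e = 0.0
        for s in (t, t + 1):          # observations touched by bit t
            if s < n:
                pred = h[0] * b[s] + (h[1] * b[s - 1] if s > 0 else 0.0)
                e -= (y[s] - pred) ** 2 / (2.0 * sigma * sigma)
        return e

    for sweep in range(sweeps):
        for t in range(n):
            b[t] = 1
            lp = local_neg_energy(t)
            b[t] = -1
            lm = local_neg_energy(t)
            p_plus = 1.0 / (1.0 + math.exp(lm - lp))
            b[t] = 1 if rng.random() < p_plus else -1
        if sweep >= sweeps // 2:      # vote over post-burn-in samples
            for t in range(n):
                votes[t] += b[t]
    return [1 if v >= 0 else -1 for v in votes]

# Simulate one transmission and detect it.
rng = random.Random(1)
true_bits = [rng.choice((-1, 1)) for _ in range(40)]
h, sigma = (1.0, 0.5), 0.2
y = [h[0] * true_bits[t] + (h[1] * true_bits[t - 1] if t > 0 else 0.0)
     + rng.gauss(0.0, sigma) for t in range(40)]
detected = gibbs_detect(y, h, sigma)
errors = sum(d != t for d, t in zip(detected, true_bits))
print(errors)
```

Extending the channel memory only adds terms to `local_neg_energy`, which is the linear-complexity property the abstract highlights.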

    Essays on Statistical Issues in Finance

    Empirical finance has increasingly relied on statistical methods to draw inferences. Such finance applications require tailoring the methods to particular problems, especially when the underlying assumptions are violated in the data. This dissertation studies the development and application of statistical methodologies to address empirical problems in the contexts of empirical asset pricing, household finance and investments. The dissertation consists of four chapters. The first chapter gives an overview of the empirical problems and associated statistical issues for three different finance settings: stock return predictability, house price comovement and mutual fund performance. It also briefly outlines the main contribution of this dissertation in each setting. The second chapter develops a robust methodology of unit root testing and statistical inference for autoregressive processes when the errors are heteroscedastic and heavy-tailed. Applications of the robust test demonstrate that some commonly used financial ratios for stock return predictability are highly persistent with unit roots. The third chapter introduces a new nonparametric framework for estimating and testing comovements among U.S. regional home prices. Comovements are found to be strong in the housing prices of four U.S. states, but there is little empirical support for asymmetric tail dependence. The fourth chapter comprehensively studies the bootstrap inference problem in fund performance evaluation. It shows the inadequate size and power properties of two existing bootstrap tests and develops the theory for a valid bootstrap Hotelling’s T-squared test. The new bootstrap test, applied in a sequential testing procedure, identifies a small set of skilled funds. Skilled funds are more engaged in active management and hold stocks with higher expected anomalous returns.
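The flavor of bootstrap inference in fund evaluation can be sketched for a single fund: resample the returns recentred at zero (imposing the null of no skill) and compare the observed t-statistic with its bootstrap distribution. This is a much simpler setting than the dissertation's multi-fund Hotelling test, and the return series is simulated.

```python
import math
import random
import statistics

def bootstrap_pvalue(returns, n_boot=2000, seed=11):
    # Two-sided bootstrap test of "true alpha = 0": the returns are
    # recentred at zero so resamples are drawn under the null.
    rng = random.Random(seed)
    n = len(returns)

    def tstat(xs):
        return statistics.fmean(xs) / (statistics.stdev(xs) / math.sqrt(n))

    t_obs = tstat(returns)
    mu = statistics.fmean(returns)
    centred = [r - mu for r in returns]
    hits = sum(
        1 for _ in range(n_boot)
        if abs(tstat([rng.choice(centred) for _ in range(n)])) >= abs(t_obs))
    return hits / n_boot

# A hypothetical "skilled" fund: 1% mean monthly alpha, 2% volatility.
rng = random.Random(5)
skilled = [rng.gauss(0.01, 0.02) for _ in range(120)]
p = bootstrap_pvalue(skilled)
print(p)
```

Testing many funds at once introduces the size and power issues the fourth chapter addresses, which this single-fund sketch deliberately ignores.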

    Statistical Physics of Power Flows on Networks with a High Share of Fluctuating Renewable Generation

    Renewable energy sources will play an important role in the future generation of electrical energy, both because fossil-fuel reserves are limited and because of the waste caused by conventional electricity generation. The most important sources of renewable energy, wind and solar irradiation, exhibit strong temporal fluctuations. This poses new problems for the security of supply. Furthermore, the power flows acquire a stochastic character, so that new methods are required to predict flows within an electrical grid. The main focus of this work is the description of power flows in an electrical transmission network with a high share of renewable generation of electrical energy. To define an appropriate model, it is important to understand the general set-up of a stable system with fluctuating generation. Therefore, generation time series of solar and wind power are compared with load time series for the whole of Europe, and the required balancing or storage capacities are analyzed. With these insights, a simple model is proposed to study the power flows. An approximation to the full power-flow equations is used and evaluated with Monte-Carlo simulations. Furthermore, approximations to the distributions of power flows along the links are derived analytically. Finally, the results are compared to the power flows calculated from the generation and load data.
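The Monte-Carlo approach to stochastic flows can be sketched on the smallest possible linearized (DC) network: a radial three-bus feeder, where the flow on each line is simply the net injection downstream, and the analytic flow distribution is available for comparison. The network, injections, and their Gaussian statistics are all invented for this illustration and are not the thesis's European model.

```python
import random
import statistics

def line_flow_0_1(p1, p2):
    # Radial feeder 0-1-2 with bus 0 as the slack: with no loops, the
    # DC power flow on line 0-1 is the net injection downstream of it.
    return p1 + p2

# Fluctuating renewable infeed minus load at buses 1 and 2, sampled
# as independent standard Gaussians; bus 0 balances the system.
rng = random.Random(2)
flows = [line_flow_0_1(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
         for _ in range(20000)]

# Analytically, the flow is Gaussian with standard deviation
# sqrt(2) ~ 1.414; the Monte-Carlo estimate should match.
print(round(statistics.stdev(flows), 2))
```

On meshed networks the downstream-sum rule no longer holds and flows follow from the network Laplacian, which is where the analytic approximations derived in the thesis come in.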