
    A generalized linear mixed model for longitudinal binary data with a marginal logit link function

    Longitudinal studies of a binary outcome are common in the health, social, and behavioral sciences. In general, a feature of random effects logistic regression models for longitudinal binary data is that the marginal functional form, when integrated over the distribution of the random effects, is no longer of logistic form. Recently, Wang and Louis [Biometrika 90 (2003) 765--775] proposed a random intercept model in the clustered binary data setting where the marginal model has a logistic form. An acknowledged limitation of their model is that it allows only a single random effect that varies from cluster to cluster. In this paper we propose a modification of their model to handle longitudinal data, allowing separate, but correlated, random intercepts at each measurement occasion. The proposed model allows for a flexible correlation structure among the random intercepts, where the correlations can be interpreted in terms of Kendall's τ. For example, the marginal correlations among the repeated binary outcomes can decline with increasing time separation, while the model retains the property of having matching conditional and marginal logit link functions. Finally, the proposed method is used to analyze data from a longitudinal study designed to monitor cardiac abnormalities in children born to HIV-infected women. Comment: Published at http://dx.doi.org/10.1214/10-AOAS390 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
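    As a rough sketch of the property the abstract refers to (the notation is assumed here, not taken from the paper): in the Wang and Louis construction the random intercept follows a "bridge" distribution with an attenuation parameter φ in (0,1), chosen so that the conditional logistic model integrates back to a logistic marginal model with rescaled coefficients,
        \[
        \operatorname{logit}\Pr(Y_{ij}=1 \mid x_{ij}, b_{ij}) = x_{ij}^{\top}\beta + b_{ij}
        \quad\Longrightarrow\quad
        \operatorname{logit}\Pr(Y_{ij}=1 \mid x_{ij}) = \varphi\, x_{ij}^{\top}\beta .
        \]
    The modification described above allows occasion-specific, correlated intercepts b_{i1}, ..., b_{iT} while preserving this matching of the conditional and marginal logit links; the exact parameterization follows the paper, not this sketch.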

    Levinson's Theorem for Dirac Particles

    Levinson's theorem for Dirac particles constrains the sum of the phase shifts at threshold by the total number of bound states of the Dirac equation. Recently, a stronger version of Levinson's theorem has been proven in which the values of the positive- and negative-energy phase shifts are separately constrained by the number of bound states of an appropriate set of Schrödinger-like equations. In this work we elaborate on these ideas and show that the stronger form of Levinson's theorem relates the individual phase shifts directly to the number of bound states of the Dirac equation having an even or odd number of nodes. We use a mean-field approximation to Walecka's scalar-vector model to illustrate this stronger form of Levinson's theorem. We show that the assignment of bound states to a particular phase shift should be done, not on the basis of the sign of the bound-state energy, but rather, in terms of the nodal structure (even/odd number of nodes) of the bound state. Comment: LaTeX with RevTeX, 7 PostScript figures (available from the author), SCRI-06109
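    Schematically, and with notation assumed here rather than taken from the paper, the two versions of the theorem discussed above can be written as
        \[
        \delta_{+} + \delta_{-} = N\pi
        \qquad\text{versus}\qquad
        \delta_{+} = n_{\mathrm{even}}\,\pi, \quad \delta_{-} = n_{\mathrm{odd}}\,\pi,
        \]
    where δ± are the positive- and negative-energy phase shifts at threshold, N is the total number of Dirac bound states, and n_even, n_odd count the bound states with an even or odd number of nodes. Which parity is assigned to which phase shift, and any half-bound-state corrections at threshold, follow the conventions worked out in the paper, not this sketch.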

    Weak convergence of Vervaat and Vervaat Error processes of long-range dependent sequences

    Following Csörgő, Szyszkowicz and Wang (Ann. Statist. 34 (2006), 1013--1044) we consider a long-range dependent linear sequence. We prove weak convergence of the uniform Vervaat and the uniform Vervaat error processes, extending their results to distributions with unbounded support and removing the normality assumption.

    A nanoflare model for active region radiance: application of artificial neural networks

    Context. Nanoflares are small impulsive bursts of energy that blend with and possibly make up much of the solar background emission. Determining their frequency and energy input is central to understanding the heating of the solar corona. One method is to extrapolate the energy frequency distribution of larger individually observed flares to lower energies. Only if the power law exponent is greater than 2 is it considered possible that nanoflares contribute significantly to the energy input. Aims. Time sequences of ultraviolet line radiances observed in the corona of an active region are modelled with the aim of determining the power law exponent of the nanoflare energy distribution. Methods. A simple nanoflare model based on three key parameters (the flare rate, the flare duration time, and the power law exponent of the flare energy frequency distribution) is used to simulate emission line radiances from the ions Fe XIX, Ca XIII, and Si III, observed by SUMER in the corona of an active region as it rotates around the east limb of the Sun. Light curve pattern recognition by an Artificial Neural Network (ANN) scheme is used to determine the values. Results. The power law exponents are alpha = 2.8, 2.8, and 2.6 for Fe XIX, Ca XIII, and Si III, respectively. Conclusions. The light curve simulations imply a power law exponent greater than the critical value of 2 for all ion species. This implies that if the energy of flare-like events is extrapolated to low energies, nanoflares could provide a significant contribution to the heating of active region coronae. Comment: 4 pages, 5 figures
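    A minimal sketch of the kind of three-parameter simulation described in the Methods above (the flare rate, flare duration, and power-law exponent are the model parameters; the decay kernel, energy bounds, bin size, and parameter values below are illustrative assumptions, not the paper's):
        import numpy as np

        rng = np.random.default_rng(0)

        def nanoflare_light_curve(rate, duration, alpha, e_min=1.0, e_max=1e3,
                                  t_total=1000.0, dt=1.0):
            """Simulate a light curve as a superposition of flares.

            rate     : mean number of flares per unit time (Poisson arrivals)
            duration : e-folding decay time of each flare
            alpha    : power-law exponent of the flare energy distribution
            """
            t = np.arange(0.0, t_total, dt)
            flux = np.zeros_like(t)
            n_flares = rng.poisson(rate * t_total)
            starts = rng.uniform(0.0, t_total, n_flares)
            # inverse-transform sampling from dN/dE ~ E^(-alpha) on [e_min, e_max]
            u = rng.uniform(size=n_flares)
            a1 = 1.0 - alpha
            energies = (e_min**a1 + u * (e_max**a1 - e_min**a1))**(1.0 / a1)
            for t0, e in zip(starts, energies):
                mask = t >= t0
                # each flare: instantaneous rise, exponential decay, total energy e
                flux[mask] += (e / duration) * np.exp(-(t[mask] - t0) / duration)
            return t, flux

        t, flux = nanoflare_light_curve(rate=0.05, duration=20.0, alpha=2.8)
    Light curves generated this way, over a grid of the three parameters, are the kind of training material an ANN pattern-recognition scheme could then be asked to match against the observed radiances.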

    On line power spectra identification and whitening for the noise in interferometric gravitational wave detectors

    In this paper we address both the problem of identifying the noise Power Spectral Density of interferometric detectors by parametric techniques and the problem of whitening the data sequence. We concentrate the study on a Power Spectral Density like that of the Italian-French detector VIRGO and show that, with a reasonable finite number of parameters, we succeed in modeling a spectrum like the theoretical one of VIRGO, reproducing all its features. We also propose the use of adaptive techniques to identify and whiten the data of interferometric detectors on line. We analyze the behavior of adaptive techniques of both the stochastic gradient and the Least Squares families. Comment: 28 pages, 21 figures, uses iopart.cls, accepted for publication in Classical and Quantum Gravity
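    A minimal sketch of one of the adaptive ideas mentioned above: a stochastic-gradient (LMS) linear-prediction filter whose prediction error is an approximately whitened version of the input. The filter order, step size, and toy coloured-noise data below are illustrative assumptions, not the settings used for the VIRGO spectrum.
        import numpy as np

        def lms_whiten(x, order=16, mu=1e-3):
            """Adaptive linear prediction by LMS; returns the prediction error,
            which is an (approximately) whitened version of x."""
            w = np.zeros(order)            # adaptive predictor coefficients
            e = np.zeros_like(x)
            for n in range(order, len(x)):
                u = x[n - order:n][::-1]   # most recent 'order' samples
                y = w @ u                  # one-step prediction of x[n]
                e[n] = x[n] - y            # prediction error = whitened output
                w += 2.0 * mu * e[n] * u   # stochastic-gradient update
            return e

        # toy coloured noise: white noise passed through a one-pole filter
        rng = np.random.default_rng(1)
        white = rng.standard_normal(20000)
        coloured = np.zeros_like(white)
        for n in range(1, len(white)):
            coloured[n] = 0.95 * coloured[n - 1] + white[n]

        whitened = lms_whiten(coloured)
    The converged predictor coefficients play the role of a parametric (autoregressive) model of the spectrum, while the running prediction error provides the on-line whitened stream.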

    Asymptotic normality of the Parzen-Rosenblatt density estimator for strongly mixing random fields

    We prove the asymptotic normality of the kernel density estimator (introduced by Rosenblatt (1956) and Parzen (1962)) in the context of stationary strongly mixing random fields. Our approach is based on Lindeberg's method rather than on Bernstein's small-block-large-block technique and coupling arguments widely used in previous works on nonparametric estimation for spatial processes. Our method allows us to consider only minimal conditions on the bandwidth parameter and provides a simple criterion on the (non-uniform) strong mixing coefficients which does not depend on the bandwidth. Comment: 16 pages
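    For reference, the estimator under study has the standard Parzen-Rosenblatt form, written here with assumed notation for a field (X_i) observed over an index set Λ_n with |Λ_n| sites:
        \[
        \hat f_n(x) = \frac{1}{|\Lambda_n|\, b_n^{d}} \sum_{i \in \Lambda_n} K\!\left(\frac{x - X_i}{b_n}\right),
        \]
    where K is a kernel integrating to one, b_n → 0 is the bandwidth, and d is the dimension of the state space of the observations.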

    Linear Estimation of Location and Scale Parameters Using Partial Maxima

    Consider an i.i.d. sample X^*_1,X^*_2,...,X^*_n from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima) sequence, X^*_{1:1},X^*_{2:2},...,X^*_{n:n}, where X^*_{j:j}=max{X^*_1,...,X^*_j}. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to the form of the well-known Lloyd's (1952, Least-squares estimation of location and scale parameters using order statistics, Biometrika, vol. 39, pp. 88-95) BLUEs, based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n), for a wide class of distributions. Comment: This article is devoted to the memory of my six-year-old little daughter, Dionyssia, who left us on August 25, 2010, at Cephalonia isl. (26 pages, to appear in Metrika)
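    For context, the Lloyd-type generalized least squares form that the partial maxima BLUEs also take can be sketched as follows (notation assumed here: Y is the observed vector of partial maxima, Z the corresponding standardized partial maxima under location 0 and scale 1, with mean vector m = E[Z] and covariance matrix Σ = Cov(Z)):
        \[
        (\hat\mu, \hat\sigma)^{\top} = (A^{\top}\Sigma^{-1}A)^{-1} A^{\top}\Sigma^{-1} Y,
        \qquad A = [\,\mathbf{1}\ \ m\,].
        \]
    The consistency question studied above then amounts to the behaviour of Var(σ̂) as n grows, which the paper bounds by O(1/log n).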

    Mixing Bandt-Pompe and Lempel-Ziv approaches: another way to analyze the complexity of continuous-states sequences

    In this paper, we propose to mix the approach underlying Bandt-Pompe permutation entropy with Lempel-Ziv complexity, to design what we call Lempel-Ziv permutation complexity. The principle consists of two steps: (i) transformation of a continuous-state series (either intrinsically multivariate or obtained by embedding) into a sequence of permutation vectors, whose components are the positions of the components of the initial vector when rearranged; (ii) computation of the Lempel-Ziv complexity of this series of `symbols', drawn from a discrete finite-size alphabet. On the one hand, the permutation entropy of Bandt-Pompe aims at the study of the entropy of such a sequence; i.e., the entropy of patterns in a sequence (e.g., local increases or decreases). On the other hand, the Lempel-Ziv complexity of a discrete-state sequence aims at the study of the temporal organization of the symbols (i.e., the rate of compressibility of the sequence). Thus, the Lempel-Ziv permutation complexity aims to take advantage of both of these methods. The potential of such a combined approach, a permutation procedure followed by a complexity analysis, is evaluated on both simulated and real data. In both cases, we compare the individual approaches and the combined approach. Comment: 30 pages, 4 figures
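    A minimal sketch of the two steps described above (the embedding dimension, the dictionary-based phrase count standing in for the Lempel-Ziv complexity, and the toy signal are illustrative choices, not the paper's exact definitions):
        import numpy as np

        def permutation_symbols(x, dim=3):
            """Step (i): map a scalar series to ordinal-pattern symbols of order dim."""
            symbols = []
            for i in range(len(x) - dim + 1):
                window = x[i:i + dim]
                # the permutation vector: positions of the components after sorting
                symbols.append(tuple(np.argsort(window)))
            return symbols

        def lempel_ziv_complexity(seq):
            """Step (ii): count distinct phrases in a simple LZ-style parsing."""
            phrases = set()
            phrase = []
            count = 0
            for s in seq:
                phrase.append(s)
                if tuple(phrase) not in phrases:
                    phrases.add(tuple(phrase))
                    count += 1
                    phrase = []
            return count + (1 if phrase else 0)

        rng = np.random.default_rng(2)
        x = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
        c = lempel_ziv_complexity(permutation_symbols(x, dim=3))
    A strongly periodic series parses into few distinct phrases and gives a small count, while a noisy series gives a large one, which is the contrast the combined measure is designed to capture.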

    Automatic Network Fingerprinting through Single-Node Motifs

    Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs, a combination of local network features. One technique to identify single node-motifs has been presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated on different network series. Third, we provide an example of how the method can be used to analyse network time-series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, like hubs before them, might be found to play critical roles in real-world networks. Comment: 16 pages (4 figures) plus supporting information of 8 pages (5 figures)
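    A minimal illustration of the underlying idea, computing a local feature vector per node and flagging outliers by z-score; this is not the Costa et al. procedure itself, and the particular features, threshold, and test graph below are illustrative assumptions:
        import networkx as nx
        import numpy as np

        def node_feature_matrix(G):
            """One row per node: degree, clustering coefficient, average neighbour degree."""
            deg = dict(G.degree())
            clust = nx.clustering(G)
            nbr_deg = nx.average_neighbor_degree(G)
            nodes = list(G.nodes())
            X = np.array([[deg[v], clust[v], nbr_deg[v]] for v in nodes], dtype=float)
            return nodes, X

        def flag_special_nodes(G, z_threshold=2.0):
            """Flag nodes whose local feature vector is far from the network average."""
            nodes, X = node_feature_matrix(G)
            z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
            scores = np.linalg.norm(z, axis=1)
            return [v for v, s in zip(nodes, scores) if s > z_threshold]

        G = nx.barabasi_albert_graph(200, 3, seed=0)
        special = flag_special_nodes(G)   # hubs typically stand out in this toy graph
    Automating the choice of features and of the outlier threshold is the kind of parameter determination the improvements summarized above are aimed at.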