715 research outputs found

    Algorithmic options for joint time-frequency analysis in structural dynamics applications

    Get PDF
    The purpose of this paper is to present recent research efforts by the authors supporting the superiority of joint time-frequency analysis over the traditional Fourier transform in the study of non-stationary signals commonly encountered in earthquake engineering and structural dynamics. Three distinct signal processing techniques appropriate for representing signals in the time-frequency plane are considered: the harmonic wavelet transform, the adaptive chirplet decomposition, and the empirical mode decomposition. These are used to analyze selected seismic accelerograms and structural response records. Numerical examples involving the inelastic dynamic response of a seismically excited 3-story benchmark steel-frame building show how the mean instantaneous frequency, as derived by the aforementioned techniques, can serve as an indicator of global structural damage.
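    A mean instantaneous frequency of the kind mentioned in this abstract is commonly computed from the analytic signal; the sketch below uses the Hilbert transform and an amplitude-weighted average. This is a generic illustration of the quantity, not the authors' implementation (which uses wavelet, chirplet, and EMD representations).

```python
import numpy as np
from scipy.signal import hilbert

def mean_instantaneous_frequency(x, fs):
    """Amplitude-weighted mean instantaneous frequency (Hz) via the
    analytic signal z(t) = x(t) + i*H[x](t)."""
    z = hilbert(x)                                   # analytic signal
    phase = np.unwrap(np.angle(z))                   # continuous phase
    inst_freq = np.diff(phase) * fs / (2 * np.pi)    # sample-wise frequency, Hz
    amp = np.abs(z)[:-1]                             # weight by envelope
    return np.sum(amp * inst_freq) / np.sum(amp)

# Toy check: a pure 5 Hz sine sampled at 100 Hz should give a value close to 5.0
fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)
mif = mean_instantaneous_frequency(x, fs)
```

    For a damage indicator, one would track this quantity over sliding windows of the structural response and look for a drop in the dominant frequency as stiffness degrades.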

    K-sample subsampling in general spaces: The case of independent time series

    Get PDF
    The problem of subsampling in two-sample and K-sample settings is addressed, where both the data and the statistics of interest take values in general spaces. We focus on the case where each sample is a stationary time series, and construct subsampling confidence intervals and hypothesis tests with asymptotic validity. Some examples are given, and the problem of optimal block size choice is discussed.

    A non-abelian quasi-particle model for gluon plasma

    Get PDF
    We propose a quasi-particle model for the thermodynamic description of the gluon plasma which takes into account non-abelian characteristics of the gluonic field. This is accomplished utilizing massive non-linear plane wave solutions of the classical equations of motion with a variable mass parameter, reflecting the scale invariance of the Yang-Mills Lagrangian. For the statistical description of the gluon plasma we interpret these non-linear waves as quasi-particles with a temperature-dependent mass distribution. Quasi-Gaussian distributions with a common variance but different temperature-dependent mean masses for the longitudinal and transverse modes are employed. We use recent Lattice results to fix the mean transverse and longitudinal masses, while the variance is fitted to the equation of state of pure SU(3) on the Lattice. Thus, our model obtains both a consistent description of the gluon plasma energy density and a correct behaviour of the mass parameters near the critical point. Comment: 7 pages, 2 figures.
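    The statistical setup sketched in this abstract (bosonic quasi-particles with a Gaussian mass distribution per polarization mode) can be written, in one plausible form not taken verbatim from the paper, as an energy density averaged over the mass distribution:

```latex
\[
\varepsilon(T) \;=\; \sum_{i \in \{L,\,T\}} g_i
  \int_0^{\infty} \mathrm{d}m\; G\!\left(m;\, \bar m_i(T),\, \sigma\right)
  \int \frac{\mathrm{d}^3 p}{(2\pi)^3}\,
  \frac{\sqrt{p^2 + m^2}}{e^{\sqrt{p^2 + m^2}/T} - 1},
\qquad
G(m;\, \bar m,\, \sigma) \,\propto\, e^{-(m-\bar m)^2 / 2\sigma^2},\quad m \ge 0,
\]
```

    where $g_i$ are the degeneracies of the longitudinal and transverse modes, $\bar m_i(T)$ are the temperature-dependent mean masses fixed from Lattice data, and the common variance $\sigma^2$ is fitted to the SU(3) equation of state.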

    On the asymptotic theory of subsampling

    Get PDF
    A general approach to constructing confidence intervals by subsampling was presented in Politis and Romano (1994). The crux of the method is to recompute a statistic over subsamples of the data, and these recomputed values are used to build up an estimated sampling distribution. The method works under extremely weak conditions: it applies to independent, identically distributed (i.i.d.) observations as well as to dependent data situations, such as time series (possibly nonstationary), random fields, and marked point processes. In this article, we present some new theorems showing: a new construction for confidence intervals that removes a previous condition, a general theorem showing the validity of subsampling for data-dependent choices of the block size, and a general theorem for the construction of hypothesis tests (not necessarily derived from a confidence interval construction). The arguments apply to both the i.i.d. setting and the dependent data case.
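    The basic subsampling construction the abstract refers to can be sketched for the mean of a stationary series: recompute the statistic on every overlapping block of length b, center and rescale, and read the confidence interval off the empirical quantiles. This is a minimal illustration assuming a sqrt(n) convergence rate, not the paper's general-space formulation.

```python
import numpy as np

def subsampling_ci(x, b, stat=np.mean, alpha=0.05):
    """Politis-Romano-style subsampling confidence interval for a
    stationary series, assuming the statistic converges at rate sqrt(n)."""
    n = len(x)
    theta = stat(x)
    # statistic recomputed on every overlapping block of length b
    subs = np.array([stat(x[i:i + b]) for i in range(n - b + 1)])
    # subsampling roots: tau_b * (theta_b - theta_n) with tau_b = sqrt(b)
    roots = np.sqrt(b) * (subs - theta)
    q_lo, q_hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    # invert: theta in [theta_n - q_hi / sqrt(n), theta_n - q_lo / sqrt(n)]
    return theta - q_hi / np.sqrt(n), theta - q_lo / np.sqrt(n)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)      # i.i.d. noise with true mean 0
lo, hi = subsampling_ci(x, b=50)
```

    The data-dependent block-size choice discussed in the article replaces the fixed b=50 above with a rule calibrated from the data itself.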

    Association of Manganese Biomarker Concentrations with Blood Pressure and Kidney Parameters among Healthy Adolescents: NHANES 2013–2018

    Get PDF
    Deficiency or excess exposure to manganese (Mn), an essential mineral, may have potentially adverse health effects. The kidneys are a major organ of Mn site-specific toxicity because of their unique role in filtration, metabolism, and excretion of xenobiotics. We hypothesized that Mn concentrations were associated with poorer blood pressure (BP) and kidney parameters such as estimated glomerular filtration rate (eGFR), blood urea nitrogen (BUN), and albumin creatinine ratio (ACR). We conducted a cross-sectional analysis of 1931 healthy U.S. adolescents aged 12–19 years participating in National Health and Nutrition Examination Survey cycles 2013–2014, 2015–2016, and 2017–2018. Blood and urine Mn concentrations were measured using inductively coupled plasma mass spectrometry. Systolic and diastolic BP were calculated as the average of available readings. eGFR was calculated from serum creatinine using the Bedside Schwartz equation. We performed multiple linear regression, adjusting for age, sex, body mass index, race/ethnicity, and poverty income ratio. We observed null relationships between blood Mn concentrations and eGFR, ACR, BUN, and BP. In a subset of 691 participants, we observed that a 10-fold increase in urine Mn was associated with a 16.4 mL/min higher eGFR (95% Confidence Interval: 11.1, 21.7). These exploratory findings should be interpreted cautiously and warrant investigation in longitudinal studies.
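    The Bedside Schwartz equation used for eGFR in this abstract is a simple ratio formula for children and adolescents; a minimal sketch (with made-up input values, not NHANES data):

```python
def egfr_bedside_schwartz(height_cm, scr_mg_dl):
    """Bedside Schwartz estimate of GFR (mL/min/1.73 m^2) for
    pediatric/adolescent patients: 0.413 * height (cm) / serum creatinine (mg/dL)."""
    return 0.413 * height_cm / scr_mg_dl

# e.g. a 160 cm adolescent with serum creatinine 0.7 mg/dL
egfr = egfr_bedside_schwartz(160, 0.7)   # about 94.4
```

    The "10-fold increase in urine Mn" framing corresponds to modeling urine Mn on the log10 scale, so the 16.4 mL/min estimate is the regression coefficient for a one-unit change in log10(urine Mn).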

    VerdictDB: Universalizing Approximate Query Processing

    Full text link
    Despite 25 years of research in academia, approximate query processing (AQP) has had little industrial adoption. One of the major causes of this slow adoption is the reluctance of traditional vendors to make radical changes to their legacy codebases, and the preoccupation of newer vendors (e.g., SQL-on-Hadoop products) with implementing standard features. Additionally, the few AQP engines that are available are each tied to a specific platform and require users to completely abandon their existing databases---an unrealistic expectation given the infancy of the AQP technology. Therefore, we argue that a universal solution is needed: a database-agnostic approximation engine that will widen the reach of this emerging technology across various platforms. Our proposal, called VerdictDB, uses a middleware architecture that requires no changes to the backend database, and thus can work with all off-the-shelf engines. Operating at the driver level, VerdictDB intercepts analytical queries issued to the database and rewrites them into another query that, if executed by any standard relational engine, will yield sufficient information for computing an approximate answer. VerdictDB uses the returned result set to compute an approximate answer and error estimates, which are then passed on to the user or application. However, lack of access to the query execution layer introduces significant challenges in terms of generality, correctness, and efficiency. This paper shows how VerdictDB overcomes these challenges and delivers up to 171× speedup (18.45× on average) for a variety of existing engines, such as Impala, Spark SQL, and Amazon Redshift, while incurring less than 2.6% relative error. VerdictDB is open-sourced under the Apache License. Comment: extended technical report of the paper that appeared in Proceedings of the 2018 International Conference on Management of Data, pp. 1461–1476, ACM, 2018.
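    The driver-level rewriting idea described in the abstract can be illustrated in miniature: an aggregate over the full table is redirected to a pre-built uniform sample and scaled back up. This toy sketch is not VerdictDB's actual rewriter; the table names and the string-replacement approach are invented for illustration (VerdictDB parses and rewrites SQL properly and also emits the terms needed for error estimation).

```python
# Hypothetical 1% uniform sample table "orders_sample_1pct" of "orders".
SAMPLE_RATIO = 0.01

def rewrite_count(query: str) -> str:
    """Toy rewrite: answer COUNT(*) on the sample, scaled by 1/ratio,
    so any standard relational engine can execute the rewritten query."""
    return query.replace(
        "COUNT(*) FROM orders",
        f"COUNT(*) / {SAMPLE_RATIO} FROM orders_sample_1pct",
    )

rewritten = rewrite_count("SELECT COUNT(*) FROM orders WHERE price > 10")
```

    Because the rewritten query is plain SQL, the same middleware can sit in front of Impala, Spark SQL, or Redshift without touching the engine's execution layer.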

    Inconsistency of the MLE for the joint distribution of interval censored survival times and continuous marks

    Full text link
    This paper considers the nonparametric maximum likelihood estimator (MLE) for the joint distribution function of an interval censored survival time and a continuous mark variable. We provide a new explicit formula for the MLE in this problem. We use this formula and the mark-specific cumulative hazard function of Huang and Louis (1998) to obtain the almost sure limit of the MLE. This result leads to necessary and sufficient conditions for consistency of the MLE, which imply that the MLE is inconsistent in general. We show that the inconsistency can be repaired by discretizing the marks. Our theoretical results are supported by simulations. Comment: 27 pages, 4 figures.
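    The "repair by discretizing the marks" step amounts to mapping the continuous mark variable onto a fixed finite grid before forming the joint MLE. A minimal sketch of that preprocessing step (grid and values invented for illustration, not from the paper):

```python
import numpy as np

# Continuous marks on [0, 1], discretized into four fixed bins.
marks = np.array([0.12, 0.48, 0.51, 0.97])
bin_edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

# np.digitize against the interior edges gives a bin index per mark,
# so the MLE is then computed over a finite set of mark values.
discretized = np.digitize(marks, bin_edges[1:-1])
```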

    Sampling of temporal networks: methods and biases

    Get PDF
    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure, so caution is necessary when generalizing results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths, and epidemic spread. We find that some biases are common across a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling method be problem-oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
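    The strategy the abstract singles out, uniform sampling of nodes, can be sketched on a temporal network stored as timestamped contact events: draw nodes uniformly at random, then keep only the events whose endpoints were both sampled. The event list below is a made-up toy example, not one of the paper's datasets.

```python
import random

# Temporal network as (time, u, v) contact events (toy data).
events = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d")]
nodes = sorted({u for _, u, _ in events} | {v for _, _, v in events})

# Uniform node sampling: pick nodes at random, keep induced events.
random.seed(0)
sampled = set(random.sample(nodes, 3))
subnetwork = [(t, u, v) for t, u, v in events
              if u in sampled and v in sampled]
```

    Statistics such as link activity or temporal path counts computed on `subnetwork` can then be compared against the full network to quantify the sampling bias.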