
    Asymptotics of stochastic learning in structured networks


    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched in the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.

    Transition Physics and Boundary-Layer Stability: Computational Modeling in Compressible Flow

    Laminar-to-turbulent transition of boundary layers remains a critical subject of study in aerodynamics. The differences in surface friction and heating between laminar and turbulent flows can be nearly an order of magnitude, so accurate prediction of the transition region between these two regimes is essential for design applications. The objective of this work is to advance simplified approaches to representing the laminar boundary layer and the perturbation dynamics that usher flows to turbulence. A versatile boundary-layer solver called DEKAF, which includes thermochemical effects, has been created, and the in-house nonlinear parabolized stability equation technique called EPIC has been advanced, including an approach to reduce the divergent growth associated with the inclusion of the mean-flow distortion. The simplified approaches are then applied to advance studies in improving aircraft energy efficiency. Under the auspices of a NASA University Leadership Initiative, the transformative technology of a swept, slotted, natural-laminar-flow wing is leveraged to maintain laminar flow over large extents of the wing surface, thereby increasing energy efficiency. From an aircraft performance perspective, sweep is beneficial because it reduces wave drag. From a boundary-layer transition perspective, though, sweep introduces several physical complications, spawned by the crossflow instability mechanism. As sweep is increased, the crossflow mechanism becomes increasingly unstable and can lead to early transition to turbulence. The overarching goal of the present analysis is to address the question: how much sweep can be applied to this wing while maintaining the benefits of the slotted, natural-laminar-flow design? Linear and nonlinear stability analyses are presented to assess various pathways to turbulence.
In addition, companion computations are presented to accompany the risk-reduction experiment run in the Klebanoff-Saric Wind Tunnel at Texas A&M University. Linear analyses assess a wide range of configurations to inform experimentalists where relevant unstable content resides. A comparison between simulation and experimental measurements is presented, for which there is good agreement.

    On factor models for high-dimensional time series

    The aim of this thesis is to develop statistical methods for use with factor models for high-dimensional time series. We consider three broad areas: estimation, changepoint detection, and determination of the number of factors. In Chapter 1, we sketch the backdrop for our thesis and review key aspects of the literature. In Chapter 2, we develop a method to estimate the factors and parameters in an approximate dynamic factor model. Specifically, we present a spectral expectation-maximisation (or "spectral EM") algorithm, whereby we derive the E and M step equations in the frequency domain. Our E step relies on the Wiener-Kolmogorov smoother, the frequency domain counterpart of the Kalman smoother, and our M step is based on maximisation of the Whittle likelihood with respect to the parameters of the model. We initialise our procedure using dynamic principal components analysis (or "dynamic PCA"), and by leveraging results on lag-window estimators of spectral density by Wu and Zaffaroni (2018), we establish consistency-with-rates of our spectral EM estimator of the parameters and factors as both the dimension (N) and the sample size (T) go to infinity. We find rates commensurate with the literature. Finally, we conduct a simulation study to numerically validate our theoretical results. In Chapter 3, we develop a sequential procedure to detect changepoints in an approximate static factor model. Specifically, we define a ratio of eigenvalues of the covariance matrix of N observed variables. We compute this ratio each period using a rolling window of size m over time, and declare a changepoint when its value breaches an alarm threshold. We investigate the asymptotic behaviour (as N, m → ∞) of our ratio, and prove that, for specific eigenvalues, the ratio will spike upwards when a changepoint is encountered but not otherwise. We use a block bootstrap to obtain alarm thresholds.
We present simulation results and an empirical application based on Financial Times Stock Exchange 100 Index (or "FTSE 100") data. In Chapter 4, we conduct an exploratory analysis which aims to extend the randomised sequential procedure of Trapani (2018) into the frequency domain. Specifically, we aim to estimate the number of dynamically loaded factors by applying the test of Trapani (2018) to eigenvalues of the estimated spectral density matrix (as opposed to the covariance matrix) of the data.
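The rolling eigenvalue-ratio idea from Chapter 3 can be sketched in a few lines. The particular ratio statistic, window size, and fixed alarm threshold below are illustrative assumptions, not the thesis's exact construction (which obtains thresholds via a block bootstrap):

```python
import numpy as np

def eigen_ratio(window, k=1):
    """Ratio of the (k+1)-th to the k-th largest eigenvalue of the sample
    covariance of an (m x N) data window (an illustrative choice of statistic)."""
    S = np.cov(window, rowvar=False)
    w = np.sort(np.linalg.eigvalsh(S))[::-1]  # eigenvalues, descending
    return w[k] / w[k - 1]

def detect_changepoints(X, m=50, k=1, threshold=0.2):
    """Declare an alarm at period t when the rolling-window ratio breaches
    the threshold (here a fixed assumed value rather than a bootstrap one)."""
    return [t for t in range(m, X.shape[0]) if eigen_ratio(X[t - m:t], k) > threshold]

# Toy example: a one-factor model whose loadings change at t = 100, so a
# second large eigenvalue appears in windows straddling the break.
rng = np.random.default_rng(0)
T, N = 200, 20
f = rng.standard_normal(T)
b1 = rng.standard_normal(N); b1 /= np.linalg.norm(b1)
b2 = rng.standard_normal(N); b2 /= np.linalg.norm(b2)
X = 0.1 * rng.standard_normal((T, N))
X[:100] += np.outer(f[:100], b1)
X[100:] += np.outer(f[100:], b2)
alarms = detect_changepoints(X, m=50, k=1, threshold=0.2)
```

Before the break the ratio is tiny (noise eigenvalue over factor eigenvalue); in windows straddling the break two comparable eigenvalues coexist and the ratio spikes, mirroring the behaviour the chapter proves asymptotically.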


    Incorporating Prior Knowledge of Latent Group Structure in Panel Data Models

    The assumption of group heterogeneity has become popular in panel data models. We develop a constrained Bayesian grouped estimator that exploits researchers' prior beliefs on groups in the form of pairwise constraints, indicating whether a pair of units is likely to belong to the same group or to different groups. We propose a prior to incorporate the pairwise constraints with varying degrees of confidence. The whole framework is built on nonparametric Bayesian methods, which implicitly specify a distribution over group partitions, so the posterior analysis takes the uncertainty of the latent group structure into account. Monte Carlo experiments reveal that adding prior knowledge yields more accurate coefficient estimates and scores predictive gains over alternative estimators. We apply our method to two empirical applications. In the first, forecasting U.S. CPI inflation, we illustrate that prior knowledge of groups improves density forecasts when the data are not entirely informative. The second revisits the relationship between a country's income and its democratic transition; we identify heterogeneous income effects on democracy with five distinct groups over ninety countries.
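The general mechanism of tilting a partition prior by pairwise beliefs can be illustrated with a toy example: a Chinese-restaurant-process base prior reweighted by how well a partition respects must-link/cannot-link constraints. This is a hedged sketch only; the function names, the CRP base, and the multiplicative weighting scheme are assumptions for illustration, not the paper's actual prior:

```python
import math

def crp_log_prob(partition, alpha=1.0):
    """Log probability of a set partition under the Chinese restaurant process."""
    n, logp = 0, 0.0
    for block in partition:
        logp += math.log(alpha) + math.lgamma(len(block))  # alpha * (|block|-1)!
        n += len(block)
    logp -= sum(math.log(alpha + i) for i in range(n))     # rising factorial
    return logp

def constrained_log_prior(partition, must_link, cannot_link, c=2.0):
    """CRP prior tilted by pairwise beliefs: each satisfied constraint multiplies
    the (unnormalised) prior weight by c, each violated one divides it by c;
    c encodes the researcher's confidence in the constraints."""
    label = {u: g for g, block in enumerate(partition) for u in block}
    logp = crp_log_prob(partition)
    for i, j in must_link:
        logp += math.log(c) if label[i] == label[j] else -math.log(c)
    for i, j in cannot_link:
        logp += math.log(c) if label[i] != label[j] else -math.log(c)
    return logp

good = [{0, 1}, {2}]   # respects the belief that units 0 and 1 share a group
bad = [{0}, {1, 2}]    # violates it
lp_good = constrained_log_prior(good, must_link=[(0, 1)], cannot_link=[])
lp_bad = constrained_log_prior(bad, must_link=[(0, 1)], cannot_link=[])
```

Since both partitions have equal CRP probability here, the constraint tilt alone separates them, which is the sense in which prior beliefs shift posterior mass toward compatible group structures.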

    Sampling from Mean-Field Gibbs Measures via Diffusion Processes

    We consider Ising mixed p-spin glasses at high temperature and without external field, and study the problem of sampling from the Gibbs distribution μ in polynomial time. We develop a new sampling algorithm with complexity of the same order as evaluating the gradient of the Hamiltonian and, in particular, at most linear in the input size. We prove that, at sufficiently high temperature, it produces samples from a distribution μ^alg which is close in normalized Wasserstein distance to μ. Namely, there exists a coupling of μ and μ^alg such that if (x, x^alg) ∈ {−1,+1}^n × {−1,+1}^n is a pair drawn from this coupling, then n^{-1} E‖x − x^alg‖_2^2 = o_n(1). For the case of the Sherrington-Kirkpatrick model, our algorithm succeeds in the full replica-symmetric phase. We complement this result with a negative one for sampling algorithms satisfying a certain 'stability' property, which is verified by many standard techniques. No stable algorithm can approximately sample at temperatures below the onset of shattering, even under the normalized Wasserstein metric. Further, no algorithm can sample at temperatures below the onset of replica symmetry breaking. Our sampling method implements a discretized version of a diffusion process that has recently become popular in machine learning under the name of 'denoising diffusion.' We derive the same process from the general construction of stochastic localization. Implementing the diffusion process requires efficiently approximating the mean of the tilted measure. To this end, we use an approximate message passing algorithm that, as we prove, achieves sufficiently accurate mean estimation.
    Comment: 61 pages. arXiv admin note: substantial text overlap with arXiv:2203.0509
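The discretized diffusion can be sketched for the Sherrington-Kirkpatrick case: run the observation process forward, at each step plugging in an estimate of the mean of the tilted measure. In this sketch the mean is approximated by a damped naive mean-field fixed point rather than the paper's AMP routine, and the inverse temperature, step size, horizon, and iteration counts are all assumptions, so this is purely illustrative:

```python
import numpy as np

def mean_estimate(y, J, beta, iters=50, damping=0.5):
    """Damped naive mean-field fixed point for the tilted SK measure
    ∝ exp(beta/(2·sqrt(n)) x'Jx + y'x); a stand-in for the paper's AMP step."""
    n = len(y)
    m = np.tanh(y)
    for _ in range(iters):
        m = damping * np.tanh(beta * (J @ m) / np.sqrt(n) + y) + (1 - damping) * m
    return m

def diffusion_sample(J, beta, T=5.0, dt=0.02, rng=None):
    """Discretized stochastic localization / denoising diffusion:
    y_{t+dt} = y_t + m(y_t) dt + sqrt(dt) g, then round the final mean to signs."""
    rng = np.random.default_rng() if rng is None else rng
    n = J.shape[0]
    y = np.zeros(n)
    for _ in range(int(T / dt)):
        m = mean_estimate(y, J, beta)
        y = y + m * dt + np.sqrt(dt) * rng.standard_normal(n)
    return np.where(mean_estimate(y, J, beta) >= 0.0, 1.0, -1.0)

rng = np.random.default_rng(1)
n = 40
G = rng.standard_normal((n, n))
J = (G + G.T) / np.sqrt(2.0)   # GOE-like symmetric couplings
np.fill_diagonal(J, 0.0)
x = diffusion_sample(J, beta=0.2, rng=rng)
```

As t grows, y concentrates the tilted measure on a single configuration, so the estimated mean approaches a corner of the hypercube and the final sign rounding yields the sample.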

    Overcoming the timescale barrier in molecular dynamics: Transfer operators, variational principles and machine learning

    One of the main challenges in molecular dynamics is overcoming the ‘timescale barrier’: in many realistic molecular systems, biologically important rare transitions occur on timescales that are not accessible to direct numerical simulation, even on the largest or specifically dedicated supercomputers. This article discusses how to circumvent the timescale barrier by a collection of transfer operator-based techniques that have emerged from dynamical systems theory, numerical mathematics and machine learning over the last two decades. We will focus on how transfer operators can be used to approximate the dynamical behaviour on long timescales, review the introduction of this approach into molecular dynamics, and outline the respective theory, as well as the algorithmic development, from the early numerics-based methods, via variational reformulations, to modern data-based techniques utilizing and improving concepts from machine learning. Furthermore, its relation to rare event simulation techniques will be explained, revealing a broad equivalence of variational principles for long-time quantities in molecular dynamics. The article will mainly take a mathematical perspective and will leave the application to real-world molecular systems to the more than 1000 research articles already written on this subject.
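The simplest data-driven instance of the transfer operator approach is a Markov state model: discretize trajectory data into states, estimate the transition matrix at some lag time, and read slow timescales off its leading eigenvalues. A minimal sketch, where the double-well trajectory, state count, and lag time are illustrative assumptions:

```python
import numpy as np

def transition_matrix(states, n_states, lag=1):
    """Row-stochastic estimate of the transfer operator on a state discretization."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(states[:-lag], states[lag:]):
        C[a, b] += 1.0
    C = C + C.T  # symmetrized counts, a common detailed-balance choice for MSMs
    return C / C.sum(axis=1, keepdims=True)

def implied_timescales(P, lag=1):
    """t_i = -lag / log(lambda_i) for the non-unit eigenvalues of P."""
    ev = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return -lag / np.log(ev[1:])

# Toy metastable system: overdamped diffusion in the double well V(q) = (q^2 - 1)^2,
# whose rare well-hopping events set the slowest timescale.
rng = np.random.default_rng(0)
q, dt, n_steps = -1.0, 1e-3, 200_000
traj = np.empty(n_steps)
for i in range(n_steps):
    q += -4.0 * q * (q * q - 1.0) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    traj[i] = q
edges = np.linspace(-1.5, 1.5, 13)                    # 12 states on [-1.5, 1.5]
states = np.clip(np.digitize(traj, edges) - 1, 0, 11)
P = transition_matrix(states, 12, lag=100)
ts = implied_timescales(P, lag=100)                   # slowest entry ≈ hopping time
```

The point of the construction is exactly the one made above: eigenvalues of the estimated operator encode long-time behaviour far beyond the lag at which the matrix was built.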

    MCMC methods: graph samplers, invariance tests and epidemic models

    Markov Chain Monte Carlo (MCMC) techniques are used ubiquitously for simulation-based inference. This thesis provides novel contributions to MCMC methods and their application to graph sampling and epidemic modeling. The first topic considered is that of sampling graphs conditional on a set of prescribed statistics, a difficult problem arising naturally in many fields: sociology (Holland and Leinhardt, 1981), psychology (Connor and Simberloff, 1979), categorical data analysis (Agresti, 1992) and finance (Squartini et al., 2018, Gandy and Veraart, 2019) being examples. Bespoke MCMC samplers are proposed for this setting. The second major topic addressed is that of modeling the dynamics of infectious diseases, where MCMC is leveraged as the general inference engine. The first part of this thesis addresses important problems such as the uniform sampling of graphs with given degree sequences, and of weighted graphs with given strength sequences. These distributions are frequently used for exact tests on social networks and two-way contingency tables. Another application is quantifying the statistical significance of patterns observed in real networks. This is crucial for understanding whether such patterns indicate the presence of interesting network phenomena, or whether they simply result from less interesting processes, such as nodal heterogeneity. The MCMC samplers developed in the course of this research are complex, and there is great scope for conceptual, analytic, and implementation errors. This motivates a chapter that develops novel tests for detecting errors in MCMC implementations. The tests introduced are unique in being exact, which allows us to keep the false rejection probability arbitrarily low. Rather than develop bespoke samplers, as in the first part of the thesis, the second part leverages a standard MCMC framework, Stan (Stan Development Team, 2018), as the workhorse for fitting state-of-the-art epidemic models.
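The textbook MCMC move for sampling graphs with a fixed degree sequence is the double edge swap: pick two edges, exchange their endpoints, and reject moves that would create self-loops or multi-edges. The thesis develops bespoke samplers beyond this, so the following is only the standard baseline, sketched for a simple undirected graph:

```python
import random

def double_edge_swap(edges, adj, rng):
    """One chain step: rewire edges (a,b),(c,d) to (a,d),(c,b), which preserves
    every vertex degree; reject if it would create a self-loop or multi-edge."""
    i, j = rng.sample(range(len(edges)), 2)
    (a, b), (c, d) = edges[i], edges[j]
    if len({a, b, c, d}) < 4:
        return False                      # would create a self-loop
    if d in adj[a] or b in adj[c]:
        return False                      # would create a multi-edge
    adj[a].remove(b); adj[b].remove(a)
    adj[c].remove(d); adj[d].remove(c)
    adj[a].add(d); adj[d].add(a)
    adj[c].add(b); adj[b].add(c)
    edges[i], edges[j] = (a, d), (c, b)
    return True

# Start from a 6-cycle and run the chain; all degrees stay fixed at 2.
rng = random.Random(0)
n = 6
edges = [(k, (k + 1) % n) for k in range(n)]
adj = {v: set() for v in range(n)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)
for _ in range(1000):
    double_edge_swap(edges, adj, rng)
degrees = sorted(len(adj[v]) for v in range(n))
```

Uniformity over the target graph space is exactly where such chains get subtle (mixing, connectivity of the move set), which is what motivates both the bespoke samplers and the exactness tests described above.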
We present a general framework for semi-mechanistic Bayesian modeling of infectious diseases using renewal processes. The term semi-mechanistic refers to statistical estimation within some constrained mechanism. This research was motivated by the ongoing SARS-CoV-2 pandemic, and variants of the model have been used in specific analyses of COVID-19. We present epidemia, an R package allowing researchers to leverage these epidemic models. A key goal of this work is to demonstrate that MCMC, and in particular Stan's No-U-Turn Sampler (Hoffman and Gelman, 2014), can be routinely employed to fit a large class of epidemic models. A second goal is to make the models accessible to the general research community through epidemia.
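At the heart of such renewal-process models, expected new infections are the reproduction number times a generation-interval-weighted sum of past infections, i_t = R_t · Σ_s g_s · i_{t−s}. A minimal deterministic sketch of that recursion (the generation-interval weights and R trajectory below are made-up illustrations, not fitted quantities, and the full Bayesian model adds observation and prior layers on top):

```python
import numpy as np

def renewal(R, g, i0=10.0):
    """Deterministic renewal process: i_t = R_t * sum_s g_s * i_{t-s}.
    R: reproduction number per day; g: generation-interval pmf (g[0] is lag 1)."""
    T = len(R)
    i = np.zeros(T)
    i[0] = i0
    for t in range(1, T):
        lags = i[max(0, t - len(g)):t][::-1]       # i_{t-1}, i_{t-2}, ...
        i[t] = R[t] * np.dot(g[:len(lags)], lags)
    return i

g = np.array([0.4, 0.3, 0.2, 0.1])                 # assumed generation interval
T = 60
R = np.where(np.arange(T) < 30, 1.5, 0.7)          # transmission drops at t = 30
infections = renewal(R, g)
```

With R above one, infections grow roughly geometrically; once R drops below one at the assumed intervention date, the epidemic decays, which is the mechanism the statistical layers of the model constrain.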