98 research outputs found

    Topological models of swarming

    We study the collective behaviour of animal aggregations (swarming) using theoretical models of collective motion. Focusing on bird flocking, we aim to reproduce two main aspects of real-world aggregations: cohesion and co-alignment. Following the observation that interactions between birds in a flock do not have a characteristic length scale, we concentrate on topological, metric-free models of collective motion. We propose and analyse three novel models of swarming: two based on topological interactions between particles, which define interacting neighbours via a Voronoi tessellation of the group of particles, and one which uses the visual field of the agent. We explore the problem of cohesion, the bounding of topological flocks in free space, by introducing the mechanism of neighbour anticipation. This relies on moving towards the inferred future positions of an individual's neighbours, and provides the bounding forces for the group. We also address the issue of unrealistic density distributions in existing metric-free models by introducing a homogeneous, tunable motional bias throughout the swarm. The proposed model produces swarms with density distributions corresponding to empirical data from flocks of starlings. Furthermore, we show that for a group with visual information input, with individuals moving so as to seek marginal opacity, alignment and group cohesion can be induced without the need for explicit aligning interaction rules between group members. For each of the proposed models, a comprehensive analysis of characteristics and behaviour under different parameter sets is performed.
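
    The thesis's Voronoi- and vision-based models are not reproduced here, but the flavour of a topological, metric-free update can be sketched with a Vicsek-style model in which each particle aligns with its k nearest neighbours regardless of their metric distance (a common topological rule from the flocking literature; all parameter values below are illustrative):

```python
import numpy as np

def knn_vicsek_step(pos, theta, rng, k=7, v0=0.03, eta=0.2, L=5.0):
    """One update of a Vicsek-style model with topological alignment:
    each particle aligns with its k nearest neighbours, whatever their
    metric distance, in a periodic box of side L."""
    n = len(pos)
    # pairwise displacements under the minimum-image convention
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.linalg.norm(d, axis=-1)
    # k nearest neighbours (index 0 of the sort is the particle itself)
    nbrs = np.argsort(dist, axis=1)[:, 1:k + 1]
    vx, vy = np.cos(theta), np.sin(theta)
    # average heading of self plus neighbours, then angular noise
    mx = vx + vx[nbrs].sum(axis=1)
    my = vy + vy[nbrs].sum(axis=1)
    theta_new = np.arctan2(my, mx) + eta * rng.uniform(-np.pi, np.pi, n)
    step = v0 * np.column_stack([np.cos(theta_new), np.sin(theta_new)])
    return (pos + step) % L, theta_new

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, (100, 2))
theta = rng.uniform(-np.pi, np.pi, 100)
for _ in range(200):
    pos, theta = knn_vicsek_step(pos, theta, rng)
# polar order parameter: 0 = disordered headings, 1 = perfect alignment
phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

    A Voronoi version would only change the neighbour-selection line; the periodic box stands in for the cohesion mechanisms (neighbour anticipation, motional bias) that the thesis introduces for flocks in free space.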

    Multimodal Data Fusion: An Overview of Methods, Challenges and Prospects

    In various disciplines, information about the same phenomenon can be acquired from different types of detectors, under different conditions, in multiple experiments or subjects, among others. We use the term "modality" for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or "challenges", are common to multiple domains. This paper deals with two key questions: "why do we need data fusion" and "how do we perform it". The first question is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. In order to address the second question, "diversity" is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the datasets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field and the prospects and opportunities that it holds.
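
    As an illustration of one matrix-decomposition route to fusion, the sketch below fits a coupled factorization in which two modalities share a single latent factor matrix, updated by alternating least squares. This is a toy instance of the general idea, not the paper's framework, and all names are invented:

```python
import numpy as np

def coupled_factorization(X1, X2, r, iters=100):
    """Alternating least squares for X1 ~ A @ B1.T and X2 ~ A @ B2.T,
    where the factor A is shared across the two modalities."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((X1.shape[0], r))
    for _ in range(iters):
        B1 = np.linalg.lstsq(A, X1, rcond=None)[0].T   # (p1, r) loadings
        B2 = np.linalg.lstsq(A, X2, rcond=None)[0].T   # (p2, r) loadings
        B = np.vstack([B1, B2])                        # stacked loadings
        C = np.hstack([X1, X2])                        # stacked observations
        A = np.linalg.lstsq(B, C.T, rcond=None)[0].T   # shared-factor update
    return A, B1, B2

# Synthetic example: two modalities generated from one latent factor matrix.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((40, 3))
X1 = A0 @ rng.standard_normal((3, 10))   # modality 1: 10 features
X2 = A0 @ rng.standard_normal((3, 6))    # modality 2: 6 features
A, B1, B2 = coupled_factorization(X1, X2, r=3)
err1 = np.linalg.norm(A @ B1.T - X1) / np.linalg.norm(X1)
err2 = np.linalg.norm(A @ B2.T - X2) / np.linalg.norm(X2)
```

    The shared factor A is what lets one modality inform the other; the "diversity" discussed in the paper concerns exactly which structure is shared and which is modality-specific.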

    Sparsely Observed Functional Time Series: Estimation and Prediction

    Functional time series analysis, whether based on time- or frequency-domain methodology, has traditionally been carried out under the assumption of complete observation of the constituent series of curves, assumed stationary. Nevertheless, as is often the case with independent functional data, it may well happen that the data available to the analyst are not the actual sequence of curves, but relatively few and noisy measurements per curve, potentially at different locations in each curve's domain. Under this sparse sampling regime, neither the established estimators of the time series' dynamics nor their corresponding theoretical analysis will apply. The subject of this paper is the problem of estimating the dynamics and of recovering the latent process of smooth curves in the sparse regime. Assuming smoothness of the latent curves, we construct a consistent nonparametric estimator of the series' spectral density operator and use it to develop a frequency-domain recovery approach that predicts the latent curve at a given time by borrowing strength from the (estimated) dynamic correlations in the series across time. Beyond predicting the latent curves from their noisy point samples, the method fills in gaps in the sequence (curves nowhere sampled), denoises the data, and serves as a basis for forecasting. Means of providing corresponding confidence bands are also investigated. A simulation study interestingly suggests that, in the presence of smoothness, sparse observation over a longer time period may provide better performance than dense observation over a shorter period. The methodology is further illustrated by application to an environmental data set on fair-weather atmospheric electricity, which naturally leads to a sparse functional time series.
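
    The paper's frequency-domain recovery is not reproduced here, but the sparse-sampling setting itself is easy to sketch: the toy below recovers one latent smooth curve from a handful of noisy point samples by ridge regression on a Fourier basis. This is per-curve smoothing only, with none of the borrowing of strength across the series that the paper's method provides; all values are illustrative:

```python
import numpy as np

def fourier_design(t, n_harmonics=2):
    """Design matrix: a constant plus n_harmonics sine/cosine pairs on [0, 1]."""
    cols = [np.ones_like(t)]
    for j in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * j * t), np.cos(2 * np.pi * j * t)]
    return np.column_stack(cols)

def recover_curve(t_obs, y_obs, t_grid, ridge=1e-2):
    """Ridge-regress the sparse noisy samples of one curve onto the basis,
    then evaluate the fitted smooth curve on a dense grid."""
    Phi = fourier_design(t_obs)
    k = Phi.shape[1]
    coef = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(k), Phi.T @ y_obs)
    return fourier_design(t_grid) @ coef

# One latent smooth curve, observed at only 12 random locations with noise.
rng = np.random.default_rng(4)
f = lambda t: np.sin(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)
t_obs = np.sort(rng.uniform(0.0, 1.0, 12))
y_obs = f(t_obs) + 0.1 * rng.standard_normal(12)
t_grid = np.linspace(0.0, 1.0, 200)
y_hat = recover_curve(t_obs, y_obs, t_grid)
rmse = np.sqrt(np.mean((y_hat - f(t_grid)) ** 2))
```

    A curve sampled nowhere at all would defeat this per-curve fit entirely, which is precisely where the paper's use of dynamic correlations across time becomes essential.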

    Collective behaviours in the stock market -- A maximum entropy approach

    Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definitions of risk, etc.). This lack of any characteristic scale, and such elaborate behaviours, find their origin in the theory of complex systems. Several mechanisms generate scale invariance, but maximum entropy models are able to explain both scale invariance and collective behaviours. The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent-based models are able to reproduce some stylized facts. Despite their partial success, the problem of rule design remains. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial systems. We considered the existence of a critical state, which is linked to how the market processes information, how it responds to exogenous inputs and how its structure changes. The considered data sets did not reveal a persistent critical state, but rather oscillations between order and disorder. In this framework, we also showed that the collective modes are mostly dominated by pairwise co-movements and that univariate models are not good candidates for modelling crashes. The analysis also suggests a genuine adaptive process, since both the maximum variance of the log-likelihood and the accuracy of the predictive scheme vary through time. This approach may provide clues to crash precursors, and may shed light on how a shock spreads in a financial network and whether it will lead to a crash. The natural continuation of the present work could be the study of such a mechanism. Comment: 146 pages, PhD Thesis
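
    A minimal sketch of the pairwise maximum entropy (inverse Ising) idea, assuming only a handful of binarized return series so that the model can be handled by exact enumeration: fields h and couplings J are fitted by gradient ascent on the log-likelihood, which amounts to matching model moments to the target magnetizations and correlations. All parameters below are illustrative, not the thesis's data:

```python
import numpy as np
from itertools import product

def moments(h, J, states):
    """Exact <s_i> and <s_i s_j> of P(s) ~ exp(h.s + 0.5 s.J.s),
    by enumerating all 2**n spin states (small n only)."""
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    m = p @ states
    C = np.einsum('s,si,sj->ij', p, states, states)
    return m, C

def fit_pairwise_maxent(m_t, C_t, steps=4000, lr=0.1):
    """Gradient ascent on the log-likelihood: the gradient is exactly
    the gap between target and model moments."""
    n = len(m_t)
    states = np.array(list(product([-1.0, 1.0], repeat=n)))
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        m, C = moments(h, J, states)
        h += lr * (m_t - m)
        dJ = lr * (C_t - C)
        np.fill_diagonal(dJ, 0.0)   # s_i**2 = 1 carries no information
        J += dJ
    return h, J

# Target moments from a known small "market" of 4 binarized return series.
rng = np.random.default_rng(2)
n = 4
h0 = 0.1 * rng.standard_normal(n)
J0 = 0.2 * rng.standard_normal((n, n))
J0 = (J0 + J0.T) / 2
np.fill_diagonal(J0, 0.0)
states = np.array(list(product([-1.0, 1.0], repeat=n)))
m_t, C_t = moments(h0, J0, states)
h, J = fit_pairwise_maxent(m_t, C_t)
m_fit, C_fit = moments(h, J, states)
gap = max(np.abs(m_fit - m_t).max(), np.abs(C_fit - C_t).max())
```

    For realistic numbers of assets the enumeration is infeasible and approximate inverse methods (mean-field, pseudo-likelihood) take its place; the inferred J is what encodes the pairwise co-movement structure discussed above.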

    Power System Simulation, Control and Optimization

    This Special Issue “Power System Simulation, Control and Optimization” offers valuable insights into the most recent research developments in these topics. The analysis, operation, and control of power systems are increasingly complex tasks that require advanced simulation models to analyze and control the effects of transformations concerning electricity grids today: Massive integration of renewable energies, progressive implementation of electric vehicles, development of intelligent networks, and progressive evolution of the applications of artificial intelligence

    High-Dimensional Semiparametric Selection Models: Estimation Theory with an Application to the Retail Gasoline Market

    This paper proposes a multi-stage projection-based Lasso procedure for the semiparametric sample selection model in high-dimensional settings under a weak nonparametric restriction on the selection correction. In particular, the number of regressors in the main equation, p, and the number of regressors in the selection equation, d, can grow with and exceed the sample size n. The analysis considers the exact sparsity case and the approximate sparsity case. The main theoretical results are finite-sample bounds from which sufficient scaling conditions on the sample size for estimation consistency and variable-selection consistency are established. Statistical efficiency of the proposed estimators is studied via lower bounds on minimax risks, and the result shows that, for a family of models with exactly sparse structure on the coefficient vector in the main equation, one of the proposed estimators attains the smallest estimation error up to the (n,d,p)-scaling among a class of procedures in worst-case scenarios. Inference procedures for the coefficients of the main equation, one based on a pivotal Dantzig selector to construct non-asymptotic confidence sets and one based on a post-selection strategy, are discussed. Other theoretical contributions include establishing the non-asymptotic counterpart of the familiar asymptotic oracle results from the previous literature: the estimator of the coefficients in the main equation behaves as if the unknown nonparametric component were known, provided the nonparametric component is sufficiently smooth. Small-sample performance of the high-dimensional multi-stage estimation procedure is evaluated by Monte Carlo simulations and illustrated with an empirical application to the retail gasoline market in the Greater Saint Louis area. Comment: 3 figures
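
    The multi-stage projection-based procedure itself is not reproduced here; the sketch below shows only the generic sparse-regression building block, a plain Lasso fitted by coordinate descent on an exactly sparse design, to illustrate how variable selection works when p is large relative to the true support. All values are illustrative:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1:
    cycle over coordinates, soft-thresholding each in turn."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    resid = y - X @ beta
    for _ in range(iters):
        for j in range(p):
            resid += X[:, j] * beta[j]   # put coordinate j back in the residual
            rho = X[:, j] @ resid
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            resid -= X[:, j] * beta[j]   # take the updated coordinate out again
    return beta

# Exactly sparse design: only the first three of 20 coefficients are nonzero.
rng = np.random.default_rng(3)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=10.0)
support = set(np.flatnonzero(np.abs(beta_hat) > 0.5))
```

    In the paper's setting this step is only one stage: the selection equation and the nonparametric selection correction are handled first, which is what the projection and the multi-stage structure are for.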