
    Cluster-based reduced-order modelling of a mixing layer

    We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt et al. 2006) and the transition matrix models introduced in fluid dynamics by Eckhardt's group (Schneider et al. 2007). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space into complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. using finite-time Lyapunov exponent and entropic methods. This CROM framework is applied to the Lorenz attractor (as an illustrative example), to velocity fields of the spatially evolving incompressible mixing layer, and to the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics. Comment: 48 pages, 30 figures. Revised version with additional material. Accepted for publication in Journal of Fluid Mechanics.
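    A minimal sketch of the two CROM steps described above, under assumptions not taken from the paper: snapshots are stored as rows of a 2-D array with uniform time sampling, k-means stands in for the clustering step, and the Markov model is a first-order transition matrix estimated by counting consecutive cluster labels. The names (crom_sketch, n_clusters) are illustrative.

        # Sketch of cluster-based reduced-order modelling (CROM): cluster the
        # snapshots, then model transitions between clusters as a Markov process.
        import numpy as np
        from sklearn.cluster import KMeans

        def crom_sketch(snapshots, n_clusters=10, random_state=0):
            """snapshots: (n_time, n_dof) array of time-resolved flow states."""
            # Step 1: partition the state space into centroidal Voronoi cells.
            km = KMeans(n_clusters=n_clusters, random_state=random_state).fit(snapshots)
            labels = km.labels_  # cluster index of each snapshot

            # Step 2: estimate the cluster transition matrix (Markov process).
            P = np.zeros((n_clusters, n_clusters))
            for a, b in zip(labels[:-1], labels[1:]):
                P[a, b] += 1.0
            row_sums = P.sum(axis=1, keepdims=True)
            P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

            # Cluster probabilities: empirical occupancy of each Voronoi cell.
            q = np.bincount(labels, minlength=n_clusters) / labels.size
            return km.cluster_centers_, q, P

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.normal(size=(500, 50))      # toy stand-in for snapshot data
            centroids, probs, transition = crom_sketch(X, n_clusters=5)
            print(transition.round(2))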

    Trajectory and Policy Aware Sender Anonymity in Location Based Services

    We consider Location-based Service (LBS) settings, where an LBS provider logs the requests sent by mobile device users over a period of time and later wants to publish/share these logs. Log sharing can be extremely valuable for advertising, data mining research and network management, but it poses a serious threat to the privacy of LBS users. Sender anonymity solutions prevent a malicious attacker from inferring the interests of LBS users by associating them with their service requests after gaining access to the anonymized logs. With the fast-increasing adoption of smartphones and the concern that historic user trajectories are becoming more accessible, it becomes necessary for any sender anonymity solution to protect against attackers that are trajectory-aware (i.e. have access to historic user trajectories) as well as policy-aware (i.e. know the log anonymization policy). We call such attackers TP-aware. This paper introduces a first privacy guarantee against TP-aware attackers, called TP-aware sender k-anonymity. It turns out that there are many possible TP-aware anonymizations for the same LBS log, each with a different utility to the consumer of the anonymized log. The problem of finding the optimal TP-aware anonymization is investigated. We show that trajectory-awareness renders the problem computationally harder than the trajectory-unaware variants found in the literature (NP-complete in the size of the log, versus PTIME). We describe a PTIME l-approximation algorithm for trajectories of length l and empirically show that it scales to large LBS logs (up to 2 million users).
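    The toy sketch below illustrates plain sender k-anonymity on an LBS log, not the paper's TP-aware guarantee or its l-approximation algorithm; the greedy grouping, the request format, and the name k_anonymize are illustrative assumptions. It only shows the basic requirement that each published request must be attributable to at least k distinct senders.

        # Toy sender k-anonymity: bucket requests so that every published group
        # covers at least k distinct senders; a real anonymizer would also
        # generalize the reported locations to a cloaking region per group.
        from collections import defaultdict

        def k_anonymize(requests, k):
            """requests: list of (user_id, location, query) tuples."""
            by_location = defaultdict(list)
            for user, loc, query in requests:
                by_location[loc].append((user, loc, query))

            groups, pending = [], []
            for loc in sorted(by_location):
                pending.extend(by_location[loc])
                if len({u for u, _, _ in pending}) >= k:
                    groups.append(pending)  # publish this anonymity set
                    pending = []
            # Leftover requests with fewer than k distinct senders are suppressed.
            return groups

        if __name__ == "__main__":
            log = [(1, "cellA", "coffee"), (2, "cellA", "fuel"),
                   (3, "cellB", "atm"), (1, "cellC", "coffee")]
            for group in k_anonymize(log, k=2):
                print(sorted({u for u, _, _ in group}))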

    Deep Predictive Policy Training using Reinforcement Learning

    Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data trained with synthetic and simulated training samples, respectively. The policy super-layer is a small sub-network with fewer parameters that maps data in-between the abstracted manifolds. It is trained for each task using methods for policy search reinforcement learning. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards. Comment: This work is submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems 2017 (IROS 2017).
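    The schematic below mirrors the perception/policy/behavior super-layer decomposition described above; the use of PyTorch, the layer sizes, and the dimensions (latent_dim, behavior_dim, horizon, n_motors) are illustrative assumptions rather than the paper's exact architecture.

        # Three super-layers: perception abstracts the image, the small policy
        # sub-network maps between the abstracted manifolds (this is the part
        # trained by policy search), and behavior decodes a motor trajectory.
        import torch
        import torch.nn as nn

        class DPPTSketch(nn.Module):
            def __init__(self, latent_dim=12, behavior_dim=5, horizon=20, n_motors=7):
                super().__init__()
                self.perception = nn.Sequential(      # image -> latent state
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.Flatten(),
                    nn.LazyLinear(latent_dim),
                )
                self.policy = nn.Sequential(          # latent -> behavior manifold
                    nn.Linear(latent_dim, 32), nn.ReLU(),
                    nn.Linear(32, behavior_dim),
                )
                self.behavior = nn.Sequential(        # behavior -> motor trajectory
                    nn.Linear(behavior_dim, 64), nn.ReLU(),
                    nn.Linear(64, horizon * n_motors),
                )
                self.horizon, self.n_motors = horizon, n_motors

            def forward(self, image):
                z = self.perception(image)
                b = self.policy(z)
                return self.behavior(b).view(-1, self.horizon, self.n_motors)

        if __name__ == "__main__":
            net = DPPTSketch()
            trajectory = net(torch.zeros(1, 3, 64, 64))
            print(trajectory.shape)  # torch.Size([1, 20, 7])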

    Simulated single molecule microscopy with SMeagol

    SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation, and optimization of advanced analysis methods for live cell single molecule microscopy data. Availability and Implementation: SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code, and binaries for recent versions of Mac OS, Windows, and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Comment: v2: 14 pages including supplementary text. Pre-copyedited, author-produced version of an application note published in Bioinformatics following peer review. The version of record and additional supplementary material are available online at: https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btw10

    Verifying black hole orbits with gravitational spectroscopy

    Gravitational waves from test masses bound to geodesic orbits of rotating black holes are simulated, using Teukolsky's black hole perturbation formalism, for about ten thousand generic orbital configurations. Each binary radiates power exclusively in modes with frequencies that are integer linear combinations of the orbit's three fundamental frequencies. The following general spectral properties are found with a survey of orbits: (i) 99% of the radiated power is typically carried by a few hundred modes, and at most by about a thousand modes, (ii) the dominant frequencies can be grouped into a small number of families defined by fixing two of the three integer frequency multipliers, and (iii) the specifics of these trends can be qualitatively inferred from the geometry of the orbit under consideration. Detections using triperiodic analytic templates modeled on these general properties would constitute a verification of radiation from an adiabatic sequence of black hole orbits and would recover the evolution of the fundamental orbital frequencies. In an analogy with ordinary spectroscopy, this would compare to observing the Bohr model's atomic hydrogen spectrum without being able to rule out alternative atomic theories or nuclei. The suitability of such a detection technique is demonstrated using snapshots computed at 12-hour intervals throughout the last three years before merger of a kludged inspiral. Because of circularization, the number of excited modes decreases as the binary evolves. A hypothetical detection algorithm that tracks mode families dominating the first 12 hours of the inspiral would capture 98% of the total power over the remaining three years. Comment: 18 pages, expanded section on detection algorithms and made minor edits. Final published version.
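    A worked form of the "integer linear combinations of the orbit's three fundamental frequencies" statement, written in the standard notation for Kerr geodesic frequencies; the symbols below are an assumption, not copied from the paper.

        % Frequencies of the radiated modes as integer linear combinations of the
        % azimuthal, polar and radial fundamental frequencies of the orbit.
        \[
          \omega_{mkn} = m\,\Omega_\varphi + k\,\Omega_\theta + n\,\Omega_r ,
          \qquad m, k, n \in \mathbb{Z} .
        \]
        % Property (i) above then says that a few hundred (m,k,n) terms dominate the
        % sum P_{\mathrm{total}} = \sum_{m,k,n} P_{mkn}, carrying about 99% of the power.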