
    PHOTOS Monte Carlo: a precision tool for QED corrections in Z and W decays

    We discuss the precision of the PHOTOS Monte Carlo algorithm, with an improved implementation of QED interference and multiple-photon radiation. The main application of PHOTOS is the generation of QED radiative corrections in decays of any resonance simulated by a "host" Monte Carlo generator. Through careful comparisons automated with the MC-TESTER tool, specially tailored for that purpose, we found that the precision of the current version of PHOTOS is 0.1% in the case of Z and W decays. In the general case, the precision of PHOTOS was also improved, but this is not quantified here.
    Comment: Version 2: Figure 8a replaced by link to hep-ph/040600

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test (MDT) report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detecting algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. On the other hand, for data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UEs in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power statistics between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for them in a reliable manner.
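
    The control-plane detection step lends itself to a short illustration. The sketch below is a hypothetical, minimal rendition of the idea: flag simulated MDT-style measurement reports with scikit-learn's Local Outlier Factor detector. The feature set, thresholds and data are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the control-plane outage-detection idea: flag anomalous
# minimization-of-drive-test (MDT) reports with a Local Outlier Factor detector.
# Feature names, data and the trigger threshold are illustrative assumptions.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)

# Synthetic MDT-like reports: columns = [serving RSRP (dBm), strongest neighbour RSRP (dBm)]
normal_reports = rng.normal(loc=[-85.0, -95.0], scale=[4.0, 4.0], size=(500, 2))
outage_reports = rng.normal(loc=[-115.0, -90.0], scale=[3.0, 3.0], size=(20, 2))
reports = np.vstack([normal_reports, outage_reports])

# LOF compares each report's local density with that of its k nearest neighbours;
# a label of -1 marks a local-density outlier (candidate outage evidence).
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
labels = lof.fit_predict(reports)

outage_suspected = np.sum(labels == -1) / len(labels) > 0.03  # illustrative trigger
print(f"anomalous reports: {np.sum(labels == -1)}, outage suspected: {outage_suspected}")
```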

    Formation and Incidence of Shell Galaxies in the Illustris Simulation

    Shells are low surface brightness tidal debris that appear as interleaved caustics with large opening angles, often situated on both sides of the galaxy center. In this paper, we study the incidence and formation processes of shell galaxies in the cosmological gravity+hydrodynamics Illustris simulation. We identify shells at redshift $z=0$ using stellar surface density maps, and we use stellar history catalogs to trace the birth, trajectory and progenitors of each individual star particle contributing to the tidal feature. Out of a sample of the 220 most massive galaxies in Illustris ($\mathrm{M}_{\mathrm{200crit}}>6\times10^{12}\,\mathrm{M}_{\odot}$), $18\%\pm3\%$ of the galaxies exhibit shells. This fraction increases with increasing mass cut: higher mass galaxies are more likely to have stellar shells. Furthermore, the fraction of massive galaxies that exhibit shells decreases with increasing redshift. We find that shell galaxies observed at redshift $z=0$ form preferentially through relatively major mergers ($\gtrsim$1:10 in stellar mass ratio). Progenitors are accreted on low angular momentum orbits, in a preferred time window between $\sim$4 and 8 Gyr ago. Our study indicates that, due to dynamical friction, more massive satellites are allowed to probe a wider range of impact parameters at accretion time, while small companions need almost purely radial infall trajectories in order to produce shells. We also find a number of special cases, a consequence of the additional complexity introduced by the cosmological setting. These include galaxies with multiple shell-forming progenitors, satellites of satellites that also form shells, and satellites that fail to produce shells because multiple major mergers happen in quick succession.
    Comment: 27 pages, 18 figures. Accepted for publication in MNRAS (new figures 3 and D1 + additional minor changes to match the accepted version)
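
    As a side note, the first step of the analysis, building a stellar surface density map from star particles, can be sketched in a few lines. The snippet below is an illustrative stand-in (array names, projection plane and pixel scale are assumptions), not the authors' pipeline.

```python
# Minimal sketch of a stellar surface density map from star-particle data,
# the kind of map used to identify shell features. All names are placeholders.
import numpy as np

def surface_density_map(x_kpc, y_kpc, mass_msun, extent_kpc=300.0, n_pix=512):
    """Project star particles onto the x-y plane and return Msun / kpc^2 per pixel."""
    edges = np.linspace(-extent_kpc, extent_kpc, n_pix + 1)
    mass_map, _, _ = np.histogram2d(x_kpc, y_kpc, bins=[edges, edges], weights=mass_msun)
    pixel_area = (2.0 * extent_kpc / n_pix) ** 2          # kpc^2 per pixel
    return mass_map / pixel_area

# Example with fake particles centred on the galaxy:
rng = np.random.default_rng(1)
x, y = rng.normal(scale=50.0, size=(2, 100_000))
m = np.full(100_000, 1.0e6)                               # 10^6 Msun per particle
sigma = surface_density_map(x, y, m)
print(sigma.shape, sigma.max())
```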

    Cluster-based reduced-order modelling of a mixing layer

    We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt et al. 2006) and the transition matrix models introduced in fluid dynamics in Eckhardt's group (Schneider et al. 2007). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space into complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by analysis of the transition matrix. Secondly, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. using finite-time Lyapunov exponents and entropic methods. This CROM framework is applied to the Lorenz attractor (as an illustrative example), to velocity fields of the spatially evolving incompressible mixing layer, and to the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics.
    Comment: 48 pages, 30 figures. Revised version with additional material. Accepted for publication in Journal of Fluid Mechanics
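
    The two CROM steps, clustering the snapshots into centroids and estimating a Markov transition matrix from the time-ordered cluster labels, can be illustrated on toy data as below. The number of clusters, the data and the k-means settings are placeholders, not the paper's choices.

```python
# Minimal sketch of the two CROM steps on a toy snapshot sequence:
# (1) cluster the snapshots into k centroids, (2) estimate a Markov transition
# matrix from the time-ordered cluster labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
snapshots = rng.normal(size=(2000, 50))          # rows = time-resolved snapshots (toy data)

k = 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(snapshots)

# Transition matrix P[i, j] = probability of moving from cluster i to cluster j
# in one snapshot time step (row-stochastic by construction).
P = np.zeros((k, k))
for i, j in zip(labels[:-1], labels[1:]):
    P[i, j] += 1.0
P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)

# Cluster probabilities follow from the label counts; for a long, statistically
# stationary sequence the stationary distribution of P should approximate them.
cluster_probs = np.bincount(labels, minlength=k) / len(labels)
print(P.round(2), cluster_probs.round(3))
```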

    Synchronization properties of self-sustained mechanical oscillators

    We study, both analytically and numerically, the dynamics of mechanical oscillators kept in motion by a feedback force, which is generated electronically from a signal produced by the oscillators themselves. This kind of self-sustained system may become standard in the design of frequency-control devices at microscopic scales. Our analysis is thus focused on their synchronization properties under the action of external forces, and on the joint dynamics of two or more coupled oscillators. Existence and stability of synchronized motion are assessed in terms of the mechanical properties of individual oscillators, namely their natural frequencies and damping coefficients, and the synchronization frequencies are determined. Similarities and differences with synchronization phenomena in other coupled oscillating systems are emphasized.
    Comment: To appear in Phys. Rev.
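
    For intuition, a feedback-sustained oscillator of this general kind can be simulated in a few lines. The toy model below (a damped harmonic oscillator driven by a saturated velocity-feedback force) is a generic stand-in, not the specific feedback scheme analysed in the paper.

```python
# Toy illustration: a damped harmonic oscillator kept in motion by a force derived
# from its own velocity signal (here saturated with tanh). Generic stand-in model;
# parameters and the feedback law are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

omega0, gamma, f0 = 1.0, 0.05, 0.1   # natural frequency, damping, feedback amplitude

def rhs(t, state):
    x, v = state
    feedback = f0 * np.tanh(v / 0.01)          # conditioned (saturated) velocity signal
    return [v, -2.0 * gamma * v - omega0**2 * x + feedback]

sol = solve_ivp(rhs, (0.0, 200.0), [0.01, 0.0], max_step=0.05)
amplitude = np.abs(sol.y[0][-2000:]).max()     # settles onto a self-sustained limit cycle
print(f"steady-state amplitude ~ {amplitude:.3f}")
```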

    Contraction and optimality properties of an adaptive Legendre-Galerkin method: the multi-dimensional case

    We analyze the theoretical properties of an adaptive Legendre-Galerkin method in the multi-dimensional case. After the recent investigations of Fourier-Galerkin methods in a periodic box and of Legendre-Galerkin methods in the one-dimensional setting, the present study represents a further step towards a mathematically rigorous understanding of adaptive spectral/$hp$ discretizations of elliptic boundary-value problems. The main contribution of the paper is a careful construction of a multi-dimensional Riesz basis in $H^1$, based on a quasi-orthonormalization procedure. This allows us to design an adaptive algorithm, to prove its convergence by a contraction argument, and to discuss its optimality properties (in the sense of non-linear approximation theory) in certain sparsity classes of Gevrey type.
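
    For orientation, the contraction argument mentioned above targets an estimate of the following schematic form (a generic template for adaptive loops, not the paper's precise statement):

```latex
% Schematic contraction property of an adaptive iteration
% (SOLVE -> ESTIMATE -> MARK -> ENRICH): a suitable error quantity,
% e.g. the H^1-error or an equivalent error-plus-estimator functional,
% decays geometrically from one adaptive step to the next.
\[
  \| u - u_{k+1} \|_{H^1} \;\le\; \rho \, \| u - u_k \|_{H^1},
  \qquad 0 < \rho < 1,
\]
% so that after k steps the error is bounded by \rho^k times the initial error.
```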