
    Radio detection of cosmic ray air showers with LOPES

    In the last few years, radio detection of cosmic ray air showers has experienced a true renaissance, becoming manifest in a number of new experiments and simulation efforts. In particular, the LOPES project has successfully implemented modern interferometric methods to measure the radio emission from extensive air showers. LOPES has confirmed that the emission is coherent and of geomagnetic origin, as expected from the geosynchrotron mechanism, and has demonstrated that a large-scale application of the radio technique has great potential to complement current measurements of ultra-high energy cosmic rays. We describe the current status, the most recent results, and open questions regarding radio detection of cosmic rays, and give an overview of ongoing research and development for an application of the radio technique in the framework of the Pierre Auger Observatory. Comment: 8 pages; Proceedings of the CRIS2006 conference, Catania, Italy; to be published in Nuclear Physics B, Proceedings Supplement
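    The interferometric measurement mentioned above amounts to combining the digitized antenna signals with direction-dependent time delays. The following Python sketch is a hedged illustration only, not the actual LOPES analysis pipeline: it shows a plain delay-and-sum beam toward an assumed plane-wave arrival direction, with antenna positions, sampling rate, and trace data invented for the example.

        # Minimal delay-and-sum beamforming sketch for a radio antenna array
        # (illustrative only; not the LOPES pipeline). Assumes a plane-wave
        # arrival, antenna positions in metres, and traces sampled at `fs` Hz.
        import numpy as np

        C = 299792458.0  # speed of light, m/s

        def beamform(traces, positions, azimuth, zenith, fs):
            """Delay-and-sum the antenna traces toward (azimuth, zenith) in radians."""
            # Unit vector pointing toward the arrival direction
            direction = np.array([
                np.sin(zenith) * np.cos(azimuth),
                np.sin(zenith) * np.sin(azimuth),
                np.cos(zenith),
            ])
            summed = np.zeros(traces.shape[1])
            for trace, pos in zip(traces, positions):
                # Geometric delay of this antenna relative to the array origin
                delay = np.dot(pos, direction) / C
                shift = int(round(delay * fs))
                summed += np.roll(trace, -shift)
            return summed / len(traces)

        # Usage: 4 antennas, 1024 samples at 80 MHz (synthetic noise traces)
        rng = np.random.default_rng(0)
        traces = rng.normal(size=(4, 1024))
        positions = rng.uniform(-50, 50, size=(4, 3))
        beam = beamform(traces, positions, azimuth=0.3, zenith=0.5, fs=80e6)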

    Calibrated Ultra Fast Image Simulations for the Dark Energy Survey

    Weak lensing by large-scale structure is a powerful technique to probe the dark components of the universe. To understand the measurement process of weak lensing and the associated systematic effects, image simulations are becoming increasingly important. For this purpose we present a first implementation of the Monte Carlo Control Loops (MCCL; Refregier & Amara 2014), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig; Berge et al. 2013). We apply this framework to a subset of the data taken during the Science Verification (SV) period of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with DES images. We then perform tolerance analyses by perturbing the simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative importance of different input parameters to the simulations. For spatially constant systematic errors and six simulation parameters, the calibration of the simulation reaches the weak lensing precision needed for the DES SV survey area. Furthermore, we find a sensitivity of the shear measurement to the intrinsic ellipticity distribution, and an interplay between the magnitude-size and the pixel value diagnostics in constraining the noise model. This work is the first application of the MCCL framework to data and shows how it can be used to methodically study the impact of systematics on the cosmic shear measurement. Comment: 14 pages, 9 Figures, submitted to Ap
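    To make the tolerance-analysis step concrete, here is a minimal Python sketch under stated assumptions: `run_simulation` is a stand-in for an image simulator such as UFig (it is not the real interface), the one-point statistic is simply the mean measured ellipticity, and the parameter names are hypothetical.

        # Hedged sketch of a one-point tolerance analysis in the spirit of MCCL:
        # perturb one simulation parameter at a time and record the shift in a
        # one-point shear statistic. `run_simulation` is a placeholder, not a
        # real UFig call; parameter names are invented.
        import numpy as np

        def run_simulation(params, seed=0):
            # Same seed for baseline and perturbed runs, so the difference
            # isolates the effect of the parameter change rather than noise.
            rng = np.random.default_rng(seed)
            e_int = rng.normal(0.0, params["e_sigma"], size=100_000)
            return (1.0 + params["m_bias"]) * e_int + params["c_bias"]

        def tolerance_analysis(base_params, perturbations):
            """Return the change in mean measured shear for each perturbed parameter."""
            baseline = run_simulation(base_params).mean()
            shifts = {}
            for name, delta in perturbations.items():
                perturbed = dict(base_params, **{name: base_params[name] + delta})
                shifts[name] = run_simulation(perturbed).mean() - baseline
            return shifts

        base = {"e_sigma": 0.26, "m_bias": 0.0, "c_bias": 0.0}
        print(tolerance_analysis(base, {"m_bias": 0.01, "c_bias": 0.001}))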

    Improving Large-Scale Network Traffic Simulation with Multi-Resolution Models

    Simulating a large-scale network like the Internet is a challenging undertaking because of the sheer volume of its traffic. Packet-oriented representation provides high-fidelity details but is computationally expensive; fluid-oriented representation offers high simulation efficiency at the price of losing packet-level details. Multi-resolution modeling techniques exploit the advantages of both representations by integrating them in the same simulation framework. This dissertation presents solutions to the problems regarding the efficiency, accuracy, and scalability of the traffic simulation models in this framework. The “ripple effect” is a well-known problem inherent in event-driven fluid-oriented traffic simulation, causing an explosion of fluid rate changes. Integrating multi-resolution traffic representations requires estimating arrival rates of packet-oriented traffic, calculating the queueing delay upon a packet arrival, and computing the packet loss rate under buffer overflow. Real-time simulation of a large or ultra-large network demands efficient background traffic simulation. The dissertation includes a rate smoothing technique that provably mitigates the “ripple effect”, an accurate and efficient approach that integrates traffic models at multiple abstraction levels, a sequential algorithm that achieves real-time simulation of the coarse-grained traffic in a network with 3 tier-1 ISP (Internet Service Provider) backbones using an ordinary PC, and a highly scalable parallel algorithm that simulates network traffic at coarse time scales.
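    As a purely illustrative Python sketch of the rate-smoothing idea, and not the dissertation's actual algorithm: bursts of fluid rate-change events that fall within a short window are coalesced into a single event, so downstream queues see far fewer events and the cascade of changes behind the “ripple effect” is damped. The event representation and window parameter are assumptions made for the example.

        # Illustrative rate smoothing for fluid-oriented traffic (toy version).
        def smooth_rate_changes(events, window):
            """events: time-sorted list of (time, rate) pairs; merge bursts of changes.

            Events within `window` seconds of the first event of a burst are replaced
            by a single event at the burst start carrying the mean of their rates.
            """
            smoothed = []
            i = 0
            while i < len(events):
                start_time = events[i][0]
                burst = [events[i]]
                i += 1
                while i < len(events) and events[i][0] - start_time < window:
                    burst.append(events[i])
                    i += 1
                mean_rate = sum(rate for _, rate in burst) / len(burst)
                smoothed.append((start_time, mean_rate))
            return smoothed

        # Usage: a burst of three changes within 5 ms collapses to one event.
        changes = [(0.000, 10.0), (0.002, 12.0), (0.004, 11.0), (0.100, 6.0)]
        print(smooth_rate_changes(changes, window=0.005))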

    OpenMSCG: A Software Tool for Bottom-Up Coarse-Graining

    The “bottom-up” approach to coarse-graining, in which accurate and efficient computational models are built to simulate large-scale and complex phenomena and processes, is an important approach in computational chemistry, biophysics, and materials science. As one example, the Multiscale Coarse-Graining (MS-CG) approach to developing CG models can be rigorously derived using statistical mechanics applied to fine-grained, i.e., all-atom, simulation data for a given system. Under a number of circumstances, a systematic procedure such as MS-CG modeling is particularly valuable. Here, we present the development of the OpenMSCG software, a modularized open-source software package that provides a collection of successful and widely applied bottom-up CG methods, including Boltzmann Inversion (BI), Force-Matching (FM), Ultra-Coarse-Graining (UCG), Relative Entropy Minimization (REM), Essential Dynamics Coarse-Graining (EDCG), and Heterogeneous Elastic Network Modeling (HeteroENM). OpenMSCG is a high-performance and comprehensive toolset that can be used to derive CG models from large-scale fine-grained simulation data in file formats from common molecular dynamics (MD) software packages, such as GROMACS, LAMMPS, and NAMD. OpenMSCG is modularized in the Python programming framework, which allows users to create and customize modeling “recipes” for reproducible results, thus greatly improving the reliability, reproducibility, and sharing of bottom-up CG models and their applications.
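    To give a flavour of what force matching does, the Python sketch below fits a coarse-grained pair force to reference (mapped all-atom) forces by linear least squares. It is a toy illustration of the FM idea only; it does not use the actual OpenMSCG API or recipe format, and the basis, data, and function names are invented for the example.

        # Generic force-matching sketch (not the OpenMSCG API): fit coefficients
        # of a linear basis so the CG pair force reproduces reference forces in a
        # least-squares sense, which is the core idea behind MS-CG/FM.
        import numpy as np

        def force_match(distances, reference_forces, n_splines=8, r_max=1.0):
            """Fit a hat-function (linear spline) basis to reference pair forces."""
            knots = np.linspace(0.0, r_max, n_splines)
            width = knots[1] - knots[0]
            # Design matrix: each column is one hat basis function at the data points.
            design = np.maximum(0.0, 1.0 - np.abs(distances[:, None] - knots[None, :]) / width)
            coeffs, *_ = np.linalg.lstsq(design, reference_forces, rcond=None)
            return knots, coeffs

        # Usage with synthetic data: a noisy repulsive force profile.
        rng = np.random.default_rng(1)
        r = rng.uniform(0.05, 1.0, size=2000)
        f_ref = 1.0 / r**2 + rng.normal(0.0, 0.5, size=r.size)
        knots, coeffs = force_match(r, f_ref)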

    Accuracy control in ultra-large-scale electronic structure calculation

    Numerical aspects are investigated in ultra-large-scale electronic structure calculations. We focus on accuracy control methods in process (molecular-dynamics) calculations. Flexible control methods are proposed that adjust the variational freedoms automatically at each time step within the framework of generalized Wannier state theory. The method is demonstrated in silicon cleavage simulations with 10^2-10^5 atoms. The idea is of general importance among process calculations and is also used in Krylov subspace theory, another large-scale-calculation theory. Comment: 8 pages, 3 figures. To appear in J. Phys.: Condens. Matter. A preprint PDF file with better graphics is available at http://fujimac.t.u-tokyo.ac.jp/lses/index_e.htm
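    The Python sketch below illustrates one way such per-step accuracy control can look in code; it is an assumption-laden toy, not the paper's scheme: the "variational freedom" being adjusted is the dimension of a Lanczos-style subspace, grown until the lowest projected eigenvalue converges to a tolerance, with the chosen dimension then available for the next time step.

        # Toy per-step accuracy control: grow a Krylov-like subspace until an
        # energy estimate converges (illustrative only, not the paper's method).
        import numpy as np

        def converged_dimension(h_matrix, v0, tol=1e-6, max_dim=50):
            """Return the smallest subspace dimension whose lowest eigenvalue is converged."""
            previous = None
            basis = [v0 / np.linalg.norm(v0)]
            for dim in range(1, max_dim + 1):
                # Project the Hamiltonian into the current basis and diagonalize.
                b = np.column_stack(basis)
                h_small = b.T @ h_matrix @ b
                lowest = np.linalg.eigvalsh(h_small)[0]
                if previous is not None and abs(lowest - previous) < tol:
                    return dim, lowest
                previous = lowest
                # Extend the basis with a new orthogonalized direction (Lanczos-like step).
                w = h_matrix @ basis[-1]
                for u in basis:
                    w -= (u @ w) * u
                if np.linalg.norm(w) < 1e-12:
                    return dim, lowest
                basis.append(w / np.linalg.norm(w))
            return max_dim, previous

        # Usage with a random symmetric "Hamiltonian".
        rng = np.random.default_rng(0)
        a = rng.normal(size=(200, 200))
        h = (a + a.T) / 2
        dim, energy = converged_dimension(h, rng.normal(size=200))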

    Spin-Based Neuron Model with Domain Wall Magnets as Synapse

    We present an artificial neural network design using spin devices that achieves ultra-low-voltage operation, low power consumption, high speed, and high integration density. We employ spin-torque-switched nano-magnets to model neurons and domain wall magnets as compact, programmable synapses. The spin-based neuron-synapse units operate locally at an ultra-low supply voltage of 30 mV, resulting in low computation power. CMOS-based inter-neuron communication is employed to realize network-level functionality. We corroborate circuit operation with physics-based models developed for the spin devices. Simulation results for character recognition as a benchmark application show 95% lower power consumption as compared to a 45 nm CMOS design.
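    As a behavioural abstraction only, and not a device-level or circuit model from the paper, the Python sketch below reduces the neuron-synapse unit to a hard-threshold neuron with programmable weights standing in for the domain-wall-magnet synapses; all numbers are invented for the example.

        # Behavioural sketch of the neuron-synapse unit: programmable weights for
        # the synapses, a hard threshold on the summed input for the neuron.
        import numpy as np

        def spin_neuron(inputs, weights, threshold=0.0):
            """Fire (return 1) if the weighted input exceeds the threshold."""
            return int(np.dot(inputs, weights) > threshold)

        def layer(inputs, weight_matrix, threshold=0.0):
            """One fully connected layer of threshold neurons."""
            return np.array([spin_neuron(inputs, w, threshold) for w in weight_matrix])

        # Usage: a 3-input, 2-neuron layer with fixed (programmed) weights.
        x = np.array([1.0, 0.0, 1.0])
        w = np.array([[0.6, -0.2, 0.5],
                      [-0.4, 0.3, -0.1]])
        print(layer(x, w))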