
    Robust Foregrounds Removal for 21-cm Experiments

    Direct detection of the Epoch of Reionization via the redshifted 21-cm line will have unprecedented implications for the study of structure formation in the early Universe. To fulfill this promise, current and future 21-cm experiments will need to detect the weak 21-cm signal over foregrounds several orders of magnitude greater. This requires accurate modeling of the Galactic and extragalactic emission and of its contaminants due to instrument chromaticity, the ionosphere and imperfect calibration. To address this complex modeling problem, we propose a new method based on Gaussian Process Regression (GPR) which is able to cleanly separate the cosmological signal from most of the foreground contaminants. We also propose a new imaging method based on a maximum-likelihood framework which solves the interferometric equation directly on the sphere. Using this method, chromatic effects causing the so-called "wedge" are effectively eliminated (i.e. deconvolved) in the cylindrical (k_{\perp}, k_{\parallel}) power spectrum.
    Comment: Submitted to the Proceedings of the IAUS333, Peering Towards Cosmic Dawn, 4 pages, 2 figures
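    The GPR separation idea can be sketched in a few lines: model the data covariance as the sum of a smooth foreground kernel and a fast-varying signal kernel, then take the posterior mean of the foreground component. The kernels, band, and parameters below are illustrative assumptions, not the paper's actual choices.

        # Minimal sketch of GPR component separation along frequency, assuming
        # squared-exponential covariances for each component (illustrative only).
        import numpy as np

        def rbf(x, lengthscale, variance):
            """Squared-exponential covariance matrix over frequencies x."""
            d = x[:, None] - x[None, :]
            return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

        rng = np.random.default_rng(0)
        freq = np.linspace(120.0, 160.0, 200)            # hypothetical band in MHz
        jitter = 1e-8 * np.eye(freq.size)                # numerical stabilizer

        # Simulate data: smooth bright foregrounds + fine-scale faint signal + noise.
        K_fg = rbf(freq, lengthscale=20.0, variance=1e4)
        K_21 = rbf(freq, lengthscale=0.8, variance=1.0)
        sigma_n = 0.5
        d = (rng.multivariate_normal(np.zeros(freq.size), K_fg + jitter)
             + rng.multivariate_normal(np.zeros(freq.size), K_21 + jitter)
             + sigma_n * rng.standard_normal(freq.size))

        # GPR separation: E[f_fg | d] = K_fg (K_fg + K_21 + sigma_n^2 I)^{-1} d.
        K_tot = K_fg + K_21 + sigma_n**2 * np.eye(freq.size)
        fg_mean = K_fg @ np.linalg.solve(K_tot, d)
        residual = d - fg_mean                           # 21-cm signal + noise estimate
        print("residual rms:", residual.std())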

    Scheduling the installation of the LHC injection lines

    The installation of the two Large Hadron Collider (LHC) injection lines has to fit within tight milestones of the LHC project and of CERN's accelerator activity in general. For instance, the transfer line from the Super Proton Synchrotron (SPS) to LHC point 8 (to fill the anti-clockwise LHC ring) should be tested with beam before the end of 2004, since the SPS will not run in 2005. It will first serve during the LHC sector test in 2006. Time constraints are also very strong on the installation of the transfer line from the SPS to LHC point 2 (for the clockwise LHC ring): its tunnel is the sole access for the LHC cryo-magnets, and a large part of the beam line can only be installed once practically all LHC cryo-magnets are in place. Of course, the line must be operational when the LHC starts. This paper presents the various constraints and how they are taken into account in the logistics and installation planning of the LHC injection lines.

    Number partitioning as random energy model

    Number partitioning is a classical problem from combinatorial optimisation. In physical terms it corresponds to a long-range anti-ferromagnetic Ising spin glass. It has been rigorously proven that the low-lying energies of number partitioning behave like uncorrelated random variables. We claim that neighbouring energy levels are uncorrelated almost everywhere on the energy axis, and that energetically adjacent configurations are uncorrelated, too. Apparently there is no relation between geometry (configuration) and energy that could be exploited by an optimization algorithm. This "local random energy" picture of number partitioning is corroborated by numerical simulations and heuristic arguments.
    Comment: 8+2 pages, 9 figures, PDF only
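    The random-energy picture is easy to probe numerically: for a random instance, enumerate the sign configurations, compute the energies E(s) = |sum_i s_i a_i|, and inspect the spacings of the lowest levels. The instance size and number range below are illustrative assumptions, not the paper's setup.

        # Brute-force look at the number-partitioning energy landscape
        # (illustrative check, not the paper's simulations).
        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        n = 16
        a = rng.integers(1, 2**20, size=n)       # random instance

        # Enumerate s in {-1,+1}^n, fixing s_0 = +1 to remove the global
        # spin-flip symmetry, and collect the energies E(s) = |s . a|.
        energies = []
        for signs in itertools.product((-1, 1), repeat=n - 1):
            s = np.array((1,) + signs)
            energies.append(abs(int(s @ a)))
        energies = np.sort(np.array(energies))
        print("ground-state energy:", energies[0])
        print("lowest-level spacings:", np.diff(energies[:6]))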

    Ratchet behavior in nonlinear Klein-Gordon systems with point-like inhomogeneities

    We investigate the ratchet dynamics of nonlinear Klein-Gordon kinks in a periodic, asymmetric lattice of point-like inhomogeneities. We explain the underlying rectification mechanism within a collective coordinate framework, which shows that such a system behaves as a rocking ratchet for point particles. Careful attention is given to the kink width dynamics and its role in the transport. We also analyze the robustness of our kink rocking ratchet in the presence of noise. We show that the noise activates unidirectional motion in a parameter range where such motion is not observed in the noiseless case. This is subsequently corroborated by the collective variable theory. An explanation for this new phenomenon is given.
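    The point-particle picture invoked by the collective-coordinate framework can be illustrated directly: a damped particle in an asymmetric periodic potential, driven by a zero-mean AC force, can pick up a net drift. The potential and parameters below are illustrative assumptions, not the paper's effective model.

        # Rocking-ratchet sketch for a point particle (illustrative parameters).
        import numpy as np
        from scipy.integrate import solve_ivp

        def force(x):
            # F = -V'(x) for the asymmetric potential V(x) = -sin(x) - 0.25*sin(2x)
            return np.cos(x) + 0.5 * np.cos(2.0 * x)

        def rhs(t, y, gamma=0.5, A=1.2, omega=0.3):
            x, v = y
            # Damped dynamics with an AC drive that averages to zero in time.
            return [v, force(x) - gamma * v + A * np.cos(omega * t)]

        sol = solve_ivp(rhs, (0.0, 2000.0), [0.0, 0.0], max_step=0.05)
        drift = (sol.y[0, -1] - sol.y[0, 0]) / (sol.t[-1] - sol.t[0])
        print("mean velocity:", drift)   # net drift despite the zero-average drive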

    Adaptive Regret Minimization in Bounded-Memory Games

    Online learning algorithms that minimize regret provide strong guarantees in situations that involve repeatedly making decisions in an uncertain environment, e.g., a driver deciding what route to drive to work every day. While regret minimization has been extensively studied in repeated games, we study regret minimization for a richer class of games called bounded-memory games. In each round of a two-player bounded-memory-m game, both players simultaneously play an action, observe an outcome and receive a reward. The reward may depend on the last m outcomes as well as the actions of the players in the current round. The standard notion of regret for repeated games is no longer suitable because actions and rewards can depend on the history of play. To account for this generality, we introduce the notion of k-adaptive regret, which compares the reward obtained by playing actions prescribed by the algorithm against a hypothetical k-adaptive adversary with the reward obtained by the best expert in hindsight against the same adversary. Roughly, a hypothetical k-adaptive adversary adapts her strategy to the defender's actions exactly as the real adversary would within each window of k rounds. Our definition is parametrized by a set of experts, which can include both fixed and adaptive defender strategies. We investigate the inherent complexity of, and design algorithms for, adaptive regret minimization in bounded-memory games of perfect and imperfect information. We prove a hardness result showing that, with imperfect information, any k-adaptive regret minimizing algorithm (with fixed strategies as experts) must be inefficient unless NP=RP, even when playing against an oblivious adversary. In contrast, for bounded-memory games of perfect and imperfect information we present approximate 0-adaptive regret minimization algorithms against an oblivious adversary running in time n^{O(1)}.
    Comment: Full Version. GameSec 2013 (Invited Paper)
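    For readers unfamiliar with expert-based regret minimization, the classic Hedge (multiplicative-weights) algorithm for repeated decisions is the primitive that notions like k-adaptive regret generalize. The sketch below is that standard algorithm, not the paper's bounded-memory construction; rewards in [0, 1] are assumed.

        # Standard Hedge / multiplicative-weights regret minimization over experts
        # (the classical primitive, not the paper's bounded-memory algorithm).
        import numpy as np

        def hedge_regret(rewards, eta=0.1):
            """rewards[t, i]: reward of expert i at round t, assumed in [0, 1]."""
            T, n = rewards.shape
            w = np.ones(n)
            total = 0.0
            for t in range(T):
                p = w / w.sum()                # play an expert drawn from p
                total += rewards[t] @ p        # expected reward collected this round
                w *= np.exp(eta * rewards[t])  # exponentially reweight the experts
            return rewards.sum(axis=0).max() - total  # regret vs best fixed expert

        rewards = np.random.default_rng(2).random((1000, 5))
        print("regret:", hedge_regret(rewards))  # O(sqrt(T log n)) in the worst case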

    Fine and ultrafine particle number and size measurements from industrial combustion processes: primary emissions field data

    This study is, to our knowledge, the first to present the results of on-line measurements of residual nanoparticle numbers downstream of the flue gas treatment systems of a wide variety of medium- and large-scale industrial installations. Where available, a semi-quantitative elemental composition of the sampled particles is carried out using a Scanning Electron Microscope coupled with an Energy Dispersive Spectrometer (SEM-EDS). The semi-quantitative elemental composition as a function of the particle size is presented. The EU's Best Available Techniques (BAT) reference documents show removal efficiencies of Electrostatic Precipitator (ESP) and bag filter dedusting systems exceeding 99% when expressed in terms of weight. Their efficiency decreases slightly for particles smaller than 1 µm but, when expressed in terms of weight, still exceeds 99% for bag filters and 96% for ESP. This study reveals that in terms of particle numbers, residual nanoparticles (NP) leaving the dedusting systems dominate by several orders of magnitude. In terms of weight, all installations respect their emission limit values, and the contribution of NP to weight concentrations is negligible, despite their dominance in terms of numbers. Current World Health Organisation regulations are expressed in terms of PM2.5 weight concentrations and therefore do not reflect the presence or absence of a high number of NP. This study suggests that research is needed on possible additional guidelines related to NP, given their possible toxicity and high potential to enter the bloodstream easily when inhaled by humans.

    Optimization of soliton ratchets in inhomogeneous sine-Gordon systems

    Unidirectional motion of solitons can take place, although the applied force has zero average in time, when the spatial symmetry is broken by introducing a potential V(x), which consists of periodically repeated cells with each cell containing an asymmetric array of strongly localized inhomogeneities at positions x_i. A collective coordinate approach shows that the positions, heights and widths of the inhomogeneities (in that order) are the crucial parameters for obtaining an optimal effective potential U_{opt} that yields a maximal average soliton velocity. U_{opt} essentially exhibits two features: double peaks consisting of a positive and a negative peak, and long flat regions between the double peaks. Such a potential can be obtained by choosing inhomogeneities with opposite signs (e.g., microresistors and microshorts in the case of long Josephson junctions) that are positioned close to each other, while the distance between each peak pair is rather large. These results of the collective variables theory are confirmed by full simulations for the inhomogeneous sine-Gordon system.
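    As a rough illustration of the double-peak shape described above, one unit cell with a close pair of opposite-sign localized inhomogeneities (e.g., a microshort next to a microresistor) can be modeled as narrow bumps of opposite sign. The Gaussian profiles and numbers below are illustrative assumptions, not the paper's optimized U_{opt}.

        # Toy construction of a double-peak effective potential in one unit cell
        # (illustrative shapes, not the optimized potential from the paper).
        import numpy as np

        L = 20.0                                  # cell length
        x = np.linspace(0.0, L, 1000)

        def bump(x, x0, eps, w=0.3):
            """Localized inhomogeneity of strength eps (its sign sets the type)."""
            return eps * np.exp(-((x - x0) / w) ** 2)

        # Positive and negative peaks close together; the rest of the cell is flat.
        U = bump(x, 9.5, +1.0) + bump(x, 10.5, -1.0)
        flat_fraction = np.mean(np.abs(U) < 0.01)
        print("peak-to-peak:", U.max() - U.min(), "flat fraction:", flat_fraction)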

    Should liver enzymes be checked in a patient taking niacin?

    No randomized trials directly address the question of how frequently to monitor liver enzymes with niacin use. Niacin use is associated with early and late hepatotoxicity (strength of recommendation [SOR]: B, based on incidence data from randomized controlled trials and systematic reviews of cohort studies). Long-acting forms of niacin (Slo-Niacin) are more frequently associated with hepatotoxicity than the immediate-release (Niacor, Nicolar) or extended-release (Niaspan) forms (SOR: B, based on 1 randomized controlled trial and systematic reviews of cohort studies).