
    Study of scintillation in natural and synthetic quartz and methacrylate

    Samples of different materials typically used as optical windows or light guides in scintillation detectors were studied in a very low background environment, at the Canfranc Underground Laboratory, searching for scintillation. A positive result can be confirmed for natural quartz: two distinct scintillation components have been identified, neither of them excited by an external gamma source. Although no similar effect was observed for synthetic quartz or for methacrylate, a fast light emission excited by an intense gamma flux is evident for all the samples in our measurements. These results could affect the use of these materials in low-energy applications of scintillation detectors requiring low radioactive background conditions, as they entail a source of background.
    Comment: Accepted for publication in Optical Materials

    Detection of Lying Electrical Vehicles in Charging Coordination Application Using Deep Learning

    The simultaneous charging of many electric vehicles (EVs) stresses the distribution system and, in severe cases, may cause grid instability. The best way to avoid this problem is charging coordination: the EVs report data (such as the state-of-charge (SoC) of the battery) to a mechanism that prioritizes the charging requests, selects the EVs that should charge during the current time slot, and defers the other requests to future time slots. However, EVs may lie and send false data to receive high charging priority illegally. In this paper, we first study this attack to evaluate the gains of the lying EVs and how their behavior impacts the honest EVs and the performance of the charging coordination mechanism. Our evaluations indicate that lying EVs have a greater chance of being charged compared to honest EVs, and that they degrade the performance of the charging coordination mechanism. Then, an anomaly-based detector using deep neural networks (DNNs) is devised to identify the lying EVs. To do so, we first create an honest dataset for the charging coordination application using real driving traces and information revealed by EV manufacturers, and then propose a number of attacks to create malicious data. We train and evaluate two models, a multi-layer perceptron (MLP) and a gated recurrent unit (GRU), on this dataset; the GRU detector gives better results. Our evaluations indicate that our detector can identify lying EVs with high accuracy and a low false-positive rate.
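    The attack described above can be illustrated with a minimal sketch, assuming a simple coordination rule in which the lowest reported SoC gets priority. All names and the selection rule are illustrative placeholders, not the paper's actual mechanism:

    ```python
    # Hypothetical charging-coordination sketch: EVs report a state-of-charge
    # (SoC) in [0, 1]; the coordinator charges the lowest-SoC vehicles first,
    # up to the slot's capacity. A lying EV under-reports its SoC to jump the
    # queue, displacing an honest vehicle.

    def schedule_slot(requests, capacity):
        """requests: list of (ev_id, reported_soc); returns the ids to charge."""
        by_priority = sorted(requests, key=lambda r: r[1])  # lowest SoC first
        return [ev_id for ev_id, _ in by_priority[:capacity]]

    honest = [("ev1", 0.6), ("ev2", 0.3)]
    liar = [("ev3", 0.1)]  # true SoC is high, but it reports 0.1
    print(schedule_slot(honest + liar, capacity=2))  # the liar displaces ev1
    ```

    Under such a rule, any mechanism that trusts reported SoC rewards under-reporting, which is why the paper turns to anomaly detection on the reported data.
    
    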

    Lithium abundance in the globular cluster M4: from the Turn-Off to the RGB Bump

    We present Li and Fe abundances for 87 stars in the globular cluster M4, obtained with GIRAFFE high-resolution spectra. The targets range from the turn-off (TO) up to the RGB bump. The Li abundance in the TO stars is uniform, with an average value A(Li)=2.30+-0.02 dex, consistent with the upper envelope of the Li content measured in other GCs and in the Halo stars, confirming also for M4 the discrepancy with the primordial Li abundance predicted by WMAP+BBNS. The iron content of M4 is [Fe/H]=-1.10+-0.01 dex, with no systematic offsets between dwarf and giant stars. The behaviour of the Li and Fe abundances along the entire evolutionary path is incompatible with models with atomic diffusion, pointing out that an additional turbulent mixing below the convective region, able to inhibit the atomic diffusion, needs to be taken into account. The measured A(Li) and its homogeneity in the TO stars allow us to place strong constraints on the shape of the Li profile inside the M4 TO stars. The global behaviour of A(Li) with T_{eff} can be reproduced with different pristine Li abundances, depending on the kind of turbulent mixing adopted. One cannot reproduce the global trend starting from the WMAP+BBNS A(Li) and adopting the turbulent mixing described by Richard et al. (2005) with the same efficiency used by Korn et al. (2006) to explain the Li content in NGC 6397: such a solution cannot simultaneously reproduce the Li abundances observed in TO and RGB stars. Otherwise, the WMAP+BBNS A(Li) can be reproduced by assuming a more efficient turbulent mixing, able to reach deeper stellar regions where Li is burned. The cosmological Li discrepancy cannot be easily solved with the present, poor understanding of turbulence in stellar interiors, and a future effort to understand the true nature of this non-canonical process is needed.
    Comment: Accepted for publication in MNRAS

    Scalable Population Synthesis with Deep Generative Modeling

    Population synthesis is concerned with the generation of synthetic yet realistic representations of populations. It is a fundamental problem in the modeling of transport, where synthetic populations of micro-agents represent a key input to most agent-based models. In this paper, a new methodological framework for how to 'grow' pools of micro-agents is presented. The framework adopts a deep generative modeling approach from machine learning based on a Variational Autoencoder (VAE). Compared to previous population synthesis approaches, including Iterative Proportional Fitting (IPF), Gibbs sampling, and traditional generative models such as Bayesian Networks or Hidden Markov Models, the proposed method allows fitting the full joint distribution in high dimensions. The proposed methodology is compared with a conventional Gibbs sampler and a Bayesian Network using a large-scale Danish trip diary. It is shown that, while these two methods outperform the VAE in the low-dimensional case, they both suffer from scalability issues when the number of modeled attributes increases. It is also shown that the Gibbs sampler essentially replicates the agents from the original sample when the required conditional distributions are estimated as frequency tables. In contrast, the VAE addresses the problem of sampling zeros by generating agents that are virtually different from those in the original data but have similar statistical properties. The presented approach can support agent-based modeling at all levels by enabling richer synthetic populations with smaller zones and more detailed individual characteristics.
    Comment: 27 pages, 15 figures, 4 tables
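    The generation side of the approach can be sketched as follows. This is a minimal, assumed illustration of how a trained VAE decoder produces synthetic agents (draw latent codes from the Gaussian prior, decode them into categorical attributes); the "decoder" here is a made-up linear map with softmax outputs, not the paper's trained network, and the attribute names are invented:

    ```python
    # Sketch of VAE-style agent generation: sample z ~ N(0, I), decode z into
    # per-attribute category probabilities, and pick the most likely category.
    # W_age and W_mode stand in for trained decoder weights.
    import numpy as np

    rng = np.random.default_rng(0)
    latent_dim, n_agents = 4, 5
    W_age = rng.normal(size=(latent_dim, 3))   # 3 hypothetical age bands
    W_mode = rng.normal(size=(latent_dim, 4))  # 4 hypothetical transport modes

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    z = rng.normal(size=(n_agents, latent_dim))  # latent draws from the prior
    age = softmax(z @ W_age).argmax(axis=1)      # decoded categorical attributes
    mode = softmax(z @ W_mode).argmax(axis=1)
    print(list(zip(age, mode)))
    ```

    Because agents are decoded from continuous latent draws rather than resampled from the training data, combinations absent from the original sample (sampling zeros) can still be generated, which is the property the abstract highlights.
    
    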

    MESS (Multi-purpose Exoplanet Simulation System): A Monte Carlo tool for the statistical analysis and prediction of exoplanets search results

    The high number of planet discoveries made in recent years provides a good sample for statistical analysis, leading to some clues on the distributions of planet parameters, such as masses and periods, at least in close proximity to the host star. We will likely need to wait for the extremely large telescopes (ELTs) to have an overall view of extrasolar planetary systems. In this context it would be useful to have a tool that can be used for the interpretation of the present results, and also to predict the outcomes of future instruments. For this reason we built MESS: a Monte Carlo simulation code which uses either the results of the statistical analysis of the properties of discovered planets, or the results of planet formation theories, to build synthetic planet populations fully described in terms of frequency, orbital elements and physical properties. These populations can then be used either to test the consistency of their properties with the observed population of planets given different detection techniques, or to predict the expected number of planets for future surveys. In addition to the code description, we present here some of its applications: to probe the physical and orbital properties of a putative companion within the circumstellar disk of a given star, and to constrain the orbital distribution properties of a potential planet population around the members of the TW Hydrae association. Finally, using the code in its predictive mode, the synergy of future space- and ground-based telescope instrumentation has been investigated to identify the mass-period parameter space that will be probed by future surveys for giant and rocky planets.
    Comment: 14 pages, 16 figures
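    The Monte Carlo idea behind a tool like MESS can be sketched in a few lines: draw a synthetic planet population from assumed parameter distributions, apply a detectability criterion for a given technique, and count the expected detections. The log-uniform distributions and the toy detection cut below are invented placeholders, not the distributions or instrument models the code actually uses:

    ```python
    # Monte Carlo sketch: synthesize planets from assumed mass/period
    # distributions, then estimate the fraction a toy survey would detect.
    import random

    random.seed(1)

    def synthetic_planet():
        mass = 10 ** random.uniform(-1, 1.3)    # M_Jup, log-uniform (assumed)
        period = 10 ** random.uniform(0, 3.5)   # days, log-uniform (assumed)
        return mass, period

    def detectable(mass, period):
        # toy criterion favoring massive, short-period planets
        return mass > 0.5 and period < 100.0

    planets = [synthetic_planet() for _ in range(10_000)]
    frac = sum(detectable(m, p) for m, p in planets) / len(planets)
    print(f"detected fraction: {frac:.2%}")
    ```

    Swapping the assumed distributions for ones fitted to known planets (or produced by formation models), and the toy cut for real instrument sensitivity curves, gives the two modes the abstract describes: consistency testing and survey-yield prediction.
    
    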

    An iterative approach for generating statistically realistic populations of households

    Background: Many different simulation frameworks, in different fields, need realistic datasets to initialize and calibrate the system. A precise reproduction of initial states is extremely important to obtain reliable forecasts from the model. Methodology/Principal Findings: This paper proposes an algorithm to create an artificial population in which individuals are described by their age and are gathered in households respecting a variety of statistical constraints (distribution of household types, sizes, age of the household head, difference of age between partners and between parents and children). Such a population is often the initial state of microsimulation or individual-based (agent) models. Getting a realistic distribution of households is often very important, because this distribution has an impact on the demographic evolution. Usual microsimulation techniques cross different sources of aggregated data to generate individuals. In our case the number of combinations of different households (types, sizes, ages of participants) makes it computationally difficult to use such methods directly. Hence we developed a specific algorithm to make the problem more easily tractable. Conclusions/Significance: We generate the populations of two pilot municipalities in the Auvergne region (France) to illustrate the approach. The generated populations show good agreement with the available statistical datasets (not used for the generation) and are obtained in a reasonable computational time.
    Comment: 16 pages, 11 figures
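    The core generation step can be sketched as follows, assuming a much simpler constraint set than the paper's: draw household sizes from a target distribution, then fill each household with individuals whose ages are tied to the household head's. The size distribution and age rules are invented placeholders, far cruder than the algorithm the paper develops:

    ```python
    # Household-population sketch: sample a household size from a target
    # distribution, then assign ages relative to a randomly drawn head.
    import random

    random.seed(2)
    size_dist = [(1, 0.35), (2, 0.33), (3, 0.16), (4, 0.16)]  # P(household size)

    def draw_size():
        r, acc = random.random(), 0.0
        for size, p in size_dist:
            acc += p
            if r < acc:
                return size
        return size_dist[-1][0]

    def make_household():
        size = draw_size()
        head_age = random.randint(20, 90)  # age of the household head
        # other members are 0-40 years younger than the head (assumed rule)
        others = [max(0, head_age - random.randint(0, 40)) for _ in range(size - 1)]
        return [head_age] + others

    households = [make_household() for _ in range(1000)]
    mean_size = sum(len(h) for h in households) / len(households)
    print(round(mean_size, 2))  # should land near the target mean of 2.13
    ```

    Checking generated marginals (here, the mean household size) against targets not used during generation mirrors the validation the paper performs against held-out statistical datasets.
    
    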