
    Space-Based Cosmic-Ray and Gamma-Ray Detectors: a Review

    Prepared for the 2014 ISAPP summer school, this review focuses on space-borne and balloon-borne cosmic-ray and gamma-ray detectors. It introduces the fundamental concepts needed to understand instrument performance metrics, how they tie to design choices, and how they can be effectively used in sensitivity studies. While the write-up does not aim to be complete or exhaustive, it is largely self-contained, in that related topics such as the basic physical processes governing the interaction of radiation with matter and the near-Earth environment are briefly reviewed.
    Comment: 86 pages, 70 figures, prepared for the 2014 ISAPP summer school. Change log in the write-up; ancillary material at https://bitbucket.org/lbaldini/crdetector

    Alternative approaches to Long Term Care financing. Distributive implications and sustainability for Italy.

    In the last decade, many countries have adopted tax schemes specifically aimed at financing programs for Long Term Care (LTC). These mechanisms have important distributional implications both within and across generations. Given the process of demographic ageing, the issue of inter- and intra-generational fairness is deeply linked with the problem of the long-term financial equilibrium of an LTC fund. In this paper we first compare, on a microdata sample of the Italian population, the distributive effects (both on current income and across generations) of six alternative approaches to financing an LTC scheme. In particular, we consider a hypothetical LTC scheme (with a size equivalent to that of the German one) to be introduced in Italy and analyse the distributive implications of several tax options, taken from the financing mechanisms implemented or under discussion in Germany, Luxembourg, Japan and Italy. In the second part of the paper we move from a static to a dynamic perspective: we study the long-term sustainability of a hypothetical pay-as-you-go (PAYG) LTC scheme operating in Italy (that is, assuming the projected Italian demographic trends) under scenarios that consider alternative indexation rules, growth rates of GNP and future incidence of disability among age groups.
    Keywords: long term care; distributive effects; tax-benefit model; intertemporal sustainability; trust fund
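    To make the sustainability mechanism concrete, below is a minimal sketch of the period budget constraint of a PAYG scheme, in which the equilibrium contribution rate must cover current benefit expenditure out of the current contribution base. All figures and growth rates are hypothetical placeholders, not the paper's microdata or projections, and the function name is ours.

    ```python
    # Minimal sketch of a pay-as-you-go (PAYG) budget constraint for an LTC fund.
    # All inputs are stylized placeholders, not the paper's data.

    def payg_contribution_rate(benefit_per_case, disabled, wage_bill):
        """Equilibrium rate: expenditure must equal contributions each period."""
        return benefit_per_case * disabled / wage_bill

    # Hypothetical projection: ageing raises the LTC caseload faster than the
    # contribution base (GNP) grows, so the equilibrium rate drifts upward.
    wage_bill = 800e9          # euros, hypothetical contribution base
    disabled = 2.0e6           # number of LTC recipients, hypothetical
    benefit = 12_000.0         # annual benefit per recipient, hypothetical
    gnp_growth = 0.015         # real growth of the contribution base
    disability_growth = 0.025  # growth of the LTC caseload under ageing

    for year in range(0, 50, 10):
        rate = payg_contribution_rate(benefit, disabled, wage_bill)
        print(f"year {year:2d}: equilibrium contribution rate = {rate:.2%}")
        wage_bill *= (1 + gnp_growth) ** 10
        disabled *= (1 + disability_growth) ** 10
    ```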

    Analysis of caesium distribution in a negative ion source by means of absorption spectroscopy diagnostics.

    Nuclear fusion is among the most ambitious avenues for future low-environmental-impact energy production. The world's largest controlled thermonuclear fusion experiment, ITER, is currently under construction in Cadarache, southern France. In order to achieve the performance required for nuclear fusion in ITER, additional heating systems are needed, and the injection of beams of neutral particles is among the most important methods. At the RFX Consortium in Padua, experiments are underway on SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma), the prototype of the negative ion source that will be used in the neutral beam injectors. Negative ions are instrumental in the production of neutral beams, whose current must be maximised in order to bring the central plasma temperature inside ITER to 10 keV–15 keV. One of the main goals of SPIDER is to reach an extracted negative ion current density of 355 A m⁻² for H− and 285 A m⁻² for D−. To achieve these goals, it is essential to evaporate caesium within the source. Caesium lowers the surface work function, increasing the conversion of ions and atoms into negative ions; it is therefore of paramount importance to monitor caesium density and distribution within SPIDER. The diagnostic used to monitor the density and uniformity of neutral caesium is Laser Absorption Spectroscopy (LAS). This thesis focuses mainly on LAS, improving the density estimates made in the 2021 experimental campaign and deepening the knowledge of the caesium distribution inside SPIDER under vacuum and plasma conditions. The results of the analysis show that caesium is mainly distributed in the upper-middle part of the source and that its presence depends strongly on oven evaporation and on the alternation of vacuum/plasma phases. The experimental campaign was carried out mainly in hydrogen gas, with the last two days in deuterium. The data analysed in the presence of deuterium show that caesium erosion is much more pronounced (the caesium density in the source triples), while the density measured as a function of evaporation from the ovens is unchanged with respect to operations in hydrogen.
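    As a rough illustration of how LAS yields a density estimate, the sketch below applies the Beer-Lambert law to the measured transmission at the absorption line, I = I0·exp(−σnL). The cross-section, path length and intensities are hypothetical placeholders, not SPIDER values, and the actual analysis integrates over the full line shape rather than using the peak alone.

    ```python
    import numpy as np

    # Minimal sketch of a line-of-sight density estimate from laser absorption
    # spectroscopy (Beer-Lambert law). Numbers are illustrative, not SPIDER data.

    def los_density(transmitted, incident, sigma_peak, path_length):
        """Neutral density from peak absorbance: I = I0 * exp(-sigma * n * L)."""
        absorbance = -np.log(transmitted / incident)
        return absorbance / (sigma_peak * path_length)

    I0 = 1.00        # incident laser intensity (arbitrary units)
    I = 0.92         # transmitted intensity at the Cs absorption line centre
    sigma = 1.0e-15  # peak absorption cross-section [m^2]; hypothetical, it
                     # depends on the actual line shape and broadening
    L_path = 0.8     # absorption path length [m]; hypothetical

    n_cs = los_density(I, I0, sigma, L_path)
    print(f"line-averaged Cs density ~ {n_cs:.2e} m^-3")
    ```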

    Influence of Beam Broadening on the Accuracy of Radar Polarimetric Rainfall Estimation

    The quantitative estimation of rain rates using meteorological radar has been a major theme in radar meteorology and radar hydrology. Interest in polarimetric radar has grown in part because polarization diversity can reduce the effect of raindrop size variability on radar precipitation estimates, which has allowed progress in radar rainfall estimation and in hydrometeorological applications. From an operational point of view, however, the promised improvements in radar rainfall accuracy have not yet been fully demonstrated. The main reason for these limits is the geometry of radar measurements combined with the variability of the spatial structure of precipitation systems. To overcome these difficulties, a methodology has been developed to transform the drop size distribution (DSD) estimated by a vertically pointing micro rain radar into a profile as seen by a ground-based polarimetric radar. As a result, the rainfall rate at the ground is fixed at all ranges, whereas the broadening beam encompasses a large variability of DSDs. The resulting DSD profile is used to simulate the corresponding profile of radar measurements at C band. Rainfall algorithms based on polarimetric radar measurements were then applied to estimate the rainfall within the radar beam. Finally, merit factors were used to quantitatively assess the performance of the rainfall algorithms against the corresponding ground measurements from a 2D video disdrometer (2DVD) positioned beside the micro rain radar. In this method, the variation of the merit factors with range is directly attributable to the DSD variability inside the radar measurement volume, thus providing an assessment of the effects due to beam broadening.
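    As an illustration of the last step, the sketch below applies a generic polarimetric power-law rain-rate estimator, R(Kdp) = a·Kdp^b, and scores it with two common merit factors (normalized bias and RMSE) against a ground reference. The coefficients, profile values and reference rain rates are hypothetical placeholders, not the paper's C-band algorithm or 2DVD data.

    ```python
    import numpy as np

    # Minimal sketch of a polarimetric rain-rate estimator and of the "merit
    # factors" used to score it against a ground reference (e.g. a 2DVD).

    def rain_rate_kdp(kdp, a=25.0, b=0.78):
        """R(Kdp) power law, R in mm/h, Kdp in deg/km (coefficients hypothetical)."""
        return a * np.sign(kdp) * np.abs(kdp) ** b

    def merit_factors(estimated, reference):
        """Normalized bias and RMSE of radar estimates vs the ground reference."""
        bias = np.mean(estimated - reference) / np.mean(reference)
        rmse = np.sqrt(np.mean((estimated - reference) ** 2))
        return bias, rmse

    # Hypothetical profile: Kdp sampled at increasing range, reference at ground.
    kdp_profile = np.array([0.45, 0.42, 0.50, 0.38, 0.41])   # deg/km
    reference_rr = np.array([12.0, 11.5, 13.2, 10.4, 11.0])  # mm/h, disdrometer

    est = rain_rate_kdp(kdp_profile)
    bias, rmse = merit_factors(est, reference_rr)
    print(f"normalized bias = {bias:+.2f}, RMSE = {rmse:.2f} mm/h")
    ```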

    Multipolarisation radar receiver for target detection


    Complexity vs. performance in granular embedding spaces for graph classification

    The most distinctive trait of structural pattern recognition in the graph domain is the ability to deal with the organization of and relations between the constituent entities of a pattern. Even if this can be convenient and/or necessary in many contexts, most state-of-the-art classification techniques cannot be deployed directly in the graph domain without first embedding graph patterns into a metric space. Granular Computing is a powerful information processing paradigm that can be employed to drive the synthesis of automatic embedding spaces from structured domains. In this paper we investigate several classification techniques built on Granular Computing-based embedding procedures and provide a thorough overview in terms of model complexity, embedding space complexity and performance on several open-access datasets for graph classification. We observe that certain classification techniques perform poorly in terms of both complexity and learning performance, as in the case of the non-linear SVM, suggesting that the high dimensionality of the synthesized embedding space can negatively affect the effectiveness of these approaches. On the other hand, linear support vector machines, neuro-fuzzy networks and nearest neighbour classifiers have comparable performance in terms of accuracy, with the second being the most competitive in terms of structural complexity and the last being the most competitive in terms of embedding space dimensionality.
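    The embedding idea can be sketched as follows: each graph is represented by a vector counting how often each information granule (a small subgraph) occurs in it, and the resulting vectors live in the metric space where standard classifiers operate. The toy granule alphabet and graph below are illustrative, not a granulation learned from data.

    ```python
    import networkx as nx
    from networkx.algorithms import isomorphism

    # Minimal sketch of a Granular Computing-style graph embedding: map each
    # graph to a vector of granule occurrence counts (a symbolic histogram).

    def count_occurrences(graph, granule):
        """Number of subgraph-isomorphic embeddings of `granule` into `graph`
        (each symmetry of the granule is counted as a distinct embedding)."""
        matcher = isomorphism.GraphMatcher(graph, granule)
        return sum(1 for _ in matcher.subgraph_isomorphisms_iter())

    def symbolic_histogram(graph, granules):
        return [count_occurrences(graph, g) for g in granules]

    # Toy granule alphabet: a triangle and a 3-node path.
    triangle = nx.cycle_graph(3)
    path3 = nx.path_graph(3)

    g = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])  # triangle plus a pendant edge
    print(symbolic_histogram(g, [triangle, path3]))
    # The resulting vectors can be fed to any standard classifier
    # (e.g. a linear SVM) operating in the embedding space.
    ```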

    A multi-objective optimization approach for the synthesis of granular computing-based classification systems in the graph domain

    The synthesis of a pattern recognition system usually aims at the optimization of a given performance index. However, in many real-world scenarios, there are other desired facets to take into account. In this regard, multi-objective optimization acts as the main tool for the optimization of different (and possibly conflicting) objective functions in order to seek potential trade-offs among them. In this paper, we propose a three-objective optimization problem for the synthesis of a granular computing-based pattern recognition system in the graph domain. The core pattern recognition engine searches for suitable information granules (i.e., recurrent and/or meaningful subgraphs from the training data), on top of which the graph embedding procedure towards the Euclidean space is performed. In the latter space, any classification system can be employed. The optimization problem aims at jointly optimizing the performance of the classifier, the number of information granules and the structural complexity of the classification model. Furthermore, we address the problem of selecting a suitable number of solutions from the resulting Pareto fronts in order to compose an ensemble of classifiers to be tested on previously unseen data. To perform such selection, we employ a multi-criteria decision-making routine, analyzing different case studies that differ in how much each objective function weighs in the ranking process. Results on five open-access datasets of fully labeled graphs show that exploiting the ensemble is effective (especially when the structural complexity of the model plays a minor role in the decision-making process) when compared against the baseline solution that solely aims at maximizing performance.
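    A minimal sketch of the dominance filtering and selection step is given below: candidate models are points in a three-objective space (classification error, number of granules, structural complexity, all to be minimized), the non-dominated ones form the Pareto front, and a weighted ranking mimics a multi-criteria decision-making routine. All candidate values and weights are hypothetical.

    ```python
    import numpy as np

    # Minimal sketch of Pareto dominance filtering for three objectives
    # (all minimized): error, number of granules, structural complexity.

    def dominates(a, b):
        """a dominates b if no worse in every objective and better in one."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_front(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    candidates = [np.array(v) for v in [
        (0.08, 40, 3.0),   # (error, n_granules, complexity)
        (0.10, 25, 2.0),
        (0.09, 50, 3.5),   # dominated by the first candidate
        (0.12, 20, 1.5),
    ]]

    front = pareto_front(candidates)
    # A weighted ranking over the front then picks ensemble members,
    # mirroring the multi-criteria decision-making step.
    weights = np.array([0.6, 0.2, 0.2])   # hypothetical objective weights
    scores = [float(weights @ (p / np.max(candidates, axis=0))) for p in front]
    best = front[int(np.argmin(scores))]
    print("Pareto front size:", len(front), "| top-ranked:", best)
    ```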

    Microphysical Retrievals from Dual-Polarization Radar Measurements at X Band

    Recent advances in attenuation correction methodology are based on the use of a constraint given by the total attenuation accumulated along the path, apportioned over each range bin in the path. This technique is improved here by using the inner self-consistency of the radar measurements. The full self-consistency methodology provides an optimization procedure for obtaining the best estimate of specific and cumulative attenuation and of specific and cumulative differential attenuation. The main goal of the study is to examine drop size distribution (DSD) retrieval from X-band radar measurements after attenuation correction. A new technique is envisioned for estimating the slope of a linear axis-ratio model from polarimetric radar measurements at attenuated frequencies. A new set of improved algorithms, immune to variability in the raindrop shape-size relation, is presented for estimating the governing parameters of a gamma raindrop size distribution. Simulations based on profiles of gamma drop size distribution parameters obtained from S-band observations are used for quantitative analysis. Radar data collected by the NOAA/Earth System Research Laboratory (ESRL) X-band polarimetric radar are used to provide examples of DSD parameter retrievals from attenuation-corrected radar measurements. The retrievals agree fairly well with disdrometer data. The radar data are also used to observe the prevailing shape of raindrops directly from the radar measurements. A significant result is that the oblateness of drops is bounded between the two shape models of Pruppacher and Beard, and of Beard and Chuang, the former representing the upper boundary and the latter the lower boundary.
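    For reference, the sketch below evaluates the rain rate implied by a gamma DSD, N(D) = N0·D^μ·exp(−ΛD), by numerical integration with a standard power-law fall-speed approximation. The parameter values are illustrative, not retrievals from the radar data discussed above.

    ```python
    import numpy as np

    # Minimal sketch: rain rate implied by a gamma drop size distribution,
    # N(D) = N0 * D**mu * exp(-Lambda*D), with D in mm and N(D) in mm^-1 m^-3.

    def rain_rate_from_gamma(n0, mu, lam, d_max=8.0):
        """Rain rate in mm/h: R = (6*pi/10^4) * integral of v(D)*D^3*N(D) dD,
        using the Atlas-Ulbrich fall-speed approximation v(D) = 3.78*D**0.67 m/s."""
        d = np.linspace(0.01, d_max, 2000)
        dd = d[1] - d[0]
        n_d = n0 * d ** mu * np.exp(-lam * d)
        v = 3.78 * d ** 0.67
        return 6.0 * np.pi * 1e-4 * np.sum(v * d ** 3 * n_d) * dd

    # Hypothetical gamma parameters (N0 in mm^(-1-mu) m^-3, Lambda in mm^-1).
    print(f"R = {rain_rate_from_gamma(n0=8000.0, mu=2.0, lam=3.0):.1f} mm/h")
    ```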

    Relaxed Dissimilarity-based Symbolic Histogram Variants for Granular Graph Embedding

    Graph embedding is an established and popular approach when designing graph-based pattern recognition systems. Amongst the several strategies, Granular Computing has emerged in the last ten years as a promising framework for structural pattern recognition. In the late 2000s, symbolic histograms were proposed as the driving force of the graph embedding procedure: one counts the number of times each granule of information appears in the graph to be embedded. Similarly to a bag-of-words representation of a text corpus, symbolic histograms were originally conceived as integer-valued vectorial representations of graphs. In this paper, we propose six 'relaxed' versions of symbolic histograms, in which the dissimilarity values between the information granules and the constituent parts of the graph to be embedded are taken into account, information that is discarded in the original symbolic histogram formulation due to the hard-limited nature of the counting procedure. Experimental results on six open-access datasets of fully-labelled graphs show comparable performance in terms of classification accuracy with respect to the original symbolic histograms (average accuracy shift ranging from -7% to +2%), counterbalanced by a great improvement in the number of resulting information granules, and hence the number of features in the embedding space (up to 75% fewer features, on average).
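    The contrast between the hard-limited count and a relaxed variant can be sketched in a few lines: the classic bin counts substructures whose dissimilarity from the granule falls below a threshold, while a relaxed bin lets each substructure contribute in proportion to how close it is to the granule. The dissimilarity values, the threshold and the specific relaxation below are toy choices, not the six variants proposed in the paper.

    ```python
    # Minimal sketch contrasting the hard-limited symbolic histogram bin with
    # one possible "relaxed" variant built on the same dissimilarity values.

    def hard_bin(dissims, tau):
        """Classic bin: count substructures within dissimilarity threshold tau."""
        return sum(1 for d in dissims if d <= tau)

    def relaxed_bin(dissims, tau):
        """Relaxed bin: a close match contributes close to 1, a borderline match
        close to 0 (information the hard-limited count discards)."""
        return float(sum(max(0.0, 1.0 - d / tau) for d in dissims))

    # Dissimilarities between one granule and the substructures of a graph.
    dissims = [0.05, 0.30, 0.48, 0.90]   # hypothetical values
    tau = 0.5

    print("hard bin   :", hard_bin(dissims, tau))     # -> 3
    print("relaxed bin:", relaxed_bin(dissims, tau))  # -> 0.9 + 0.4 + 0.04 = 1.34
    ```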

    From Beam to Chassis: How to Increase NVH Performances with an Optimized Moment of Inertia Distribution

    Car weight reduction is becoming more and more important for every kind of vehicle: a lower mass implies lower consumption, makes it easier to fulfil homologation rules and assures better handling. Despite that, several vehicle requirements, NVH among them, have traditionally been met by adding mass. In this paper, a methodology to optimize the stiffness distribution is proposed in order to obtain better vibrational performance without increasing the mass. At first, the problem has been solved for a simple beam using finite elements and optimization algorithms. At a second stage, the optimal moment of inertia distribution found for the beam has been applied to a chassis by means of a topometry optimization. Finally, the improvement in NVH performance has been verified by comparing the inertances of the optimized model with those of the non-optimized one.
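    A minimal sketch of the first stage, under simplifying assumptions: an Euler-Bernoulli cantilever is discretized with standard finite elements, the mass matrix is kept fixed (constant mass), and the element moments of inertia are redistributed under a fixed total budget to raise the first natural frequency, taken here as a crude NVH proxy. All material and section values are hypothetical, and the paper's actual objective and constraints may differ.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.optimize import minimize

    # Euler-Bernoulli cantilever FE model: redistribute the bending moment of
    # inertia along the beam, at fixed mass and fixed total inertia "budget",
    # to raise the first natural frequency. All numbers are hypothetical.
    NE, L, E, RHO_A = 8, 1.0, 70e9, 2.7   # elements, length [m], Pa, kg/m
    le = L / NE
    I_REF = 1e-8                          # reference section inertia [m^4]

    def beam_matrices(inertias):
        n = 2 * (NE + 1)
        K, M = np.zeros((n, n)), np.zeros((n, n))
        km = np.array([[12, 6*le, -12, 6*le],
                       [6*le, 4*le**2, -6*le, 2*le**2],
                       [-12, -6*le, 12, -6*le],
                       [6*le, 2*le**2, -6*le, 4*le**2]])
        mm = (RHO_A * le / 420) * np.array([[156, 22*le, 54, -13*le],
                                            [22*le, 4*le**2, 13*le, -3*le**2],
                                            [54, 13*le, 156, -22*le],
                                            [-13*le, -3*le**2, -22*le, 4*le**2]])
        for e, inertia in enumerate(inertias):
            s = slice(2 * e, 2 * e + 4)
            K[s, s] += E * inertia / le**3 * km
            M[s, s] += mm
        return K[2:, 2:], M[2:, 2:]       # clamp the root node (cantilever)

    def neg_first_freq(x):                # x: dimensionless inertia multipliers
        K, M = beam_matrices(I_REF * x)
        return -np.sqrt(eigh(K, M, eigvals_only=True)[0])  # -omega_1 [rad/s]

    x0 = np.ones(NE)                      # uniform baseline distribution
    cons = {"type": "eq", "fun": lambda x: np.sum(x) - NE}  # fixed total inertia
    res = minimize(neg_first_freq, x0, bounds=[(0.1, 5.0)] * NE, constraints=cons)
    print(f"baseline  f1 = {-neg_first_freq(x0) / (2*np.pi):.2f} Hz")
    print(f"optimized f1 = {-neg_first_freq(res.x) / (2*np.pi):.2f} Hz")
    print("inertia multipliers:", np.round(res.x, 2))
    ```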