
    The ISAE manufacturing survey sample: validating the NACE Rev.2 sectorial allocation

    After the full implementation of the new EU Standard Classification of Economic Activities (NACE Rev.2) in 2008, statistical agencies have increasingly dealt with the problem of redefining sampling designs and estimation techniques, especially in the case of stratified surveys with NACE codes as stratification variables. In light of these changes, the Italian Institute for Studies and Economic Analysis (Istituto di Studi e Analisi Economica - ISAE) is currently updating the sample design of its Business Tendency Survey (BTS). The focus of this paper is on finding a strata allocation methodology suitable to accommodate the NACE Rev.2 changes. The analysis is carried out by considering two opposing needs: i) the strata allocation must retain multiple information; ii) the strata allocation must retain the optimality of the estimates. The allocation methods considered are: i) the classical Neyman x-optimal allocation, ii) the Neyman allocation used by ISAE, i.e. applied directly to areal stratification, iii) the multivariate Neyman allocation on qualitative variance according to Bethel's formulation, iv) the Robust Optimal Allocation with Uniform Stratum Threshold (ROAUST). ROAUST is a new allocation method which generates a new class of stratified estimators. Comparison among these methods is carried out via a simulation device, the Sequential Selection-Allocation (SSA). This device constructs a new population list with units re-labelled within each stratum, such that the new labels correspond to the order of selection in a SWOR (sampling without replacement) resampling of the stratum units. The process is repeated N times (N = 1,000 in the simulation presented in this paper). From this re-labelled population, all the allocation algorithms can be evaluated simultaneously.
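
    The SSA device is described only in outline above; the following is a minimal Python sketch of the relabelling step, assuming strata are stored as lists of unit labels. All function and variable names are ours, not the authors':

```python
import random

def ssa_relabel(strata, n_replications=1000, seed=42):
    """Sequential Selection-Allocation (SSA) relabelling sketch.

    For each replication, units within every stratum are re-labelled so
    that the new labels follow the order of selection in a without-
    replacement (SWOR) resampling of the stratum units.
    """
    rng = random.Random(seed)
    populations = []
    for _ in range(n_replications):
        relabelled = {}
        for stratum, units in strata.items():
            # A random permutation gives the order in which units would
            # be drawn without replacement from this stratum.
            order = rng.sample(units, len(units))
            # New label = position in the selection order (1-based).
            relabelled[stratum] = {pos + 1: unit for pos, unit in enumerate(order)}
        populations.append(relabelled)
    return populations

# Usage with two toy strata; every allocation algorithm can then be
# evaluated on the same N = 1,000 re-labelled population lists.
pops = ssa_relabel({"A": ["u1", "u2", "u3"], "B": ["u4", "u5"]})
print(len(pops), pops[0]["A"])
```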

    Between theoretical and applied approach: which compromise for unit allocation in business surveys?

    Neyman's algorithm for the allocation of sample units in business sampling can prove unsatisfactory in domain analysis with imperfect frames and sectorial and/or regional data. Improved estimates can be obtained using stratified estimators combined with an optimal unit allocation. We achieve this outcome through an interdisciplinary approach which leads to a methodological improvement. Starting from Martini's approach, which takes an empirical view of the statistical analysis, we propose the Robust Optimal Allocation with Uniform Stratum Threshold (ROAUST) class of stratified estimators and demonstrate their reliability using a simulation approach inspired by Magagnoli's work on this issue. In particular, in contrast to Neyman's stratified estimator with optimal allocation and stratum threshold, our class guarantees better domain representativeness.
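
    The abstract does not spell out the ROAUST algorithm itself. The sketch below implements only the plainer idea its name suggests: a Neyman-type allocation with a uniform minimum sample size enforced in every stratum. The rescaling rule and all names are our assumptions, not the authors' method:

```python
def threshold_allocation(N_h, S_h, n_total, threshold):
    """Neyman-type allocation with a uniform per-stratum threshold (sketch).

    N_h: stratum population sizes; S_h: stratum standard deviations;
    n_total: overall sample size; threshold: minimum sample size per stratum.
    """
    weights = [N * S for N, S in zip(N_h, S_h)]          # Neyman weights N_h * S_h
    n_h = [n_total * w / sum(weights) for w in weights]  # classical Neyman shares
    # Lift undersized strata to the uniform threshold, then spread the
    # remaining budget over the other strata in proportion to their weights.
    small = [i for i, n in enumerate(n_h) if n < threshold]
    free = [i for i in range(len(n_h)) if i not in small]
    budget = n_total - threshold * len(small)
    free_weight = sum(weights[i] for i in free)
    alloc = [float(threshold)] * len(n_h)
    for i in free:
        alloc[i] = budget * weights[i] / free_weight
    return [round(a) for a in alloc]

# Usage: three strata with at least 5 sampled units guaranteed in each.
print(threshold_allocation(N_h=[100, 400, 50], S_h=[2.0, 1.0, 0.5],
                           n_total=60, threshold=5))   # -> [18, 37, 5]
```

    Under pure Neyman allocation the small, low-variance stratum would receive only 2-3 units; the threshold trades a little overall efficiency for guaranteed representation of every domain.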

    Divide, Allocate et Impera: Comparing Allocation Strategies via Simulation

    In stratified sampling, the problem of optimally allocating the sample size is of primary importance, especially when reliable estimates are required both for the overall population and for subdomains. To this end, in this paper we compare multiple standard allocation mechanisms. In particular, standard allocation methods are compared with an allocation method that has been recently adopted by the Italian National Statistical Institute: the Robust Optimal Allocation with Uniform Stratum Threshold (ROAUST) method. The standard allocation methods considered in this comparison are: (i) the optimal Neyman allocation, (ii) the multivariate Neyman allocation, (iii) the Costa allocation, (iv) the Bankier allocation, and (v) the Interior Point Non Linear Programming (IPNLP) allocation. Results show that the optimal Neyman allocation method outperforms the ROAUST method at the overall sample level, whereas the latter method performs better at the stratum level. Some results on the Nonlinear Programming method are particularly interesting.
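
    The paper's comparison criterion is not reproduced here; one simple proxy, assuming the usual design-based setup, is the variance of the stratified sample mean under each allocation. The numbers below reuse the toy strata from the earlier sketch:

```python
def stratified_variance(N_h, S_h, n_h):
    """Design variance of the stratified sample mean for allocation n_h,
    with finite population correction:
    sum over strata of W_h**2 * S_h**2 / n_h * (1 - n_h / N_h)."""
    N = sum(N_h)
    return sum((Nh / N) ** 2 * Sh**2 / nh * (1 - nh / Nh)
               for Nh, Sh, nh in zip(N_h, S_h, n_h))

# Compare two allocations of n = 60 over the same three strata:
N_h, S_h = [100, 400, 50], [2.0, 1.0, 0.5]
neyman = [19, 38, 3]        # approximate Neyman shares
roaust_like = [18, 37, 5]   # thresholded allocation from the earlier sketch
print(stratified_variance(N_h, S_h, neyman))       # smaller overall variance
print(stratified_variance(N_h, S_h, roaust_like))  # better-covered small stratum
```

    This mirrors the reported trade-off: Neyman wins at the overall level, while the thresholded allocation gives each stratum enough units for usable stratum-level estimates.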

    Study of the temperature distribution in Si nanowires under microscopic laser beam excitation

    The use of laser beams as excitation sources for the characterization of semiconductor nanowires (NWs) is widespread. Raman spectroscopy and photoluminescence (PL) are currently applied to the study of NWs. However, NWs are systems with poor thermal conductivity and poor heat dissipation, which results in unintentional heating under excitation with a focused laser beam of microscopic size, such as those usually used in microRaman and microPL experiments. On the other hand, NWs have subwavelength diameters, which changes the optical absorption with respect to the absorption in bulk materials. Furthermore, the NW diameter is smaller than the laser beam spot, which means that the optical power absorbed by the NW depends on its position inside the laser beam spot. A detailed analysis of the interaction between a microscopic focused laser beam and semiconductor NWs is necessary for the understanding of experiments involving laser beam excitation of NWs. We present in this work a numerical analysis of the thermal transport in Si NWs, where the heat source is the laser energy locally absorbed by the NW. This analysis takes into account the optical absorption, the thermal conductivity, the NW dimensions (diameter and length), and the immersion medium. Both free-standing and heat-sunk NWs are considered. The temperature distribution in ensembles of NWs is also discussed. This analysis is intended to serve as a tool for understanding the thermal phenomena induced by laser beams in semiconductor NWs.
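
    As a rough illustration of the kind of model involved (not the authors' code), here is a steady-state 1D finite-difference solution of heat conduction along a heat-sunk NW with a Gaussian laser source at one point along its length. Every material and beam parameter below is an assumed placeholder:

```python
import numpy as np

# Steady-state 1D heat conduction along a nanowire clamped to a heat sink
# at x = 0, with an insulated free tip:  -k * A * T''(x) = q(x),
# where q(x) is the laser power absorbed per unit length (Gaussian spot).
L = 5e-6                      # NW length: 5 um (assumed)
d = 100e-9                    # NW diameter: 100 nm (assumed)
k = 30.0                      # thermal conductivity, W/(m K), reduced vs bulk Si
A = np.pi * (d / 2) ** 2      # cross-sectional area
P_abs = 50e-6                 # absorbed laser power, W (assumed)
x0, w = 2.5e-6, 0.5e-6        # spot centre and 1/e radius of the beam

n = 500
x = np.linspace(0, L, n)
dx = x[1] - x[0]
q = np.exp(-((x - x0) / w) ** 2)
q *= P_abs / (q.sum() * dx)   # normalise so the source integrates to P_abs

# Tridiagonal system: T[0] fixed at ambient, dT/dx = 0 at the free tip.
M = np.zeros((n, n))
b = -q * dx**2 / (k * A)
M[0, 0], b[0] = 1.0, 300.0                    # Dirichlet: heat sink at 300 K
for i in range(1, n - 1):
    M[i, i - 1], M[i, i], M[i, i + 1] = 1.0, -2.0, 1.0
M[-1, -1], M[-1, -2], b[-1] = 1.0, -1.0, 0.0  # Neumann: insulated tip
T = np.linalg.solve(M, b)
print(f"peak temperature: {T.max():.0f} K at x = {x[T.argmax()]*1e6:.1f} um")
```

    Even this crude model reproduces the qualitative point of the paper: tens of microwatts absorbed in a thin, poorly conducting wire can raise the tip temperature by hundreds of kelvin.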

    All-sky search for long-duration gravitational wave transients with initial LIGO

    We present the results of a search for long-duration gravitational wave transients in two sets of data collected by the LIGO Hanford and LIGO Livingston detectors, between November 5, 2005 and September 30, 2007, and between July 7, 2009 and October 20, 2010, with total observational times of 283.0 days and 132.9 days, respectively. The search targets gravitational wave transients of duration 10-500 s in a frequency band of 40-1000 Hz, with minimal assumptions about the signal waveform, polarization, source direction, or time of occurrence. All candidate triggers were consistent with the expected background; as a result we set 90% confidence upper limits on the rate of long-duration gravitational wave transients for different types of gravitational wave signals. For signals from black hole accretion disk instabilities, we set upper limits on the source rate density between 3.4×10⁻⁵ and 9.4×10⁻⁴ Mpc⁻³ yr⁻¹ at 90% confidence. These are the first results from an all-sky search for unmodeled long-duration transient gravitational waves. © 2016 American Physical Society.
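
    For orientation only: with zero surviving candidates and negligible background, a textbook Poisson argument gives the scale of such a limit. The search's actual limits fold in waveform-dependent detection efficiency, and the sensitive volume used below is an assumed placeholder:

```python
import math

# With zero observed events, the Poisson mean is constrained by
# exp(-mu) = 1 - 0.90, i.e. mu_90 = -ln(0.10) ~= 2.30 expected events.
# Dividing by the observation time gives a rate limit; dividing further
# by the sensitive volume gives a rate density.
mu_90 = -math.log(1 - 0.90)          # ~2.30 events at 90% CL
T_obs = (283.0 + 132.9) / 365.25     # total observation time in years
rate_limit = mu_90 / T_obs           # events per year
V_eff = 2.5e3                        # assumed sensitive volume, Mpc^3
print(f"rate < {rate_limit:.1f} / yr;  density < {rate_limit / V_eff:.1e} / Mpc^3 / yr")
```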


    Search for the associated production of the Higgs boson with a top-quark pair

    A search for the standard model Higgs boson produced in association with a top-quark pair (tt̄H) is presented, using data samples corresponding to integrated luminosities of up to 5.1 fb⁻¹ and 19.7 fb⁻¹ collected in pp collisions at center-of-mass energies of 7 TeV and 8 TeV, respectively. The search is based on the following signatures of the Higgs boson decay: H → hadrons, H → photons, and H → leptons. The results are characterized by an observed tt̄H signal strength relative to the standard model cross section, μ = σ/σ_SM, under the assumption that the Higgs boson decays as expected in the standard model. The best fit value is μ = 2.8 ± 1.0 for a Higgs boson mass of 125.6 GeV.
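
    A back-of-the-envelope reading of the quoted result, treating the uncertainty as Gaussian (the actual analysis uses a profile-likelihood fit, so this is only approximate):

```python
# Gaussian reading of the quoted best fit mu = 2.8 +/- 1.0.
mu_hat, sigma_mu = 2.8, 1.0
print((mu_hat - 1.0) / sigma_mu)  # ~1.8 sigma above the SM expectation mu = 1
print(mu_hat / sigma_mu)          # ~2.8 sigma above the no-signal hypothesis
```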

    Measurement of the azimuthal anisotropy of Υ(1S) and Υ(2S) mesons in PbPb collisions at √sNN = 5.02 TeV

    The second-order Fourier coefficients (v₂) characterizing the azimuthal distributions of Υ(1S) and Υ(2S) mesons produced in PbPb collisions at √sNN = 5.02 TeV are studied. The Υ mesons are reconstructed in their dimuon decay channel, as measured by the CMS detector. The collected data set corresponds to an integrated luminosity of 1.7 nb⁻¹. The scalar product method is used to extract the v₂ coefficients of the azimuthal distributions. Results are reported for the rapidity range |y| < 2.4, in the transverse momentum interval 0 < pT < 50 GeV/c, and in three centrality ranges of 10–30%, 30–50% and 50–90%. In contrast to the J/ψ mesons, the measured v₂ values for the Υ mesons are found to be consistent with zero.
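
    A simplified stand-in for the extraction (the paper's scalar product method additionally weights by the Q-vector magnitude and corrects for event-plane resolution): for azimuthal angles measured relative to the second-order event plane, v₂ is the average of cos 2Δφ. A toy check that the estimator recovers a known input:

```python
import numpy as np

rng = np.random.default_rng(0)
v2_true, n = 0.05, 200_000

# Toy sample: draw dphi = phi - Psi2 from dN/dphi ~ 1 + 2*v2*cos(2*dphi)
# via acceptance-rejection sampling.
dphi = rng.uniform(-np.pi, np.pi, 4 * n)
keep = rng.uniform(0, 1 + 2 * v2_true, dphi.size) < 1 + 2 * v2_true * np.cos(2 * dphi)
dphi = dphi[keep][:n]

# The second Fourier coefficient of the distribution is <cos(2*dphi)>.
v2_est = np.cos(2 * dphi).mean()
print(f"v2 estimate: {v2_est:.4f} (true {v2_true})")  # consistent with input
```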

    Measurement of prompt D⁰ and D̄⁰ meson azimuthal anisotropy and search for strong electric fields in PbPb collisions at √sNN = 5.02 TeV

    The strong Coulomb field created in ultrarelativistic heavy ion collisions is expected to produce a rapidity-dependent difference (Δv₂) in the second Fourier coefficient of the azimuthal distribution (elliptic flow, v₂) between D⁰ (ūc) and D̄⁰ (uc̄) mesons. Motivated by the search for evidence of this field, the CMS detector at the LHC is used to perform the first measurement of Δv₂. The rapidity-averaged value is found to be ⟨Δv₂⟩ = 0.001 ± 0.001 (stat) ± 0.003 (syst) in PbPb collisions at √sNN = 5.02 TeV. In addition, the influence of the collision geometry is explored by measuring the D⁰ and D̄⁰ meson v₂ and triangular flow coefficient (v₃) as functions of rapidity, transverse momentum (pT), and event centrality (a measure of the overlap of the two Pb nuclei). A clear centrality dependence of the prompt D⁰ meson v₂ values is observed, while the v₃ is largely independent of centrality. These trends are consistent with expectations of flow driven by the initial-state geometry. © 2021 The Author. Published by Elsevier B.V. This is an open access article under the CC BY license.
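
    A quick consistency check of the quoted rapidity-averaged value, combining the statistical and systematic uncertainties in quadrature (our simplification; the paper may treat the two sources differently):

```python
import math

dv2, stat, syst = 0.001, 0.001, 0.003
sigma = math.hypot(stat, syst)   # quadrature sum, ~0.0032
print(f"Delta v2 = {dv2} +/- {sigma:.4f} -> {dv2 / sigma:.2f} sigma from zero")
```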

    Performance of reconstruction and identification of τ leptons decaying to hadrons and ντ in pp collisions at √s=13 TeV

    The algorithm developed by the CMS Collaboration to reconstruct and identify τ leptons produced in proton-proton collisions at √s=7 and 8 TeV, via their decays to hadrons and a neutrino, has been significantly improved. The changes include a revised reconstruction of π⁰ candidates, and improvements in multivariate discriminants to separate τ leptons from jets and electrons. The algorithm is extended to reconstruct τ leptons in highly Lorentz-boosted pair production, and in the high-level trigger. The performance of the algorithm is studied using proton-proton collisions recorded during 2016 at √s=13 TeV, corresponding to an integrated luminosity of 35.9 fb⁻¹. The performance is evaluated in terms of the efficiency for a genuine τ lepton to pass the identification criteria and of the probabilities for jets, electrons, and muons to be misidentified as τ leptons. The results are found to be very close to those expected from Monte Carlo simulation.
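
    Efficiencies and misidentification probabilities of this kind are ratios of counts; a standard way to attach an uncertainty to such a ratio is the binomial Clopper-Pearson interval. A generic sketch with made-up counts, not the paper's procedure:

```python
from scipy.stats import beta

def clopper_pearson(k, n, cl=0.68):
    """Binomial (Clopper-Pearson) confidence interval for an efficiency
    estimated as k passing out of n total."""
    alpha = 1 - cl
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Usage with hypothetical counts: 480 genuine tau leptons pass out of 800.
print(clopper_pearson(480, 800))   # efficiency ~0.60 with ~0.017 half-width
```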