
    Incentive-Compatible Critical Values

    Statistical hypothesis tests are a cornerstone of scientific research. The tests are informative when their size is properly controlled, so that the frequency of rejecting true null hypotheses (type I error) stays below a prespecified nominal level. Publication bias inflates test sizes, however. Since scientists can typically only publish results that reject the null hypothesis, they have an incentive to continue conducting studies until attaining rejection. Such p-hacking takes many forms, from collecting additional data to examining multiple regression specifications, all in search of statistical significance. The process inflates test sizes above their nominal levels because the critical values used to determine rejection assume that test statistics are constructed from a single study, abstracting from p-hacking. This paper addresses the problem by constructing critical values that are compatible with scientists' behavior given their incentives. We assume that researchers conduct studies until finding a test statistic that exceeds the critical value, or until the benefit from conducting an extra study falls below the cost. We then solve for the incentive-compatible critical value (ICCV). When the ICCV is used to determine rejection, readers can be confident that size is controlled at the desired significance level and that the researcher's response to the incentives delineated by the critical value is accounted for. Since they allow researchers to search for significance among multiple studies, ICCVs are larger than classical critical values. Yet, for a broad range of researcher behaviors and beliefs, ICCVs lie in a fairly narrow range.
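    The size-inflation mechanism and the resulting cutoff are easy to see in a stylized special case. Below is a minimal simulation sketch, assuming each researcher runs up to a fixed number m of independent normal studies under the null and reports the largest statistic; the paper's actual cost-benefit stopping rule is richer, and the parameter values here are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import norm

# Stylized p-hacking model (our simplification, not the paper's full
# stopping rule): each researcher runs up to m independent studies under
# the null and reports the largest one-sided z-statistic.
rng = np.random.default_rng(0)
m, alpha, n_sim = 5, 0.05, 200_000
best = rng.standard_normal((n_sim, m)).max(axis=1)

classical = norm.ppf(1 - alpha)                    # 1.645 for a 5% test
print("size at classical cutoff:", (best > classical).mean())  # ~0.23

# An incentive-compatible cutoff makes the *maximum* of m draws exceed it
# with probability alpha; for iid normals it has a closed form:
# P(max > c) = 1 - Phi(c)^m = alpha  =>  c = Phi^{-1}((1 - alpha)^(1/m)).
iccv = norm.ppf((1 - alpha) ** (1 / m))
print("cutoff for m = 5:", round(float(iccv), 3))  # ~2.32
print("size at that cutoff:", (best > iccv).mean())  # ~0.05
```

    In this toy version the cutoff grows with the number of attempted studies, but slowly: going from m = 1 to m = 5 moves the one-sided 5% critical value only from about 1.64 to about 2.32, consistent with the abstract's observation that ICCVs lie in a fairly narrow range.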

    The Association of Large-Scale Climate Variability and Teleconnections on Wind Energy Resource over Europe and its Intermittency

    In times of increasing importance of wind power in the world's energy mix, this study focuses on a better understanding of the influences of large-scale climate variability on the wind power resource over Europe. The impacts of the North Atlantic Oscillation (NAO), the Arctic Oscillation (AO), the El Niño Southern Oscillation (ENSO) and the Atlantic Multidecadal Oscillation (AMO) are investigated in terms of their correlation with wind power density (WPD) at 80 m hub height. These WPDs are calculated from the MERRA reanalysis data set covering 31 years. Not surprisingly, AO and NAO are highly correlated with the WPD time series. This correlation is also found in the first principal component of a Principal Component Analysis (PCA) of WPD over Europe, which explains 14% of the overall variation. Further, cross-correlation analyses indicate that the strongest associated variations occur with AO/NAO leading WPD by at most one day. Furthermore, the impact of high and low phases of the respective oscillations is assessed to provide a more comprehensive picture. The fraction of WPD during high versus low AO/NAO phases increases considerably for northern Europe, whereas the opposite pattern is observed for southern Europe. Similar results are obtained by calculating the energy output of three hypothetical wind turbines at every grid point over Europe. We thus identify a high interconnection potential between wind farms for reducing intermittency, one of the primary challenges in wind power generation. In addition, we observe significant correlations between WPD and the AMO.
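    As a concrete illustration of the quantities involved, the sketch below computes wind power density from a daily wind speed series and correlates it with an oscillation index at several lags. The data are synthetic stand-ins (the study uses 31 years of MERRA reanalysis winds at 80 m); the 0.5·ρ·v³ formula is the standard definition of WPD per unit rotor area.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 365 * 31                        # 31 years of daily values
nao = rng.standard_normal(n_days)        # synthetic stand-in for the NAO index

rho = 1.225                              # air density, kg/m^3
# Toy 80 m wind speed with an NAO-linked component, m/s (simulated).
v80 = np.clip(8 + 1.5 * nao + rng.standard_normal(n_days), 0, None)
wpd = 0.5 * rho * v80**3                 # wind power density, W/m^2

# Lagged cross-correlation; positive lag means the index leads WPD.
for lag in range(4):
    r = np.corrcoef(nao[:n_days - lag], wpd[lag:])[0, 1]
    print(f"NAO leading WPD by {lag} d: r = {r:.2f}")
```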

    Experimental realization of an ideal Floquet disordered system

    The atomic quantum kicked rotor is an outstanding "quantum simulator" for the exploration of transport in disordered quantum systems. Here we study experimentally the phase-shifted quantum kicked rotor, which we show to display properties close to those of an ideal disordered quantum system, opening new windows into the study of Anderson physics.
    Comment: 10 pages, 7 figures, submitted to the New Journal of Physics focus issue on Quantum Transport with Ultracold Atoms
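    For readers unfamiliar with the model, a split-step simulation of a phase-shifted kicked rotor takes only a few lines: free rotation is diagonal in the momentum basis, the kick is diagonal in the angle basis, and an FFT switches between the two. The parameters and the kick-phase sequence below are illustrative choices on our part, not the experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_kicks = 2048, 300
kbar, k = 2.89, 5.0                          # illustrative hbar_eff and kick strength

theta = 2 * np.pi * np.arange(N) / N         # angle grid on [0, 2*pi)
m = np.fft.fftfreq(N, d=1.0 / N)             # integer momentum index
free = np.exp(-1j * kbar * m**2 / 2)         # free rotation, diagonal in momentum

psi = np.ones(N, dtype=complex) / np.sqrt(N) # initial p = 0 plane wave
phases = 2 * np.pi * rng.random(n_kicks)     # illustrative kick-phase sequence

for phi in phases:
    psi *= np.exp(-1j * k * np.cos(theta + phi))   # kick, diagonal in angle
    psi_m = np.fft.fft(psi) / np.sqrt(N)           # angle -> momentum (norm-preserving)
    psi = np.fft.ifft(psi_m * free) * np.sqrt(N)   # free step, back to angle

psi_m = np.fft.fft(psi) / np.sqrt(N)
p2 = float(np.sum(np.abs(psi_m)**2 * (kbar * m)**2))
print(f"<p^2> after {n_kicks} kicks: {p2:.1f}")
```

    Dynamical (Anderson-like) localization shows up as saturation of the kinetic energy <p²> with kick number, whereas a classical rotor would keep heating diffusively.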

    Proposal of an Approach to Automate the Generation of a Transitic System's Observer and Decision Support using MDE

    Short-term decision support for manufacturing systems is generally difficult because of the initial data needed by the calculations. Previous work suggested the use of a discrete-event observer to retrieve these data from a virtual copy of the workshop, kept as up to date as possible at all times. This proposal opened many perspectives, but suffers from the difficulty of generating a decision support tool that combines decision calculations and observation. Meanwhile, interesting developments have appeared in the literature on the automatic generation of logic control programs for these same manufacturing systems, especially using Model-Driven Engineering (MDE). This paper suggests using MDE to generate the logic control programs, the observer and the decision support tool at the same time, based on the same data collected by the designer of the system. The last section presents the evolution needed in the initial data structure, as well as the suggested design flow to automate the generation.

    Data processing from manufacturing systems to decision support systems: propositions of alternative design approaches

    With the increase of flexibility and production rates, the complexity of manufacturing systems has reached a point where the operator in charge of the production activity control of the system is no longer able to efficiently forecast the impact of his decisions on global performance. As a result, more and more Decision Support Systems (DSS) are being developed, both in the literature and in industrial applications. These DSS have one common point: the initialization of their forecasting functionality is based on data coming from the manufacturing system. This feature is fundamental, as it has a direct impact on the accuracy of the forecasts. Considering the variety of input and output data, data processing is necessary to adapt the data coming from the manufacturing system. The aim of this paper is to present several design approaches enabling the integrator of a new manufacturing system to speed up the implementation, with the idea of automating and systematizing as many design phases as possible thanks to model-driven engineering.

    Histogram-based models on non-thin section chest CT predict invasiveness of primary lung adenocarcinoma subsolid nodules.

    109 pathologically proven subsolid nodules (SSN) were segmented by 2 readers on non-thin-section chest CT with lung nodule analysis software, followed by extraction of CT attenuation histogram and geometric features. Functional data analysis of the histograms provided data-driven features (FPC1, FPC2, FPC3) used in further model building. Nodules were classified as pre-invasive (P1, atypical adenomatous hyperplasia and adenocarcinoma in situ), minimally invasive (P2) and invasive adenocarcinomas (P3). P1 and P2 were grouped together (T1) versus P3 (T2). Various combinations of features were compared in predictive models for binary nodule classification (T1/T2), using multiple logistic regression and non-linear classifiers. Area under the ROC curve (AUC) was used as the diagnostic performance criterion. Inter-reader variability was assessed using Cohen's kappa and the intra-class correlation coefficient (ICC). Three models predicting invasiveness of SSN were selected based on AUC. The first model included the 87.5th percentile of CT lesion attenuation (Q.875), interquartile range (IQR), volume and maximum/minimum diameter ratio (AUC: 0.89, 95% CI: [0.75, 1]). The second model included FPC1, volume and diameter ratio (AUC: 0.91, 95% CI: [0.77, 1]). The third model included FPC1, FPC2 and volume (AUC: 0.89, 95% CI: [0.73, 1]). Inter-reader variability was excellent (kappa: 0.95, ICC: 0.98). Parsimonious models using histogram and geometric features differentiated invasive from minimally invasive/pre-invasive SSN with good predictive performance on non-thin-section CT.
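    The modeling step is standard enough to sketch. Below, a logistic regression on the four features of the first reported model is scored with cross-validated AUC; all feature values are simulated stand-ins, since the study's real inputs come from the segmented CT histograms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 109                              # cohort size from the study
X = np.column_stack([
    rng.normal(-550, 120, n),        # Q.875: 87.5th percentile attenuation, HU (simulated)
    rng.normal(180, 50, n),          # IQR of attenuation, HU (simulated)
    rng.lognormal(6.5, 0.8, n),      # nodule volume, mm^3 (simulated)
    rng.normal(1.4, 0.3, n),         # max/min diameter ratio (simulated)
])
# Toy ground truth: denser, larger nodules are more likely invasive (T2).
logit = 0.012 * (X[:, 0] + 550) + 0.0015 * (X[:, 2] - 700)
y = (logit + rng.normal(0, 1, n) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
prob = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, prob):.2f}")
```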