
    Comparison of continuous in situ CO2 observations at Jungfraujoch using two different measurement techniques

    Since 2004, atmospheric carbon dioxide (CO2) has been measured at the High Altitude Research Station Jungfraujoch by the division of Climate and Environmental Physics at the University of Bern (KUP) using a nondispersive infrared gas analyzer (NDIR) in combination with a paramagnetic O2 analyzer. In January 2010, CO2 measurements based on cavity ring-down spectroscopy (CRDS) were added by the Swiss Federal Laboratories for Materials Science and Technology (Empa) as part of the Swiss National Air Pollution Monitoring Network. To ensure a smooth transition – a prerequisite when merging two data sets, e.g., for trend determinations – the two measurement systems have been run in parallel for several years. Such a long-term intercomparison also allows potential offsets between the two data sets to be identified and information to be collected about the compatibility of the two systems on different time scales. Good agreement is observed for the seasonality and the short-term variations and, to a lesser extent (mainly because of the short common period), for trend calculations. However, the comparison reveals some issues related to the stability of the calibration gases of the KUP system and their assigned CO2 mole fractions. Adopting an improved calibration strategy based on standard gas determinations leads to better agreement between the two data sets. After excluding periods with technical problems and faulty calibration gas cylinders, the average hourly difference (CRDS – NDIR) between the two systems is −0.03 ppm ± 0.25 ppm. Although the mean difference is in line with the ±0.1 ppm compatibility goal of the World Meteorological Organization (WMO), the standard deviation is still too high. A significant part of this uncertainty originates from the need to switch the KUP system frequently (for 6 min in every 12 min) from ambient air to a working gas in order to correct short-term variations of the O2 measurement system. Because additional time must be allowed for signal stabilization after each switch, the KUP system achieves an effective data coverage of only one-sixth, whereas the Empa system has nearly complete data coverage. Different internal volumes and flow rates may also contribute to the observed differences.
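
    The quoted one-sixth effective coverage follows from the KUP duty cycle: of every 12 min, 6 min go to the working gas, and if roughly 4 min of the remaining ambient segment are discarded for signal stabilization, only about 2 min per cycle remain usable. The Python sketch below works through that arithmetic and shows one generic way to form hourly CRDS – NDIR differences; the 4 min stabilization window and all names are illustrative assumptions, not details taken from the study.

```python
import pandas as pd

# Duty-cycle arithmetic behind the one-sixth effective data coverage of the
# KUP system.  The stabilization time is an assumed, illustrative value.
cycle_min = 12           # full measurement cycle
working_gas_min = 6      # time per cycle spent measuring the working gas
stabilization_min = 4    # assumed time discarded after switching back to air
usable_min = cycle_min - working_gas_min - stabilization_min
coverage = usable_min / cycle_min        # 2 / 12 ~ 0.17, i.e. one-sixth

def hourly_difference(crds: pd.Series, ndir: pd.Series) -> pd.Series:
    """Hourly mean CRDS - NDIR difference from two timestamped CO2 series
    (ppm), e.g. after periods with known technical problems were removed."""
    return (crds.resample("60min").mean() - ndir.resample("60min").mean()).dropna()

# Usage (hypothetical series with a DatetimeIndex):
# diff = hourly_difference(crds_series, ndir_series)
# print(diff.mean(), diff.std())   # compare against -0.03 ppm +/- 0.25 ppm
```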

    Precedence probability and prediction intervals

    Precedence tests are simple yet useful nonparametric tests based on two specified order statistics from two independent random samples or, equivalently, on the count of the number of observations from one of the samples preceding some order statistic of the other sample. The probability that an order statistic from the second sample exceeds an order statistic from the first sample is termed the precedence probability. When the distributions are the same, this probability can be calculated exactly, without any specific knowledge of the underlying common continuous distribution. This fact can be utilized to set up nonparametric prediction intervals in a number of situations. In this paper, prediction intervals are considered for the number of second-sample observations that exceed a particular order statistic of the first sample. To aid the user, tables are provided for small sample sizes, where exact calculations are most necessary. The same tables can be used to implement a precedence test for small sample sizes.
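
    When both samples come from the same continuous distribution, the number of second-sample observations falling below the r-th order statistic of the first sample follows a negative hypergeometric law, and the precedence probability is a finite sum of such terms. The Python sketch below implements this standard distribution-free calculation as a generic illustration; the function names are invented here, and the code does not come from the paper.

```python
from math import comb

def count_pmf(m: int, n: int, r: int, j: int) -> float:
    """P(exactly j of the n second-sample values fall below X_(r)) when both
    samples, of sizes m and n, come from the same continuous distribution."""
    return comb(r - 1 + j, j) * comb(m - r + n - j, n - j) / comb(m + n, n)

def precedence_probability(m: int, n: int, r: int, s: int) -> float:
    """Precedence probability P(Y_(s) > X_(r)): the s-th order statistic of
    the second sample exceeds the r-th order statistic of the first sample,
    i.e. fewer than s second-sample values precede X_(r)."""
    return sum(count_pmf(m, n, r, j) for j in range(s))

# Example: with equal sample sizes and r = s, symmetry between the two
# samples gives a precedence probability of 1/2 (up to rounding).
print(precedence_probability(10, 10, 5, 5))
```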

    Precedence tests and confidence bounds for complete data: an overview and some results

    An overview of some nonparametric procedures based on precedence (or exceedance) statistics is given. The procedures include both tests and confidence intervals. In particular, the construction of some simple distribution-free confidence bounds for the location difference of two distributions with the same shape is considered and some of their properties are derived. The asymptotic relative efficiency of an asymptotic form of the corresponding test relative to Wilcoxon's two-sample rank-sum test and the two-sample Student's t-test is given for various cases. Some K-sample problems in which precedence-type tests are useful are discussed, along with a review of the literature.

    Precedence probability, prediction interval and a combinatorial identity

    Precedence tests are simple yet useful nonparametric tests based on two specified order statistics from independent random samples or, equivalently, on the count of the number of observations from one of the samples preceding some order statistic of the other sample. The probability that an order statistic from the second sample exceeds an order statistic from the first sample is termed the precedence probability. When the distributions are the same, this probability can be calculated exactly, without any specific knowledge of the underlying common continuous distribution. This fact can be utilized to set up nonparametric prediction intervals in a number of situations. In this paper, prediction intervals are considered for the number of second-sample observations that exceed a particular order statistic of the first sample. To aid the user, tables are provided for small sample sizes, where exact calculations are most necessary. The same tables can be used to implement a precedence test for small sample sizes. Finally, a combinatorial identity is proved.
    Keywords: Distribution-free; Extremes; Exceedance and Precedence; Nonparametric; Order statistics

    Responsible Accounting for Stakeholders

    Through a critique of existing financial theory underlying current accounting practices, and a reapplication of this theory to a broad group of stakeholders, this paper lays a normative foundation for a revised perspective on the responsibility of the public accounting profession. Specifically, we argue that the profession should embrace the development of standards for reporting information important to a broader group of stakeholders than just investors and creditors. The FASB has recently moved in the opposite direction. Nonetheless, an institution around accounting for stakeholders continues to grow, backed by a groundswell of support from many sources. Based on institutional theory, we predict that this institution and the forces supporting it will cause changes in the public accounting profession, even if through coercion. We also provide examples of stakeholder accounting, building from the premise that a primary responsibility of accounting is to provide information that addresses the risk management needs of stakeholders.

    A generalization of moderated statistics to data adaptive semiparametric estimation in high-dimensional biology

    The widespread availability of high-dimensional biological data has made the simultaneous screening of numerous biological characteristics a central statistical problem in computational biology. While the dimensionality of such datasets continues to increase, the problem of teasing out the effects of biomarkers in studies measuring baseline confounders while avoiding model misspecification remains only partially addressed. Efficient estimators constructed from data adaptive estimates of the data-generating distribution provide an avenue for avoiding model misspecification; however, in the context of high-dimensional problems requiring simultaneous estimation of numerous parameters, standard variance estimators have proven unstable, resulting in unreliable Type-I error control under standard multiple testing corrections. We present a general approach for applying empirical Bayes shrinkage to asymptotically linear estimators of parameters defined in the nonparametric model. The proposal applies existing shrinkage estimators to the estimated variance of the influence function, allowing for increased inferential stability in high-dimensional settings. A methodology for nonparametric variable importance analysis for use with high-dimensional biological datasets with modest sample sizes is introduced, and the proposed technique is demonstrated to be robust in small samples even when relying on data adaptive estimators that eschew parametric forms. Use of the proposed variance moderation strategy in constructing stabilized variable importance measures of biomarkers is demonstrated by application to an observational study of occupational exposure. The result is a data adaptive approach for robustly uncovering stable associations in high-dimensional data with limited sample sizes.
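
    The central idea, applying an existing empirical Bayes variance shrinkage scheme to the per-parameter variances of the estimated influence functions, can be sketched compactly. The Python sketch below uses a limma-style shrinkage formula with the prior degrees of freedom and prior variance supplied directly (in practice they would be estimated from the ensemble of variances); all names, default values, and the simulated data are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import stats

def moderated_variances(s2, d, d0, s0_sq):
    """Limma-style shrinkage: pull each raw variance s2 (with d degrees of
    freedom) toward a prior variance s0_sq carrying d0 prior degrees of
    freedom.  Here d0 and s0_sq are supplied; in practice they would be
    estimated empirically from the full set of variances."""
    return (d0 * s0_sq + d * s2) / (d0 + d)

def moderated_tests(estimates, s2, n, d0=3.0, s0_sq=1.0):
    """Moderated t-type statistics and two-sided p-values for many parameters,
    where s2 holds the estimated variances of the influence functions, so each
    estimate has standard error sqrt(s2 / n)."""
    d = n - 1
    s2_tilde = moderated_variances(s2, d, d0, s0_sq)
    t = estimates / np.sqrt(s2_tilde / n)
    p = 2 * stats.t.sf(np.abs(t), df=d + d0)
    return t, p

# Toy example: 1,000 parameters, n = 30 observations each, null data.
rng = np.random.default_rng(0)
n, n_params = 30, 1000
ic = rng.normal(size=(n, n_params))   # stand-in for estimated influence functions
estimates = ic.mean(axis=0)           # asymptotically linear point estimates
s2_raw = ic.var(axis=0, ddof=1)       # raw influence-function variance estimates
t_stats, p_vals = moderated_tests(estimates, s2_raw, n)
```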