    Visualization of the decision criteria in testing statistical hypotheses on programming in R (Rstudio)

    Purpose: The case study addresses the development and justification of approaches to visualizing decision criteria in problems of testing statistical hypotheses about a given distribution law (specifically, checking the normality of the distribution). Design/Methodology/Approach: The study describes the construction of a graphical model that visualizes the application of criteria when testing statistical hypotheses for compliance with a given distribution law. The problem is solved in the statistical analysis language R in the RStudio environment. Using the standard approach and focusing only on the P-value relative to the chosen significance level, the researcher cannot take the error of the second kind into account. However, by analyzing a graphical representation of the behavior of the sample under study, one can judge whether the obtained P-value corresponds to the real assumption that the sample follows the given distribution law. Findings: The research case proposes a new approach to testing statistical hypotheses on the compliance of a sample with a given distribution law; it uses visualization tools and allows a researcher with even a little experience in R to solve applied problems. Practical implications: The approach does not require in-depth knowledge of mathematics or programming and can be used by experts in various fields to successfully solve applied problems. The article contains working scripts in R and the graphical illustrations obtained with them. Originality/Value: The main contribution of this study is to expand the variety of methods for testing statistical hypotheses. The proposed method extends the set of statistical problems that can be solved successfully by means of R.
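
    A minimal sketch in R of the idea described above, pairing a normality-test P-value with graphical diagnostics; it uses the Shapiro-Wilk test, a histogram, and a Q-Q plot rather than reproducing the article's own scripts, and the simulated sample is a placeholder for the data under study.

        # Minimal sketch: pair the normality-test P-value with graphical diagnostics.
        # The simulated sample below is a placeholder for the data under study.
        set.seed(42)
        x <- rnorm(100, mean = 5, sd = 2)

        test <- shapiro.test(x)                     # H0: the sample is normally distributed
        cat("Shapiro-Wilk P-value:", round(test$p.value, 4), "\n")

        m <- mean(x); s <- sd(x)
        op <- par(mfrow = c(1, 2))
        hist(x, freq = FALSE, main = "Histogram with fitted normal", xlab = "x")
        curve(dnorm(x, mean = m, sd = s), add = TRUE, lwd = 2)   # fitted normal density
        qqnorm(x); qqline(x, lwd = 2)               # points near the line support normality
        par(op)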

    Assessing the Utility of Early Warning Systems for Detecting Failures in Major Wind Turbine Components

    This paper provides enhancements to normal behaviour models for monitoring major wind turbine components and a methodology to assess monitoring system reliability based on SCADA data and decision analysis. Typically, these monitoring systems are based on fully data-driven regression of damage-sensitive parameters. Firstly, the problem of selecting suitable inputs for building a temperature model of operating main bearings is addressed, based on a sensitivity study. This shows that the dimensionality of the dataset can be greatly reduced while reaching sufficient prediction accuracy. Subsequently, performance quantities are derived from a statistical description of the prediction error and used as input to a decision analysis. Two distinct intervention policies, replacement and repair, are compared in terms of expected utility. The aim of this study is to provide a method to quantify the benefit of implementing the online system from an economic risk perspective. Under the realistic hypotheses made, the numerical example shows, for instance, that replacement is not advantageous compared to repair.
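
    A minimal sketch in R of the two ingredients described above: a data-driven normal-behaviour regression of a damage-sensitive temperature and a toy expected-cost comparison of repair versus replacement. The SCADA-like data, the alarm threshold, the detection probability, and the cost figures are all synthetic illustrations, not values from the paper.

        # Minimal sketch: normal-behaviour model plus a toy intervention-policy comparison.
        # All data and cost figures below are synthetic and purely illustrative.
        set.seed(1)
        n <- 1000
        scada <- data.frame(
          power   = runif(n, 0, 2000),    # kW
          ambient = runif(n, -5, 30),     # deg C
          rotor   = runif(n, 5, 15)       # rpm
        )
        scada$bearing_temp <- 20 + 0.01 * scada$power + 0.8 * scada$ambient +
          1.5 * scada$rotor + rnorm(n, sd = 1.5)

        # Normal-behaviour model: regression of the monitored temperature on SCADA inputs
        nbm      <- lm(bearing_temp ~ power + ambient + rotor, data = scada)
        resid_sd <- sd(residuals(nbm))

        # Alarm rule: flag an observation whose residual exceeds 3 training standard deviations
        new_obs  <- scada[1, ]
        observed <- predict(nbm, newdata = new_obs) + 6          # simulated overheating
        is_alarm <- (observed - predict(nbm, newdata = new_obs)) > 3 * resid_sd

        # Toy decision analysis: expected cost of the two intervention policies
        p_detect     <- 0.9                       # assumed in-time detection probability
        cost_repair  <- 50e3
        cost_replace <- 300e3
        cost_failure <- 600e3
        eu_repair  <- p_detect * cost_repair  + (1 - p_detect) * cost_failure
        eu_replace <- p_detect * cost_replace + (1 - p_detect) * cost_failure
        cat("Alarm raised:", is_alarm, "- expected cost, repair:", eu_repair,
            "replace:", eu_replace, "\n")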

    On-the-fly confluence detection for statistical model checking (extended version)

    Statistical model checking is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can only provide sound results if the underlying model is a stochastic process. In verification, however, models are usually variations of nondeterministic transition systems. The notion of confluence allows the reduction of such transition systems in classical model checking by removing spurious nondeterministic choices. In this paper, we show that confluence can be adapted to detect and discard such choices on-the-fly during simulation, thus extending the applicability of statistical model checking to a subclass of Markov decision processes. In contrast to previous approaches that use partial order reduction, the confluence-based technique can handle additional kinds of nondeterminism. In particular, it is not restricted to interleavings. We evaluate our approach, which is implemented as part of the modes simulator for the Modest modelling language, on a set of examples that highlight its strengths and limitations and show the improvements compared to the partial-order-based method.
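
    A minimal sketch in R of the basic statistical-model-checking idea only: Monte Carlo simulation of a small, fully probabilistic discrete-time Markov chain with a normal-approximation error bound on the estimated reachability probability. It does not implement confluence detection, nondeterminism resolution, or the modes simulator; the chain and its probabilities are invented for illustration.

        # Minimal sketch: statistical model checking of a tiny discrete-time Markov chain.
        # States: 1 = start, 2 = intermediate, 3 = goal (absorbing), 4 = fail (absorbing).
        P <- rbind(c(0.0, 0.8, 0.1, 0.1),
                   c(0.0, 0.0, 0.7, 0.3),
                   c(0.0, 0.0, 1.0, 0.0),
                   c(0.0, 0.0, 0.0, 1.0))

        simulate_run <- function(max_steps = 50) {
          s <- 1
          for (t in seq_len(max_steps)) {
            s <- sample(1:4, size = 1, prob = P[s, ])
            if (s == 3) return(TRUE)    # goal reached
            if (s == 4) return(FALSE)   # failure state, absorbing
          }
          FALSE
        }

        set.seed(7)
        n_runs  <- 10000
        hits    <- replicate(n_runs, simulate_run())
        p_hat   <- mean(hits)
        half_ci <- 1.96 * sqrt(p_hat * (1 - p_hat) / n_runs)   # ~95% normal-approximation bound
        cat(sprintf("P(reach goal) ~ %.3f +/- %.3f\n", p_hat, half_ci))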

    The Dynamics of Demand and Supply of Electricity in Nigeria

    This paper presents an empirical analysis of the demand and supply of electricity in Nigeria. The analysis was performed using annual time series data for the period 1970 to 2012. For this purpose, we estimated the long-run demand and supply equations for electricity using the reduced form regression method (RFRM) and the vector error correction method (VECM) approach. Our analysis revealed that the choice of statistical model is determined by the theoretical modeling requirements rather than by the simplified reduced-form regression in the simultaneous equation system that satisfies the statistical requirements. The results from the estimated model, in terms of the individual parameters in the system, revealed that demand is elastic with respect to both price and income. As such, increasing the electricity price in Nigeria would lead to a reduction in revenue for the Power Holding Company of Nigeria (PHCN). The study also shows that PHCN is currently experiencing diseconomies of scale as a result of inefficiency, an inability to innovate, and a lack of the knowledge needed to expand output so as to reduce average cost. Similarly, the paper posits that the current reform in the electricity sector would only lead to an increase in the average unit cost and hence in the price of electricity. We therefore recommend that, for the Nigerian electricity sector to be viable and to meet the supply and demand needs of the private, commercial, and industrial sectors of the economy, the government at all levels and policy and decision makers must take stringent measures to curtail inefficiency and the lack of manpower, and must innovate so as to reduce wastage to its lowest ebb. This will not only bolster the growth of the Nigerian economy but will also be a source of revenue for the government for its infrastructural development needs. Keywords: Electricity demand and supply, Annual data, Simultaneous equation method and Vector error correction method (VECM).
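
    A minimal sketch in R of error-correction estimation on simulated cointegrated series. It uses the single-equation Engle-Granger two-step approach in base R as a stand-in illustration, not the authors' full VECM or reduced-form simultaneous-equation system, and the series and coefficients are synthetic.

        # Minimal sketch: single-equation error-correction model (Engle-Granger two-step)
        # on simulated cointegrated series; the Nigerian data and full VECM are not reproduced here.
        set.seed(123)
        n      <- 200
        income <- cumsum(rnorm(n))                      # simulated I(1) driver (e.g. log income)
        demand <- 0.8 * income + rnorm(n, sd = 0.5)     # long-run relation plus stationary noise

        # Step 1: long-run (cointegrating) regression and its residual, the error-correction term
        long_run <- lm(demand ~ income)
        ect      <- as.numeric(residuals(long_run))

        # Step 2: short-run dynamics, including the lagged error-correction term
        ecm_data <- data.frame(
          d_demand = diff(demand),
          d_income = diff(income),
          ect_lag  = head(ect, -1)
        )
        ecm <- lm(d_demand ~ d_income + ect_lag, data = ecm_data)
        summary(ecm)$coefficients   # a negative, significant ect_lag indicates adjustment to equilibrium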

    Evaluating The Quality Of Cell Counting Measurements Using Experimental Design And Statistical Analysis

    Cell counting measurements are critical in the research, development, and manufacturing of cell therapy products, where they support decision-making in product testing and release. Evaluating cell quantity with accuracy and precision has remained a challenge for many specific applications or purposes. While new measurement platforms have been developed with increased measurement throughput and improved precision, discrepancies between cell counts acquired via various measurement processes are still pervasive. In addition, the industry as a whole has recognized that complex biological properties, as well as operator, equipment, and procedure variations, can greatly affect measurement quality, and that the development of a single reference material or reference measurement is impractical for addressing broad counting needs. Here, we describe an experimental design and statistical analysis approach to evaluate the quality of a given cell counting measurement process. The experimental design uses a dilution series study with replicate samples as well as procedures to reduce pipetting error and operator and temporal bias. The statistical analysis methods generate a set of metrics for evaluating measurement quality in terms of accuracy and precision, where accuracy is based on deviation from proportionality. In this design, a proportional response to the dilution fraction serves as an internal control, where deviation from a proportional response is indicative of a systematic or non-systematic bias in the measurement process. The utility of this approach was demonstrated in the counting of human mesenchymal stem cells (hMSCs) via automated and manual counting methods, where the automated method performed better in terms of both precision and proportionality. These results enabled a transition from the labor-intensive and often imprecise manual counting method to the automated counting method. The experimental design and statistical method presented here are agnostic to the cell type and analytical platform and are thus suitable as a horizontal approach for evaluating the quality of cell counting measurements with respect to method selection, optimization, and validation, thereby facilitating subsequent decision making. We are also working closely with industry partners and Standards Development Organizations (SDOs) to develop cell counting standards using this and other strategies.
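
    A minimal sketch in R of the dilution-series idea described above: a zero-intercept (proportional) fit serves as the internal control, a free-intercept fit probes for systematic bias, and the per-dilution coefficient of variation summarizes precision. The counts, dilution fractions, and nominal concentration are synthetic, and the model comparison shown is one simple way to quantify deviation from proportionality, not the study's exact metrics.

        # Minimal sketch: dilution-series check of proportionality for a cell counting method.
        # Counts below are synthetic; replicate counts at each dilution fraction are assumed.
        set.seed(99)
        dilution  <- rep(c(1.0, 0.75, 0.5, 0.25), each = 5)      # dilution fractions
        true_conc <- 1e6                                          # hypothetical cells/mL at fraction 1
        counts    <- rnorm(length(dilution),
                           mean = true_conc * dilution,
                           sd   = 0.03 * true_conc * dilution)    # ~3% coefficient of variation

        prop_fit <- lm(counts ~ 0 + dilution)     # proportional model (internal control)
        free_fit <- lm(counts ~ dilution)         # allows an intercept (possible systematic bias)

        # Deviation from proportionality: compare the two fits; precision: CV per dilution level
        anova(prop_fit, free_fit)                                 # small P-value flags non-proportionality
        cv <- tapply(counts, dilution, function(x) sd(x) / mean(x))
        round(100 * cv, 1)                                        # percent CV at each dilution fraction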

    A new family of kernels from the beta polynomial kernels with applications in density estimation

    One of the fundamental data analytics tools in statistical estimation is the non-parametric kernel method, which involves producing probability density estimates. The method uses the observations to obtain useful statistical information that aids the practicing statistician in decision making and further statistical investigations. Kernel techniques primarily examine essential characteristics of a data set, and this research aims to introduce new kernel functions that can easily detect inherent properties in any given observations. However, accurate application of the kernel estimator as a data analytics apparatus requires the kernel function and a smoothing parameter that regulates the level of smoothness applied to the estimates. A plethora of kernel functions from different families, and of smoothing parameter selectors, exist in the literature, but no single method is universally acceptable in all situations. Hence, new kernel functions with smoothing parameter selectors continue to be proposed for density estimation. This article proposes a distinct kernel family derived from the beta polynomial kernel family using an exponential progression. The newly proposed kernel family was evaluated with simulated and real-life data. The outcomes clearly indicated that this kernel family competes favorably with other kernel families in density estimation. A further comparison of the numerical results of the new family and the existing beta family revealed that the new family outperformed the classical beta kernel family on simulated and real data examples, using the asymptotic mean integrated squared error (AMISE) as the criterion function. The information obtained from the data analysis in this research could be used for decision making in an organization, especially when human and material resources are to be considered. In addition, kernel functions are vital tools for data analysis and data visualization; hence the newly proposed functions are valuable exploratory tools.
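
    A minimal sketch in R of kernel density estimation with one member of the classical beta polynomial family, the biweight kernel K(u) = (15/16)(1 - u^2)^2 on [-1, 1]; the exponential-progression family proposed in the article is not reproduced here, and the smoothing parameter comes from the standard rule-of-thumb selector.

        # Minimal sketch: kernel density estimation with the biweight kernel, a member of
        # the classical beta polynomial family; the article's new family is not reproduced here.
        biweight <- function(u) ifelse(abs(u) <= 1, 15 / 16 * (1 - u^2)^2, 0)

        kde <- function(x, data, h) {
          # f_hat(x) = (1 / (n * h)) * sum_i K((x - X_i) / h)
          sapply(x, function(x0) mean(biweight((x0 - data) / h)) / h)
        }

        set.seed(3)
        sample_data <- rnorm(300)
        h    <- bw.nrd0(sample_data)                 # rule-of-thumb bandwidth (smoothing parameter)
        grid <- seq(min(sample_data) - 1, max(sample_data) + 1, length.out = 200)

        plot(grid, kde(grid, sample_data, h), type = "l", lwd = 2,
             xlab = "x", ylab = "density", main = "Biweight KDE vs true N(0,1)")
        curve(dnorm(x), add = TRUE, lty = 2)         # true density for reference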