    Understanding Interactions in Social Networks and Committees

    While much of the literature on cross-section dependence has focused mainly on estimation of the regression coefficients in the underlying model, estimation of and inference on the magnitude and strength of spill-overs and interactions have been largely ignored. At the same time, such inferences are important in many applications, not least because they have structural interpretations and provide a useful structural explanation for the strength of any interactions. In this paper we propose GMM methods designed to uncover underlying (hidden) interactions in social networks and committees. Special attention is paid to the interval censored regression model. Our methods are applied to a study of committee decision making within the Bank of England's monetary policy committee.
    Keywords: Committee Decision Making, Social Networks, Cross Section and Spatial Interaction, Generalised Method of Moments, Censored Regression Model, Expectation-Maximisation Algorithm, Monetary Policy, Interest Rates.
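
    The paper's estimator is GMM/EM-based; as a simpler illustration of the interval-censored regression model it builds on, here is a minimal maximum-likelihood sketch for a Gaussian model whose responses are only observed to lie in intervals. All names and the simulated data are hypothetical, not from the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def interval_censored_nll(params, X, lo, hi):
    """Negative log-likelihood for y_i = x_i' beta + e_i, e_i ~ N(0, sigma^2),
    where y_i is only known to fall in the interval [lo_i, hi_i]."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)  # optimise log(sigma) so that sigma > 0
    mu = X @ beta
    # P(lo_i <= y_i <= hi_i) = Phi((hi_i - mu_i)/sigma) - Phi((lo_i - mu_i)/sigma)
    p = norm.cdf((hi - mu) / sigma) - norm.cdf((lo - mu) / sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# Hypothetical usage with simulated interval-censored data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(size=200)
lo, hi = np.floor(y), np.floor(y) + 1.0   # each response observed only as a unit interval
fit = minimize(interval_censored_nll, x0=np.zeros(3), args=(X, lo, hi))
print(fit.x[:-1], np.exp(fit.x[-1]))      # estimates of beta and sigma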

    Vol. 5, No. 1 (Full Issue)

    A Multivariate Homogeneously Weighted Moving Average Control Chart

    This paper presents a multivariate homogeneously weighted moving average (MHWMA) control chart for monitoring a process mean vector. The MHWMA control chart statistic gives a specific weight to the current observation, and the remaining weight is evenly distributed among the previous observations. We present the design procedure and compare the average run length (ARL) performance of the proposed chart with multivariate Chi-square, multivariate EWMA, and multivariate cumulative sum control charts. The ARL comparison indicates superior performance of the MHWMA chart over its competitors, particularly for the detection of small shifts in the process mean vector. Examples are also provided to show the application of the proposed chart. © 2013 IEEE.
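
    As a reading aid (a sketch under stated assumptions, not the authors' implementation), the weighting scheme can be written as h_t = w*x_t + (1 - w)*mean(x_1, ..., x_{t-1}): weight w on the current observation and the remaining 1 - w spread evenly over the past. The time-varying covariance scaling of h_t and the control limit are omitted here; parameter names are assumptions.

import numpy as np

def mhwma_statistics(X, mu0, sigma0_inv, w=0.1):
    """Unscaled MHWMA-style statistics for p-variate observations X (n x p),
    given the in-control mean mu0 and the inverse of its covariance."""
    n = X.shape[0]
    stats = np.empty(n)
    for t in range(n):
        # with no history yet, fall back on the in-control mean
        past_mean = X[:t].mean(axis=0) if t > 0 else mu0
        h = w * X[t] + (1.0 - w) * past_mean   # homogeneously weighted average
        d = h - mu0
        stats[t] = d @ sigma0_inv @ d          # T^2-style distance from target
    return stats

    A signal is raised when the (properly scaled) statistic exceeds a control limit chosen, typically by simulation, to achieve a target in-control ARL.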

    Methods for dependability analysis of small satellite missions

    The use of small satellites as platforms for fast access to space at relatively low cost has increased in recent years. In particular, many universities around the world now run permanent hands-on education programs based on CubeSats. These small and cheap platforms are also becoming more and more attractive for other-than-educational missions, such as technology demonstration, science applications, and Earth observation. These new objectives require the development of adequate technology to increase CubeSat performance, and they make it necessary to improve mission reliability. The research studies methods for dependability analysis of small satellite missions, focusing on reliability, as the main attribute of dependability, of CubeSats and CubeSat missions.

The work is structured in three main blocks. The first part is dedicated to a general, theoretical study of dependability: its attributes, the threats that can affect the dependability of a system, the techniques used to mitigate those threats, the parameters used to measure dependability, and models and techniques for dependability modelling.

The second part contains a study of failures that occurred during CubeSat missions in the last ten years, together with an evaluation of their observed reliability. To perform this analysis, a database has been created containing information on all CubeSats launched until December 2013. The information has been gathered from public sources (e.g. CubeSat project websites, publications in international journals) and comprises general information (e.g. launch date, objectives) and data regarding possible failures. This information is used to conduct a quantitative reliability analysis of these missions by means of non-parametric and parametric methods, demonstrating that these failures follow a Weibull distribution.

In the third section, different methods based on the concepts of fault prevention, removal, and tolerance are proposed to evaluate and increase the dependability, and specifically the reliability, of CubeSats and their missions. Three methods have been developed: 1) after an analysis of the activities conducted by CubeSat developers over the whole CubeSat life-cycle, a wide range of activities to be conducted during all phases of the satellite's life-cycle is proposed to increase the mission success rate; 2) reliability is increased through CubeSat verification, mainly by tailoring international ECSS standards for application to a CubeSat project; 3) reliability is raised at mission level by implementing distributed mission architectures instead of classical monolithic architectures.

All the methods developed in this PhD research have been applied to real space projects under development at Politecnico di Torino within the e-st@r program, which is conducted by the CubeSat Team of the Mechanical and AeroSpace Engineering Department. Specifically, the e-st@r-I, e-st@r-II, and 3STAR CubeSats have been used as test cases for the proposed methods. Moreover, part of the research was conducted during an internship at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA) in Noordwijk (The Netherlands). In particular, the partial realisation of the CubeSat database, the analysis of activities conducted by CubeSat developers, and the statement of activities to increase the mission success rate were carried out during the internship.
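
    As a small illustration of the parametric step (a sketch, not the thesis code), a two-parameter Weibull fit to failure times in Python; the times below are invented, and the thesis's analysis additionally handles censored missions:

import numpy as np
from scipy.stats import weibull_min

# Hypothetical on-orbit failure times (years), for illustration only.
failure_times = np.array([0.1, 0.3, 0.5, 1.2, 2.0, 2.5, 3.1])

# Two-parameter Weibull MLE (location fixed at zero).
shape, _, scale = weibull_min.fit(failure_times, floc=0)
print(f"shape (beta) = {shape:.3f}, scale (eta) = {scale:.3f}")

# Implied reliability function R(t) = exp(-(t/eta)^beta).
t = np.linspace(0.0, 4.0, 50)
reliability = np.exp(-(t / scale) ** shape)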

    On the Type-I Half-logistic Distribution and Related Contributions: A Review

    The half-logistic (HL) distribution is a widely used statistical model for studying lifetime phenomena arising in science, engineering, finance, and the biomedical sciences. One of its weaknesses is that it has only a decreasing probability density function and an increasing hazard rate function. Because of that, researchers have been modifying the HL distribution to give it more functional ability. This article provides an extensive overview of the HL distribution and its generalizations (or extensions). Recent advances around the HL distribution have led to numerous results in modern theory and statistical computing techniques across science and engineering. This work summarizes the body of literature to clarify the state of knowledge, the potential, and the important roles played by the HL distribution and related models in probability theory and statistical studies across various areas and applications. In particular, at least sixty-seven flexible extensions of the HL distribution have been proposed in the past few years. We give a brief introduction to these distributions, emphasizing model parameters, derived properties, and estimation methods. In conclusion, there is no doubt that this summary could build a consensus among the various related results in both theory and applications of HL-related models and stimulate interest in future studies.
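
    For context, in the standard unit-scale parameterisation the type-I HL density, distribution, and hazard rate functions are

f(x) = \frac{2e^{-x}}{(1 + e^{-x})^{2}}, \qquad
F(x) = \frac{1 - e^{-x}}{1 + e^{-x}}, \qquad x \ge 0,

h(x) = \frac{f(x)}{1 - F(x)} = \frac{1}{1 + e^{-x}},

    so f decreases from 1/2 towards 0 while h increases from 1/2 towards 1, which is precisely the inflexibility the extensions aim to relax.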

    Spatial Statistical Models: an overview under the Bayesian Approach

    Spatial documentation is increasing exponentially given the availability of Big IoT Data, enabled by device miniaturization and data storage capacity. Bayesian spatial statistics is a useful statistical tool to determine the dependence structure and hidden patterns over space through prior knowledge and the data likelihood. Nevertheless, this modeling class is not as well explored as classification and regression machine learning models, given the latter's simplicity and their often weak assumption of (data) independence. This systematic review therefore aims to unravel the main models presented in the literature in the past 20 years and to identify gaps and research opportunities. Elements such as random fields, spatial domains, prior specification, covariance functions, and numerical approximations are discussed. The work explores the two subclasses of spatial smoothing: global and local.
    Comment: 33 pages, 6 figures
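
    As one concrete instance of the covariance-function element discussed in the review (a generic sketch, not code from the paper; names are illustrative), the exponential covariance k(s, s') = sigma^2 * exp(-||s - s'|| / phi) commonly used for Gaussian random fields:

import numpy as np

def exponential_covariance(coords, variance=1.0, range_=1.0):
    """Covariance matrix K with K[i, j] = variance * exp(-d_ij / range_),
    d_ij being the Euclidean distance between spatial sites i and j."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return variance * np.exp(-d / range_)

# One draw from the corresponding zero-mean Gaussian random field:
rng = np.random.default_rng(1)
sites = rng.uniform(size=(50, 2))              # 50 random locations on the unit square
K = exponential_covariance(sites, variance=1.0, range_=0.3)
field = rng.multivariate_normal(np.zeros(50), K)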

    A data analytics approach to gas turbine prognostics and health management

    As a consequence of the recent deregulation of the electrical power production industry, there has been a shift in the traditional ownership of power plants and the way they are operated. To hedge their business risks, many new private entrepreneurs enter into long-term service agreements (LTSA) with third parties for their operation and maintenance activities. As the major LTSA providers, original equipment manufacturers have invested huge amounts of money to develop preventive maintenance strategies that minimize the occurrence of costly unplanned outages resulting from failures of the equipment covered under LTSA contracts. Indeed, a recent study by the Electric Power Research Institute estimates the cost benefit of preventing a failure of a General Electric 7FA or 9FA technology compressor at $10 to $20 million. Therefore, in this dissertation, a two-phase data analytics approach is proposed that uses the existing gas path and vibration monitoring sensor data to, first, develop a proactive strategy that systematically detects and validates catastrophic failure precursors so as to avoid the failure, and, second, estimate the residual time to failure of the unhealthy items.

For the first part of this work, the time-frequency technique of the wavelet packet transform is used to de-noise the noisy sensor data. Next, the time-series signal of each sensor is decomposed in a multi-resolution analysis to extract its features. After that, probabilistic principal component analysis is applied as a data fusion technique to reduce the potentially correlated multi-sensor measurements to a few uncorrelated principal components. The last step of the failure precursor detection methodology, the anomaly detection decision, is itself a multi-stage process. The principal components obtained from the data fusion step are first combined into a one-dimensional reconstructed signal representing the overall health assessment of the monitored systems. Then, two damage indicators of the reconstructed signal are defined and monitored for defects using a statistical process control approach. Finally, the Bayesian evaluation method for hypothesis testing is applied against a computed threshold to test for deviations from the healthy band.

To model the residual time to failure, the anomaly severity index and the anomaly duration index are defined as defect characteristics. Two modeling techniques are investigated for predicting the survival time after an anomaly is detected: a deterministic regression approach, and parametric approximation of the non-parametric Kaplan-Meier estimator. It is established that the deterministic regression provides poor predictions. The non-parametric survival analysis technique of the Kaplan-Meier estimator provides the empirical survivor function of a data set comprising both non-censored and right-censored data. Though powerful because no lifetime distribution is assumed a priori, the Kaplan-Meier result lacks the flexibility to be transplanted to other units of a given fleet. The parametric analysis of the survival data is therefore performed with two popular failure analysis distributions: the exponential distribution and the Weibull distribution. The conclusion from the parametric analysis of the Kaplan-Meier plot is that the larger the data set, the more accurate the prognostication of the residual time to failure.
    Ph.D. Committee Chair: Mavris, Dimitri; Committee Member: Jiang, Xiaomo; Committee Member: Kumar, Virendra; Committee Member: Saleh, Joseph; Committee Member: Vittal, Sameer; Committee Member: Volovoi, Vital
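
    To make the Kaplan-Meier step concrete, here is a minimal product-limit sketch (not the dissertation's code; the run times at the end are invented):

import numpy as np

def kaplan_meier(times, failed):
    """Product-limit (Kaplan-Meier) survivor estimate.

    times:  event or censoring times
    failed: True where a failure was observed, False where right-censored
    Returns (time, S(t)) steps at each observed failure time.
    """
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=bool)
    # sort by time; at ties, process failures before censorings
    order = np.lexsort((~failed, times))
    survival, at_risk, steps = 1.0, len(times), []
    for t, d in zip(times[order], failed[order]):
        if d:
            survival *= 1.0 - 1.0 / at_risk   # each observed failure shrinks S(t)
            steps.append((t, survival))
        at_risk -= 1                          # unit leaves the risk set either way
    return steps

# Illustrative run hours; False marks units still healthy when last observed.
print(kaplan_meier([500, 700, 700, 900, 1200], [True, True, False, True, False]))
# -> [(500.0, 0.8), (700.0, 0.6...), (900.0, 0.3...)]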

    Vol. 16, No. 1 (Full Issue)
