    Methodology for Evaluating Reliability Growth Programs of Discrete Systems

    The term Reliability Growth (RG) refers to the elimination of design weaknesses inherent to intermediate prototypes of complex systems via failure mode discovery, analysis, and effective correction. A wealth of models has been developed over the years to plan, track, and project reliability improvements of developmental items whose test durations are continuous as well as discrete. This research reveals capability gaps and contributes new methods to the area of discrete RG projection. The purpose of this area of research is to quantify the reliability that could be achieved if failure modes observed during testing are corrected via a specified level of fix effectiveness. Fix effectiveness factors reduce the initial probabilities (or rates) of occurrence of individual failure modes by a fractional amount, thereby increasing system reliability. The contributions of this research are as follows. New RG management metrics are prescribed for one-shot systems under two corrective action strategies. The first is when corrective actions are delayed until the end of the current test phase. The second is when they are applied to prototypes after associated failure modes are first discovered. These management metrics estimate: initial system reliability, projected reliability (i.e., reliability after failure mode mitigation), RG potential, the expected number of failure modes observed during test, the probability of discovering new failure modes, and the portion of system unreliability associated with repeat failure modes. These management metrics give practitioners the means to address model goodness-of-fit concerns, quantify programmatic risk, assess reliability maturity, and estimate the initial, projected, and upper achievable reliability of discrete systems throughout their development programs. Statistical procedures (i.e., classical and Bayesian) for point estimation, confidence interval construction, and model goodness-of-fit testing are also developed. In particular, a new likelihood function and maximum likelihood procedure are derived to estimate model parameters. Limiting approximations of these parameters, as well as the management metrics, are also derived. The features of these new methods are illustrated by a simple numerical example. Monte Carlo simulation is utilized to characterize model accuracy. This research is useful to program managers and practitioners working to assess the RG program and development effort of discrete systems.
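
    As a rough illustration of the projection idea described above, the following sketch (in Python) applies fix effectiveness factors to a set of hypothetical failure-mode probabilities for a one-shot system. The mode probabilities, FEF values, and the independence assumption are illustrative only and are not taken from the dissertation.

    # A minimal sketch of reliability projection for a one-shot (discrete) system,
    # assuming independent failure modes. The mode probabilities and fix
    # effectiveness factors (FEFs) below are illustrative values only.

    def system_reliability(mode_probs):
        """Single-trial reliability when the listed failure modes are independent."""
        r = 1.0
        for p in mode_probs:
            r *= 1.0 - p
        return r

    # Modes left uncorrected and modes slated for corrective action (hypothetical).
    uncorrected = [0.010, 0.008]            # no fixes planned for these modes
    corrected = [0.030, 0.020, 0.005]       # initial probabilities of occurrence
    fefs = [0.80, 0.70, 0.90]               # fraction of each probability removed

    projected = uncorrected + [(1.0 - d) * p for p, d in zip(corrected, fefs)]

    print("initial reliability  :", round(system_reliability(uncorrected + corrected), 4))
    print("projected reliability:", round(system_reliability(projected), 4))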

    Semantic color constancy

    Color constancy aims to perceive the actual color of an object, disregarding the effect of the light source. Recent works showed that utilizing the semantic information in an image enhances the performance of computational color constancy methods. Considering the recent success of segmentation methods and the increased number of labeled images, we propose a color constancy method that combines individual illuminant estimations of detected objects, computed using the classes of the objects and their associated colors. We then introduce a weighting system that values the applicability of each object class to the color constancy problem. Lastly, we introduce another metric expressing how well a detected object fits the learned model of its class. Finally, we evaluate the proposed method on a popular color constancy dataset, confirming that each added weight enhances the performance of the global illuminant estimation. Experimental results are promising, outperforming conventional methods while competing with state-of-the-art methods. -- M.S. - Master of Science
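
    The weighted combination described above can be sketched as follows (in Python). The object classes, illuminant vectors, and weights are hypothetical; the snippet only illustrates the general recipe of merging per-object illuminant estimates using a class-applicability weight and a model-fit score.

    # Combine per-object illuminant estimates into one global estimate.
    # All detections, weights, and colours below are made-up values.
    import numpy as np

    # (class name, per-object illuminant estimate, class weight, model-fit score)
    detections = [
        ("grass", np.array([0.55, 0.65, 0.52]), 0.9, 0.8),
        ("road",  np.array([0.60, 0.58, 0.55]), 0.6, 0.9),
        ("car",   np.array([0.70, 0.55, 0.50]), 0.2, 0.5),
    ]

    weighted_sum = np.zeros(3)
    total_weight = 0.0
    for _, illum, class_w, fit_w in detections:
        w = class_w * fit_w                           # combine the two weighting terms
        weighted_sum += w * illum / np.linalg.norm(illum)
        total_weight += w

    global_illuminant = weighted_sum / total_weight
    global_illuminant /= np.linalg.norm(global_illuminant)   # unit-norm estimate
    print(global_illuminant)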

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated by performing a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning the cortical surface is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
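
    The sketch below (in Python) shows only the generic expectation-maximization loop that underlies intensity-based tissue classification, fitted here as a two-class Gaussian mixture on synthetic voxel intensities; it deliberately omits the explicit partial-volume correction that is the contribution described above.

    # Two-class 1-D Gaussian mixture fitted by EM to synthetic voxel intensities.
    # In neonatal T2-weighted MRI the grey/white contrast is reversed relative to
    # adults, so the two class means simply swap roles; values here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    intensities = np.concatenate([rng.normal(90, 8, 4000),     # one tissue class
                                  rng.normal(130, 10, 6000)])  # the other class

    mu = np.array([80.0, 140.0])
    sigma = np.array([15.0, 15.0])
    pi = np.array([0.5, 0.5])
    for _ in range(50):
        # E-step: posterior probability of each class for every voxel
        lik = np.stack([pi[k] / sigma[k] *
                        np.exp(-0.5 * ((intensities - mu[k]) / sigma[k]) ** 2)
                        for k in range(2)])
        resp = lik / lik.sum(axis=0)
        # M-step: update class proportions, means, and standard deviations
        nk = resp.sum(axis=1)
        pi = nk / intensities.size
        mu = (resp * intensities).sum(axis=1) / nk
        sigma = np.sqrt((resp * (intensities - mu[:, None]) ** 2).sum(axis=1) / nk)

    labels = resp.argmax(axis=0)    # hard tissue labels after convergence
    print(mu, sigma, pi, labels[:10])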

    Spatio-Temporal Mixed Models for Diffusion Tensor Magnetic Resonance Imaging

    Diffusion tensor imaging (DTI) is a magnetic resonance imaging modality that provides useful in vivo information about the microstructure of human brain tissue, particularly the white matter structures that comprise the 'wiring' of the brain. DTI holds great promise for enhancing our understanding of white matter disorders, which constitute public health burdens in a variety of medical domains. Due to its relatively complex structure, however, extracting useful information from DTI data presents a number of statistical challenges. More effective statistical methodologies will improve the sensitivity of DTI data analyses and increase their clinical relevance, a goal of substantial public health significance. In this dissertation, I propose a series of analytic approaches to DTI data analysis based on linear mixed effects models (LMEs). These models provide a number of advantages over several expedient DTI data analyses in current use. I demonstrate the applicability and advantages of my LME-based approaches in an analysis that compares white matter microstructure in a group of children and young adults with autism spectrum disorders (ASDs) to typically developing controls. I first identify a class of LMEs for DTI data analyses for which closed-form maximum likelihood estimators of all parameters exist. By avoiding iteration, these models enable practitioners to perform exploratory and confirmatory analyses of large DTI datasets in clinically feasible time. This family of models incorporates group heterogeneity in variance-covariance structure. I then compare the results of my approach with other approaches currently in practice in an analysis of white matter abnormalities associated with ASDs. I also introduce a data analytic framework that incorporates the entire multivariate tensor in a single analysis. Lastly, I describe a unified likelihood-based approach to addressing reliability with a new estimator of a generalized intraclass correlation coefficient. I establish the robustness of this approach to model perturbations with a series of theoretical and simulation results and apply it to quantify local spatial reliability in the ASDs example.
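
    As a simple stand-in for the generalized intraclass correlation coefficient mentioned above, the sketch below (in Python) computes the classical one-way random-effects ICC from synthetic repeated measurements; the variance components and sample sizes are illustrative only, and the dissertation's estimator is not reproduced here.

    # Classical ICC(1) from a one-way random-effects ANOVA on synthetic data:
    # n subjects each measured k times, with between-subject and residual noise.
    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, k = 30, 2
    subject_effect = rng.normal(0.0, 0.05, n_subjects)       # between-subject variation
    y = 0.45 + subject_effect[:, None] + rng.normal(0.0, 0.02, (n_subjects, k))

    grand_mean = y.mean()
    ms_between = k * ((y.mean(axis=1) - grand_mean) ** 2).sum() / (n_subjects - 1)
    ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subjects * (k - 1))

    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    print(round(icc1, 3))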

    Towards greater accuracy in individual-tree mortality regression

    Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand, and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual-tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species, one ignoring the measurement error (the “naïve” approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that were clustered. Results show a systematic bias even when all the assumptions made by the authors are satisfied. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of this bias, especially in the application phase, justify the suggested future efforts to improve the accuracy of the variance estimate. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and alive trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
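
    The regression calibration step can be sketched as follows (in Python) on synthetic data: an error-prone competition covariate W is replaced by an estimate of E[X | W] before the logistic mortality model is fitted. The coefficients, the known measurement-error variance, and the use of scikit-learn are assumptions made for illustration, not the dissertation's implementation.

    # Regression calibration (RC) versus the naive fit for a logistic mortality
    # model with one error-prone covariate; all data below are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 5000
    x_true = rng.normal(0.0, 1.0, n)                      # true competition index
    sigma_u2 = 0.5                                        # assumed known error variance
    w = x_true + rng.normal(0.0, np.sqrt(sigma_u2), n)    # error-prone observation

    # Simulated mortality: tree dies with probability expit(-1.0 + 0.8 * x_true).
    p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x_true)))
    dead = rng.binomial(1, p)

    # Naive fit: ignore the measurement error and use W directly.
    naive = LogisticRegression().fit(w.reshape(-1, 1), dead)

    # RC step: replace W with an estimate of E[X | W] via the reliability ratio.
    lam = (w.var() - sigma_u2) / w.var()
    x_calibrated = w.mean() + lam * (w - w.mean())
    rc = LogisticRegression().fit(x_calibrated.reshape(-1, 1), dead)

    print("true slope 0.80 | naive:", round(naive.coef_[0, 0], 3),
          "| RC:", round(rc.coef_[0, 0], 3))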

    Vol. 6, No. 2 (Full Issue)
