
    Experimental analysis of computer system dependability

    This paper reviews an area that has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion addresses several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
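
    The abstract names importance sampling as a technique for accelerating Monte Carlo simulation but gives no details. As a hedged illustration only, the Python sketch below contrasts a naive estimator of a small mission-failure probability with an importance-sampling estimator that draws from a deliberately faster-failing distribution and reweights each sample by the likelihood ratio; the exponential failure model, the bias factor, and all names are assumptions of this sketch, not taken from the paper.

        # Illustrative sketch, not from the paper: importance sampling for a
        # rare failure probability. The exponential failure-time model and the
        # 10x-biased sampling rate are assumptions chosen for the example.
        import math
        import random

        def plain_monte_carlo(fail_rate, mission_hours, n_samples):
            """Naive estimate of P(failure before the mission ends)."""
            failures = 0
            for _ in range(n_samples):
                t = random.expovariate(fail_rate)      # sampled failure time
                if t < mission_hours:
                    failures += 1
            return failures / n_samples

        def importance_sampling(fail_rate, mission_hours, n_samples, bias=10.0):
            """Sample from a faster-failing distribution, then reweight each hit
            by the likelihood ratio f(t)/q(t) so the estimate remains unbiased."""
            biased_rate = bias * fail_rate
            total = 0.0
            for _ in range(n_samples):
                t = random.expovariate(biased_rate)
                if t < mission_hours:
                    total += (fail_rate / biased_rate) * math.exp((biased_rate - fail_rate) * t)
            return total / n_samples

        # With fail_rate = 1e-4 per hour and a 10-hour mission, the true
        # probability is about 1e-3; the biased sampler sees ~10x more hits.
        print(plain_monte_carlo(1e-4, 10.0, 100_000))
        print(importance_sampling(1e-4, 10.0, 100_000))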

    Quantifying software architecture attributes

    Software architecture holds the promise of advancing the state of the art in software engineering. The architecture is emerging as the focal point of many modern reuse/evolutionary paradigms, such as Product Line Engineering, Component-Based Software Engineering, and COTS-based software development. The author focuses his research work on characterizing some properties of a software architecture. He seeks to use software metrics to represent the error propagation probabilities, change propagation probabilities, and requirements change propagation probabilities of a software architecture. Error propagation probability reflects the probability that an error that arises in one component of the architecture will propagate to other components of the architecture at run-time. Change propagation probability reflects, for a given pair of components A and B, the probability that if A is changed in a corrective/perfective maintenance operation, B has to be changed to maintain the overall function of the system. Requirements change propagation probability reflects the likelihood that a requirement change that arises in one component of the architecture propagates to other components. For each case, the author presents analytical formulas based mainly on statistical theory and empirical studies, and then studies the correlations between the analytical and empirical results. The author also uses several metrics to quantify the properties of a Product Line Architecture, such as scoping, variability, commonality, and applicability, and presents his proposed means to measure these properties along with the results of the case studies.
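
    The abstract refers to analytical formulas for these propagation probabilities without stating them. As a hedged illustration only, the sketch below shows the simplest empirical estimator one could use for pairwise error propagation from fault-injection runs, namely a count ratio; the data layout and function name are assumptions of this sketch, not the author's definitions.

        # Illustrative sketch, not the author's formulas: estimate EP(A, B) as
        # the fraction of errors injected into component A that were also
        # observed in component B at run-time.
        from collections import defaultdict

        def error_propagation(runs):
            """runs: iterable of (injected_component, components_where_error_observed)."""
            injected = defaultdict(int)
            propagated = defaultdict(int)
            for source, observed in runs:
                injected[source] += 1
                for target in observed:
                    if target != source:
                        propagated[(source, target)] += 1
            return {pair: count / injected[pair[0]] for pair, count in propagated.items()}

        # Three hypothetical injection runs targeting component "A".
        runs = [("A", {"A", "B"}), ("A", {"A"}), ("A", {"A", "B", "C"})]
        print(error_propagation(runs))   # {('A', 'B'): 0.67, ('A', 'C'): 0.33} approximately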

    Comparison of Maintainability Index Measurement from Microsoft Code Lens and Line of Code

    Rising demands for software quality go hand in hand with software quality assurance that can be applied at every step of the software development process. The Maintainability Index (MI) is a calculation used to assess how maintainable a piece of software is. MI is closely related to software quality parameters based on Halstead Volume (HV), McCabe's Cyclomatic Complexity (CC), and Lines of Code (LOC). MI calculations can be carried out automatically with tools established in industry, such as Microsoft Visual Studio 2015's Code Metrics Analysis and the Microsoft CodeLens Code Health Indicator extension. Previous research has shown close relationships between LOC and HV and between LOC and CC, so new equations can be derived to estimate MI from the LOC approach. The LOC parameter is directly visible in a program's source code, so developers can understand it easily and quickly. The aim of this research is to automate the MI calculation process based on a rule-based classification of modules in C# program files. The rules are derived from the MI calculation errors observed on the platform, and estimating MI with the LOC classification rules yields an error rate of less than 20% (19.75%) on the data, with accuracy comparable to the platform's own calculation.
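
    The abstract builds on the relationship between MI and the HV, CC, and LOC parameters but does not reproduce the formula. For orientation, the sketch below shows the widely cited three-factor MI formula and the 0-to-100 rescaling used by Visual Studio's code metrics; treating this exact variant as what the CodeLens indicator reports, and the example values, are assumptions of this sketch.

        # Sketch of the commonly cited Maintainability Index formulas; assuming
        # the Visual Studio rescaling is the variant the CodeLens indicator shows.
        import math

        def mi_classic(halstead_volume, cyclomatic_complexity, loc):
            """Original three-metric Maintainability Index (unbounded scale)."""
            return (171
                    - 5.2 * math.log(halstead_volume)
                    - 0.23 * cyclomatic_complexity
                    - 16.2 * math.log(loc))

        def mi_visual_studio(halstead_volume, cyclomatic_complexity, loc):
            """Rescaled to the 0-100 range used by Visual Studio code metrics."""
            return max(0.0, mi_classic(halstead_volume, cyclomatic_complexity, loc) * 100 / 171)

        # Hypothetical module: HV = 1000, CC = 10, LOC = 120  ->  MI around 32
        print(round(mi_visual_studio(1000, 10, 120), 1))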

    Empirical Test Guidelines for Content Validity: Wash, Rinse, and Repeat until Clean

    Empirical research in information systems relies heavily on developing and validating survey instruments. However, researchers’ efforts to evaluate the content validity of survey scales are often inconsistent, incomplete, or unreported. This paper defines and describes the most significant facets of content validity and illustrates the mechanisms through which multi-item psychometric scales capture a latent construct’s content. We discuss competing methods and propose new methods to assemble a comprehensive set of metrics and methods for evaluating content validity. The resulting recommendations for researchers evaluating content validity emphasize an iterative pre-study process (wash, rinse, and repeat until clean) to objectively establish “fit for purpose” when developing and adapting survey scales. A sample pre-study demonstrates suitable methods for creating confidence that scales reliably capture the theoretical essence of latent constructs. We demonstrate the efficacy of these methods using a randomized field experiment.
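
    The paper's specific metrics are not listed in the abstract. As one hedged point of reference, the sketch below computes Lawshe's content validity ratio (CVR), a classic quantitative index used in content validity pre-studies; assuming it is representative of, rather than identical to, the metrics the paper assembles.

        # Illustrative sketch: Lawshe's content validity ratio for one scale
        # item. Whether this specific index is among the paper's recommended
        # metrics is an assumption of this example.
        def content_validity_ratio(n_essential, n_experts):
            """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
            half = n_experts / 2
            return (n_essential - half) / half

        # Example: 9 of 11 expert judges rate an item as "essential".
        print(round(content_validity_ratio(9, 11), 2))   # 0.64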

    Supply Chain Strategies to Ensure Delivery of Undamaged Goods

    Supply chain leaders in the oil and gas industry face significant logistical challenges regarding the efficient and safe delivery of undamaged products to their customers. Within the conceptual framework of business process orientation theory, the purpose of this multiple case study was to explore the strategies that supply chain leaders used to ensure the delivery of undamaged goods to their customers. Four supply chain leaders in the oil and gas industry in Texas were purposefully selected as participants because they had successfully implemented strategies to ensure the delivery of undamaged goods. Data were collected through semistructured interviews and a review of publicly available documents from the 4 companies. Data were analyzed using Yin's 5-step data analysis process of compiling, disassembling, reassembling, interpreting, and concluding. Four themes emerged from the analyzed data: process strategy, inspection strategy, information technology strategy, and employee training strategy. The findings of this study may provide knowledge to business leaders on how to reduce the cost of product delivery and increase profitability. The study's implications for positive social change include the potential for supply chain leaders to reduce material wastage and environmental pollution through the safe delivery of undamaged oil and gas products to customers.

    Social Media, Personality, and Leadership as Predictors of Job Performance

    A thorough assessment of privacy concerns, reviewer bias, and applicant computer familiarity informs this longitudinal study incorporating features derived from social media, personality, leadership, traditional selection methodology, and objective measures of employee performance to build an empirical foundation for future research. To date, limited research has embarked upon an in-depth examination of the organizational implications of using social media data to assess job applicants. This dissertation addresses the question of whether social media data matters in the practical context of talent selection. I begin with a review of pertinent online communication theories, including media richness, cues-filtered-out, and social information processing theories, before applying their concepts to social media. I review accumulated evidence that signals from social media use can predict personality and explore less-studied links between social media and full leadership behavior, with a focus on transformational leadership. The review also integrates privacy behavior. A survey covering personality, leadership, and privacy behavior was completed by 107 call center agents who were subsequently invited to share their public Facebook profile. Of those, 48 volunteered to share quantitative and qualitative data from their public profile. A group of trained raters further coded the profiles. The participants' employer also provided performance and retention data. This study found mixed support for previously reported links between social media use and personality. An interaction of conscientiousness and computer skills predicted privacy skills and profile completeness, such that participants who were either high in both or low in both were more likely to have higher self-rated privacy skills and completed social media profiles. Raters were easily able to deduce demographic information from social media profiles, including gender, age, and ethnicity. Worryingly, evidence of bias in pass rates was detected based on raters' hire vs. no-hire recommendations, though the degree of bias varied by pass rate threshold. Finally, the various predictors were combined alongside scores from participants' original pre-hire selection assessments to determine whether there was incremental value in including them as part of a holistic selection process. Some support was found for the incremental utility of the entire battery, as personality, social media activity, human ratings of social media profiles, and self-reported transformational leadership behavior uniquely contributed to a Cox regression model predicting retention. Support for the battery approach was much weaker when predicting efficiency (average handle time), as only transformational leadership provided statistically significant predictive value beyond the pre-hire assessment. Altogether, this dissertation underscores the importance of relying on defensible selection methods to predict retention and performance outcomes. If social media is used in screening, it is best done in the context of other selection methods and should be based on computer-based automated screening rather than individual human ratings to reduce bias. This dissertation demonstrates that social media and leadership can add incremental prediction to selection decisions for entry-level jobs and makes recommendations for further research.
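
    The dissertation reports a Cox regression model in which social media and leadership measures add incremental prediction of retention beyond the pre-hire assessment. The sketch below shows what such a model looks like using the lifelines library; the column names, the tiny synthetic dataset, and the choice of lifelines are assumptions of this sketch, not the dissertation's data or code.

        # Minimal Cox proportional-hazards sketch for retention. All values are
        # synthetic toy data; column names and library choice are assumptions.
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "tenure_days":        [120, 300, 45, 210, 365, 90, 180, 270],
            "left_company":       [1,   0,   1,  0,   0,   1,  0,   1],
            "prehire_assessment": [3.8, 4.2, 2.8, 3.1, 4.5, 3.9, 3.0, 4.0],
            "transformational":   [2.7, 3.9, 3.5, 3.6, 2.6, 2.8, 4.2, 3.1],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="tenure_days", event_col="left_company")
        cph.print_summary()   # hazard ratios indicate each predictor's incremental effect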