17 research outputs found

    Predicting Deviations in Software Quality by Using Relative Critical Value Deviation Metrics

    We develop a new metric, Relative Critical Value Deviation (RCVD)…

    Price risk analysis in electricity supply


    1992 NASA/ASEE Summer Faculty Fellowship Program

    For the 28th consecutive year, a NASA/ASEE Summer Faculty Fellowship Program was conducted at the Marshall Space Flight Center (MSFC). The program was conducted by the University of Alabama and MSFC during the period June 1, 1992 through August 7, 1992. Operated under the auspices of the American Society for Engineering Education, the MSFC program, as well as those at other centers, was sponsored by the Office of Educational Affairs, NASA Headquarters, Washington, DC. The basic objectives of the programs, now in their 29th year of operation nationally, are (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA centers.

    A normal accident theory-based complexity assessment methodology for safety-related embedded computer systems

    "Computer-related accidents have caused injuries and fatalities in numerous applications. Normal accident theory (NAT) explains that these accidents are inevitable because of system complexity. Complex systems, such as computer-based systems, are highly interconnected, highly interactive, and tightly coupled. We do not have a scientific methodology to identify and quantify these complexities; specifically, NAT has not been operationalized for computer-based systems. Our research addressed this by operationalizing NAT for the system requirements of safety-related computer systems. It was theorized that there are two types of system complexity: external and internal. External complexity was characterized by three variables: system predictability, observability, and usability - the dependent variables. Internal complexity was characterized by modeling system requirements with software cost reduction dependency graphs, then quantifying model attributes using 15 graph-theoretical metrics - the independent variables. Dependent variable data were obtained by having 32 subjects run simulations of our research test vehicle: the light control system (LCS). The LCS simulation tests used a crossover design. Subject perceptions of these simulations were obtained by using a questionnaire. Canonical correlation analysis and structure correlations were used to test hypotheses 1 to 3: the dependent variables predictability, observability, and usability do not correlate with the NAT complexity metrics. Five of fifteen metrics proposed for NAT complexity correlated with the dependent data. These five metrics had structure correlations exceeding 0.25, standard errors <0.10, and a 95% confidence interval. Therefore, the null hypotheses were rejected. A Wilcoxon signed ranks test was used to test hypotheses 4 to 6: increasing NAT complexity increases system predictability, observability, and usability. The results showed that the dependent variables decreased as complexity increased. Therefore, null hypotheses 4 to 6 were rejected. This work is a step forward to operationalize NAT for safety-related computer systems; however, limitations exist. Opportunities addressing these limitations and advancing NAT were identified. Lastly, the major contribution of this work is fundamental to scientific research: to gain knowledge through the discovery of relationship between the variables of interest. Specifically, NAT has been advanced by defining and quantifying complexity measures and showing their inverse relationship to system predictability, observability, and usability." - NIOSHTIC-2NIOSHTIC no. 20024286200

    Enabling process improvements through systems thinking

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Manufacturing Program at MIT, 2006. Includes bibliographical references (p. 78-79). Manufacturing organizations around the world strive to improve processes with varying degrees of success. There is no right way or latest-and-greatest process that can guarantee success; therefore the approach, and not necessarily the process, is critical. Since every process improvement project is different, using the systems thinking approach decreases the risk of failure, as the implementers are more aware of critical items on the fringe which might otherwise be neglected. Process metrics are vital for many reasons, including motivating employees, determining the level of need for process improvement, and evaluating the outcome of a process improvement project. When evaluating whether a project should be pursued, the expected results on the subsystem and other subsystems should be estimated and tied to the highest-level metric, which ultimately should equate to bottom-line impact. This evaluation technique ensures a positive impact on the entire system, rather than producing only a subsystem optimum. A subsystem metric indicates a project's success through the use of a hypothesis test; this usage requires that the subsystem metric, which will be used to measure a process improvement, be stable before initiating the project. The individual, team, and organization all play a vital role in a company embracing systems thinking. Individuals and teams need to keep an open mind to issues outside the focus department and accept and encourage the involvement of cross-functional representatives on process improvement teams. An organization where systems thinking is integral becomes a learning organization and has a higher percentage of successful projects through a systematic evaluation of and approach to projects. To maintain the systems thinking culture, an organization as a whole must encourage the hiring of individuals with varied experiences who believe in systems thinking. by Jessica Dolak. S.M., M.B.A.
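
    The hypothesis-test idea mentioned above can be sketched in a few lines; the metric (daily cycle time), the sample sizes, and the significance level below are illustrative assumptions, not the thesis's data.

        # Hypothetical sketch: did a process improvement project move a subsystem metric?
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Daily cycle-time measurements (hours) before and after the project (synthetic).
        before = rng.normal(loc=12.0, scale=1.5, size=30)
        after = rng.normal(loc=11.2, scale=1.5, size=30)

        # Welch's t-test: null hypothesis is "no change in the mean of the metric".
        t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
        improved = p_value < 0.05 and after.mean() < before.mean()
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}, improvement detected: {improved}")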

    The Automated analysis of object-oriented designs

    This thesis concerns the use of software measures to assess the quality of object-oriented designs. It examines the ways in which design assessment can be assisted by measurement and the areas in which it cannot. Other work in software measurement looks at defining and validating measures, or at building prediction systems. This work is distinctive in that it examines the use of measures to help improve design quality at design time. Evaluating a design based on measurement results requires a means of relating measurement values to particular design problems or quality levels. Design heuristics were used to make this connection between measurement and quality. A survey was carried out to find suggestions for guidelines, rules and heuristics in the OO design literature. This survey resulted in a catalogue of 288 suggestions for OO design heuristics. The catalogue was structured around the OO constructs to which the heuristics relate, and includes information on various heuristic attributes. This scheme is intended to allow suitable heuristics to be quickly located and correctly applied. Automation requires tool support. A tool was built which augmented the functionality available in existing tool sets, taking input from multiple sources of design information (e.g., CASE tools and source code); the method described so far offers a potential approach to automated design assessment, and the tool provides the means of automation. An empirical study was then required to consider the efficacy of the method and evaluate the novel features of the tool. A case study was used to explore the approach taken by, and evaluate the effectiveness of, 15 subjects using measures and heuristics to assess the design of a small OO system (15 classes). This study showed that semantic heuristics tended to highlight significant problems, but where attempts were made to automate these, false problems were often identified. This result, along with a previous finding that around half of quality criteria are not automatically assessable at design time, strongly suggests that people are still a necessary part of design assessment. The main result of the case study was that the subjects correctly identified 90% of the major design problems and were very positive about their experience of using measurement to support design assessment
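
    The measure-plus-heuristic pairing described above can be illustrated with a small sketch; the measure (methods per class), the threshold, and the example source are assumptions for illustration and do not reproduce the thesis's catalogue or tool.

        # Hypothetical sketch: flag classes that break a simple size heuristic.
        import ast

        HEURISTIC_MAX_METHODS = 20  # illustrative heuristic threshold

        def methods_per_class(source: str) -> dict:
            """Count methods defined directly in each class of a Python module."""
            counts = {}
            for node in ast.walk(ast.parse(source)):
                if isinstance(node, ast.ClassDef):
                    counts[node.name] = sum(
                        isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef))
                        for child in node.body
                    )
            return counts

        example = ("class Order:\n"
                   "    def add_item(self): pass\n"
                   "    def remove_item(self): pass\n"
                   "    def total(self): pass\n")

        for name, count in methods_per_class(example).items():
            status = "heuristic violated, review this class" if count > HEURISTIC_MAX_METHODS else "within heuristic"
            print(f"{name}: {count} methods - {status}")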

    A Usability Model for Software Development Processes and Practices

    Usability characterizes good interactions between people and their processes and practices. It promotes satisfaction and creates safe environments for innovation. Usability principles such as feedback and error tolerance are present in many software engineering concepts, such as iterative processes and peer reviews. The purpose of the research carried out for this thesis is to bring the concept of usability of practices and processes into software engineering. To achieve this goal, and given the lack of process quality models focused on usability, a Usability Model for Processes and Practices (UMP) has been created, refined, and evaluated, following the Design Science Research framework. UMP has been effectively applied to Scrum, Test Driven Development (TDD), Continuous Integration, Behaviour Driven Development (BDD), and the Visual Milestone Planning (VMP) method. UMP was designed to help practitioners, coaches, consultants, teachers, and researchers. To evaluate UMP, several empirical studies were conducted: an initial expert evaluation to determine its feasibility; a focus group to obtain feedback on UMP's characteristics and metrics; two reliability studies, an inter-rater agreement study on Scrum and an inter-rater reliability study on TDD-BDD; and two studies to evaluate UMP's usefulness, a case study on the application of UMP to the VMP method, and a field quasi-experiment in which an industry development team applied UMP to improve its BDD practice. The results of the usefulness studies show that users consider UMP useful, and 37 independent expert evaluations were performed on real-world processes and practices. The contributions of this thesis include: UMP with its characteristics and metrics, the UMP evaluation process, the knowledge created about UMP's reliability and usefulness through the empirical studies, and the profiles characterizing the usability of practices and processes in wide industrial use today, such as Scrum, Continuous Integration, TDD, and BDD, obtained through the application of UMP. Scientific advisor: Alejandro Oliveros. Facultad de Informática
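
    A minimal sketch of the kind of inter-rater agreement check used in the reliability studies mentioned above is given below; the two raters' scores, the ordinal scale, and the weighting choice are invented for illustration.

        # Hypothetical sketch: agreement between two evaluators scoring one metric.
        from sklearn.metrics import cohen_kappa_score

        # Two evaluators rate the same practice on an ordinal 1-5 metric (synthetic values).
        rater_a = [3, 4, 4, 2, 5, 3, 4, 1, 2, 5]
        rater_b = [3, 4, 3, 2, 5, 3, 4, 2, 2, 4]

        # Quadratic weighting penalizes large disagreements more, which suits ordinal scales.
        kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
        print(f"weighted Cohen's kappa: {kappa:.2f}")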

    Global warehouse management: methodology to determine an integrated performance measurement

    Thesis (Doctorate) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção, Florianópolis, 2015. Abstract: The growing complexity of warehouse operations has led companies to adopt a large number of indicators, making their management increasingly difficult. It may be hard for managers to evaluate the overall performance of logistic systems, including the warehouse, because assessing the interdependence of indicators with distinct objectives is rather complex (e.g. the level of a cost indicator should decrease, whereas a quality indicator level should be maximized). This can bias the manager's analysis when evaluating global warehouse performance. In this context, this thesis develops a methodology to achieve an integrated warehouse performance measurement. It encompasses four main steps: (i) the development of an analytical model of performance indicators usually used for warehouse management; (ii) the definition of indicator relationships analytically and statistically; (iii) the aggregation of these indicators in an integrated model; (iv) the proposition of a scale to assess the evolution of warehouse performance over time according to the integrated model results. The methodology is applied to a theoretical warehouse to demonstrate its application. The indicators used to evaluate the warehouse come from the literature, and a database is generated in order to apply the mathematical tools. The Jacobian matrix is used to define indicator relationships analytically, and principal component analysis to aggregate the indicators statistically. The final aggregated model comprises 33 indicators assigned to six different components, which compose the global performance indicator equation by means of a weighted average of the components. A scale is developed for the global performance indicator using an optimization approach to obtain its upper and lower boundaries. The usability of the integrated model is tested for two different warehouse performance situations, and interesting insights about the final warehouse performance are discussed. We therefore conclude that the proposed methodology achieves its objective by providing a decision support tool for managers, so that they can manage global warehouse performance more effectively without neglecting important information from the indicators.
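
    The PCA-based aggregation step of the methodology can be sketched as below; the indicators, the number of components, and the weighting by explained variance are simplifying assumptions and do not reproduce the thesis's 33-indicator model.

        # Hypothetical sketch: aggregate standardized warehouse indicators into one index.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(42)

        # 36 monthly observations of 4 warehouse indicators (e.g. picking cost, order
        # lead time, inventory accuracy, on-time shipments) - synthetic values only.
        indicators = rng.normal(size=(36, 4))

        # Standardize so indicators with different units are comparable, then extract components.
        scaled = StandardScaler().fit_transform(indicators)
        pca = PCA(n_components=2)
        components = pca.fit_transform(scaled)

        # Weight each component by its share of explained variance to form a global indicator.
        weights = pca.explained_variance_ratio_ / pca.explained_variance_ratio_.sum()
        global_performance = components @ weights
        print("global performance index, last 3 months:", np.round(global_performance[-3:], 2))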

    Strategic outsourcing model : decision support for determining supply chain structure

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at MIT, 2003. Includes bibliographical references (p. 138). Outsourcing is becoming the norm in business today. This is a natural outgrowth of the management philosophy of the 1980s and early 1990s of doing only what is "core" to the business. A company whose main focus is to keep its margins as high as possible will concentrate on what gives it a competitive advantage and differentiates it from the competition. To The Firm, all other tasks outside of these advantages are superfluous and unnecessary. In this management paradigm, it is only rational to shed, or outsource, the activities that take scarce resources away from what the company considers core. The next logical question is how to conduct an analysis for outsourcing decision making. Current methodologies combine cost-alternative analysis with a strategic "gut feel" from management to make decisions that will last multiple cycles into the future. Cost analysis is straightforward; strategic analysis, however, is far-reaching, affects the company's future capabilities, and is difficult to evaluate. This thesis proposes a Decision Support System (DSS) for evaluating outsourcing strategies and determining their impacts on The Firm. A thorough review of industry and academic literature on outsourcing, analysis of historic outsourcing results, and discussion of current capability concerns led to the development of six strategic factors: Customer Experience, Technical Clockspeed, Industry Climate, Supply Chain Excellence, Product Architecture, and Competitive Position. Included is an extensive discussion of these strategic factors, strategic matrices for evaluating the business climate, development of Excel spreadsheets with questions for evaluating these factors and matrices, development of a database for knowledge transfer, and implementation of the DSS in the organization. by Richard Philip Nardo. M.B.A., S.M.
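
    A weighted-factor score over the six strategic factors named above gives a feel for how such a DSS might rank a candidate activity; the scores, weights, scale, and decision threshold below are illustrative assumptions, not the thesis's spreadsheet model.

        # Hypothetical sketch: weighted scoring of one activity against six strategic factors.
        scores = {  # 1 = favors keeping in-house, 5 = favors outsourcing (synthetic ratings)
            "Customer Experience": 2,
            "Technical Clockspeed": 4,
            "Industry Climate": 3,
            "Supply Chain Excellence": 4,
            "Product Architecture": 5,
            "Competitive Position": 2,
        }
        weights = {  # illustrative relative importance, summing to 1.0
            "Customer Experience": 0.25,
            "Technical Clockspeed": 0.15,
            "Industry Climate": 0.10,
            "Supply Chain Excellence": 0.15,
            "Product Architecture": 0.20,
            "Competitive Position": 0.15,
        }

        weighted_score = sum(scores[f] * weights[f] for f in scores)
        verdict = "lean toward outsourcing" if weighted_score >= 3.5 else "lean toward keeping in-house"
        print(f"weighted score: {weighted_score:.2f} -> {verdict}")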

    Attack of the clones: an investigation into removing redundant source code

    Long-term maintenance of code will often lead to the introduction of duplicated or 'cloned' code. Legacy systems riddled with these clones have large amounts of redundant code and are more difficult to understand and maintain. One option available to improve maintainability and increase software reuse is to re-engineer code clones into reusable components. However, before this can be achieved, the detection and removal of this redundant code is necessary. There are several established clone detection tools for software maintenance, and this thesis aims to investigate the similarities between their output. It also looks at how maintainers may best use them to reduce the amount of redundant code in a software system. This is achieved by running clone detection tools on several different case studies. Included in these case studies is a novel tool called Covet, inspired by the research of Mayrand [May96b], which attempted to identify cloned routines through a comparison of software metrics generated from each routine. It was found that none of the clone detection tools achieved either 100% precision or 100% recall. Each tool identified very different sets of clones. Overall, MOSS achieved the greatest precision and CCFinder the greatest recall. It was also observed that the use of automatically generated code increased the proportion of clones found in a software system.
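
    Scoring a detector's reported clone pairs against a reference set, as in the comparison above, reduces to a few set operations; the clone pairs below are invented for illustration.

        # Hypothetical sketch: precision and recall of one clone detector's output.
        def normalize(pairs):
            """Treat clone pairs as unordered: (a, b) and (b, a) are the same pair."""
            return {tuple(sorted(p)) for p in pairs}

        reference_clones = normalize({("parse.c:10-40", "scan.c:55-85"),
                                      ("util.c:5-25", "util.c:90-110"),
                                      ("io.c:30-60", "net.c:12-42")})
        reported_clones = normalize({("parse.c:10-40", "scan.c:55-85"),
                                     ("io.c:30-60", "net.c:12-42"),
                                     ("log.c:1-20", "log.c:40-60")})  # a false positive

        true_positives = reference_clones & reported_clones
        precision = len(true_positives) / len(reported_clones)
        recall = len(true_positives) / len(reference_clones)
        print(f"precision = {precision:.2f}, recall = {recall:.2f}")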