
    Collaborative problem solving within supply chains: general framework, process and methodology

    The Problem Solving Process is a central element of firms' continuous improvement strategies. In this framework, a number of approaches have demonstrated their effectiveness in tackling industrial problems; the list includes, but is not limited to, PDCA, DMAICS, 7Steps and 8D/9S. However, the emergence of and growing emphasis on supply chains have reduced the effectiveness of those methods for problems that go beyond the boundaries of a single firm and, in consequence, their ability to provide solutions when the contexts in which firms operate are distributed. This is because not only the problems, but also the products, partners, skills, resources and pieces of evidence required to solve those problems are distributed, fragmented and decentralized across the network. This PhD thesis deals with the collaborative solving of industrial problems in supply chains. It develops a general framework for studying this paradigm, as well as a generic process and a collaborative methodology able to carry out that process in practice. The proposal considers the technical aspects (e.g. product modeling and network structure) and the collaborative aspects (e.g. trust decisions and/or power gaps between partners) that simultaneously affect supply chain operation and the joint solving of problems. Finally, this research work positions experiential knowledge as a central lever of the problem solving process, contributing to continuous improvement strategies at a more global level.

    Collaborative problem solving within supply chains: conceptual framework, process and methodology

    Problem Solving is one of the pillars of firms' continuous improvement strategies. In this framework, a number of methods have demonstrated their effectiveness in addressing particularly complex problems, among them PDCA, DMAICS, 7Steps and 8D/9S. However, the emergence of distributed networks of partners, together with the rise of the extended-enterprise concept, has forced firms to go beyond their own boundaries and work in synergy with all the partners upstream and downstream of their chain. In this context, the effectiveness of these problem solving methods has been strongly affected, because not only the problems, but also the products, partners, resources and information needed to solve them are highly fragmented and decentralized. This thesis therefore focuses on collaborative problem solving within distributed chains of partners, and its objective is to propose a process and a methodology suited to these contexts. The proposals take into account the technical aspects (e.g. flow modeling and chain configuration) as well as the collaborative aspects (e.g. the level of trust and/or the balance of power between partners) that condition the operation and effectiveness of the network. Finally, the thesis examines how an experience feedback system can be embedded in distributed problem solving in order to improve its effectiveness.

    Accident Analysis Methods and Models — a Systematic Literature Review

    As part of our co-operation with the Telecommunication Agency of the Netherlands, we want to formulate an accident analysis method and model for use in incidents in telecommunications that cause service unavailability. In order not to re-invent the wheel, we first wanted an overview of all existing accident analysis methods and models, to see whether we could find an overarching method and commonalities between models. Furthermore, we wanted to find any methods that had been applied to incidents in telecommunication networks or even been designed specifically for such incidents. In this article, we present a systematic literature review of incident and accident analysis methods across domains. We find that accident analysis methods have received rising attention over the last 15 years, leading to a plethora of methods. We discuss the three classes into which they are often categorized and find that each class has its own advantages and disadvantages: an analysis using a sequential method may be easier to understand and communicate and quicker to execute, but may miss vital underlying causes that can later trigger new, similar accidents. An analysis using an epidemiological method takes more time, but it also finds underlying causes whose resolution may prevent future accidents. Systemic methods are appropriate for complex, tightly coupled systems, but executing such a method takes considerable time and resources, making it expensive; this cost will often not be justified by the cost of the accident (especially in telecommunication networks), so systemic methods will often be too expensive for regular businesses. We were not able to find any published definitions of structured methods specific to telecommunications, nor did we find any applications of structured methods specifically to telecommunications.

    Requirement-based Root Cause Analysis Using Log Data

    Root Cause Analysis for software systems is a challenging diagnostic task due to the complexity emanating from the interactions between system components. Furthermore, the sheer size of the logged data often makes it difficult for human operators and administrators to perform problem diagnosis and root cause analysis. The diagnostic task is further complicated by the lack of models that could be used to support the diagnostic process. Traditionally, this task is conducted by human experts who create mental models of systems in order to generate hypotheses and conduct the analysis, even in the presence of incomplete logged data. A challenge in this area is to provide the necessary concepts, tools, and techniques for operators to focus their attention on specific parts of the logged data and, ultimately, to automate the diagnostic process. The work described in this thesis proposes a framework that includes techniques, formalisms, and algorithms for automating the process of root cause analysis. In particular, this work uses annotated requirement goal models to represent the monitored system's requirements and runtime behavior. The goal models are used in combination with log data to generate a ranked set of diagnoses that represent the combinations of tasks whose failure led to the observed failure. In addition, the framework uses a combination of word-based and topic-based information retrieval techniques to reduce the size of the log data by filtering out a subset of it, facilitating the diagnostic process. This log data filtering and reduction is based on goal model annotations and generates a sequence of logical literals that represent the possible system observations. A second level of investigation looks for evidence of any malicious activity (i.e., intentionally caused by a third party) leading to task failures. This analysis uses annotated anti-goal models that denote possible actions an external user can take to threaten a given system task. The framework uses a novel probabilistic approach based on Markov Logic Networks. Our experiments show that our approach improves over existing proposals by handling uncertainty in observations, using natively generated log data, and providing ranked diagnoses. The proposed framework has been evaluated using a test environment based on commercial off-the-shelf software components, a publicly available Java-based ATM machine, and the large, publicly available DARPA 2000 dataset.
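    The abstract above describes filtering log data with word-based and topic-based information retrieval guided by goal-model annotations. The thesis's own formalism and tooling are not reproduced here; the following is only a minimal Python sketch of the word-based side of that idea, assuming TF-IDF cosine similarity between a goal-model task annotation and raw log lines (the annotation text, log lines, and threshold are invented for illustration).

        # Hypothetical sketch: keep only log lines lexically similar to a goal-model
        # task annotation, so a diagnostic step sees a reduced, relevant log subset.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        def filter_logs(task_annotation: str, log_lines: list[str], threshold: float = 0.2) -> list[str]:
            """Return the log lines whose TF-IDF cosine similarity to the annotation exceeds the threshold."""
            vectorizer = TfidfVectorizer(stop_words="english")
            matrix = vectorizer.fit_transform([task_annotation] + log_lines)
            scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
            return [line for line, score in zip(log_lines, scores) if score >= threshold]

        # Invented example data, not taken from the thesis.
        annotation = "task: withdraw cash -- verify PIN, check balance, dispense cash"
        logs = [
            "10:00:01 INFO  card inserted, session started",
            "10:00:02 ERROR PIN verification failed for session 42",
            "10:00:03 INFO  marketing banner refreshed",
            "10:00:04 WARN  balance check timed out",
        ]
        print(filter_logs(annotation, logs))

    In a framework like the one described above, such a reduced log subset would feed the subsequent diagnostic reasoning rather than being an end result in itself.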

    The Pertinence of Risk Management in Behavioral Healthcare Organizations

    The role of risk management in healthcare settings addressing physical issues has been a focus for scholars since the 1950s. Researchers have demonstrated that effective risk mitigation in physical healthcare settings can decrease medical errors, poor patient care, and litigation. Although there has been a significant focus on the implications of risk management in these settings, there is less research on this relationship within the context of behavioral healthcare. The purpose of this study was to explore risk-management processes in the setting of a midsized for-profit behavioral health organization on the East Coast of the United States. The Baldrige Excellence Framework was used to guide this qualitative case study. Data collection included semistructured interviews with three organization staff members, a review of relevant academic and professional literature, and a review of select documents from the organization. Four themes representing potential opportunities to improve the organization’s risk-management strategies were identified through content analysis of the data: the importance of shared risk-management responsibility, policy, communication, and ongoing risk monitoring. Recommendations to address these opportunities included developing a quality-improvement program, implementing risk-management training, and enacting a quality council. This research may contribute to positive social change by serving as an example of how to strengthen a behavioral health organization’s approach to managing risk, thereby enhancing organizational ability to sustain delivery of behavioral health services to communities in need.

    Socio-Technical Aspects of Security Analysis

    This thesis seeks to establish a semi-automatic methodology for security analysis when users are considered part of the system. The thesis explores this challenge, which we refer to as ‘socio-technical security analysis’. We consider that a socio-technical vulnerability is the conjunction of a human behaviour, the factors that foster the occurrence of this behaviour, and a system. Therefore, the aim of the thesis is to investigate which human-related factors should be considered in system security, and how to incorporate these identified factors into an analysis framework. Finding a way to systematically detect, in a system, the socio-technical vulnerabilities that can stem from insecure human behaviours, along with the factors that influence users into engaging in these behaviours, is a long journey that we can summarise in three research questions: 1. How can we detect a socio-technical vulnerability in a system? 2. How can we identify, in the interactions between a system and its users, the human behaviours that can harm this system’s security? 3. How can we identify the factors that foster human behaviours that are harmful to a system’s security? A review of works that aim to bring social science findings into security analysis reveals that there is no unified way to do so. Identifying the points where users can harm a system’s security, and clarifying which factors can foster an insecure behaviour, is a complex matter. Hypotheses can arise about the usability of the system, aspects pertaining to the user, or the organisational context, but there is no way to find and test them all. Further, there is currently no way to systematically integrate the results of the hypotheses we test into a security analysis. Thus, we identify two objectives related to these methodological challenges that this thesis aims to fulfil in its contributions: 1. What form should a framework take that intends to identify behaviours harmful to security and to investigate the factors that foster their occurrence? 2. What form should a semi-automatic, or tool-assisted, methodology for the security analysis of socio-technical systems take? The thesis provides partial answers to these questions. First, it defines a methodological framework called STEAL that provides a common ground for an interdisciplinary approach to security analysis. STEAL supports the interaction between computer scientists and social scientists by providing a common reference model to describe a system with its human and non-human components, potential attacks and defences, and the surrounding context. We validate STEAL in two experimental studies, showing the role of context and graphical cues in Wi-Fi network security. The thesis then complements STEAL with a Root Cause Analysis (RCA) methodology for security inspired by those used in safety. This methodology, called S·CREAM, aims to be more systematic than the research methods that can be used with STEAL (surveys, for instance) and to provide reusable findings for analysing security. To do so, S·CREAM provides a retrospective analysis to identify the factors that can explain the success of past attacks, and a methodology to compile these factors in a form that allows their potential effects on a system’s security to be considered, given an attacker Threat Model. The thesis also illustrates how we developed a tool, the S·CREAM assistant, that supports the methodology with an extensible knowledge base and computer-supported reasoning.
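    The S·CREAM methodology described above compiles factors explaining past attacks so that their potential effects can be assessed against an attacker Threat Model. The thesis defines its own knowledge base and assistant tool; the snippet below is only a hedged, illustrative Python sketch of that general idea, with invented factor names, behaviours, and capability labels that do not come from the thesis.

        # Hypothetical sketch only: the real S·CREAM knowledge base and Threat Model
        # representation are defined in the thesis; names and data here are invented.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Factor:
            name: str                 # human-related factor observed in past attacks
            fostered_behaviour: str   # insecure behaviour the factor can trigger
            required_capability: str  # what an attacker must control to exploit it

        # A tiny "compiled factors" catalogue, in the spirit of a retrospective analysis.
        CATALOGUE = [
            Factor("authority pressure", "user obeys a spoofed admin request", "send messages impersonating staff"),
            Factor("lookalike interface", "user enters credentials on a fake portal", "host a visually similar page"),
            Factor("habituation to warnings", "user clicks through certificate warnings", "present an invalid certificate"),
        ]

        def applicable_vulnerabilities(threat_model_capabilities: set[str]) -> list[Factor]:
            """Return the catalogued factors an attacker with the given capabilities could exploit."""
            return [f for f in CATALOGUE if f.required_capability in threat_model_capabilities]

        # Example: an attacker who can impersonate staff and host a phishing page.
        for f in applicable_vulnerabilities({"send messages impersonating staff", "host a visually similar page"}):
            print(f.name, "->", f.fostered_behaviour)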

    Quality Management and Oversight of Texas Forensic Science Service Providers

    Forensic science oversight in the U.S. largely relies upon voluntary third-party forensic laboratory accreditation programs. Without a national system of regulation and given the highly fragmented local systems of control, few forensic science service providers (FSSPs) are subject to regulatory oversight beyond their third-party accreditors. Texas is unique in its establishment of a robust statewide oversight system and a strong governmental culture of transparency, permitting this study of forensic quality management. This study consisted of two parts. The first part of this dissertation characterized and analyzed quality incident reports (QIRs) published by the Texas Department of Public Safety Crime Laboratory System (DPS Crime Labs). The second part identified, characterized, and analyzed the range of disclosures and complaints received by the Texas Forensic Science Commission (TFSC) and its responses thereto, and evaluated whether oversight provided by the TFSC differed from oversight provided through American National Standards Institute (ANSI) National Accreditation Board (ANAB) accreditation. This dissertation project contributes to the body of research on forensic science quality management and oversight by exploring the following research questions:
    Part 1: QIR Study
    RQ1: What are the characteristics of QIRs produced by the DPS Crime Labs from the post-Quality Action Plan (QAP) revision period to May 2021?
    RQ2: What aspects of QIRs predict a significant disclosure to oversight bodies?
    RQ3: What did the QAP revision reveal about the theories of forensic science quality management infrastructure when evaluating System DPS Crime Labs QIRs produced before and after the QAP revision?
    Part 2: TFSC Study
    RQ4: What are the characteristics of complaints and self-disclosures investigated by the TFSC? Are complaints and self-disclosures significantly different?
    RQ5: What factors of complaints and self-disclosures predict action by the TFSC?
    RQ6: What factors of self-disclosures predict action by ANAB?
    RQ7: What do investigations of self-disclosures and complaints to the TFSC reveal about how the theories of forensic science quality management infrastructure operate in these contexts?
    Methods: This dissertation used publicly available data from the DPS Crime Labs and the TFSC to address these research questions. The quantitative portion of the QIR study used QIRs produced by the DPS Crime Labs from 2016-2020 (n=1,203) after a revision was made to the Quality Action Plan (post-QAP revision). Contextual content analysis (CCA) was conducted on the QIRs to extract data and code variables. Exploratory data analysis was used to characterize the quality incidents, and logistic regression was used to test for predictors of significant disclosures. The qualitative portion of the QIR study used the DPS Crime Labs’ System-level location QIRs (n=146) from before (pre-QAP) and after (post-QAP) the QAP revision. Qualitative content analysis (QCA) and triangulation were used to better understand how theories of forensic science quality management infrastructure were expressed through the production of QIRs before and after the QAP revision. The quantitative portion of the TFSC study used complaints (n=207) and self-disclosures (n=98) filed with the TFSC and the responses to these complaints and self-disclosures produced by the TFSC and ANAB between 2016-2020.
    CCA was conducted on documents stored on the TFSC website, as well as documentation obtained from the TFSC upon request, to extract data and code variables. Exploratory data analysis was used to characterize the complaints and self-disclosures, and logistic regression was used to test for predictors of (1) the TFSC taking further action on complaints and self-disclosures and (2) ANAB taking further action on self-disclosures. For the qualitative portion of the TFSC study, a subset of self-disclosures was selected for further QCA and triangulation to better understand how the theories of forensic science quality management infrastructure might be expressed in the TFSC’s response to these quality incidents. Five self-disclosures in which forensic evidence was lost, missing, or destroyed and in which ANAB chose to review the quality incidents at a future date were selected for review.
    Findings: In the QIR study, the exploratory data analysis found that, among forensic science practices, quality incidents occurred most frequently in biology/DNA, seized drugs, and evidence coordination. Among types of nonconformity, evidence processing/storage was the second most frequent after testing/equipment. Evidence coordination also comprised 38.9% of significant disclosures, far more than any other forensic science practice. The full model, containing all variables significantly associated with significant disclosure when control variables were accounted for, produced three significant predictors: violation of discipline-specific standards, QAP conducted, and severity level of the quality incident (major compared to minor). The qualitative analysis of System-level QIRs found that the QIRs in the two periods (pre- and post-QAP) were so distinct that few opportunities were available to fully compare how the DPS Crime Labs implemented the theories of forensic science quality management infrastructure. Notably, the qualitative analysis found that the QIRs exhibited an unanticipated intersection of the cultures of anticipation, repair, and disclosure when decisions about how and whether to send a corrected report were conditioned upon legal rather than scientific outcomes. In the TFSC study, the vast majority of complaints and self-disclosures were dismissed by the TFSC, with 5% of complaints and 10% of self-disclosures accepted for further action. In contrast, 99% of self-disclosures were dismissed by, or lacked action by, ANAB. When complaints were dismissed, the rationale for more than half of the dismissals was that the complaint exceeded the scope of what the TFSC was permitted to review. The forensic science practice that was the most frequent subject of complaints was biology/DNA, while the most frequent subject of self-disclosures was seized drugs. The full model, containing all variables significantly associated with TFSC disposition when control variables were accounted for, produced two significant predictors: type of complainant (an individual person compared to all other complainants) and type of allegation (negligence and/or misconduct compared to all other allegations). Since there was a nearly complete separation in the outcome variable of ANAB disposition, predictors of this outcome could not be analyzed. The qualitative analysis of the five self-disclosure cases depicted a stark contrast between the visible and active oversight of the TFSC and the undetectable nature of ANAB accreditation responses.
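    The study reports logistic regression models predicting, for example, whether a quality incident led to a significant disclosure. The dissertation's actual coding scheme, variables, and data are not shown here; the snippet below is only an illustrative Python sketch of that style of analysis, with invented variable names and toy rows standing in for coded QIR data.

        # Hypothetical sketch of the kind of model described above: logistic regression
        # predicting whether a quality incident becomes a significant disclosure.
        # Variable names, codings, and rows are invented for illustration, not study data.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        qirs = pd.DataFrame({
            "discipline_std_violation": [1, 0, 1, 0, 1, 0, 0, 1],
            "qap_conducted":            [1, 0, 1, 1, 0, 0, 1, 1],
            "severity":                 ["major", "minor", "major", "minor",
                                         "major", "minor", "minor", "major"],
            "significant_disclosure":   [1, 0, 1, 0, 1, 0, 0, 1],
        })

        # One-hot encode the categorical severity variable and fit the model.
        X = pd.get_dummies(qirs.drop(columns="significant_disclosure"), drop_first=True)
        y = qirs["significant_disclosure"]

        model = LogisticRegression(max_iter=1000).fit(X, y)
        for name, coef in zip(X.columns, model.coef_[0]):
            print(f"{name}: coefficient {coef:+.2f}")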
    Discussion and Implications: Given the focus of forensic science reform efforts on pattern evidence disciplines (friction ridge, firearm/toolmarks, trace evidence) and concerns regarding their scientific foundations, the QIR study provided evidence that more attention and resources may need to be focused on the collection, processing, and chain of custody of forensic evidence in FSSPs. Although pattern evidence disciplines comprised a small fraction of the QIRs in the study, this does not necessarily mean that pattern evidence disciplines produced fewer nonconformities or errors. Rather, QIRs may not be the right tool for detecting the accuracy or quality of pattern evidence testing, and other strategies such as blind control testing or evidence line-ups may be more effective. The accreditation process proved to be an important quality management strategy in FSSPs, but it is not positioned to provide the kind of oversight that a regulatory body like the TFSC can. The TFSC offered a transparent and publicly accessible forum for discussing and understanding problems that may occur in FSSPs. The TFSC is also positioned to act more quickly and to investigate disclosures more comprehensively than the accreditation body. Accreditation is essential and necessary to quality management, but state forensic science commissions produce accountability and transparency that accreditation cannot. Both robust accreditation and state forensic science commission oversight are necessary for reliable and accountable forensic science. Currently, state forensic science oversight bodies vary in their level of regulatory power, public accessibility, transparency, and composition. As states across the country contemplate forensic science commissions, especially the question of whether one is needed given the accreditation status of FSSPs in their state, this study can offer insight into the benefits and limitations of accreditation, as well as the degree to which state forensic science commissions can support more accurate and more just forensic science.