
    Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback

    Continuous improvement of industrial processes is increasingly a key element of competitiveness for industrial systems. In this framework, the management of experience feedback is designed to build, analyse and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve processes and products. During problem-solving processes, the intellectual investment of experts is often considerable, and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution relates to the structuring of a cognitive experience feedback framework that allows flexible exploitation of expert knowledge during problem-solving processes and the reuse of the collected experience. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graphs formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) of conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed formalization of experience feedback processes in the transport industry sector.
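The Transferable Belief Model mentioned in this abstract fuses evidence from several sources with an unnormalized conjunctive combination rule, in which any mass that ends up on the empty set quantifies conflict between the sources. A minimal Python sketch of that rule follows; the frame of candidate root causes and the mass values are invented for illustration and are not taken from the paper:

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """TBM unnormalized conjunctive rule:
    m(A) = sum over all B, C with B & C == A of m1(B) * m2(C).
    Focal sets are frozensets over a shared frame; mass assigned to
    the empty set measures conflict between the two sources."""
    fused = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        fused[a] = fused.get(a, 0.0) + mb * mc
    return fused

# Two hypothetical expert opinions on the root cause of a failure,
# over the invented frame {wear, misuse, defect}.
m1 = {frozenset({"wear"}): 0.6,
      frozenset({"wear", "defect"}): 0.4}
m2 = {frozenset({"wear"}): 0.5,
      frozenset({"misuse"}): 0.3,
      frozenset({"wear", "misuse", "defect"}): 0.2}

fused = conjunctive_combination(m1, m2)
```

Here the two hypothetical opinions reinforce each other on "wear", while the mass left on the empty set records how strongly the sources conflict.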

    Biomedical Knowledge Engineering Using a Computational Grid


    A web-based platform promoting family communication and cascade genetic testing for families with hereditary breast and ovarian cancer (DIALOGUE study)

    The overall aim of this dissertation is to develop an eHealth intervention to promote family communication and cascade genetic testing among families concerned with Hereditary Breast and Ovarian Cancer (HBOC) syndrome. Within this context, an international, multi-centre scientific project entitled "DIALOGUE" was designed to develop (Phase A) and test the feasibility of (Phase B) an intervention within various genetic clinics across Switzerland and South Korea. This dissertation describes only Phase A, the adaptation of an intervention, a web-based platform designed for families with HBOC to share genetic test results, including usability testing in a sample from Switzerland. Chapter 1 provides a general introduction to the current field of hereditary cancer and cascade genetic testing, including the current state of eHealth technologies in science. The chapter also includes a short introduction to the prototype developed in the U.S., as well as a description of the DIALOGUE study. In addition, the chapter summarises the main conceptual models, i.e. the Ottawa Decision Support Framework (ODSF) and the Medical Research Council (MRC) framework, which are commonly used in the development and evaluation of complex interventions. The rationale of this dissertation is guided by all of these elements. Chapter 2 provides a detailed description of the dissertation's specific aims, including the three studies conducted. The articles presented in Chapter 3 describe the methodology and findings of the dissertation. Study I comprises a systematic literature review of previous studies, with a particular focus on HBOC and Lynch syndromes. The literature review identified and synthesised evidence from psychoeducational interventions designed to facilitate family communication of genetic test results and/or cancer predisposition and to promote cascade genetic testing.
A meta-analysis was also conducted to assess intervention efficacy in relation to these two research aims. Our findings highlight the need to develop new interventions and approaches to family communication and cascade testing for cancer susceptibility. Study II describes the state-of-the-art text-mining techniques used to detect and classify valuable information from interviews with study participants concerning determinants of open intrafamilial communication about genetic cancer risk. This study had two major aims: 1) to quantify openness of communication about HBOC cancer risk, and 2) to examine the role of sentiment in predicting openness of communication. Our findings showed that the overall expressed sentiment was associated with the communication of genetic risk among HBOC families. The analysis identified additional factors that affect openness to communicate genetic risk; these were defined as "high-risk" factors and integrated into the design and development of the intervention. Study III describes the development of the intervention, a web-based platform designed for families with HBOC to share genetic test results. The platform was developed in line with the quality criteria set by the MRC framework. Being web-based, the platform can be accessed via a laptop, smartphone or tablet. Usability testing was applied to evaluate the prototype intervention, which received high ratings on a satisfaction scale. Chapter 4 synthesises and discusses the key findings of all the studies presented in the previous chapter, and addresses study limitations and implications for future research.
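The abstract does not detail Study II's text-mining pipeline, so the following is only a rough illustration of the general idea behind scoring sentiment in interview text, namely a lexicon-based scorer; the word lists and example sentence are invented, and a real study would use trained models:

```python
def sentiment_score(text, positive, negative):
    """Minimal lexicon-based sentiment score:
    (#positive tokens - #negative tokens) / #tokens in [-1, 1]."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t in positive for t in tokens)
    neg = sum(t in negative for t in tokens)
    return (pos - neg) / len(tokens)

# Purely illustrative word lists, not drawn from the study.
POSITIVE = {"open", "supportive", "shared", "helpful"}
NEGATIVE = {"afraid", "worried", "avoided", "secret"}

score = sentiment_score("we shared the results and felt supportive",
                        POSITIVE, NEGATIVE)
```

A positive score here would stand in for the "overall expressed sentiment" that the study relates to openness of risk communication.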

    Scientific Mapping of Industry 4.0 Research:A Bibliometric Analysis

    The fourth industrial revolution is progressing very rapidly. This research aims to investigate the research patterns and trends of Industry 4.0 research with a focus on manufacturing. The bibliometric analysis is performed on data from the past five years (2016 to 2020) retrieved from the Scopus database. It covers 1426 articles, for which the most productive countries, authors, and institutions, and the most cited articles, were investigated. Findings demonstrate that Italy, the United States, and China are the most active countries in terms of research publications. South China University of Technology (China) has been identified as the most productive institution. Wan, J., Li, D., and Rauch, E. were found to be the most productive authors. Industry 4.0 research is primarily concentrated in the fields of engineering and computer science, and Sustainability is the most prolific journal. Co-occurrence analysis of keywords and co-authorship analysis of authors and countries were carried out, along with bibliographic coupling of documents, using VOSviewer, a widely used information-visualisation tool. This article summarises the growth of Industry 4.0 research over the past five years and gives a short overview of related works and applications of Industry 4.0.
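Keyword co-occurrence analysis of the kind VOSviewer visualises starts from pair counts over each article's keyword list. A minimal sketch follows; the toy records are invented and do not come from the study's Scopus data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often two keywords appear together in the same
    article's keyword list -- the raw link weights behind a
    VOSviewer-style co-occurrence map."""
    pairs = Counter()
    for kws in keyword_lists:
        # Sort and deduplicate so each unordered pair is counted once.
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy records standing in for Scopus keyword fields (illustrative only).
records = [
    ["industry 4.0", "iot", "manufacturing"],
    ["industry 4.0", "iot"],
    ["industry 4.0", "manufacturing", "sustainability"],
]
links = cooccurrence(records)
```

Each entry of `links` is the weight of one edge in the keyword map; VOSviewer then clusters and lays out these weighted edges.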

    Translational Research in the Era of Precision Medicine: Where We Are and Where We Will Go

    The advent of Precision Medicine has globally revolutionized the approach of translational research, suggesting a patient-centric vision with therapeutic choices driven by the identification of specific predictive biomarkers of response, in order to avoid ineffective therapies and reduce adverse effects. The spread of "multi-omics" analysis and the use of sensors, together with the ability to acquire clinical, behavioral, and environmental information on a large scale, will allow the digitization of the state of health or disease of each person, and the creation of a global health management system capable of generating real-time knowledge and new opportunities for prevention and therapy for the individual person (high-definition medicine). Translational applications based on real-world data represent a promising alternative to traditional evidence-based medicine (EBM) approaches, which rely on randomized clinical trials to test the selected hypothesis. Multi-modality data integration is necessary, for example, in precision oncology, where an avatar interface allows several simulations in order to define the best therapeutic scheme for each cancer patient.

    Webometrics benefitting from web mining? An investigation of methods and applications of two research fields

    Webometrics and web mining are two fields whose research focuses on quantitative analyses of the web. This literature review outlines definitions of the fields, examines their methods and applications, and discusses the potential for closer contact and collaboration between them. A key difference is that webometrics has focused on exploratory studies, whereas web mining has been dominated by the development of methods and algorithms. Differences in the type of data can also be seen: webometrics is more focused on analyses of the structure of the web, and web mining on web content and usage, even though both fields have embraced the possibilities of user-generated content. It is concluded that research problems requiring big data can benefit from collaboration between webometricians, with their tradition of exploratory studies, and web miners, with their tradition of developing methods and algorithms.

    Anonymity, Faceprints, and the Constitution

    Part I defines anonymity and explains that respect for the capacity to remain physically and psychologically unknown to the government traces back to the Founding. With the advent and expansion of new technologies such as facial recognition technology (“FRT”), the ability to remain anonymous has eroded, leading to a litany of possible harms. Part II reviews the existing Fourth and First Amendment doctrine available to stave off ubiquitous government surveillance and identifies anonymity as a constitutional value that warrants more explicit doctrinal protection. Although the Fourth Amendment has been construed to excise surveillance of public and third-party information from its scope, the Court’s recent jurisprudence indicates a growing recognition that constitutional doctrine is out of step with modern surveillance technologies. The Supreme Court has expressly recognized a First Amendment right to anonymous speech, which should be taken into account in assessing the constitutionality of government surveillance systems under the Fourth Amendment. This Part accordingly draws a distinction between cases that arose in the pre-digital age, in which content was often collected through physical trespass or eavesdropping, and those arising in the digital age, in which correlations among disparate points of “big data” are used to make predictions. Part III argues that Fourth and First Amendment doctrine should be reconciled to address the manipulation, as opposed to the mere acquisition, of FRT data to derive new information about individuals that is exceedingly intimate and otherwise out of the government’s reach. This Part suggests that this qualitative shift in information gathering is constitutionally significant under existing doctrine. Part III also offers guidelines, gleaned from the intersection of First and Fourth Amendment jurisprudence, for lower courts and legislators as they address the threat of limitless surveillance that big data and new technologies present.


    EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set Similarity

    Electronic information is increasingly shared among entities without complete mutual trust. To address the related security and privacy issues, a few cryptographic techniques have emerged that support privacy-preserving information sharing and retrieval. One interesting open problem in this context involves two parties that need to assess the similarity of their datasets but are reluctant to disclose their actual content. This paper presents an efficient and provably secure construction supporting the privacy-preserving evaluation of sample set similarity, where similarity is measured as the Jaccard index. We present two protocols: the first securely computes the (Jaccard) similarity of two sets, and the second approximates it, using MinHash techniques, with lower complexity. We show that our novel protocols are attractive in many compelling applications, including document/multimedia similarity, biometric authentication, and genetic tests. In the process, we demonstrate that our constructions are appreciably more efficient than prior work. Comment: A preliminary version of this paper was published in the Proceedings of the 7th ESORICS International Workshop on Digital Privacy Management (DPM 2012); this is the full version, appearing in the Journal of Computer Security.
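The second protocol's accuracy rests on the standard (non-private) MinHash estimator: under a random hash function, the probability that two sets share the same minimum hash value equals their Jaccard index. A plain-Python sketch of that estimator, without any of the paper's cryptographic machinery (the hash family and parameters are illustrative):

```python
import random

def minhash_signature(items, hashers):
    """One MinHash per hash function: the minimum hashed value of the set."""
    return [min(h(x) for x in items) for h in hashers]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature positions estimates the Jaccard index."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# A family of random affine hash functions modulo a large prime
# (a common, illustrative choice; seeded for reproducibility).
rng = random.Random(42)
P = (1 << 61) - 1
hashers = [
    (lambda a, b: (lambda x: (a * hash(x) + b) % P))(
        rng.randrange(1, P), rng.randrange(P))
    for _ in range(256)
]

A = set(range(0, 100))
B = set(range(50, 150))  # true Jaccard index: 50 / 150 = 1/3
est = estimated_jaccard(minhash_signature(A, hashers),
                        minhash_signature(B, hashers))
```

Averaged over 256 hash functions, the fraction of matching minima concentrates near the true Jaccard index of 1/3; the paper's contribution is evaluating such signatures without either party revealing its set.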