11 research outputs found

    Comparison of deposition methods of ZnO thin film on flexible substrate

    This paper reports the effect of different deposition methods on the crystal quality and film thickness of ZnO nanostructures grown on a polyimide substrate. The ZnO films were deposited by spray pyrolysis, sol-gel, and RF sputtering. Each method yields a different ZnO nanostructure. The sol-gel method produces a nanoflower ZnO thin film with a thickness of 600 nm. It also delivers the best piezoelectric performance, with an electrical output of 5.0 V at a frequency of 12 MHz, higher than the values obtained by spray pyrolysis and RF sputtering

    Characterizing chronic disease and polymedication prescription patterns from electronic health records

    Population aging in developed countries brings an increased prevalence of chronic disease and of polymedication: patients with several prescribed types of medication. Attention to chronic, polymedicated patients is a priority because of its high cost and the associated risks, and tools for analyzing, understanding, and managing this reality are becoming necessary. We describe a prototype of a system for discovering, analyzing, and visualizing the co-occurrence of diagnostics, interventions, and medication prescriptions in a large patient database. The final tool is intended to be used both by health managers and planners and by primary care clinicians in direct contact with patients (for example, for detecting unusual disease patterns and incorrect or missing medication). At the core of the analysis module is a representation of diagnostics and medications as a hypergraph, and the most crucial functionalities rely on hypergraph transversals and variants of association rule discovery methods, with particular emphasis on discovering surprising or alarming combinations. The test database comes from the primary care system in the area of Barcelona for 2013, with over 1.6 million potential patients and almost 20 million diagnostics and prescriptions
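
    The hypergraph view described above treats each patient record as a hyperedge connecting the diagnosis and medication codes it contains; frequent co-occurrence can then be mined with standard itemset techniques. Below is a minimal Python sketch of that idea (the records, codes, and thresholds are invented for illustration; the actual system uses more elaborate hypergraph-transversal methods):

        from itertools import combinations
        from collections import Counter

        # Each patient record is a hyperedge: the set of diagnosis (D_*) and
        # medication (M_*) codes attached to that patient. Codes are invented.
        records = [
            {"D_hypertension", "M_diuretic", "M_statin"},
            {"D_hypertension", "D_diabetes", "M_diuretic", "M_metformin"},
            {"D_diabetes", "M_metformin"},
            {"D_hypertension", "M_diuretic"},
        ]

        MIN_SUPPORT = 0.5     # fraction of records a pair must appear in
        MIN_CONFIDENCE = 0.8

        item_counts = Counter()
        pair_counts = Counter()
        for rec in records:
            item_counts.update(rec)
            pair_counts.update(combinations(sorted(rec), 2))

        n = len(records)
        for (a, b), c in pair_counts.items():
            if c / n < MIN_SUPPORT:
                continue
            # Emit both rule directions that pass the confidence threshold.
            for x, y in ((a, b), (b, a)):
                conf = c / item_counts[x]
                if conf >= MIN_CONFIDENCE:
                    print(f"{x} => {y}  support={c/n:.2f} confidence={conf:.2f}")

    The same counting generalizes to diagnosis-medication combinations of any size; "surprising" combinations would additionally be scored against an expected baseline.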

    Generating High Precision Classification Rules for Screening of Irrelevant Studies in Systematic Review Literature Searches

    Systematic reviews aim to produce repeatable, unbiased, and comprehensive answers to clinical questions. They are an essential component of modern evidence-based medicine; however, because of the risk of omitting relevant research, they are highly time-consuming to create and are largely conducted manually. This thesis presents a novel framework for partial automation of systematic review literature searches. We exploit the ubiquitous multi-stage screening process by training the classifier on annotations made by reviewers in previous screening stages. Our approach has the benefit of integrating seamlessly with the existing screening process, minimising disruption to users. Ideally, classification models for systematic reviews should be easily interpretable by users. We propose a novel rule-based algorithm for use with our framework. A new approach for identifying redundant associations when generating rules is also presented. The proposed approach to redundancy seeks to exclude both redundant specialisations of existing rules (those with additional terms in their antecedent) and redundant generalisations (those with fewer terms in their antecedent). We demonstrate the ability of the proposed approach to improve the usability of the generated rules. The proposed rule-based algorithm is evaluated by simulated application to several existing systematic reviews. Workload savings of up to 10% are demonstrated. There is an increasing demand for systematic reviews in a variety of clinical disciplines, such as diagnosis. We examine reviews of diagnosis and contrast them with more traditional systematic reviews of treatment. We demonstrate that existing challenges such as target class heterogeneity and high data imbalance are even more pronounced for this class of reviews. The described algorithm accounts for this by seeking to label subsets of non-relevant studies with high precision, avoiding the need to generate a high-recall model of the minority class
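
    The redundancy notion above can be made concrete with antecedent subset tests: a rule is a redundant specialisation if a rule with a strict subset of its terms performs at least as well, and a redundant generalisation if a strict superset strictly dominates it. A minimal Python sketch under those assumptions (rule antecedents and precision values are invented; the thesis defines its own scoring criteria):

        # Each rule is an antecedent (frozenset of screening terms) paired with
        # the precision it achieved on earlier screening stages. Values invented.
        rules = {
            frozenset({"rct"}): 0.90,
            frozenset({"rct", "placebo"}): 0.90,   # adds a term, no precision gain
            frozenset({"rct", "blinded"}): 0.97,   # adds a term and improves
            frozenset({"cohort"}): 0.70,
            frozenset({"cohort", "followup"}): 0.85,
        }

        def prune(rules):
            kept = {}
            for ant, prec in rules.items():
                # Redundant specialisation: a strictly smaller antecedent does
                # at least as well, so the extra terms add nothing.
                spec = any(o < ant and p >= prec for o, p in rules.items())
                # Redundant generalisation: a strictly larger antecedent does
                # strictly better, so the shorter rule is dominated.
                gen = any(o > ant and p > prec for o, p in rules.items())
                if not (spec or gen):
                    kept[ant] = prec
            return kept

        for ant, prec in prune(rules).items():
            print(sorted(ant), prec)

    On this toy rule set only {rct, blinded} and {cohort, followup} survive, which is the intended effect: the retained rules are the shortest antecedents that cannot be matched by anything simpler.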

    A Comprehensive Geospatial Knowledge Discovery Framework for Spatial Association Rule Mining

    Continuous advances in modern data collection techniques give spatial scientists access to massive, high-resolution spatial and spatio-temporal data. There is therefore an urgent need for effective and efficient methods to find the unknown and useful information embedded in datasets of unprecedented size (e.g., millions of observations), dimensionality (e.g., hundreds of variables), and complexity (e.g., heterogeneous data sources, space–time dynamics, multivariate connections, explicit and implicit spatial relations and interactions). Responding to this line of development, this research focuses on using the association rule (AR) mining technique in a geospatial knowledge discovery process. Prior attempts have sidestepped the complexity of the spatial dependence structure embedded in the studied phenomenon, which makes adopting association rule mining in spatial analysis problematic. Interestingly, a very similar predicament afflicts spatial regression analysis, where the spatial weight matrix is assigned a priori, without validation on the specific domain of application. Moreover, a dependable geospatial knowledge discovery process requires algorithms supporting automatic, robust, and accurate procedures for evaluating the mined results; surprisingly, this has received little attention in the context of spatial association rule mining. To remedy these deficiencies, the foremost goal of this research is to construct a comprehensive geospatial knowledge discovery framework using spatial association rule mining for the detection of spatial patterns embedded in geospatial databases, and to demonstrate its application within the domain of crime analysis. It is the first attempt at delivering a complete geospatial knowledge discovery framework using spatial association rule mining
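
    Spatial association rule mining typically converts explicit spatial relations into predicates before mining. The Python sketch below derives a simple near(...) predicate from coordinates and scores one candidate rule; all events, landmarks, thresholds, and crime categories are invented for illustration, and the dissertation's framework handles the spatial dependence structure far more carefully:

        import math

        # Invented point events: (category, x, y) in arbitrary map units.
        events = [
            ("burglary", 1.0, 1.0), ("burglary", 1.2, 0.9),
            ("vehicle_theft", 1.1, 1.1), ("burglary", 5.0, 5.0),
        ]
        # Invented landmarks used to derive spatial predicates.
        landmarks = {"transit_stop": (1.0, 1.0), "park": (5.1, 5.0)}
        NEAR = 0.5  # distance threshold for the near(...) predicate

        def to_transaction(cat, x, y):
            """Turn one event into an itemset: its category plus spatial predicates."""
            items = {cat}
            for name, (lx, ly) in landmarks.items():
                if math.hypot(x - lx, y - ly) <= NEAR:
                    items.add(f"near({name})")
            return frozenset(items)

        transactions = [to_transaction(*e) for e in events]

        # Support and confidence of the candidate rule
        # "near(transit_stop) => burglary".
        ante = sum("near(transit_stop)" in t for t in transactions)
        both = sum("near(transit_stop)" in t and "burglary" in t for t in transactions)
        print(f"support={both/len(transactions):.2f} confidence={both/ante:.2f}")

    The choice of the NEAR threshold plays the same role as the a priori spatial weight matrix criticized above, which is exactly why the framework argues for validating such spatial relations against the domain of application.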

    Efficient construction of the frequent closed itemset lattice and simultaneous extraction of generic rule bases

    In the last few years, the amount of data collected in various computer science applications has grown considerably. These large volumes of data need to be analyzed in order to extract useful hidden knowledge. This work focuses on association rule extraction, one of the most popular techniques in data mining. Nevertheless, the number of extracted association rules is often very high, and many of them are redundant. In this paper, we propose a new algorithm, called PRINCE. Its main feature is the construction of a partially ordered structure (the Iceberg lattice) for extracting subsets of association rules, called generic bases. Without loss of information, these subsets form a representation of the whole association rule set. To reduce the cost of this construction, the partially ordered structure is built from the minimal generators associated with frequent closed patterns. The closed patterns are derived simultaneously with the generic bases through a simple bottom-up traversal of the obtained structure. The experiments we carried out on benchmark and "worst case" contexts showed the efficiency of the proposed algorithm compared to algorithms such as CLOSE, A-CLOSE and TITANIC
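
    The machinery behind such algorithms is compact: the closure of an itemset is the intersection of all transactions containing it, closed patterns are the fixed points of that operator, and a minimal generator is a smallest itemset with the same closure. A brute-force Python sketch of those definitions (the toy transaction database is invented; PRINCE itself builds the Iceberg lattice far more efficiently than this exhaustive enumeration):

        from itertools import combinations

        # Invented transaction database: each transaction is a set of items.
        db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]

        def closure(itemset):
            """Intersection of all transactions containing the itemset."""
            covering = [t for t in db if itemset <= t]
            return frozenset(set.intersection(*covering)) if covering else frozenset()

        def support(itemset):
            return sum(itemset <= t for t in db)

        items = sorted(set().union(*db))
        all_sets = [frozenset(c) for r in range(1, len(items) + 1)
                    for c in combinations(items, r)]

        # Closed patterns are fixed points of the closure operator.
        closed = {s for s in all_sets if closure(s) == s}

        # A minimal generator of a closed pattern is a minimal itemset
        # whose closure equals that pattern.
        for c in sorted(closed, key=sorted):
            gens = [g for g in all_sets if closure(g) == c and
                    not any(closure(frozenset(sub)) == c
                            for r in range(1, len(g))
                            for sub in combinations(g, r))]
            print(sorted(c), "support", support(c),
                  "generators", [sorted(g) for g in gens])

    Generic bases are then read off these pairs: rules from a minimal generator to its closure carry the whole rule set's information without the redundant variants.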

    Knowledge generation using experience feedback: application to industrial maintenance

    The research work presented in this thesis relates to knowledge extraction from past experiences in order to improve the performance of industrial processes. Knowledge is nowadays considered an important strategic resource that can provide a decisive competitive advantage to organizations. Knowledge management (in particular, experience feedback) is used to preserve and enhance the information related to a company's activities in order to support decision-making and to create new knowledge from the intangible heritage of the organization. In that context, advances in information and communication technologies play an essential role in gathering and processing knowledge. The generalised implementation of industrial information systems such as ERPs (Enterprise Resource Planning) makes available a large amount of data related to past events or historical facts, whose reuse is becoming a major issue. However, these fragments of knowledge (past experiences) are highly contextualized and require specific methodologies to be generalized. Given the great potential of the information collected in companies as a source of new knowledge, we propose in this work an original approach to generating new knowledge from the analysis of past experiences, building on the complementarity of two scientific threads: Experience Feedback (EF) and Knowledge Discovery in Databases (KDD). The proposed EF-KDD combination focuses mainly on: i) modelling the collected experiences with a knowledge representation formalism in order to facilitate their future exploitation, and ii) applying data mining techniques in order to extract new knowledge in the form of rules. These rules must be evaluated and validated by experts of the industrial domain before their reuse and/or integration into the industrial system. Throughout this approach, we give a privileged position to Conceptual Graphs (CGs), the knowledge representation formalism chosen to facilitate the storage, processing, and understanding of the extracted knowledge by the user, with a view to future exploitation. This thesis is divided into four chapters. The first chapter is a state of the art addressing the generalities of the two scientific threads that contribute to our proposal: EF and KDD. The second chapter presents the proposed EF-KDD approach and the tools used for the generation of new knowledge, in order to exploit the available information describing past experiences. The third chapter proposes a structured methodology for interpreting and evaluating the usefulness of the knowledge extracted during the post-processing phase of the KDD process. Finally, the last chapter discusses real case studies from the industrial maintenance domain to which the proposed approach has been applied
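
    The EF-KDD pipeline can be illustrated end to end in a few lines: structured maintenance experiences become transactions, and candidate rules linking symptoms to effective actions are mined and handed to experts for validation. A minimal Python sketch (the triples, relation names, and values are invented, and flat triples stand in for the Conceptual Graphs actually used in the thesis):

        from collections import Counter, defaultdict

        # Invented experiences encoded as (experience, relation, value) triples,
        # a flat stand-in for the Conceptual Graphs used in the thesis.
        triples = [
            ("exp1", "symptom", "vibration"), ("exp1", "symptom", "overheating"),
            ("exp1", "action", "replace_bearing"), ("exp1", "outcome", "resolved"),
            ("exp2", "symptom", "vibration"),
            ("exp2", "action", "replace_bearing"), ("exp2", "outcome", "resolved"),
            ("exp3", "symptom", "leak"),
            ("exp3", "action", "replace_seal"), ("exp3", "outcome", "resolved"),
        ]

        # Step 1: flatten each experience into a transaction of relation:value items.
        transactions = defaultdict(set)
        for exp_id, rel, val in triples:
            transactions[exp_id].add(f"{rel}:{val}")

        # Step 2: mine symptom => action rules and report their confidence;
        # domain experts validate the output before it is reused.
        lhs_counts, rule_counts = Counter(), Counter()
        for items in transactions.values():
            for s in (i for i in items if i.startswith("symptom:")):
                lhs_counts[s] += 1
                for a in (i for i in items if i.startswith("action:")):
                    rule_counts[(s, a)] += 1

        for (s, a), c in rule_counts.items():
            print(f"{s} => {a}  (confidence {c / lhs_counts[s]:.2f})")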