10 research outputs found

    Which breast cancer patients benefit from using a decision support system for therapeutic management decisions? An example using data mining and OncoDoc2

    Session 2: Users and usage. OncoDoc2 is a clinical decision support system (CDSS) based on clinical practice guidelines (CPGs) for the management of breast cancer. It was used as the intervention in a randomized controlled trial whose primary objective was to assess its impact on the compliance of multidisciplinary tumor board decisions with the guidelines. We used a data mining algorithm to discover regularities in patient profiles, or "emerging patterns" (EPs), associated with the compliance and non-compliance of decisions depending on whether or not OncoDoc2 was used, in order to determine which patient profiles could benefit from using the system. The EPs associated with non-compliant decisions made without the system are associated with compliant decisions when the system is used, except in certain clinical situations where the strength of the recommendation is weak.
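The emerging-pattern idea behind this analysis can be sketched as follows. This is a toy illustration, not the study's algorithm: the patient attributes, the growth-rate threshold, and the function names are all invented.

```python
from itertools import combinations

def support(rows, pattern):
    """Fraction of rows (sets of attributes) containing every item of `pattern`."""
    return sum(set(pattern) <= r for r in rows) / len(rows)

def emerging_patterns(target, background, min_growth=2.0):
    """Itemsets whose support in `target` is >= min_growth times that in `background`."""
    items = sorted(set().union(*target, *background))
    found = []
    for size in (1, 2):
        for pattern in combinations(items, size):
            s_t = support(target, pattern)
            s_b = support(background, pattern)
            if s_t > 0 and s_t >= min_growth * s_b:
                found.append((pattern, s_t, s_b))
    return found

# Invented profiles of decisions that did / did not conform to the guidelines.
conform = [{"age<70", "her2+"}, {"age<70", "her2-"}, {"age<70", "node+"}]
non_conform = [{"age>=70", "her2+"}, {"age>=70", "node+"}, {"age<70", "her2+"}]

for pattern, s_t, s_b in emerging_patterns(non_conform, conform):
    print(pattern, round(s_t, 2), round(s_b, 2))
```

Here the attribute `age>=70` emerges in the non-conforming class, mirroring the paper's use of EPs to single out profiles associated with non-compliance.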

    Misleading Generalized Itemset discovery

    Frequent generalized itemset mining is a data mining technique used to discover a high-level view of interesting knowledge hidden in the analyzed data. By exploiting a taxonomy, patterns can be extracted at any level of abstraction. However, some misleading high-level patterns may be included in the mined set. This paper proposes a novel generalized itemset type, namely the Misleading Generalized Itemset (MGI). Each MGI represents a frequent generalized itemset X and the set E of its low-level frequent descendants whose correlation type is in contrast to that of X. To allow experts to analyze the misleading high-level data correlations separately and exploit such knowledge when making decisions, MGIs are extracted only if the low-level descendant itemsets that represent contrasting correlations cover almost the same portion of data as the high-level (misleading) ancestor. An algorithm to mine MGIs on top of traditional generalized itemsets is also proposed. Experiments performed on both real and synthetic datasets demonstrate the effectiveness and efficiency of the proposed approach.
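A minimal sketch of the contrast an MGI captures, assuming invented item names, a two-item taxonomy, and lift as the correlation measure (the paper's exact measure and thresholds may differ):

```python
def support(transactions, items):
    """Fraction of transactions containing every item in `items`."""
    need = set(items)
    return sum(need <= t for t in transactions) / len(transactions)

def lift(transactions, a, b):
    """Lift of {a, b}: > 1 means positive correlation, < 1 negative."""
    return support(transactions, [a, b]) / (
        support(transactions, [a]) * support(transactions, [b]))

def baskets(items, n):
    return [set(items) for _ in range(n)]

taxonomy = {"skim_milk": "milk", "whole_milk": "milk"}  # low level -> ancestor

# 20 invented baskets, built so the generalized pair (milk, bread) looks
# uncorrelated while its descendants are correlated in opposite directions.
transactions = (
    baskets({"skim_milk", "bread"}, 1) + baskets({"skim_milk"}, 5) +
    baskets({"whole_milk", "bread"}, 5) + baskets({"whole_milk"}, 1) +
    baskets({"bread"}, 4) + baskets(set(), 4)
)
for t in transactions:                      # generalize: add ancestor items
    for low, high in taxonomy.items():
        if low in t:
            t.add(high)

print(lift(transactions, "milk", "bread"))        # 1.0: looks independent
print(lift(transactions, "skim_milk", "bread"))   # < 1: negative correlation
print(lift(transactions, "whole_milk", "bread"))  # > 1: positive correlation
```

The high-level itemset is "misleading" in exactly this sense: its apparent independence hides descendants with contrasting correlation types.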

    MeTA: Characterization of medical treatments at different abstraction levels

    Physicians and healthcare organizations collect large amounts of data during patient care. These large, high-dimensional datasets are usually characterized by an inherent sparseness, so analyzing them to figure out interesting and hidden knowledge is a challenging task. This paper proposes a new data mining framework based on generalized association rules to discover multiple-level correlations among patient data. Specifically, correlations among prescribed examinations, drugs, and patient profiles are discovered and analyzed at different abstraction levels. The rule extraction process is driven by a taxonomy that generalizes examinations and drugs into their corresponding categories. To ease the manual inspection of the result, a worthwhile subset of rules, i.e., the non-redundant generalized rules, is considered. Furthermore, rules are classified according to the involved data features (medical treatments or patient profiles) and then explored in a top-down fashion, i.e., from the small subset of high-level rules a drill-down is performed to target more specific rules. The experiments, performed on a real diabetic patient dataset, demonstrate the effectiveness of the proposed approach in discovering interesting rule groups at different abstraction levels.
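The multiple-level effect the framework exploits can be illustrated with a toy sketch; the taxonomy, the examination and drug names, and the records are invented, and this is not the MeTA implementation:

```python
def confidence(records, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent over sets of items."""
    a, c = set(antecedent), set(consequent)
    matching_a = [r for r in records if a <= r]
    return sum(c <= r for r in matching_a) / len(matching_a)

# Invented taxonomy and patient records (examination + drug codes).
taxonomy = {"metformin": "antidiabetic", "glipizide": "antidiabetic"}
records = [
    {"hba1c_test", "metformin"},
    {"hba1c_test", "metformin"},
    {"hba1c_test", "glipizide"},
    {"hba1c_test"},
    {"eye_exam", "metformin"},
]
for r in records:                     # generalize drugs into their category
    for drug, category in taxonomy.items():
        if drug in r:
            r.add(category)

# The high-level rule is strong even though each drug-level rule is weaker.
print(confidence(records, ["hba1c_test"], ["antidiabetic"]))  # 0.75
print(confidence(records, ["hba1c_test"], ["metformin"]))     # 0.5
print(confidence(records, ["hba1c_test"], ["glipizide"]))     # 0.25
```

Drilling down from the confident category-level rule to its weaker drug-level specializations is the top-down exploration the abstract describes.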

    The Minimum Description Length Principle for Pattern Mining: A Survey

    This is about the Minimum Description Length (MDL) principle applied to pattern mining. The length of this description is kept to the minimum. Mining patterns is a core task in data analysis and, beyond issues of efficient enumeration, the selection of patterns constitutes a major challenge. The MDL principle, a model selection method grounded in information theory, has been applied to pattern mining with the aim of obtaining compact, high-quality sets of patterns. After giving an outline of relevant concepts from information theory and coding, as well as of work on the theory behind the MDL and similar principles, we review MDL-based methods for mining various types of data and patterns. Finally, we open a discussion on some issues regarding these methods, and highlight currently active related data analysis problems.
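A minimal two-part MDL sketch, not any specific surveyed method: among candidate pattern sets, prefer the one minimizing L(model) + L(data | model). The encoding choices here (8 bits per pattern character, Shannon code lengths over the cover's usage counts) are simplifying assumptions.

```python
import math
from collections import Counter

def cover(seq, patterns):
    """Greedy left-to-right cover of `seq` using the longest matching pattern."""
    pats = sorted(patterns, key=len, reverse=True)
    out, i = [], 0
    while i < len(seq):
        for p in pats:
            if seq.startswith(p, i):
                out.append(p)
                i += len(p)
                break
        else:
            raise ValueError("patterns cannot cover the sequence")
    return out

def description_length(seq, patterns):
    """Two-part code length: L(model) + L(data | model), in bits."""
    usage = Counter(cover(seq, patterns))
    total = sum(usage.values())
    # L(data | model): Shannon code lengths over the cover's usage counts.
    data_bits = -sum(n * math.log2(n / total) for n in usage.values())
    # L(model): crude cost of spelling out the patterns (8 bits per character).
    model_bits = 8 * sum(len(p) for p in patterns)
    return model_bits + data_bits

seq = "abab" * 16
baseline = {"a", "b"}                  # singletons only
candidate = {"a", "b", "ab"}           # model that exploits the repetition
print(description_length(seq, baseline))    # 80.0 bits
print(description_length(seq, candidate))   # 32.0 bits: MDL prefers it
```

The pattern "ab" pays a small model cost but compresses the repetitive data, so the two-part total drops: the essence of MDL-based pattern selection.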

    Document analysis by means of data mining techniques

    The huge amount of textual data produced every day by scientists, journalists, and Web users makes it possible to investigate many different aspects of the information stored in published documents. Data mining and information retrieval techniques are exploited to manage and extract information from huge amounts of unstructured textual data. Text mining, also known as text data mining, is the process of extracting high-quality information (in terms of relevance, novelty, and interestingness) from text by identifying patterns. It typically involves structuring the input text by means of parsing and other linguistic analyses, or sometimes by removing extraneous data, and then finding patterns in the structured data. The patterns are finally evaluated and the output is interpreted to accomplish the desired task. Recently, text mining has attracted attention in several fields, such as security (analysis of Internet news), commerce (search and indexing), and academia (question answering). Beyond retrieving the documents containing the words of a user query, text mining may provide direct answers to the user through the semantic web, based on content (its meaning and context). It can also act as an intelligence analyst and can be used in email spam filters to screen out unwanted material. Text mining usually includes tasks such as clustering, categorization, sentiment analysis, entity recognition, entity relation modeling, and document summarization. In particular, summarization approaches are suitable for identifying the relevant sentences that describe the main concepts presented in a document collection. Furthermore, the knowledge contained in the most informative sentences can be employed to improve the understanding of user and/or community interests. Different approaches have been proposed to extract summaries from unstructured text documents.
Some of them are based on the statistical analysis of linguistic features by means of supervised machine learning or data mining methods, such as Hidden Markov models, neural networks, and Naive Bayes classifiers. An appealing research direction is the extraction of summaries tailored to the major user interests. In this context, extracting useful information according to domain knowledge related to the user interests is a challenging task. The main topics of this thesis are the study and design of novel data representations and data mining algorithms for managing and extracting knowledge from unstructured documents. It investigates the application of data mining approaches firmly established on transactional data (e.g., frequent itemset mining) to textual documents. Frequent itemset mining is a widely used exploratory technique for discovering hidden correlations that frequently occur in the source data. Although its application to transactional data is well established, the use of frequent itemsets in textual document summarization had not been investigated before. This thesis exploits frequent itemsets for multi-document summarization and presents a novel multi-document summarizer, ItemSum (Itemset-based Summarizer), built on an itemset-based model, i.e., a framework comprising the frequent itemsets extracted from the document collection. Highly representative, non-redundant sentences are selected for the summary by considering both sentence coverage, with respect to a sentence relevance score based on tf-idf statistics, and a concise, highly informative itemset-based model. To evaluate ItemSum, a suite of experiments on a collection of news articles was performed. The results show that ItemSum significantly outperforms widely used earlier summarizers in terms of precision, recall, and F-measure.
We also validated our approach against a large number of competitors on the DUC'04 document collection. Performance comparisons, in terms of precision, recall, and F-measure, were performed by means of the ROUGE toolkit. In most cases, ItemSum significantly outperforms the considered competitors. Furthermore, the impact of the main algorithm parameters and of the adopted model coverage strategy on summarization performance is investigated as well. In some cases, the soundness and readability of the generated summaries are unsatisfactory, because the summaries do not effectively cover all the semantically relevant facets of the data. A step towards more accurate summaries has been made by semantics-based summarizers. Such approaches combine general-purpose summarization strategies with ad hoc linguistic analysis. The key idea is to also consider the semantics behind the document content, to overcome the limitations of general-purpose strategies in differentiating between sentences based on their actual meaning and context. Most previously proposed approaches perform the semantics-based analysis as a preprocessing step that precedes the main summarization process; hence the generated summaries may not entirely reflect the actual meaning and context of the key document sentences. In contrast, we aim at tightly integrating ontology-based document analysis into the summarization process, so that the semantic meaning of the document content is taken into account during sentence evaluation and selection. With this in mind, we propose a new multi-document summarizer, the Yago-based Summarizer, which integrates an established ontology-based entity recognition and disambiguation step. Named entity recognition based on the Yago ontology is used for the text summarization task. The Named Entity Recognition (NER) task is concerned with marking occurrences of specific objects being mentioned.
These mentions are then classified into a set of predefined categories. Standard categories include "person", "location", "geo-political organization", "facility", "organization", and "time". The use of NER in text summarization improves the summarization process by raising the rank of informative sentences. To demonstrate the effectiveness of the proposed approach, we compared its performance on the DUC'04 benchmark document collections with that of a large number of state-of-the-art summarizers. Furthermore, we performed a qualitative evaluation of the soundness and readability of the generated summaries and a comparison with the results produced by the most effective summarizers. A parallel effort has been devoted to integrating semantics-based models and knowledge acquired from social networks into a document summarization model named SociONewSum. It addresses the sentence-based generic multi-document summarization problem, which can be formulated as follows: given a collection of news articles on the same topic, extract a concise yet informative summary consisting of the most salient document sentences. An established ontological model is used to improve summarization performance by integrating a textual entity recognition and disambiguation step. Furthermore, the analysis of user-generated content from Twitter is exploited to discover current social trends and improve the appeal of the generated summaries. An experimental evaluation of SociONewSum was conducted on real English-written news article collections and Twitter posts. The achieved results demonstrate the effectiveness of the proposed summarizer, in terms of different ROUGE scores, compared to state-of-the-art open-source summarizers as well as to a baseline version of SociONewSum that does not perform any UGC analysis.
Furthermore, the readability of the generated summaries has also been analyzed.
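The itemset-based selection idea behind ItemSum can be sketched in a few lines. This toy version (invented sentences, a frequency threshold of 2, idf as the tie-breaker) is an illustration, not the thesis's actual summarizer:

```python
import math
from collections import Counter
from itertools import combinations

sentences = [s.split() for s in [
    "data mining finds hidden patterns in data",
    "frequent itemset mining finds frequent patterns",
    "summarization selects the most informative sentences",
]]

# Frequent 2-itemsets: word pairs that co-occur in at least two sentences.
pair_counts = Counter(
    p for s in sentences for p in combinations(sorted(set(s)), 2))
frequent = {p for p, n in pair_counts.items() if n >= 2}

def idf(word):
    df = sum(word in s for s in sentences)
    return math.log(len(sentences) / df)

def score(sentence):
    """Primary key: how many frequent itemsets the sentence covers;
    tie-break: a tf-idf-style relevance sum over its distinct words."""
    words = set(sentence)
    coverage = sum(set(p) <= words for p in frequent)
    relevance = sum(idf(w) for w in words)
    return (coverage, relevance)

best = max(sentences, key=score)
print(" ".join(best))   # the sentence covering the most frequent itemsets
```

Sentences are ranked by how much of the itemset-based model they cover, with a relevance score breaking ties, which mirrors the two criteria the abstract names.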

    Data Mining Algorithms for Internet Data: from Transport to Application Layer

    Nowadays we live in a data-driven world. Advances in data generation, collection, and storage technology have enabled organizations to gather datasets of massive size. Data mining is a discipline that blends traditional data analysis methods with sophisticated algorithms to handle the challenges posed by these new types of datasets. The Internet is a complex and dynamic system in which new protocols and applications arise at a constant pace. All these characteristics make the Internet a valuable and challenging data source and application domain for research, both at the Transport layer, analyzing network traffic flows, and up at the Application layer, focusing on the ever-growing next-generation web services: blogs, micro-blogs, online social networks, photo-sharing services, and many other applications (e.g., Twitter, Facebook, Flickr). In this thesis we focus on the study, design, and development of novel algorithms and frameworks to support large-scale data mining activities over huge and heterogeneous data volumes, with a particular focus on Internet data as the data source, targeting network traffic classification, online social network analysis, recommendation systems, cloud services, and Big Data.

    Proceedings of the 25th Journées Francophones d'Ingénierie des Connaissances (IC 2014)

    The Journées Francophones d'Ingénierie des Connaissances celebrate their 25th anniversary this year. This conference is the annual meeting of the French and French-speaking community, which gathers to exchange ideas and reflect on research problems arising in knowledge acquisition, representation, and management. Among the twenty-one papers selected for publication and presentation at the conference, a founding theme of knowledge engineering dominates: domain modeling. Six papers deal with ontology design, three with semantic annotation and ontology population, and two with the exploitation of ontologies in knowledge-based systems. Medical informatics is the preferred application domain of the presented work, appearing in seven papers. Knowledge engineering accompanies the rise of semantic web technologies by inventing the models, methods, and tools that enable knowledge integration and reasoning in knowledge-based systems on the web. Thus, the themes of knowledge representation and reasoning appear in six papers addressing the problems of the web of data: data linking, transformation, and querying; the representation and reuse of rules on the web of data; and the programming of applications that exploit the web of data. The rise of information and communication science and technology, and in particular of web technologies, throughout society is transforming individual and collective practices. Knowledge engineering accompanies this evolution by placing the user at the heart of computer systems, to assist them in processing the mass of available data.
Four papers are dedicated to the problems of the social web: social network analysis, community detection, folksonomies, personalized recommendations, and the representation of and accounting for viewpoints in information retrieval. Two papers deal with adapting systems to users and assisting users, and two others with decision support. The selection rate of this edition of the conference is 50%, with nineteen long papers and two short papers accepted out of forty-two submissions. In addition, nine posters and demonstrations were selected out of twelve submissions, presented in a dedicated session and included in the proceedings. Finally, an innovation of this 2014 edition of the conference is a special "Projects and Industry" session, chaired by Frédérique Segond (Viseo), with the participation of Laurent Pierre (EDF), Alain Berger (Ardans), and Mylène Leitzelman (Mnemotix). Three invited speakers, whom I warmly thank for their participation, will open each day of the conference. Nathalie Aussenac-Gilles (IRIT) will retrace the evolution of knowledge engineering in France over the past 25 years, from scarcity to overabundance. Following her, Frédérique Segond (Viseo) will address the problem of satisfying the hunger for knowledge in the new knowledge era we have entered. Finally, Marie-Laure Mugnier (LIRMM) will present a new framework for ontology-based data querying, founded on existential rules.