10 research outputs found

    An Optimal Approach for Mining Rare Causal Associations to Detect ADR Signal Pairs

    Abstract- Adverse Drug Reaction (ADR) is one of the most important issues in the assessment of drug safety. In fact, many adverse drug reactions are not discovered during limited premarketing clinical trials; instead, they are only observed after long-term post-marketing surveillance of drug usage. In light of this, the detection of adverse drug reactions, as early as possible, is an important topic of research for the pharmaceutical industry. Recently, the growing volume of recorded adverse events and the development of data mining technology have motivated statistical and data mining methods for the detection of ADRs. These stand-alone methods, with no integration into knowledge discovery systems, are tedious and inconvenient for users, and their exploration processes are time-consuming. This paper proposes an interactive system platform for the detection of ADRs. By integrating an ADR data warehouse and innovative data mining techniques, the proposed system not only supports OLAP-style multidimensional analysis of ADRs, but also allows the interactive discovery of associations between drugs and symptoms, called drug-ADR association rules, which can be further developed using other factors of interest to the user, such as demographic information. The experiments indicate that interesting and valuable drug-ADR association rules can be efficiently mined. The paper employs a knowledge-based approach to capture the degree of causality of each event pair within a sequence, matching the data against previously referred or suggested treatments. The approach is intended mainly to support immediate treatment of patients, so that even an inexperienced physician can examine the mined relationships between a drug and its signal reactions.
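    The drug-ADR association rules described above can be illustrated with a minimal support/confidence computation. This is not the paper's system; the records, drug names and thresholds below are purely hypothetical, and the sketch only shows how single-drug-to-single-symptom rules could be scored.

```python
from collections import Counter

# Hypothetical records: the drugs a patient took and the adverse
# events later reported for that patient.
records = [
    ({"drugA", "drugB"}, {"rash"}),
    ({"drugA"}, {"rash", "nausea"}),
    ({"drugB"}, {"headache"}),
    ({"drugA", "drugC"}, {"rash"}),
    ({"drugC"}, set()),
]

def drug_adr_rules(records, min_support=0.2, min_confidence=0.5):
    """Mine drug -> ADR rules that pass support and confidence thresholds."""
    n = len(records)
    drug_counts = Counter()   # how often each drug appears
    pair_counts = Counter()   # how often (drug, event) co-occur
    for drugs, events in records:
        for d in drugs:
            drug_counts[d] += 1
            for e in events:
                pair_counts[(d, e)] += 1
    rules = []
    for (d, e), c in pair_counts.items():
        support = c / n                  # fraction of all records
        confidence = c / drug_counts[d]  # fraction of this drug's records
        if support >= min_support and confidence >= min_confidence:
            rules.append((d, e, support, confidence))
    return sorted(rules, key=lambda r: -r[3])

rules = drug_adr_rules(records)
# drugA co-occurs with "rash" in all three of its records,
# so (drugA -> rash) ranks first with confidence 1.0
```

    A real system would of course mine multi-drug antecedents and use disproportionality measures rather than raw confidence; the sketch only conveys the rule-scoring idea.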

    A Survey Paper on Ontology-Based Approaches for Semantic Data Mining

    Semantic data mining refers to data mining tasks that systematically incorporate domain knowledge, especially formal semantics, into the process. Many research efforts have demonstrated the benefits of incorporating domain knowledge into data mining; at the same time, advances in knowledge engineering have enriched the body of available domain knowledge, especially formal semantics and Semantic Web ontologies. An ontology is an explicit specification of a conceptualization and a formal approach to characterizing the semantics of information and data. The formal structure of an ontology makes it a natural way to encode domain knowledge for data mining use. This survey examines how ontologies can support semantic data mining and how the formal semantics in ontologies can be incorporated into the data mining process. DOI: 10.17762/ijritcc2321-8169.16048
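    One common way domain knowledge enters the mining process is through an is-a taxonomy: items are generalized to parent concepts before counting, so patterns can surface at a level the raw data never shows. The taxonomy, items and transactions below are illustrative assumptions, not from the survey.

```python
# Tiny is-a taxonomy mapping leaf items to parent concepts.
taxonomy = {"espresso": "coffee", "latte": "coffee", "green_tea": "tea"}

transactions = [
    {"espresso", "cookie"},
    {"latte", "cookie"},
    {"green_tea"},
]

def generalize(transactions, taxonomy):
    """Replace each item by its parent concept when one exists."""
    return [{taxonomy.get(i, i) for i in t} for t in transactions]

def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

gen = generalize(transactions, taxonomy)
# {"coffee", "cookie"} has support 2/3 after generalization, while no
# single leaf-level pair ({"espresso", "cookie"} etc.) exceeds 1/3.
```

    This is the idea behind generalized association rule mining; ontology-based approaches extend it with richer relations and formal reasoning.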

    Partial orders and logical concept analysis to explore patterns extracted by data mining

    Data mining techniques are used to discover emerging knowledge (patterns) in databases. The problem with such techniques is that, in general, there are too many resulting patterns for a user to explore them all by hand. Some methods try to reduce the number of patterns without a priori pruning; the number of patterns nevertheless remains high. Other approaches, based on a total ranking, show the user the top-k patterns with respect to a measure. Those methods take into account neither the user's knowledge nor the dependencies that exist between patterns. In this paper, we propose a new way for the user to explore extracted patterns. The method is based on navigation in a partial order over the set of all patterns within the Logical Concept Analysis framework. It accommodates several kinds of patterns, and the dependencies between patterns are taken into account thanks to partial orders. It allows users to apply their background knowledge while navigating through the partial order, without a priori pruning. We illustrate how our method can be applied to two different tasks (software engineering and natural language processing) and two different kinds of patterns (association rules and sequential patterns)
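    The navigation idea above can be sketched on the simplest partial order over patterns: itemsets ordered by set inclusion. This toy (not the paper's Logical Concept Analysis framework; the patterns are made up) shows how a user could step from a pattern to its immediate specializations instead of scanning a flat ranked list.

```python
# Extracted patterns, represented as itemsets ordered by inclusion.
patterns = [frozenset(s) for s in
            [{"a"}, {"b"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]]

def specializations(p, patterns):
    """Immediate successors of p in the inclusion order: strict
    supersets with no other known pattern strictly in between."""
    supersets = [q for q in patterns if p < q]
    return [q for q in supersets
            if not any(p < r < q for r in supersets)]

succ = specializations(frozenset({"a"}), patterns)
# From {"a"} the user is offered {"a","b"} and {"a","c"}, but not
# {"a","b","c"}, which is reachable through a further navigation step.
```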

    Knowledge Discovery and Management within Service Centers

    These days, most enterprise service centers deploy Knowledge Discovery and Management (KDM) systems to address the challenge of timely delivery of a resourceful service request resolution while efficiently utilizing the huge amount of available data. These KDM systems facilitate prompt responses to critical service requests and, where possible, try to prevent the service requests from being triggered in the first place. Nevertheless, in most cases, the information required for a request resolution is dispersed and buried under a mountain of irrelevant information on the Internet, in unstructured and heterogeneous formats. These heterogeneous data sources and formats complicate access to reusable knowledge and increase the response time required to reach a resolution. Moreover, state-of-the-art methods neither support effective integration of domain knowledge with KDM systems nor promote the assimilation of reusable knowledge, or Intellectual Capital (IC). With the goal of providing an improved service request resolution within the shortest possible time, this research proposes an IC Management System. The proposed tool uses domain knowledge, in the form of semantic web technology, to extract the most valuable information from raw unstructured data, and applies that knowledge to formulate a service resolution model as a combination of efficient data search, classification, clustering, and recommendation methods. Our proposed solution also handles the technology categorization of a service request, which is crucial in the request resolution process. The system has been extensively evaluated in several experiments and has been used in a real enterprise customer service center

    Improving knowledge about the risks of inappropriate uses of geospatial data by introducing a collaborative approach in the design of geospatial databases

    Nowadays, the increased availability of geospatial information is a reality that many organizations, and even the general public, are trying to turn into financial benefit: the reusability of datasets is now a viable alternative that may help organizations achieve cost savings. The quality of these datasets may vary depending on the usage context. The issue of geospatial data misuse becomes even more important because of the disparity between the expertises of the many geospatial data end-users. Managing the risks of geospatial data misuse has been the subject of several studies over the past fifteen years. In this context, several approaches have been proposed to address these risks: some are preventive, while others are palliative and manage the risk after its consequences have occurred. However, these approaches are often based on ad hoc, non-systemic initiatives. Thus, during the design process of a geospatial database, risk analysis is not always carried out in accordance with either the principles and guidelines of requirements engineering or the recommendations of ISO standards. In this thesis, we hypothesize that it is possible to define a new preventive approach for the identification and analysis of risks associated with inappropriate use of geospatial data. We believe that the expertise and knowledge held by experts (i.e. geoIT experts) and by professional users of geospatial data in their institutional roles (i.e. application domain experts) are key elements in assessing the risks of misuse of this data; hence the importance of enriching that knowledge. Thus, we review the geospatial database design process and propose a collaborative, user-centric approach to requirements analysis. Under this approach, expert and professional users are involved in a collaborative process that supports the a priori identification of inappropriate use cases. Then, reviewing research in risk analysis, we propose a systemic integration of risk analysis, using the Delphi technique, into the geospatial database design process. Finally, still within a collaborative approach, an ontological risk repository is proposed to enrich the knowledge about the risks of data misuse and to disseminate this knowledge to designers, developers and end-users. The approach is implemented on a web platform to demonstrate its feasibility and to exercise the concepts in a concrete prototype

    How Simulation can Illuminate Pedagogical and System Design Issues in Dynamic Open Ended Learning Environments

    A Dynamic Open-Ended Learning Environment (DOELE) is a collection of learners and learning objects (LOs) that may be constantly changing. In DOELEs, learners need the support of Advanced Learning Technology (ALT), but most ALT is not designed to run in such environments. One architecture for designing ALT that is compatible with DOELEs is the ecological approach (EA). This thesis looks at how to test and develop ALT based on the EA, and argues that this process would benefit from the use of simulation. The essential components of an EA-based simulation are simulated learners, simulated LOs, and their simulated interactions. In this thesis the value of simulation is demonstrated with two experiments. The first experiment focuses on the pedagogical issue of peer impact: how learning is affected by the performance of peers. By systematically varying the number and type of learners and LOs in a DOELE, the simulation uncovers behaviours that would otherwise go unseen. The second experiment shows how to validate and tune a new instructional planner built on the EA, the Collaborative Filtering based on Learning Sequences (CFLS) planner. When the CFLS planner is configured appropriately, simulated learners achieve higher performance measurements than learners using the baseline planners. Simulation results lead to predictions that ultimately need to be proven in the real world, but even without real-world validation such predictions can help researchers inform the ALT system design process. This thesis shows that it is not necessary to model all the details of the real world to reach a better understanding of a pedagogical issue such as peer impact. Simulation also enabled the design of the first known instructional planner based on usage data, the CFLS planner. The use of simulation for the design of EA-based systems opens new possibilities for instructional planning without knowledge engineering. 
    Such systems can find niche learning paths that might never have occurred to a human designer. By exploring pedagogical and ALT system design issues for DOELEs, this thesis shows that simulation is a valuable addition to the toolkit of ALT researchers
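    A planner that filters collaboratively over learning sequences can be sketched as follows. This is only a guess at the flavor of the CFLS planner, under assumed data: each learner's history is a list of LO identifiers, and the next LO is recommended by matching the target learner's recent history against other learners' sequences.

```python
from collections import Counter

# Hypothetical usage data: each learner's ordered sequence of LOs.
histories = {
    "learner1": ["lo1", "lo2", "lo3"],
    "learner2": ["lo1", "lo2", "lo4"],
    "learner3": ["lo2", "lo3", "lo5"],
}

def recommend_next(current, histories, k=2):
    """Vote for the LO that most often followed the target learner's
    last k LOs in other learners' sequences; None if no match."""
    recent = tuple(current[-k:])
    votes = Counter()
    for seq in histories.values():
        for i in range(len(seq) - k):
            if tuple(seq[i:i + k]) == recent:
                votes[seq[i + k]] += 1
    return votes.most_common(1)[0][0] if votes else None

next_lo = recommend_next(["lo1", "lo2"], histories)
```

    The appeal, as the abstract notes, is that no knowledge engineering is needed: the plan emerges from usage data alone, which is exactly what makes simulated learners useful for tuning parameters such as k before any real deployment.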

    Proceedings of the 1st International Conference on Algebras, Graphs and Ordered Sets (ALGOS 2020)

    Originating in arithmetic and logic, the theory of ordered sets is now a field of combinatorics that is intimately linked to graph theory, universal algebra and multiple-valued logic, and that has a wide range of classical applications such as formal calculus, classification, decision aid and social choice. This international conference, "Algebras, Graphs and Ordered Sets" (ALGOS), brings together specialists in the theory of graphs, relational structures and ordered sets, topics that are omnipresent in artificial intelligence and knowledge discovery, with concrete applications in biomedical sciences, security, social networks and e-learning systems. One of the goals of this event is to provide a common ground for mathematicians and computer scientists to meet, to present their latest results, and to discuss original applications in related scientific fields. On this basis, we hope for fruitful exchanges that can motivate multidisciplinary projects. The first edition of ALgebras, Graphs and Ordered Sets (ALGOS 2020) has a particular motivation: the opportunity to honour Maurice Pouzet on his 75th birthday! For this reason, we have particularly welcomed submissions in areas related to Maurice's many scientific interests:
    • Lattices and ordered sets
    • Combinatorics and graph theory
    • Set theory and theory of relations
    • Universal algebra and multiple-valued logic
    • Applications: formal calculus, knowledge discovery, biomedical sciences, decision aid and social choice, security, social networks, web semantics