
    A Knowledge Representation Model Based on Select and Test Algorithm for Diagnosing Breast Cancer

    Several terminal diseases have fatality rates that escalate with time, and breast cancer is at the forefront of them. Computer-aided systems have been widely researched, using intelligent algorithms capable of detecting, diagnosing, and proffering treatment for breast cancer. While good research breakthroughs have been attained in algorithmic solutions for the diagnosis of breast cancer, not much has been done to sufficiently model knowledge frameworks for knowledge-based diagnostic algorithms. The Select and Test (ST) algorithm has proven relevant for implementing diagnostic systems through its support for reasoning; however, the knowledge representation pattern that enables inference of missing or ambiguous data still limits its effectiveness. This paper therefore proposes a knowledge representation model to systematically model knowledge and aid the performance of the ST algorithm. Our proposal specifically targets a systematic knowledge representation for breast cancer, and the approach uses the Web Ontology Language (OWL) to implement the design of the proposed knowledge model. This study aims at carefully crafting a knowledge model whose implementation works seamlessly with the ST algorithm. Furthermore, we adapted the proposed model into an implementation of the ST algorithm and obtained improved performance compared to the simple knowledge model proposed by the author of the ST algorithm. Our knowledge model yielded an accuracy gain of 23.5% and an AUC of (0.49, 1.0). The proposed model therefore shows that combining an inference-oriented knowledge model with an inference-oriented reasoning algorithm improves the performance of computer-aided diagnostic (CADx) systems. In future work, we intend to enhance the proposed model to support rules. Keywords— Semantic web, ontology, OWL, breast cancer, Select and Test (ST) algorithm, knowledge representation
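    The hypothesise-and-test cycle behind diagnostic reasoning of this kind can be sketched in a few lines. The knowledge base below is a toy hand-made structure, and the finding names and scoring rule are illustrative assumptions, not the paper's actual OWL model:

```python
# Hypothetical sketch of a Select-and-Test style diagnostic cycle over a tiny
# hand-made knowledge base; finding names and the scoring rule are assumptions
# for illustration, not the paper's OWL-based model.
KNOWLEDGE_BASE = {
    # hypothesis -> findings expected if the hypothesis holds
    "benign_lesion":    {"smooth_margin", "low_density"},
    "malignant_lesion": {"spiculated_margin", "microcalcifications"},
}

def select_and_test(observed, kb):
    """Select candidate hypotheses, deduce their expected findings,
    and test them against the observed findings."""
    best, best_score = None, -1.0
    for hypothesis, expected in kb.items():
        # Test step: fraction of expected findings actually observed.
        score = len(expected & observed) / len(expected)
        if score > best_score:
            best, best_score = hypothesis, score
    return best, best_score

diagnosis, confidence = select_and_test(
    {"spiculated_margin", "microcalcifications"}, KNOWLEDGE_BASE)
print(diagnosis, confidence)  # → malignant_lesion 1.0
```

    A richer knowledge model, as the paper argues, would let the test step infer missing findings instead of scoring only what was directly observed.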

    A Guiding Agent: smart dynamic technology for solving distributed problems.

    Mobile technology is everywhere nowadays in the developed world, and it is mature enough to support intelligent applications and smart devices. Over the last few years we have developed a number of applications for PDAs and mobile phones. This abstract outlines an information system that incorporates a recommender agent helping the users of a shopping centre to identify offers, to find people, or to define a plan for a day in the shopping centre. The multiagent architecture incorporates a smart deliberative agent that takes decisions with the help of case-based planners. The system, which uses past experiences to recommend future actions, has been tested successfully.
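    The core of such a case-based recommender is the retrieval step: find the most similar past case and reuse its plan. The case structure and similarity measure below are assumptions for illustration, not the system's actual implementation:

```python
# Minimal sketch of case-based retrieval for a shopping-centre recommender;
# the case structure and Jaccard similarity are illustrative assumptions.
past_cases = [
    {"interests": {"sports", "shoes"}, "plan": "visit SportZone, then food court"},
    {"interests": {"books", "coffee"}, "plan": "visit bookshop, then cafe"},
]

def jaccard(a, b):
    """Similarity between two interest sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user_interests, cases):
    """Retrieve the most similar past case and reuse its plan."""
    best = max(cases, key=lambda c: jaccard(user_interests, c["interests"]))
    return best["plan"]

print(recommend({"coffee", "books", "music"}, past_cases))
# → visit bookshop, then cafe
```

    A full case-based planner would also adapt the retrieved plan to the new situation and retain the outcome as a new case.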

    Improved EMD Using doubly-iterative sifting and high order spline interpolation

    Empirical mode decomposition (EMD) is a signal analysis method which has received much attention lately due to its application in a number of fields. The main disadvantage of EMD is that it lacks a theoretical analysis and, therefore, our understanding of EMD comes from an intuitive and experimental validation of the method. Recent research on EMD revealed improved criteria for the selection of interpolation points. More specifically, it was shown that the performance of EMD can be significantly enhanced if, as interpolation points, the extrema of the subsignal having the higher instantaneous frequency are used instead of the signal extrema. Even if the extrema of the subsignal with the higher instantaneous frequency are not known in advance, this new interpolation points criterion can be effectively exploited in doubly-iterative sifting schemes, leading to improved decomposition performance. In this paper, the possibilities and limitations of the developments above are explored and the new methods are compared with the conventional EMD.
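    To make the sifting step concrete, here is a deliberately simplified single sifting iteration using piecewise-linear envelopes so the sketch stays dependency-free; conventional EMD uses cubic (or, per this paper, higher-order) spline interpolation through the extrema:

```python
# Simplified single sifting step of EMD with linear envelopes; real EMD uses
# cubic or higher-order spline interpolation, as the paper discusses.
import math

def local_extrema(x):
    """Indices of interior local maxima and minima."""
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    return maxima, minima

def interp(points, n):
    """Piecewise-linear envelope through (index, value) points, length n."""
    idx = [p for p, _ in points]
    val = [v for _, v in points]
    out = []
    for i in range(n):
        if i <= idx[0]:
            out.append(val[0])
        elif i >= idx[-1]:
            out.append(val[-1])
        else:
            k = next(j for j in range(len(idx) - 1) if idx[j] <= i <= idx[j + 1])
            t = (i - idx[k]) / (idx[k + 1] - idx[k])
            out.append(val[k] + t * (val[k + 1] - val[k]))
    return out

def sift_once(x):
    """One sifting iteration: subtract the mean of the two envelopes."""
    maxima, minima = local_extrema(x)
    upper = interp([(i, x[i]) for i in maxima], len(x))
    lower = interp([(i, x[i]) for i in minima], len(x))
    return [xi - (u + l) / 2 for xi, u, l in zip(x, upper, lower)]

# Two-tone test signal; sifting pushes the local mean toward zero.
signal = [math.sin(0.3 * i) + 0.2 * math.sin(2.1 * i) for i in range(200)]
candidate_imf = sift_once(signal)
```

    The interpolation-point criterion discussed in the paper changes which indices feed `interp` (extrema of the higher-frequency subsignal rather than of the composite signal), while the subtraction step stays the same.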

    An improved Ant Colony System for the Sequential Ordering Problem

    It is not rare that the performance of one metaheuristic algorithm can be improved by incorporating ideas taken from another. In this article we present how Simulated Annealing (SA) can be used to improve the efficiency of the Ant Colony System (ACS) and Enhanced ACS when solving the Sequential Ordering Problem (SOP). Moreover, we show how the very same ideas can be applied to improve the convergence of a dedicated local search, i.e. the SOP-3-exchange algorithm. A statistical analysis of the proposed algorithms, both in terms of finding suitable parameter values and the quality of the generated solutions, is presented based on a series of computational experiments conducted on SOP instances from the well-known TSPLIB and SOPLIB2006 repositories. The proposed ACS-SA and EACS-SA algorithms often generate solutions of better quality than the ACS and EACS, respectively. Moreover, the EACS-SA algorithm combined with the proposed SOP-3-exchange-SA local search was able to find 10 new best solutions for the SOP instances from the SOPLIB2006 repository, thus improving the state-of-the-art results as known from the literature. Overall, the best known or improved solutions were found in 41 out of 48 cases. Comment: 30 pages, 8 tables, 11 figures
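    The SA ingredient being grafted onto ACS and the local search is, at its core, the Metropolis acceptance rule: always accept improving moves, and accept worsening moves with a probability that shrinks as the temperature cools. A minimal sketch, with illustrative parameter values rather than the article's tuned settings:

```python
# Hedged sketch of the Metropolis acceptance rule underlying SA hybrids such
# as ACS-SA; the cooling schedule and parameters are illustrative assumptions.
import math
import random

def sa_accept(current_cost, candidate_cost, temperature, rng=random.random):
    """Accept improving moves always; accept worsening moves with
    probability exp(-delta / T)."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

# Cooling loop sketch with a geometric schedule and hypothetical neighbours.
temperature, cooling = 100.0, 0.95
cost = 500.0
for _ in range(50):
    candidate = cost + random.uniform(-10, 5)   # hypothetical neighbour cost
    if sa_accept(cost, candidate, temperature):
        cost = candidate
    temperature *= cooling
```

    In the hybrids described here, this rule replaces the purely greedy acceptance inside the local search, letting it escape local optima early while converging as the temperature drops.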

    Domain ontology learning from the web

    Ontology learning is defined as the set of methods used for building an ontology from scratch, or enriching or adapting an existing one, in a semi-automatic fashion using heterogeneous information sources. This data-driven procedure uses text, electronic dictionaries, linguistic ontologies, and structured and semi-structured information to acquire knowledge. Recently, with the enormous growth of the Information Society, the Web has become a valuable source of information for almost every possible domain of knowledge. This has motivated researchers to start considering the Web as a valid repository for information retrieval and knowledge acquisition. However, the Web suffers from problems that are not typically observed in classical information repositories: human-oriented presentation, noise, untrusted sources, high dynamicity, and overwhelming size. Even so, it also presents characteristics that can be interesting for knowledge acquisition: due to its huge size and heterogeneity, the Web has been assumed to approximate the real distribution of information in humankind.
    The present work introduces a novel approach to ontology learning, with new methods for knowledge acquisition from the Web. The adaptation of several well-known learning techniques to the Web corpus, and the exploitation of particular characteristics of the Web environment to compose an automatic, unsupervised, and domain-independent approach, distinguish the present proposal from previous works. With respect to the ontology building process, the following methods have been developed: i) extraction and selection of domain-related terms, organising them in a taxonomical way; ii) discovery and labelling of non-taxonomical relationships between concepts; iii) additional methods for improving the final structure, including the detection of named entities, class features, multiple inheritance, and also a certain degree of semantic disambiguation. The full learning methodology has been implemented in a distributed, agent-based fashion, providing a scalable solution. It has been evaluated on several well-distinguished domains of knowledge, obtaining good-quality results. Finally, several direct applications have been developed, including automatic structuring of digital libraries and Web resources, and ontology-based Web information retrieval.
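    A classic building block of step i) above is lexico-syntactic pattern matching over Web text, e.g. Hearst-style "X such as Y" patterns that yield hypernym/hyponym candidates. The pattern set and example sentence below are illustrative only, not the thesis's actual extraction machinery:

```python
# Sketch of hyponym-candidate extraction with one Hearst pattern ("X such as
# Y, Z"); the pattern and the example sentence are illustrative assumptions.
import re

PATTERN = re.compile(r"(\w+)\s+such as\s+(\w+)(?:,\s*(\w+))*", re.IGNORECASE)

def hearst_pairs(text):
    """Return (hypernym, hyponym) candidate pairs found in the text."""
    pairs = []
    for match in PATTERN.finditer(text):
        hypernym = match.group(1).lower()
        for hyponym in match.groups()[1:]:
            if hyponym:
                pairs.append((hypernym, hyponym.lower()))
    return pairs

print(hearst_pairs("Fruits such as apples, oranges grow in warm climates."))
# → [('fruits', 'apples'), ('fruits', 'oranges')]
```

    At Web scale, such candidate pairs are noisy, which is why approaches like this one combine pattern evidence with statistical filtering before placing terms in the taxonomy.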

    Forking Uncertainties: Reliable Prediction and Model Predictive Control With Sequence Models via Conformal Risk Control

    In many real-world problems, predictions are leveraged to monitor and control cyber-physical systems, demanding guarantees on the satisfaction of reliability and safety requirements. However, predictions are inherently uncertain, and managing prediction uncertainty presents significant challenges in environments characterized by complex dynamics and forking trajectories. In this work, we assume access to a pre-designed probabilistic implicit or explicit sequence model, which may have been obtained using model-based or model-free methods. We introduce probabilistic time series-conformal risk prediction (PTS-CRC), a novel post-hoc calibration procedure that operates on the predictions produced by any pre-designed probabilistic forecaster to yield reliable error bars. In contrast to the existing art, PTS-CRC produces predictive sets based on an ensemble of multiple prototype trajectories sampled from the sequence model, supporting the efficient representation of forking uncertainties. Furthermore, unlike the state of the art, PTS-CRC can satisfy reliability definitions beyond coverage. This property is leveraged to devise a novel model predictive control (MPC) framework that addresses open-loop and closed-loop control problems under general average constraints on the quality or safety of the control policy. We experimentally validate the performance of PTS-CRC prediction and control by studying a number of use cases in the context of wireless networking. Across all the considered tasks, PTS-CRC predictors are shown to provide more informative predictive sets, as well as safe control policies with larger returns.
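    The post-hoc calibration idea that conformal risk control generalises can be seen in its simplest split-conformal form: choose a threshold from held-out nonconformity scores so that the empirical miscoverage stays below a target level alpha. The scores below are synthetic stand-ins for forecaster residuals, not PTS-CRC's trajectory-ensemble construction:

```python
# Minimal split-conformal calibration sketch; risk-control methods such as
# PTS-CRC generalise this beyond coverage. Scores are synthetic stand-ins.
import math

def conformal_threshold(cal_scores, alpha):
    """Return the (1 - alpha)-quantile of calibration nonconformity scores,
    with the standard (n + 1) finite-sample correction."""
    n = len(cal_scores)
    rank = math.ceil((n + 1) * (1 - alpha))      # conservative rank
    rank = min(rank, n)
    return sorted(cal_scores)[rank - 1]

# Usage: scores would be |y - prediction| on held-out calibration data.
cal_scores = [0.1, 0.4, 0.2, 0.9, 0.3, 0.6, 0.5, 0.8, 0.7, 1.0]
q = conformal_threshold(cal_scores, alpha=0.2)
# A predictive interval for a new point is then: prediction ± q.
```

    PTS-CRC replaces this single-residual score with losses computed over sets built from multiple sampled prototype trajectories, which is what allows it to capture forking behaviour and risks beyond miscoverage.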