10 research outputs found

    Conceptual Graphs Based Information Retrieval in HealthAgents

    This paper focuses on the problem of representing, in a meaningful way, the knowledge involved in the HealthAgents project. Our work is motivated by the complexity of representing Electronic Health-care Records in a consistent manner. We present HADOM (HealthAgents Domain Ontology), which conceptualises the required HealthAgents information, and propose describing the sources' knowledge by means of Conceptual Graphs (CGs). This allows building upon the existing ontology, permitting modularity and flexibility. The novelty of our approach lies in the ease with which CGs can be placed above other formalisms and in their potential for optimised querying and retrieval.
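
    As an illustration of the kind of mechanism the abstract alludes to, the sketch below checks whether a query Conceptual Graph projects into a data graph under a toy concept-type hierarchy. The hierarchy, relation names and graphs are invented for the example; none of this is taken from HADOM.

```python
# Minimal sketch of Conceptual Graph (CG) projection over a toy type hierarchy.
# All types, relations and graphs below are illustrative; they are not HADOM content.

# Concept-type hierarchy: child -> parent (single inheritance for simplicity)
HIERARCHY = {
    "BrainTumour": "Tumour",
    "Tumour": "Finding",
    "MRSpectrum": "Examination",
}

def is_subtype(child, ancestor):
    """True if `child` equals or specialises `ancestor` in the hierarchy."""
    while child is not None:
        if child == ancestor:
            return True
        child = HIERARCHY.get(child)
    return False

# A CG here is a list of relation triples: (concept_1, relation, concept_2),
# where each concept is a (type, referent) pair and referent "*" is generic.
QUERY = [(("Finding", "*"), "revealedBy", ("Examination", "*"))]
DATA = [(("BrainTumour", "case42"), "revealedBy", ("MRSpectrum", "scan7"))]

def projects(query, data):
    """Every query triple must map to a data triple with the same relation
    whose concept types specialise the corresponding query types."""
    for (qt1, _), qrel, (qt2, _) in query:
        if not any(
            qrel == drel and is_subtype(dt1, qt1) and is_subtype(dt2, qt2)
            for (dt1, _), drel, (dt2, _) in data
        ):
            return False
    return True

print(projects(QUERY, DATA))  # True: the data graph answers the query
```

    Projection of this kind is what makes CG-based querying amenable to optimisation: answering a query reduces to graph matching constrained by the ontology's type hierarchy.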

    Semantic metrics

    In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse and evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a variety of research disciplines, and enrich them with semantics based on standard Description Logic constructs. We argue that concept-based metrics can be aggregated to produce numeric distances at the ontology level, and we discuss potential areas of application for our ideas.
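
    The sketch below illustrates the general idea of a concept-based metric aggregated into an ontology-level distance, using a Jaccard distance over ancestor sets and an average best-match aggregation. It is a generic example under an assumed toy hierarchy, not the specific semantic metrics proposed in the paper.

```python
# Illustrative concept distance (Jaccard over ancestor sets) aggregated to an
# ontology-level distance. Generic example only, not the paper's metrics.

def ancestors(concept, is_a):
    """All ancestors of a concept (including itself) in an is-a hierarchy."""
    result, current = {concept}, concept
    while current in is_a:
        current = is_a[current]
        result.add(current)
    return result

def concept_distance(c1, c2, is_a):
    """Jaccard distance between the ancestor sets of two concepts."""
    a1, a2 = ancestors(c1, is_a), ancestors(c2, is_a)
    return 1.0 - len(a1 & a2) / len(a1 | a2)

def ontology_distance(concepts1, concepts2, is_a):
    """Average best-match concept distance, taken in both directions."""
    def directed(src, dst):
        return sum(min(concept_distance(c, d, is_a) for d in dst) for c in src) / len(src)
    return (directed(concepts1, concepts2) + directed(concepts2, concepts1)) / 2

# Toy is-a hierarchy (child -> parent) shared by two small concept sets
IS_A = {"Glioma": "Tumour", "Meningioma": "Tumour", "Tumour": "Disease", "MRI": "Examination"}
print(concept_distance("Glioma", "Meningioma", IS_A))                      # 0.5
print(ontology_distance({"Glioma", "MRI"}, {"Meningioma", "MRI"}, IS_A))   # 0.25
```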

    Solución para el desarrollo de Sistemas de Ayuda a la Decisión para diagnóstico clínico

    Current Clinical Decision Support Systems (CDSS) are judged by their flexibility and adaptability. This work describes the development of a distributed, agent-based CDSS, a classification engine based on pattern recognition, and the generic DSS Curiam. Together they offer a reusable solution for the development of new CDSS. Sáez Silvestre, C. (2009). Solución para el desarrollo de Sistemas de Ayuda a la Decisión para diagnóstico clínico. http://hdl.handle.net/10251/12028

    Agent-based management of clinical guidelines

    Clinical guidelines (CGs) contain a set of directions or principles that assist the health care practitioner with patient care decisions about appropriate diagnostic, therapeutic, or other clinical procedures for specific clinical circumstances. It is widely accepted that the adoption of guideline-execution engines in daily practice would improve patient care by standardising care procedures. Guideline-based systems can constitute part of a knowledge-based decision support system that delivers the right knowledge to the right people, in the right form, at the right time. The automation of the guideline execution process is a basic step towards its widespread use in medical centres. To achieve this general goal, different topics must be tackled, such as the acquisition of clinical guidelines, their formal verification, and finally their execution. This dissertation focuses on the execution of CGs and proposes the implementation of an agent-based platform in which the actors involved in health care coordinate their activities to perform the complex task of guideline enactment.
    The management of medical and organizational knowledge, and the formal representation of CGs, are two knowledge-related topics addressed in this dissertation and tackled through the design of several application ontologies. The separation of the knowledge from its use is fully intentional and allows the CG execution engine to be easily customised to different medical centres with varying personnel and resources. In parallel with the execution of CGs, the system handles the citizen's preferences and uses them to implement patient-centred services. With respect to this issue, the following tasks have been carried out: a) definition of the user's criteria, b) use of the patient's profile to rank the alternatives presented to them, and c) implementation of an unsupervised learning method that dynamically and automatically adapts the user's profile. Finally, several ideas from this dissertation are being directly applied in two ongoing funded research projects, covering the agent-based execution of CGs and the ontological management of medical and organizational knowledge.
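
    The preference-handling part described above (patient criteria, profile-based ranking of alternatives, unsupervised adaptation of the profile) can be sketched as follows. The criteria names, weights and the moving-average update rule are assumptions made for the example, not the dissertation's actual algorithm.

```python
# Sketch of preference-based ranking plus a simple profile update from the
# patient's choice. Criteria, weights and update rule are illustrative only.

CRITERIA = ["short_wait", "close_to_home", "low_cost"]

def rank(alternatives, profile):
    """Order alternatives by the weighted sum of their criterion scores."""
    return sorted(
        alternatives,
        key=lambda alt: sum(profile[c] * alt[c] for c in CRITERIA),
        reverse=True,
    )

def update_profile(profile, chosen, rate=0.2):
    """Move each weight toward the chosen alternative's scores (exponential
    moving average), then renormalise so the weights sum to 1."""
    raw = {c: (1 - rate) * profile[c] + rate * chosen[c] for c in CRITERIA}
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}

profile = {"short_wait": 0.5, "close_to_home": 0.3, "low_cost": 0.2}
options = [
    {"name": "Hospital A", "short_wait": 0.9, "close_to_home": 0.2, "low_cost": 0.4},
    {"name": "Clinic B",   "short_wait": 0.3, "close_to_home": 0.9, "low_cost": 0.7},
]
ranked = rank(options, profile)
profile = update_profile(profile, ranked[-1])  # the patient picked the lower-ranked option
print([o["name"] for o in ranked], profile)
```

    The moving-average update keeps the profile stable while still drifting toward the options the patient actually selects; any unsupervised rule with that property would fill the same role.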

    Análise colaborativa de grandes conjuntos de séries temporais

    The recent expansion of day-to-day metrification has led to the production of massive quantities of data, and in many cases these collected metrics are only useful for knowledge building when seen as a full sequence of data ordered by time, which constitutes a time series. To find and interpret meaningful behavioral patterns in time series, a multitude of analysis software tools have been developed. Many of the existing solutions use annotations to enable the curation of a knowledge base that is shared among a group of researchers over a network. However, these tools lack appropriate mechanisms to handle a high number of concurrent requests and to properly store massive data sets and ontologies, as well as suitable representations for annotated data that are visually interpretable by humans and explorable by automated systems. The goal of the work presented in this dissertation is to iterate on existing time series analysis software and build a platform for the collaborative analysis of massive time series data sets, leveraging state-of-the-art technologies for querying, storing and displaying time series and annotations. A theoretical, domain-agnostic model is proposed to enable the implementation of a distributed, extensible, secure and high-performance architecture that handles multiple annotation proposals simultaneously and avoids any data loss from overlapping contributions or unsanctioned changes. Analysts can share annotation projects with peers, restricting a set of collaborators to a smaller scope of analysis and to a limited catalog of annotation semantics. Annotations can express meaning not only over a segment of time, but also over a subset of the series that coexist in the same segment. A novel visual encoding for annotations is proposed, in which annotations are rendered as arcs traced only over the affected series' curves in order to reduce visual clutter. Moreover, the implementation of a full-stack prototype with a reactive web interface is described, directly following the proposed architectural and visualization model as applied to the HVAC domain. The performance of the prototype under different architectural approaches was benchmarked, and the interface was tested for usability. Overall, the work described in this dissertation contributes a more versatile, intuitive and scalable time series annotation platform that streamlines the knowledge-discovery workflow.
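
    The annotation model described above, where an annotation covers a time segment and only a subset of the series within it, can be sketched as a small data structure. The field names and the overlap rule are hypothetical, not those of the dissertation's prototype.

```python
# Minimal sketch of an annotation that applies to a time segment and to a
# subset of the series in that segment. Field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Annotation:
    start: datetime             # beginning of the annotated segment
    end: datetime               # end of the annotated segment
    series_ids: frozenset       # subset of series the annotation applies to
    label: str                  # semantic type drawn from a project's catalog
    author: str
    comment: str = ""

    def overlaps(self, other):
        """Two annotations conflict only if they share both time and series."""
        share_time = self.start < other.end and other.start < self.end
        share_series = bool(self.series_ids & other.series_ids)
        return share_time and share_series

a = Annotation(datetime(2023, 1, 1), datetime(2023, 1, 2),
               frozenset({"sensor-1", "sensor-2"}), "anomaly", "alice")
b = Annotation(datetime(2023, 1, 1, 12), datetime(2023, 1, 3),
               frozenset({"sensor-2"}), "maintenance", "bob")
print(a.overlaps(b))  # True: overlapping segment and a shared series
```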

    Análise de tipos de ontologias nas áreas de ciência da informação e ciência da computação

    Master's dissertation - Universidade Federal de Santa Catarina, Centro de Ciências da Educação, Programa de Pós-Graduação em Ciência da Informação, Florianópolis, 2014. The emergence of technologies that aim to complement the web, together with the problems that arise in the search for new, more efficient models of information retrieval, has made room for studies that exploit the benefits of the semantic organization of information and knowledge. Knowledge Organization Systems (KOS) allow the representation of a domain through the systematization of concepts and the semantic relations established between them. Among these conceptual systems are ontologies, used to represent the knowledge of a given domain. The goal of this research is to identify, through documentary research, the main characteristics of the types of ontologies. To that end, the methodological procedures employed the Content Analysis method of Laurence Bardin. The corpus for analysis was built from the Library and Information Science Abstracts (LISA) and Computer and Information Systems Abstracts databases. The analysis of the results identified a significant predominance of research on domain ontologies, which are used as tools for representing concepts and relations within the desired world view. In contrast, top-level ontologies define the most basic concepts, which are extensible to other actions and domains associated with their area of coverage. The application and task types allow a more specific level of representation, aligned with the modeling of particular environments.

    Interactive Multiagent Adaptation of Individual Classification Models for Decision Support

    An essential prerequisite for informed decision-making by intelligent agents is direct access to empirical knowledge for situation assessment. This contribution introduces an agent-oriented knowledge management framework for learning agents facing impediments in the self-contained acquisition of classification models. The framework enables the emergence of dynamic knowledge networks among benevolent agents forming a community of practice in open multiagent systems. Agents in an advisee role can pinpoint learning impediments in terms of critical training cases and engage in a goal-directed discourse with an advisor panel to overcome the identified issues. The advisors provide arguments supporting, and hence explaining, those critical cases. Using such input as additional background knowledge, advisees can adapt their models in iterative relearning organized as a search through model space. An extensive empirical evaluation in two real-world domains validates the presented approach.
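
    The relearning loop described above can be sketched schematically: the advisee detects critical training cases, requests advice, folds it back into its training data and retrains. The dataset, classifier and the representation of advisor arguments as extra labelled examples are placeholder assumptions, not the framework's actual protocol.

```python
# Schematic sketch of an advisee's iterative relearning loop. The data,
# classifier and "advice" representation are placeholders for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_pool, y_train, y_pool = train_test_split(X, y, train_size=60, random_state=0)
X_train, y_train = list(X_train), list(y_train)

def critical_cases(model, X, y):
    """Indices of training cases the current model classifies incorrectly."""
    return [i for i, (xi, yi) in enumerate(zip(X, y)) if model.predict([xi])[0] != yi]

def ask_advisors(n):
    """Stand-in for the advisor panel: n extra labelled examples as advice."""
    return X_pool[:n], y_pool[:n]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for _ in range(3):                                   # iterative relearning rounds
    critical = critical_cases(model, X_train, y_train)
    if not critical:
        break
    X_adv, y_adv = ask_advisors(len(critical))       # advice as background knowledge
    X_train += list(X_adv)
    y_train += list(y_adv)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("critical cases remaining:", len(critical_cases(model, X_train, y_train)))
```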

    Image Analysis for the Life Sciences - Computer-assisted Tumor Diagnostics and Digital Embryomics

    Current research in the life sciences involves the analysis of such a huge amount of image data that automation is required. This thesis presents several ways in which pattern recognition techniques may contribute to improved tumor diagnostics and to the elucidation of vertebrate embryonic development. Chapter 1 studies an approach for exploiting spatial context to improve the estimation of metabolite concentrations from magnetic resonance spectroscopy imaging (MRSI) data, with the aim of more robust tumor detection, and compares it against a novel alternative. Chapter 2 describes a software library for training, testing and validating classification algorithms that estimate tumor probability based on MRSI. It allows flexible adaptation to changed experimental conditions, classifier comparison and quality control without the need for expertise in pattern recognition. Chapter 3 studies several models for learning tumor classifiers that allow for the common unreliability of human segmentations. For the first time, models are used for this task that additionally employ the objective image information. Chapter 4 encompasses two contributions to an image analysis pipeline for automatically reconstructing zebrafish embryonic development from time-resolved microscopy: two approaches for nucleus segmentation are experimentally compared, and a procedure for tracking nuclei over time is presented and evaluated.
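
    Chapter 3's theme of coping with unreliable human segmentations can be illustrated with a generic soft-label scheme: several simulated raters' masks are fused into soft labels and per-sample weights before training a classifier. This is a common generic approach over invented data, not the specific models developed in the thesis.

```python
# Illustrative soft-label training from several unreliable "segmentations".
# Data, noise model and classifier are invented; not the thesis's models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_features, n_raters = 500, 8, 3

features = rng.normal(size=(n_voxels, n_features))        # e.g. MRSI-derived features
truth = (features[:, 0] > 0).astype(int)                  # hidden ground truth
# Each rater flips ~15% of the true labels, mimicking unreliable segmentations
raters = np.stack([np.where(rng.random(n_voxels) < 0.15, 1 - truth, truth)
                   for _ in range(n_raters)])

soft_labels = raters.mean(axis=0)             # fraction of raters voting "tumor"
weights = np.abs(soft_labels - 0.5) * 2       # trust unanimous voxels more
hard_labels = (soft_labels >= 0.5).astype(int)

clf = LogisticRegression(max_iter=1000).fit(features, hard_labels, sample_weight=weights)
print("agreement with hidden truth:", clf.score(features, truth))
```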

    Development of a nanobody-based amperometric immunocapturing assay for sensitive and specific detection of Toxocara canis excretory-secretory antigen

    Introduction: Human Toxocariasis (HT) is a zoonosis that, despite its wide distribution around the world, remains poorly diagnosed. The identification of specific IgG immunoglobulins against the Toxocara canis Excretory-Secretory antigen (TES), a mix of glycoproteins that the parasite releases during its migration to the target organs in infected patients, is currently the only laboratory tool to detect the disease. The main drawbacks of this test are its inability to distinguish past from active infections and its lack of specificity. These factors seriously hamper the diagnosis, follow-up and control of the disease. Aim: To develop an amperometric immunocapturing diagnostic assay based on single-domain immunoglobulins from camelids (nanobodies) for specific and sensitive detection of TES. Methods: After immunization of an alpaca (Vicugna pacos) with TES, RNA from peripheral blood lymphocytes was used as a template for cDNA amplification with oligo dT primers and library construction. Isolation and screening of TES-specific nanobodies were carried out by biopanning, and the resulting nanobodies were expressed in Escherichia coli. A two-epitope amperometric immunocapturing assay was designed using paramagnetic beads coated with streptavidin and bivalent nanobodies. Detection was carried out with nanobodies chemically coupled to horseradish peroxidase. The reaction was measured by amperometry, and the limit of detection (LOD) was compared to that of a conventional sandwich ELISA. Results: We obtained three nanobodies that specifically recognize TES with no cross-reactivity to antigens of Ascaris lumbricoides and A. suum. The LOD of the assay using PBST20 0.05% as diluent was 100 pg/ml, 10 times more sensitive than the sandwich ELISA. Conclusion: Sensitive and specific detection of TES for discrimination of active and past infections is one of the most difficult challenges of T. canis diagnosis. The main advantage of our system is the use of two different nanobodies that specifically recognize two different epitopes of TES, with a highly sensitive and straightforward readout. Considering that the amounts of TES available for detection in clinical samples are in the range of picograms or at most a few nanograms, the LOD found in our experiments suggests that the test is potentially useful for the detection of clinically relevant cases of HT.