267 research outputs found

    Impact of language skills and system experience on medical information retrieval

    No full text

    El modelo cortical HTM y su aplicación al conocimiento lingüístico

    The problem addressed by this research is finding a neurocomputational model for representing and understanding lexical knowledge, using the HTM cortical algorithm, which models the mechanism by which information is processed in the human neocortex. Automatic natural language understanding requires machines to have a deep knowledge of natural language, which is currently far from being achieved. In general, computational models for Natural Language Processing (NLP), both for analysis and comprehension and for generation, use algorithms grounded in mathematical and linguistic models that try to emulate the way language has traditionally been processed, for example by extracting the implicit hierarchical structure of sentences or the inflections of words. These models are useful because they support concrete applications such as data extraction, text classification, and opinion analysis. However, despite their usefulness, machines do not really understand what they are doing with any of these models. The question addressed in this work is therefore whether it is actually possible to computationally model the human neocortical processes that govern the handling of the semantic information of the lexicon. This research question constitutes the first level towards understanding natural language processing at higher linguistic levels.
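As an illustration only, and not drawn from the thesis itself: HTM systems encode concepts as sparse distributed representations (SDRs), binary vectors with a small fraction of active bits, and semantic relatedness can be read off as the overlap of active bits. A minimal Python sketch with invented vectors:

```python
import numpy as np

def sdr_overlap(a: np.ndarray, b: np.ndarray) -> int:
    """Number of active bits shared by two binary SDRs."""
    return int(np.count_nonzero(a & b))

rng = np.random.default_rng(0)
n_bits, n_active = 2048, 40  # HTM-style sparsity of roughly 2%

def random_sdr() -> np.ndarray:
    v = np.zeros(n_bits, dtype=np.uint8)
    v[rng.choice(n_bits, n_active, replace=False)] = 1
    return v

cat = random_sdr()

# Build "dog" to share half of "cat"'s active bits, mimicking relatedness.
dog = cat.copy()
dropped = rng.choice(np.flatnonzero(cat), n_active // 2, replace=False)
dog[dropped] = 0
dog[rng.choice(np.flatnonzero(cat == 0), n_active // 2, replace=False)] = 1

car = random_sdr()  # an unrelated concept

print(sdr_overlap(cat, dog))  # high overlap: related concepts
print(sdr_overlap(cat, car))  # near-chance overlap: unrelated concepts
```

Because the representations are sparse, two unrelated random SDRs share almost no bits, so overlap is a usable relatedness signal.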

    Social Media Based Deep Auto-Encoder Model for Clinical Recommendation

    One of the most actively studied topics in modern medicine is the use of deep learning and patient clinical data to make medication and adverse drug reaction (ADR) recommendations. However, the clinical community has yet to build a model that hybridises the recommendation system. This research proposes SAeCR, a social-media-learning-based deep auto-encoder model for clinical recommendation: a hybrid model that combines a deep auto-encoder with top-n similar co-patient information in a joint optimisation function. Implicit clinical information is extracted using network representation learning. Three experiments were conducted on two real-world social network data sets to assess the efficacy of the SAeCR model. The experiments demonstrate that the proposed model outperforms the other classification methods on a larger and sparser data set. In addition, social network data can help doctors determine the nature of a patient's relationship with a co-patient. The SAeCR model is more effective because it incorporates insights from network representation learning and social theory.
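As a hedged sketch of one ingredient of such a model, the top-n similar co-patient step can be illustrated with plain cosine similarity over a hypothetical patient-feature matrix (the data below is invented, and the SAeCR joint optimisation itself is not reproduced):

```python
import numpy as np

def top_n_similar(matrix: np.ndarray, idx: int, n: int) -> list:
    """Indices of the n rows most cosine-similar to row `idx`."""
    norms = np.linalg.norm(matrix, axis=1)
    sims = matrix @ matrix[idx] / (norms * norms[idx] + 1e-12)
    sims[idx] = -np.inf  # exclude the patient themself
    return list(np.argsort(sims)[::-1][:n])

# Hypothetical patient-by-symptom interaction matrix (rows = patients).
patients = np.array([
    [1, 1, 0, 0],   # patient 0
    [1, 1, 1, 0],   # patient 1: closest to patient 0
    [0, 0, 1, 1],   # patient 2
    [0, 1, 1, 1],   # patient 3
], dtype=float)

print(top_n_similar(patients, 0, 2))  # co-patients most similar to patient 0
```

In the paper's setting, these neighbours would be supplied to the auto-encoder as side information rather than used directly.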

    Context, Relevance, and Labor

    Since information science concerns the transmission of records, it concerns context. The transmission of documents ensures their arrival in new contexts. Documents and their copies are spread across times and places. The amount of labor required to discover and retrieve relevant documents is likewise shaped by context. Thus, any serious consideration of communication and of information technologies quickly leads to a concern with context, relevance, and labor. Information scientists have developed many theories of context, relevance, and labor, but not a framework for organizing them and describing their relationships with one another. We propose that the words context and relevance can be used to articulate a useful framework for considering the diversity of approaches to context and relevance in information science, as well as their relations with each other and with labor.

    Evaluation of taxonomic and neural embedding methods for calculating semantic similarity

    Modelling semantic similarity plays a fundamental role in lexical semantic applications. A natural way of calculating semantic similarity is to access handcrafted semantic networks, but similarity can also be predicted in a distributional vector space. Similarity calculation remains a challenging task, even with the latest breakthroughs in deep neural language models. We first examined popular methodologies for measuring taxonomic similarity, including edge-counting, which relies solely on semantic relations in a taxonomy, as well as more complex methods that estimate concept specificity. We further examined three weighting factors in modelling taxonomic similarity. To study the distinct mechanisms behind taxonomic and distributional similarity measures, we ran head-to-head comparisons of each measure against human similarity judgements from the perspectives of word frequency, polysemy degree, and similarity intensity. Our findings suggest that, without fine-tuning the uniform distance, taxonomic similarity measures can rely on the shortest path length as the prime factor for predicting semantic similarity; that, in contrast to distributional semantics, edge-counting is free from sense-distribution bias and can measure word similarity both literally and metaphorically; and that the synergy of retrofitting neural embeddings with concept relations in similarity prediction may indicate a new trend in leveraging knowledge bases for transfer learning. A large gap still exists in computing semantic similarity across different ranges of word frequency, polysemy degree, and similarity intensity.
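The edge-counting idea mentioned above can be sketched on a toy taxonomy: similarity decreases with the shortest path length between two concepts. The taxonomy and the 1/(1 + distance) scoring below are illustrative assumptions, not the paper's exact measures:

```python
# Toy is-a taxonomy (child -> parent), purely illustrative.
PARENT = {
    "cat": "feline", "feline": "mammal", "dog": "canine",
    "canine": "mammal", "mammal": "animal", "car": "vehicle",
    "vehicle": "artifact", "animal": "entity", "artifact": "entity",
}

def path_length(a: str, b: str) -> int:
    """Shortest number of is-a edges between two nodes in the tree."""
    def ancestors(x):
        chain, d = {x: 0}, 0
        while x in PARENT:
            x, d = PARENT[x], d + 1
            chain[x] = d
        return chain
    ca, cb = ancestors(a), ancestors(b)
    # Meet at the cheapest common ancestor.
    return min(ca[n] + cb[n] for n in ca if n in cb)

def path_similarity(a: str, b: str) -> float:
    """Classic edge-counting: similarity = 1 / (1 + shortest path)."""
    return 1.0 / (1.0 + path_length(a, b))

print(path_similarity("cat", "dog"))  # taxonomic neighbours: 0.2
print(path_similarity("cat", "car"))  # distant concepts: 0.125
```

The concept-specificity methods the abstract contrasts with this would additionally weight edges by node depth or corpus-derived information content.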

    Synonym Detection Using Syntactic Dependency And Neural Embeddings

    Recent advances in the Vector Space Model have significantly improved some NLP applications, such as neural machine translation and natural language generation. Although word co-occurrences in context have been widely used in counting- and prediction-based distributional models, the role of syntactic dependencies in deriving distributional semantics has not yet been thoroughly investigated. By comparing various Vector Space Models on detecting synonyms in TOEFL, we systematically study the salience of syntactic dependencies in accounting for distributional similarity. We separate syntactic dependencies into different groups according to their grammatical roles and then use context counting to construct their corresponding raw and SVD-compressed matrices. Moreover, using the same training hyperparameters and corpora, we include typical neural embeddings in the evaluation. We further study the effectiveness of injecting human-compiled semantic knowledge into neural embeddings for computing distributional similarity. Our results show that syntactically conditioned contexts can capture lexical semantics better than unconditioned ones, and that retrofitting neural embeddings with semantic knowledge can significantly improve synonym detection.
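A minimal sketch of the context-counting pipeline described above, using an invented word-by-context count matrix: raw co-occurrence counts are compressed with truncated SVD and compared by cosine similarity:

```python
import numpy as np

# Toy word-by-context co-occurrence counts (rows: target words).
# Columns stand for syntactic contexts, e.g. (amod, house) (hypothetical).
words = ["big", "large", "red"]
counts = np.array([
    [4.0, 3.0, 0.0],   # big
    [5.0, 2.0, 0.0],   # large: distributionally close to "big"
    [0.0, 0.0, 6.0],   # red:  occurs in disjoint contexts
])

# Truncated SVD compresses the raw counts into dense vectors.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
embeddings = U[:, :2] * S[:2]  # keep the top-2 latent dimensions

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

i = {w: k for k, w in enumerate(words)}
print(cosine(embeddings[i["big"]], embeddings[i["large"]]))  # near 1
print(cosine(embeddings[i["big"]], embeddings[i["red"]]))    # near 0
```

The same comparison works on the raw matrix; the SVD step mainly smooths sparse counts, which is why the abstract evaluates both raw and SVD-compressed variants.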

    Integrative Levels of Knowing

    This dissertation is concerned with a systematic organization of the epistemological dimension of human knowledge in terms of viewpoints and methods. In particular, it explores to what extent the well-known organizing principle of integrative levels, which presents a developmental hierarchy of complexity and integration, can be applied to a basic classification of viewpoints or epistemic outlooks. The central thesis pursued in this investigation is that an adequate analysis of such epistemic contexts requires tools that allow one to compare and evaluate divergent or even conflicting frames of reference according to context-transcending standards and criteria. This task demands a theoretical and methodological foundation that avoids the limitations of radical contextualism and its inherent threat of a fragmentation of knowledge due to the alleged incommensurability of the underlying frames of reference. Based on Jürgen Habermas's Theory of Communicative Action and his methodology of hermeneutic reconstructionism, it is argued that epistemic pluralism does not necessarily imply epistemic relativism, and that a systematic organization of the multiplicity of perspectives can benefit from already existing models of cognitive development as reconstructed in research fields like psychology, the social sciences, and the humanities. The proposed cognitive-developmental approach aims to contribute to a multi-perspective knowledge organization by offering both analytical tools for cross-cultural comparisons of knowledge organization systems (e.g., Seven Epitomes and Dewey Decimal Classification) and organizing principles for context representation that help to improve the expressiveness of existing documentary languages (e.g., Integrative Levels Classification). Additionally, the appendix includes an extensive compilation of conceptions and models of Integrative Levels of Knowing from a broad multidisciplinary field.

    A Nurse Anaesthetist model for practice in South Africa

    Abstract: For over 150 years, nurses have been providing anaesthesia services independently. Today, with varying degrees of autonomy, nurse anaesthetists provide services in many countries around the globe, on continents including Europe, North America, South America, Asia, and Africa. An estimated 5 billion people worldwide lack access to safe and affordable surgical and anaesthesia care, many of whom live in low- and middle-income countries. One-third of the global burden of disease could be addressed by surgical care, which encompasses surgery, obstetrics, trauma, and anaesthesia. Compounding this burden of disease, there is a severe shortage of anaesthesia providers in low- and middle-income countries such as South Africa. The World Health Assembly has approved a resolution that recognises the importance of emergency and essential surgical care and anaesthesia as part of universal health coverage. There is currently no professional registration available in South Africa for nurse anaesthetists by which they may prescribe and administer anaesthesia and chemical pain management services to patients. The question can be asked: what can be done to ensure that safe and affordable anaesthesia services can be provided in South Africa? In response, this study's primary purpose is to propose a model, and guidelines for its implementation, for nurse anaesthetist practice that is contextually appropriate for South Africa. To achieve this, the study followed a theory-generating, qualitative, exploratory, descriptive, and contextual design. Chinn and Kramer's (2015:165-183; 2018:169-186) guidelines on theory development were used as its foundation. The study was conducted in four steps:...D.Cur. (Professional Nursing)

    “Syntax seems not to be all about algorithms” vs. “It seems that Syntax is not all about algorithms”: a cognitivist and functionalist perspective on why English speakers choose one profiling over the other: from the syntactic object to the real context of use

    In this research we analyse the contrasting English pair a) “Syntax seems not to be all about algorithms” and b) “It seems that Syntax is not all about algorithms”. The analysis draws, on the one hand, on Cognitive Grammar, more precisely the Cognitive-Prototypical Approach, and, on the other, on grammatical theories of a functionalist orientation. Facultad de Humanidades y Ciencias de la Educación

    Resolving XML Semantic Ambiguity

    XML semantic-aware processing has become a motivating and important challenge in Web data management, data processing, and information retrieval. While XML data is semi-structured, it remains prone to lexical ambiguity and thus requires dedicated semantic analysis and sense disambiguation processes to assign well-defined meaning to XML elements and attributes. This becomes crucial in an array of applications ranging over semantic-aware query rewriting, semantic document clustering and classification, schema matching, as well as blog analysis and event detection in social networks and tweets. Most existing approaches in this context: i) ignore the problem of identifying ambiguous XML nodes, ii) only partially consider their structural relations/context, iii) use syntactic information in processing XML data regardless of the semantics involved, and iv) are static in adopting fixed disambiguation constraints, thus limiting user involvement. In this paper, we provide a new XML Semantic Disambiguation Framework, titled XSDF, designed to address each of the above motivations, taking as input an XML document and a general-purpose semantic network, and producing as output a semantically augmented XML tree made of unambiguous semantic concepts. Experiments demonstrate the effectiveness of our approach in comparison with alternative methods. General terms: algorithms, measurement, performance, design, experimentation. Keywords: XML semantic-aware processing, ambiguity degree, sphere neighborhood, XML context vector, semantic network, semantic disambiguation.
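As a loose illustration of the structural-context idea (an element's meaning constrained by its surrounding tags), and not the actual XSDF sphere-neighborhood algorithm, one can collect a bag-of-tags context for each element with the standard library:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical document where "mouse" is lexically ambiguous.
DOC = """<library>
  <book><title>Mouse</title><author>White</author></book>
  <hardware><mouse>optical</mouse><keyboard>qwerty</keyboard></hardware>
</library>"""

def context_vectors(xml_text: str) -> dict:
    """Map each element tag to a bag of surrounding tags (parent + children).

    A crude stand-in for a structural context vector: the neighborhood a
    disambiguator would match against senses in a semantic network.
    """
    root = ET.fromstring(xml_text)
    parent = {child: p for p in root.iter() for child in p}
    ctx = {}
    for el in root.iter():
        bag = Counter(child.tag for child in el)     # child tags
        if el in parent:
            bag[parent[el].tag] += 1                 # parent tag
        ctx.setdefault(el.tag, Counter()).update(bag)
    return ctx

ctx = context_vectors(DOC)
print(ctx["mouse"])  # surrounded by 'hardware', so the device sense fits
```

A real disambiguator would then score each candidate sense of "mouse" against this bag via the semantic network, rather than stopping at tag counts.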