    Controlled Language and Baby Turing Test for General Conversational Intelligence

    General conversational intelligence appears to be an important part of artificial general intelligence. Accordingly, it requires accessible measures of intelligence quality and controllable ways of achieving it, ideally with the linguistic and semantic models represented in an interpretable way. Our work suggests using a Baby Turing Test approach to extend the classic Turing Test for conversational intelligence, together with a controlled language based on a semantic graph representation that is extensible to arbitrary subject domains. We describe how the two can be used together to build a general-purpose conversational system such as an intelligent assistant for online media and social network data processing. Comment: 10 pages, 4 figures
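
    To make the idea concrete, here is a minimal Python sketch of answering controlled-language questions against a semantic graph of (subject, predicate, object) triples; the toy triples and the two question patterns are illustrative assumptions, not the representation or grammar proposed in the paper.

        # Controlled-language question answering over a semantic graph.
        # The triple store and the question patterns are invented examples.
        TRIPLES = {
            ("cat", "is-a", "animal"),
            ("cat", "likes", "milk"),
            ("dog", "is-a", "animal"),
        }

        def ask(question: str) -> str:
            """Handle two controlled patterns: 'is X a Y' and 'what does X like'."""
            words = question.lower().strip("? ").split()
            if len(words) == 4 and words[0] == "is" and words[2] == "a":
                return "yes" if (words[1], "is-a", words[3]) in TRIPLES else "unknown"
            if len(words) == 4 and words[:2] == ["what", "does"] and words[3] == "like":
                objs = [o for s, p, o in TRIPLES if s == words[2] and p == "likes"]
                return ", ".join(objs) if objs else "unknown"
            return "cannot parse (outside the controlled language)"

        print(ask("Is cat a animal?"))     # -> yes
        print(ask("What does cat like?"))  # -> milk

    Extending such a system to a new subject domain amounts to adding triples and patterns, which mirrors the extensibility the abstract describes.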

    Health Analytics: a systematic review of approaches to detect phenotype cohorts using electronic health records

    The paper presents a systematic review of state-of-the-art approaches to identifying patient cohorts using electronic health records. It gives a comprehensive overview of the most commonly detected phenotypes and their underlying data sets. Special attention is given to the preprocessing of input data and the different modeling approaches. The literature review confirms natural language processing to be a promising approach for electronic phenotyping. However, accessibility and the lack of natural language processing standards for medical texts remain a challenge. Future research should develop such standards and further investigate which machine learning approaches are best suited to which type of medical data.
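
    As a toy illustration of rule-based electronic phenotyping, the sketch below flags a patient for a cohort when either a structured diagnosis code or a keyword in the free-text note matches; the code list, keyword list, and record layout are invented for the example, not drawn from the review.

        # Toy phenotyping rule: structured codes OR note keywords -> cohort.
        # Codes, terms, and record structure are illustrative assumptions.
        DIABETES_ICD10 = {"E11.9", "E11.65"}
        DIABETES_TERMS = {"type 2 diabetes", "t2dm", "hyperglycemia"}

        def in_cohort(record: dict) -> bool:
            """Return True if codes or note text indicate the phenotype."""
            if DIABETES_ICD10 & set(record.get("icd10", [])):
                return True
            note = record.get("note", "").lower()
            return any(term in note for term in DIABETES_TERMS)

        patients = [
            {"id": 1, "icd10": ["E11.9"], "note": "stable on metformin"},
            {"id": 2, "icd10": [], "note": "history of T2DM, diet controlled"},
            {"id": 3, "icd10": ["I10"], "note": "essential hypertension"},
        ]
        print([p["id"] for p in patients if in_cohort(p)])  # -> [1, 2]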

    Recent Advances in Zero-shot Recognition

    With the recent renaissance of deep convolutional neural networks, encouraging breakthroughs have been achieved on supervised recognition tasks, where each class has sufficient and fully annotated training data. However, scaling recognition to a large number of classes with few or no training samples per class remains an unsolved problem. One approach to scaling up recognition is to develop models capable of recognizing unseen categories without any training instances, i.e., zero-shot recognition/learning. This article provides a comprehensive review of existing zero-shot recognition techniques, covering aspects ranging from model representations to datasets and evaluation settings. We also overview related recognition tasks, including one-shot and open set recognition, which can be seen as natural extensions of zero-shot recognition when a limited number of class samples becomes available or when zero-shot recognition is deployed in a real-world setting. Importantly, we highlight the limitations of existing approaches and point out future research directions in this new research area. Comment: accepted by IEEE Signal Processing Magazine
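
    A minimal sketch of one common zero-shot setup, attribute-based recognition: a visual feature is projected into a semantic (attribute) space and matched to the nearest unseen-class prototype. In practice the projection W would be learned on seen classes; here it is random, and the prototypes and dimensions are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Attribute prototypes for classes never seen during training.
        unseen_prototypes = {
            "zebra": np.array([1.0, 1.0, 0.0]),  # striped, four-legged, not flying
            "eagle": np.array([0.0, 0.0, 1.0]),  # not striped, two-legged, flying
        }

        d_visual, d_semantic = 8, 3
        W = rng.normal(size=(d_semantic, d_visual))  # stand-in for a learned mapping

        def classify(feature: np.ndarray) -> str:
            """Map a visual feature to attribute space; return nearest prototype."""
            a = W @ feature
            a = a / (np.linalg.norm(a) + 1e-9)
            scores = {c: float(a @ (p / np.linalg.norm(p)))
                      for c, p in unseen_prototypes.items()}
            return max(scores, key=scores.get)

        x = rng.normal(size=d_visual)  # a hypothetical image feature
        print(classify(x))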

    Reverse enGENEering of regulatory networks from Big Data: a guide for a biologist

    Omics technologies enable unbiased investigation of biological systems through massively parallel sequence acquisition or molecular measurements, bringing the life sciences into the era of Big Data. A central challenge posed by such omics datasets is how to transform this data into biological knowledge. For example, how to use this data to answer questions such as: which functional pathways are involved in cell differentiation? Which genes should we target to stop cancer? Network analysis is a powerful and general approach to this problem, consisting of two fundamental stages: network reconstruction and network interrogation. Herein, we provide an overview of network analysis, including a step-by-step guide on how to perform and use this approach to investigate a biological question. In this guide, we also include the software packages that we and others employ for each of the steps of a network analysis workflow.
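
    As a bare-bones illustration of the network reconstruction stage, the sketch below draws an edge between genes whose expression profiles are strongly correlated; the toy expression matrix and the 0.8 threshold are assumptions, and a real workflow would rely on the dedicated packages the guide covers.

        import numpy as np

        rng = np.random.default_rng(1)
        genes = ["geneA", "geneB", "geneC", "geneD"]

        # Rows = genes, columns = samples; geneB is made to track geneA.
        expr = rng.normal(size=(4, 30))
        expr[1] = expr[0] + 0.1 * rng.normal(size=30)

        corr = np.corrcoef(expr)  # gene-by-gene correlation matrix
        threshold = 0.8
        edges = [(genes[i], genes[j], round(float(corr[i, j]), 2))
                 for i in range(len(genes)) for j in range(i + 1, len(genes))
                 if abs(corr[i, j]) >= threshold]
        print(edges)  # expect a geneA-geneB edge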

    Verification, Validation and Integrity of Distributed and Interchanged Rule Based Policies and Contracts in the Semantic Web

    Rule-based policy and contract systems have rarely been studied in terms of their software engineering properties. This is a serious omission, because in rule-based policy or contract representation languages rules are used as a declarative programming language to formalize real-world decision logic and to build IS production systems upon. This paper adopts an SE methodology from extreme programming, namely test-driven development, and discusses how it can be adapted to the verification, validation and integrity testing (V&V&I) of policy and contract specifications. Since the test-driven approach focuses on the behavioral aspects and the drawn conclusions rather than on the structure of the rule base and the causes of faults, it is independent of the complexity of the rule language and the system under test, and thus much easier to use and understand for the rule engineer and the user. Comment: A. Paschke: Verification, Validation, Integrity of Rule Based Policies and Contracts in the Semantic Web, 2nd International Semantic Web Policy Workshop (SWPW'06), Nov. 5-9, 2006, Athens, GA, USA
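
    A minimal sketch of the test-driven idea: the tests below assert the conclusions a rule base should draw for given facts, independently of how the rules are structured internally. The discount policy and its rules are an invented example, not the contract language from the paper.

        # Rules as (condition over facts, conclusion) pairs; invented policy.
        RULES = [
            (lambda f: f["customer_years"] >= 5, ("discount", 10)),
            (lambda f: f["order_total"] > 1000, ("discount", 5)),
        ]

        def conclude(facts: dict) -> dict:
            """Fire all matching rules; keep the largest discount."""
            best = 0
            for cond, (_, value) in RULES:
                if cond(facts):
                    best = max(best, value)
            return {"discount": best}

        # Behavioral test cases, written against expected conclusions only.
        assert conclude({"customer_years": 6, "order_total": 100}) == {"discount": 10}
        assert conclude({"customer_years": 1, "order_total": 2000}) == {"discount": 5}
        assert conclude({"customer_years": 1, "order_total": 100}) == {"discount": 0}
        print("all policy tests passed")

    Because the tests constrain only inputs and conclusions, the rule base could be rewritten in a different rule language without touching the test suite, which is the independence property the abstract emphasizes.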

    A User-based Visual Analytics Workflow for Exploratory Model Analysis

    Many visual analytics systems allow users to interact with machine learning models towards the goals of data exploration and insight generation on a given dataset. However, in some situations, insights may be less important than the production of an accurate predictive model for future use. In that case, users are more interested in generating diverse and robust predictive models, verifying their performance on holdout data, and selecting the most suitable model for their usage scenario. In this paper, we consider the concept of Exploratory Model Analysis (EMA), which is defined as the process of discovering and selecting relevant models that can be used to make predictions on a data source. We delineate the differences between EMA and the well-known term exploratory data analysis in terms of the desired outcome of the analytic process: insights into the data or a set of deployable models. The contributions of this work are a visual analytics system workflow for EMA, a user study, and two use cases validating the effectiveness of the workflow. We found that our system workflow enabled users to generate complex models, to assess them for various qualities, and to select the most relevant model for their task.
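
    A sketch of the core EMA loop in scikit-learn terms: generate a few diverse candidate models, verify each on holdout data, and select the best. The candidate set and the synthetic dataset are placeholders for illustration, not the model space or data sources of the described system.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=500, random_state=0)
        X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

        # Diverse candidates, each verified on the same holdout split.
        candidates = {
            "logreg": LogisticRegression(max_iter=1000),
            "forest": RandomForestClassifier(n_estimators=100, random_state=0),
        }
        scores = {name: m.fit(X_tr, y_tr).score(X_ho, y_ho)
                  for name, m in candidates.items()}
        best = max(scores, key=scores.get)
        print(scores, "-> selected:", best)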

    AI in software engineering : current developments and future prospects

    Artificial intelligence techniques such as knowledge-based systems, neural networks, fuzzy logic and data mining have been advocated by many researchers and developers as a way to improve many software development activities. As with many other disciplines, software development quality improves with the developers' experience, knowledge and expertise, and with lessons from past projects. Software also evolves as it operates in changing and volatile environments. Hence, there is significant potential for using AI to improve all phases of the software development life cycle. This chapter provides a survey of the use of AI for software engineering that covers the main software development phases and AI methods such as natural language processing techniques, neural networks, genetic algorithms, fuzzy logic, ant colony optimization, and planning methods.
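
    As one concrete example of a surveyed technique, below is a mutation-only evolutionary sketch (crossover omitted for brevity) that selects a small test suite with high branch coverage; the coverage data and fitness weighting are invented for illustration, not taken from the chapter.

        import random

        random.seed(0)
        N_TESTS = 10
        # Hypothetical coverage: which branches (0-7) each test exercises.
        COVERAGE = [set(random.sample(range(8), k=3)) for _ in range(N_TESTS)]

        def fitness(bits):
            """Reward branch coverage, penalize suite size."""
            covered = set()
            for i, b in enumerate(bits):
                if b:
                    covered |= COVERAGE[i]
            return len(covered) - 0.3 * sum(bits)

        def mutate(bits):
            child = list(bits)
            child[random.randrange(N_TESTS)] ^= 1  # flip one test in/out
            return child

        pop = [[random.randint(0, 1) for _ in range(N_TESTS)] for _ in range(20)]
        for _ in range(50):  # evolve for 50 generations
            pop.sort(key=fitness, reverse=True)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

        best = max(pop, key=fitness)
        print("selected tests:", [i for i, b in enumerate(best) if b])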

    Ontology-Based Users & Requests Clustering in Customer Service Management System

    Customer Service Management is one of the major business activities undertaken to better serve company customers through the introduction of reliable processes and procedures. Today such activities are implemented through e-services that directly involve customers in business processes. Traditionally, Customer Service Management involves the application of data mining techniques to discover usage patterns in the company's knowledge memory. Grouping customers/requests into clusters is thus one of the major techniques for improving the level of company customization. The goal of this paper is to present an approach to clustering users and their requests that is efficient to implement. The approach uses an ontology as the knowledge representation model to improve semantic interoperability between the units of the company and its customers. Some fragments of the approach, tested in an industrial company, are also presented in the paper. Comment: 15 pages, 4 figures, published in Lecture Notes in Computer Science
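
    A minimal sketch of the ontology-based grouping idea: requests are mapped to concepts in a small is-a hierarchy and clustered under their nearest shared parent concept. The toy ontology and the requests are invented examples, not the industrial ontology from the paper.

        # Toy is-a ontology: child concept -> parent concept.
        PARENT = {
            "laptop": "hardware", "printer": "hardware",
            "email": "software", "vpn": "software",
            "hardware": "it-support", "software": "it-support",
        }

        def ancestors(concept):
            """Walk up the is-a hierarchy from a concept to the root."""
            out = []
            while concept in PARENT:
                concept = PARENT[concept]
                out.append(concept)
            return out

        # Each customer request already mapped to its ontology concept.
        requests = {"R1": "laptop", "R2": "printer", "R3": "vpn", "R4": "email"}

        clusters = {}
        for req, concept in requests.items():
            key = ancestors(concept)[0]  # nearest parent as cluster label
            clusters.setdefault(key, []).append(req)
        print(clusters)  # -> {'hardware': ['R1', 'R2'], 'software': ['R3', 'R4']}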