
    Improve and Implement an Open Source Question Answering System

    A question answering system takes queries from the user in natural language and returns a short, concise answer that best fits the question. This report discusses the design and implementation of question answering systems for English and Hindi as part of the open source search engine Yioop. We implemented a question answering system for English and Hindi with users who speak these languages as their primary language in mind: the user should be able to query a set of documents and get answers in the same language. English and Hindi differ greatly in language structure, character set, etc. Our implementation supports localization and improves Part of Speech (POS) tagging performance by storing the lexicon in the database instead of in a file. We implemented a Brill tagger variant for POS tagging of Hindi phrases, along with grammar rules for triplet extraction. We also improved Yioop's lexical data handling by allowing the user to add named entities. Our improvements to Yioop were then evaluated by comparing the retrieved answers against a dataset of answers known to be true. The test data for the question answering system consisted of two indexes, one each for English and Hindi, created by configuring Yioop to crawl 200,000 Wikipedia pages per crawl. The crawls were configured to be domain specific, so that the English index consists of pages restricted to English text and the Hindi index is restricted to pages with Hindi text. We then ran a set of 50 questions against the English and Hindi systems, and recorded an accuracy of about 55% on simple factoid questions for the Hindi system and about 63% for the English system
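    The Brill tagging approach mentioned above can be sketched briefly: assign each word its most frequent tag from a lexicon, then apply contextual transformation rules that correct tags based on neighbouring tags. The toy lexicon, tag names, and rule below are illustrative assumptions, not Yioop's actual data or rule set.

```python
# Minimal Brill-style POS tagging sketch (hypothetical lexicon and rules).
LEXICON = {"the": "DET", "dog": "NOUN", "can": "MODAL", "run": "VERB",
           "fast": "ADV"}

# Each rule: (from_tag, to_tag, condition on the previous tag).
# Example: retag "can" as a noun when it follows a determiner ("the can").
RULES = [
    ("MODAL", "NOUN", lambda prev: prev == "DET"),
]

def brill_tag(words):
    # Stage 1: initial tagging from the lexicon (default to NOUN for unknowns).
    tags = [LEXICON.get(w.lower(), "NOUN") for w in words]
    # Stage 2: apply contextual transformation rules left to right.
    for i in range(1, len(tags)):
        for from_tag, to_tag, condition in RULES:
            if tags[i] == from_tag and condition(tags[i - 1]):
                tags[i] = to_tag
    return list(zip(words, tags))
```

The same two-stage structure (lexical pass, then transformation pass) carries over to Hindi once the lexicon and rules are swapped, which is why storing the lexicon in a database rather than a file pays off for localization.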

    Development of a Real-time Embedded System for Speech Emotion Recognition

    Speech emotion recognition is one of the latest challenges in speech processing and Human Computer Interaction (HCI), addressing operational needs in real-world applications. Besides human facial expressions, speech has proven to be one of the most promising modalities for automatic human emotion recognition. Speech is a spontaneous medium for perceiving emotions, providing in-depth information about different cognitive states of a human being. In this context, we introduce a novel approach that combines prosody features (i.e. pitch, energy, zero crossing rate), quality features (i.e. formant frequencies, spectral features, etc.), derived features (i.e. Mel-Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding Coefficients (LPCC)) and a dynamic feature (Mel-Energy spectrum dynamic Coefficients (MEDC)) for robust automatic recognition of a speaker's emotional state. A multilevel SVM classifier is used to identify seven discrete emotional states, namely angry, disgust, fear, happy, neutral, sad and surprise, in five native Assamese languages. The overall experimental results using MATLAB simulation revealed that the combined-feature approach achieved an average accuracy of 82.26% in speaker-independent cases. A real-time implementation of the algorithm was prepared on an ARM Cortex-M3 board
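    Two of the prosody features named above are simple enough to sketch directly. The following is a minimal illustration of frame-level zero crossing rate and short-time energy over a list of audio samples; the actual system also extracts MFCC, LPCC and MEDC features, which are not shown here.

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ.

    High ZCR tends to indicate noisy/unvoiced speech; low ZCR, voiced speech.
    """
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame; louder frames score higher."""
    return sum(x * x for x in frame) / len(frame)

# In a full pipeline, these per-frame values (plus spectral features) would be
# stacked into a feature vector and fed to the SVM classifier.
```

Per-frame values like these are typically aggregated (mean, variance) over an utterance before classification.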

    The ixiQuarks: merging code and GUI in one creative space

    This paper reports on ixiQuarks: an environment of instruments and effects built on top of the audio programming language SuperCollider. The rationale of these instruments is to explore alternative ways of designing musical interaction in screen-based software, and to investigate how semiotics in interface design affects the musical output. The ixiQuarks are part of the external libraries available to SuperCollider through the Quarks system. They are software instruments based on a non-realist design ideology that rejects the simulation of acoustic instruments or music hardware and focuses on experimentation at the level of musical interaction. In this environment we try to merge the graphical with the textual in the same instruments, allowing the user to reprogram and change parts of them at runtime. After a short introduction to SuperCollider and the Quarks system, we describe the ixiQuarks and the philosophical basis of their design. We conclude by looking at how they can be seen as epistemic tools that influence the musician in a complex hermeneutic circle of interpretation and signification

    Gesture Recognition Based on Computer Vision on a Standalone System

    Our project uses computer vision methods for gesture recognition, in which a camera interfaced to a system captures real-time images and, after further processing, the system is able to recognize and interpret the gesture shown. The project mainly targets hand gestures; after extracting information, we try to render the result as audio or in some visual form. We used adaptive background subtraction with Haar classifiers to implement segmentation, then convex hull and convexity defects, along with other feature extraction algorithms, to interpret the gesture. First, this is implemented on a PC or laptop; then, to produce a standalone system, we perform all these steps on a system dedicated to the specified task. For this we chose the BeagleBone Black as a platform to implement our idea. The board comes with an ARM Cortex-A8 processor, supported by a NEON coprocessor for video and image processing, and runs at a maximum clock frequency of 1 GHz. It is a 32-bit processor but can be used in Thumb mode, i.e. it can work in 16-bit mode. The board supports Ubuntu, and Android with some modification. Our first task is to interface a camera to the board so that it can capture images and store them as matrices; we then modify the installed operating system for our purpose and implement all the above processes, arriving at a system that can perform gesture recognition
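    The convex hull step in the pipeline above can be illustrated without any vision library. The sketch below computes the hull of a 2D point set (e.g. a hand contour's sample points) using Andrew's monotone chain algorithm; in a real system this would run on the segmented hand contour, and the gaps between hull and contour (the convexity defects) would indicate fingertip valleys.

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Positive when o->a->b makes a counter-clockwise turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:  # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Endpoints are shared, so drop the last point of each half.
    return lower[:-1] + upper[:-1]
```

Contour points lying inside the hull (such as the webbing between fingers) are exactly where convexity defects are measured to count extended fingers.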

    Knowledge Based Systems: A Critical Survey of Major Concepts, Issues, and Techniques

    This Working Paper Series entry presents a detailed survey of knowledge based systems. After being in a relatively dormant state for many years, Artificial Intelligence (AI) - the branch of computer science that attempts to have machines emulate intelligent behavior - has only recently begun accomplishing practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or expert systems) - problem solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as consultants for tasks such as medical diagnosis, military threat analysis, project risk assessment, etc. They possess knowledge that enables them to make intelligent decisions; they are, however, not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study (MYCIN) of a KBS, a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report
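    The core mechanism of such systems can be sketched as forward chaining over if-then rules: starting from known facts, fire every rule whose premises are satisfied until no new conclusions appear. The rules and facts below are hypothetical examples; production systems like MYCIN additionally attach certainty factors to each rule, which this sketch omits.

```python
# Hypothetical rule base: (set of premises, conclusion).
KB_RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_test"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises hold until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Chained conclusions fall out naturally: a fact derived by one rule can satisfy the premises of another on the next pass, which is how multi-step diagnoses emerge from simple rules.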

    Human computer interaction for international development: past, present and future

    Recent years have seen a burgeoning interest in research into the use of information and communication technologies (ICTs) in the context of developing regions, particularly into how such ICTs might be appropriately designed to meet the unique user and infrastructural requirements that we encounter in these cross-cultural environments. This emerging field, known to some as HCI4D, is the product of a diverse set of origins. As such, it can often be difficult to navigate prior work, and/or to piece together a broad picture of what the field looks like as a whole. In this paper, we aim to contextualize HCI4D—to give it some historical background, to review its existing literature spanning a number of research traditions, to discuss some of its key issues arising from the work done so far, and to suggest some major research objectives for the future

    An overview of the research evidence on ethnicity and communication in healthcare

    The aim of the present study was to identify and review the available research evidence on 'ethnicity and communication' in areas relevant to: ensuring effective provision of mainstream services (e.g. via interpreter, advocacy and translation services); provision of services targeted on communication (e.g. speech and language therapy, counselling, psychotherapy); consensual/participatory activities (e.g. consent to interventions); and procedures for managing and planning for linguistic diversity