4,303 research outputs found

    User Intent Communication in Robot-Assisted Shopping for the Blind

    Get PDF
    The research reported in this chapter describes our work on robot-assisted shopping for the blind. In our previous research, we developed RoboCart, a robotic shopping cart for the visually impaired (Gharpure, 2008; Kulyukin et al., 2008; Kulyukin et al., 2005). RoboCart's operation includes four steps: 1) the blind shopper (henceforth the shopper) selects …

    Weblog and short text feature extraction and impact on categorisation

    Full text link
    The characterisation and categorisation of weblogs and other short texts has become an important research theme in areas such as topic/trend detection and pattern recognition. The value of analysing and characterising short texts is to understand and identify the features that can distinguish them, thereby improving the input to the classification process. In this work, we analyse a large number of text features and establish which combinations are useful for discriminating between the different genres of short text. Having identified the most promising features, we confirm our findings by performing the categorisation task using three approaches: the Gaussian and SVM classifiers and the K-means clustering algorithm. Several hundred combinations of features were analysed in order to identify the best combinations, and the results confirmed the observations made. The novel aspect of our work is the identification of the best combinations of individual metrics to be used as features for the categorisation process.
    The research work of the third author is partially funded by WIQ-EI (IRSES grant n. 269180) and DIANA APPLICATIONS (TIN2012-38603-C02-01), and was done in the framework of the VLC/Campus Microcluster on Multimodal Interaction in Intelligent Systems.
    Perez Tellez, F.; Cardiff, J.; Rosso, P.; Pinto Avendaño, DE. (2014). Weblog and short text feature extraction and impact on categorisation. Journal of Intelligent and Fuzzy Systems, 27(5), 2529-2544. https://doi.org/10.3233/IFS-141227
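    As a rough illustration of the kind of pipeline this abstract describes, the sketch below extracts a handful of surface features from short texts and categorises them with an SVM via scikit-learn. It is not the authors' code: the five features, the toy corpus and the genre labels are hypothetical stand-ins for the several hundred metric combinations evaluated in the paper.

```python
# Minimal sketch: surface features + SVM for short-text genre categorisation.
import numpy as np
from sklearn.svm import SVC

def surface_features(text: str) -> list[float]:
    """Candidate short-text features: character count, word count,
    average word length, uppercase ratio and digit ratio."""
    words = text.split()
    n_chars = max(len(text), 1)
    return [
        float(len(text)),
        float(len(words)),
        sum(len(w) for w in words) / max(len(words), 1),
        sum(c.isupper() for c in text) / n_chars,
        sum(c.isdigit() for c in text) / n_chars,
    ]

# Toy corpus with two hypothetical genres.
texts = [
    "BREAKING: markets fall 3 percent in early trading",
    "Server maintenance window moved to 02:00 UTC, update 12 applied",
    "just had the best coffee ever, totally made my morning",
    "lol missed the bus again, great start to the day",
]
labels = ["news", "news", "personal", "personal"]

X = np.array([surface_features(t) for t in texts])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))  # sanity check on the training texts
```

    The paper's Gaussian classifier and K-means clustering could be slotted in the same way, swapping the estimator while keeping the feature-combination step fixed.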

    A study of the problems of man-computer dialogues for naive users

    Get PDF
    The success of an interactive computing facility will depend, to a large extent, upon the effectiveness of the man-computer dialogue which it supports. Comparatively little work has been directed towards the design of effective dialogues for situations in which the 'man' is a 'naive' user, i.e. a person without training or experience of computer procedures. Thus the aim of this project has been to produce a series of specialised guidelines for designers of dialogues for naive users. An examination of the literature reveals that published dialogue guidelines tend to be of a general-purpose nature and therefore cannot be applied directly to specific situations. Furthermore, as each set of recommendations is based upon a limited range of experience, authors' opinions appear to contradict one another or to be in need of further qualification. At a practical level, a survey of computer games, intended to be self-explanatory and therefore suitable for naive users, bears out the widely held feeling that the dialogue interface is often a poorly considered aspect of interactive program writing. Pilot studies highlight the need for experimental work into man-computer dialogues to be carried out under conditions conforming as closely as possible to a 'real world' environment. The main study focuses upon the general public as users of a local information system developed and installed in Leicester's Information Bureau. Monitoring the public's usage of and reactions to the system has enabled a series of dialogue guidelines for public information systems to be produced. A review of the literature provides supplementary recommendations. The influence of dialogue recommendations on the software-writing community is considered. Less than half of a sample of application programmers are found to refer to material of this kind. Follow-up interviews indicate that the concept of a dialogue guideline is too narrow and should be broadened to cover all types of dialogue design information; this would render it more applicable to differing design situations. For designers who do not refer to published material, it is suggested that sound principles can be communicated via trained experts and the use of library subroutines supporting dialogue creation. An example is considered of a routine to process textual inputs. A number of paths for future research are described, concerning the development of experimental methodology suitable for testing man-computer dialogues, an evaluation of the proposed strategy for communicating dialogue design principles, and the application of new input/output techniques to public information systems. It is also suggested that the likely social consequences of computerised information facilities should be determined.
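    The abstract mentions a library subroutine for processing textual inputs as one way to communicate sound dialogue principles. The sketch below is a hypothetical illustration of such a routine, not taken from the thesis: it normalises the reply, accepts any unambiguous abbreviation, and re-prompts with the valid options rather than failing, which is the tolerant behaviour naive users need.

```python
# Hypothetical dialogue-support routine for naive users' textual input.
def read_choice(prompt: str, options: list[str]) -> str:
    """Prompt repeatedly until the reply unambiguously matches one option."""
    while True:
        reply = input(prompt).strip().lower()
        # Accept any unambiguous prefix, e.g. "y" for "yes".
        matches = [o for o in options if o.lower().startswith(reply)] if reply else []
        if len(matches) == 1:
            return matches[0]
        # On ambiguous or empty input, restate the options and try again.
        print("Please answer with one of: " + ", ".join(options))

# Usage: read_choice("Continue? ", ["yes", "no"]) returns "yes" for
# replies such as "yes", "Yes", "y " or "YE".
```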

    Template-Based Question Answering over Linked Data using Recursive Neural Networks

    Get PDF
    The Semantic Web contains large amounts of related information in the form of knowledge graphs such as DBpedia. These knowledge graphs are typically enormous and are not easily accessible to users, as they require specialized knowledge of query languages (such as SPARQL) as well as deep familiarity with the ontologies used by these knowledge graphs. To make these knowledge graphs more accessible, even to non-experts, several question answering (QA) systems have been developed over the last decade. Due to the complexity of the task, the approaches undertaken include techniques from natural language processing (NLP), information retrieval (IR), machine learning (ML) and the Semantic Web (SW). At a high level, most question answering systems approach the task as a conversion from the natural language question to its corresponding SPARQL query; they then use the query to retrieve the desired entities or literals. One approach to this problem, used by most systems today, is to apply deep syntactic and semantic analysis to the input question to derive the SPARQL query. This has resulted in the evolution of natural language processing pipelines with common components such as answer type detection, segmentation, phrase matching, part-of-speech tagging, named entity recognition, named entity disambiguation, syntactic or dependency parsing, and semantic role labeling. This has led to NLP pipeline architectures that integrate components which each solve a specific aspect of the problem and pass their results on to subsequent components for further processing, e.g. DBpedia Spotlight for named entity recognition and RelMatch for relational mapping. A major drawback of this approach is error propagation, a common problem in NLP: mistakes early in the pipeline can adversely affect successive steps further down. Another approach is to use query templates, either manually generated or extracted from existing benchmark datasets such as Question Answering over Linked Data (QALD), to generate the SPARQL queries; a template is essentially a predefined query with various slots that need to be filled. This approach turns the question answering problem into a classification task, where the system needs to match the input question to the appropriate template (class label). This thesis proposes a neural network approach to automatically learn to classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is that they eliminate the need for laborious feature engineering, which can be cumbersome and error prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-Scale Complex Question Answering Dataset), which was created explicitly for machine-learning-based QA approaches to learning complex SPARQL queries. The dataset consists of 5000 questions along with their corresponding SPARQL queries over the DBpedia dataset, spanning 5042 entities and 615 predicates. These queries were annotated with 38 unique templates, which the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the Question Answering over Linked Data (QALD-7) dataset.
The recursive neural network achieves a template classification accuracy of 0.828 on the LC-QuAD dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most likely templates are considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset.
Dissertation/Thesis. Masters Thesis, Software Engineering, 201
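As a minimal sketch of the template-then-slot-filling stage the abstract describes (not the thesis code), the snippet below builds a SPARQL query from a predicted template id and linked URIs. The recursive neural network's classification step is reduced here to an already-chosen template id, entity and predicate linking are assumed to have been done upstream, and the two templates and the `build_query` helper are this sketch's own inventions in the LC-QuAD style.

```python
# Minimal sketch: fill a predefined SPARQL template's slots.
TEMPLATES = {
    # Two hypothetical LC-QuAD-style templates with named slots.
    0: "SELECT ?uri WHERE {{ <{e1}> <{p1}> ?uri }}",
    1: "SELECT (COUNT(?x) AS ?c) WHERE {{ ?x <{p1}> <{e1}> }}",
}

def build_query(template_id: int, slots: dict[str, str]) -> str:
    """Fill the predicted template's slots with linked DBpedia URIs."""
    return TEMPLATES[template_id].format(**slots)

print(build_query(0, {
    "e1": "http://dbpedia.org/resource/Berlin",
    "p1": "http://dbpedia.org/ontology/country",
}))
# -> SELECT ?uri WHERE { <http://dbpedia.org/resource/Berlin>
#    <http://dbpedia.org/ontology/country> ?uri }
```

Once the classifier has picked a template, question answering reduces to filling its slots and executing the query against the DBpedia endpoint, which is why classification errors dominate the reported end-to-end F-scores.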