
    Evaluating sentiment in financial news articles: Working paper series--11-10

    We investigate the pairing of a financial news article prediction system, AZFinText, with sentiment analysis techniques. From our comparisons, we found that news articles of a subjective nature were easier to predict, both in price direction (59.0% vs. 50.4% without sentiment) and through a simple trading engine (3.30% return vs. 2.41% without sentiment). Looking into sentiment further, we found that news articles of negative sentiment were easiest to predict, both in price direction (50.9% vs. 50.4% without sentiment) and in our simple trading engine (3.04% return vs. 2.41% without sentiment). Investigating negative sentiment further, we found that AZFinText was best able to predict price decreases in articles of positive sentiment (53.5%) and price increases in articles of negative or neutral sentiment (52.4% and 49.5%, respectively).
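    The comparison described above amounts to measuring directional accuracy over different sentiment slices of the news corpus. The sketch below illustrates one way such an evaluation could be set up; it does not use the actual AZFinText system or data, and the record fields and placeholder predictors are assumptions made for illustration.

```python
# Hypothetical sketch: compare the directional accuracy of a price-movement
# predictor on all articles versus only articles of one sentiment class.
# The record layout and the trivial predictors are illustrative, not part of
# the actual AZFinText system.
from typing import Callable, Iterable, Optional


def directional_accuracy(articles: Iterable[dict],
                         predict: Callable[[dict], int],
                         sentiment: Optional[str] = None) -> float:
    """Fraction of articles whose predicted direction (+1 up, -1 down)
    matches the observed direction, optionally restricted to one sentiment."""
    hits, total = 0, 0
    for article in articles:
        if sentiment is not None and article["sentiment"] != sentiment:
            continue  # evaluate only the requested sentiment slice
        total += 1
        hits += int(predict(article) == article["observed_direction"])
    return hits / total if total else 0.0


# Toy usage with made-up records and an always-up predictor.
articles = [
    {"sentiment": "negative", "observed_direction": +1},
    {"sentiment": "positive", "observed_direction": -1},
    {"sentiment": "neutral", "observed_direction": +1},
]
overall = directional_accuracy(articles, lambda a: +1)
negative_only = directional_accuracy(articles, lambda a: +1, sentiment="negative")
print(f"all articles: {overall:.1%}, negative only: {negative_only:.1%}")
```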

    Personalizing Interactions with Information Systems

    Personalization constitutes the mechanisms and technologies necessary to customize information access to the end-user. It can be defined as the automatic adjustment of information content, structure, and presentation tailored to the individual. In this chapter, we study personalization from the viewpoint of personalizing interaction. The survey covers mechanisms for information-finding on the web, advanced information retrieval systems, dialog-based applications, and mobile access paradigms. Specific emphasis is placed on studying how users interact with an information system and how the system can encourage and foster interaction. This helps bring out the role of the personalization system as a facilitator that reconciles the user’s mental model with the underlying information system’s organization. Three tiers of personalization systems are presented, paying careful attention to interaction considerations. These tiers show how progressive levels of sophistication in interaction can be achieved. The chapter also surveys systems support technologies and niche application domains.

    Alter ego, state of the art on user profiling: an overview of the most relevant organisational and behavioural aspects regarding User Profiling.

    This report gives an overview of the most relevant organisational and behavioural aspects regarding user profiling. It discusses not only the most important aims of user profiling from both an organisation’s and a user’s perspective, but also organisational motives and barriers for user profiling and the most important conditions for the success of user profiling. Finally, recommendations are made and suggestions for further research are given.

    Integrated models, frameworks and decision support tools to guide management and planning in Northern Australia. Final report

    [Extract] There is a lot of interest in developing northern Australia while also caring for the unique Australian landscape (Commonwealth of Australia 2015). However, trying to decide how to develop and protect at the same time can be a challenge. There are many modelling tools available to inform these decisions, including integrated models, frameworks, and decision support tools, but there are so many different kinds that it is difficult to determine which might be best suited to inform different decisions. To support planning and development decisions across northern Australia, this project aimed to create resources to help end-users (practitioners) assess: 1. the availability and suitability of particular modelling tools; and 2. the feasibility of using, developing, and maintaining different types of modelling tools.

    Semi-Supervised Learning For Identifying Opinions In Web Content

    Thesis (Ph.D.) - Indiana University, Information Science, 2011. Opinions published on the World Wide Web (Web) offer opportunities for detecting personal attitudes regarding topics, products, and services. The opinion detection literature indicates that both a large body of opinions and a wide variety of opinion features are essential for capturing subtle opinion information. Although a large amount of opinion-labeled data is preferable for opinion detection systems, opinion-labeled data is often limited, especially at sub-document levels, and manual annotation is tedious, expensive, and error-prone. This shortage of opinion-labeled data is less challenging in some domains (e.g., movie reviews) than in others (e.g., blog posts). While a simple method for improving accuracy in challenging domains is to borrow opinion-labeled data from a non-target data domain, this approach often fails because of the domain transfer problem: opinion detection strategies designed for one data domain generally do not perform well in another domain. However, while it is difficult to obtain opinion-labeled data, unlabeled user-generated opinion data are readily available. Semi-supervised learning (SSL) requires only limited labeled data to automatically label unlabeled data and has achieved promising results in various natural language processing (NLP) tasks, including traditional topic classification; but SSL has been applied in only a few opinion detection studies. This study investigates the application of four different SSL algorithms to three types of Web content: edited news articles, semi-structured movie reviews, and the informal and unstructured content of the blogosphere. SSL algorithms are also evaluated for their effectiveness in sparse data situations and domain adaptation. Research findings suggest that, when there is limited labeled data, SSL is a promising approach for opinion detection in Web content. Although the contributions of SSL varied across data domains, significant improvement was demonstrated for the most challenging data domain, the blogosphere, when a domain transfer-based SSL strategy was implemented.
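    Self-training is one of the simpler SSL families alluded to above: a classifier fitted on a small opinion-labelled seed set labels the unlabelled documents it is most confident about, and those pseudo-labels are folded back into the training set. The sketch below uses scikit-learn with an invented confidence threshold; it illustrates the general idea rather than reconstructing the specific algorithms evaluated in the thesis.

```python
# Minimal self-training sketch for opinion (subjectivity) detection.
# The data variables, threshold, and number of rounds are illustrative.
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def self_train(labeled_texts, labels, unlabeled_texts, rounds=5, threshold=0.9):
    vec = TfidfVectorizer()
    all_X = vec.fit_transform(list(labeled_texts) + list(unlabeled_texts))
    X, y = all_X[:len(labeled_texts)], list(labels)
    X_unlab = all_X[len(labeled_texts):]
    pool = list(range(X_unlab.shape[0]))        # indices still unlabelled

    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X, y)
        if not pool:
            break
        probs = clf.predict_proba(X_unlab[pool])
        new_rows, new_labels = [], []
        for idx, p in zip(pool, probs):
            if p.max() >= threshold:            # confident pseudo-label
                new_rows.append(idx)
                new_labels.append(clf.classes_[p.argmax()])
        if not new_rows:
            break                               # nothing confident; stop early
        X = vstack([X, X_unlab[new_rows]])      # grow the training set
        y.extend(new_labels)
        pool = [i for i in pool if i not in set(new_rows)]
    return clf, vec
```

    A call such as self_train(seed_texts, seed_labels, blog_posts) would then return a classifier and vectorizer usable on new Web content; whether the pseudo-labels help or hurt depends on how well the seed domain matches the target domain, which is the domain transfer problem described above.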

    Knowledge Modelling and Learning through Cognitive Networks

    One of the most promising developments in modelling knowledge is cognitive network science, which aims to investigate cognitive phenomena driven by the networked, associative organization of knowledge. For example, investigating the structure of semantic memory via semantic networks has illuminated how memory recall patterns influence phenomena such as creativity, memory search, learning, and more generally, knowledge acquisition, exploration, and exploitation. In parallel, neural network models for artificial intelligence (AI) are becoming more widespread as inferential models for understanding which features drive language-related phenomena such as meaning reconstruction, stance detection, and emotional profiling. Whereas cognitive networks map explicitly which entities engage in associative relationships, neural networks perform an implicit mapping of correlations in cognitive data as weights obtained after training on labelled data, whose interpretation is not immediately evident to the experimenter. This book aims to bring together quantitative, innovative research that focuses on modelling knowledge through cognitive and neural networks to gain insight into the mechanisms driving cognitive processes related to knowledge structuring, exploration, and learning. The book comprises a variety of publication types, including reviews and theoretical papers, empirical research, computational modelling, and big data analysis. All papers here share a commonality: they demonstrate how the application of network science and AI can extend and broaden cognitive science in ways that traditional approaches cannot.
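    To make the contrast between explicit and implicit mappings concrete, the toy sketch below builds a small semantic network with networkx; the word associations are invented for illustration and are not drawn from any of the studies collected in the book.

```python
# Toy semantic network: nodes are concepts, edges are (invented) associative
# links. Unlike trained neural-network weights, the structure can be read off
# directly: neighbours, association paths, and local connectivity are explicit.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("memory", "recall"), ("memory", "learning"),
    ("recall", "search"), ("learning", "knowledge"),
    ("knowledge", "creativity"), ("search", "exploration"),
])

print(sorted(G.neighbors("memory")))                # explicit associates of "memory"
print(nx.shortest_path(G, "memory", "creativity"))  # chain of associations
print(nx.clustering(G, "memory"))                   # local interconnectedness
```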

    The genesis and emergence of Web 3.0: a study in the integration of artificial intelligence and the semantic web in knowledge creation

    The web as we know it has evolved rapidly over the last decade. We have gone from a phase of rapid growth, as seen in the dot-com boom where business was king, to the current Web 2.0 phase where social networking, wikis, blogs and other related tools flood the bandwidth of the World Wide Web. The empowerment of the web user with Web 2.0 technologies has led to the exponential growth of data, information and knowledge on the web. With this rapid change, there is a need to logically categorise this information and knowledge so it can be fully utilised by all. It can be argued that the power of the knowledge held on the web is not fully exposed under its current structure, and to improve this we need to explore the foundations of the web. This dissertation will explore the evolution of the web from its early days to the present day. It will examine the way web content is stored and discuss the new semantic technologies now available to represent this content. The research aims to demonstrate the possibilities of efficient knowledge extraction from a knowledge portal, such as a wiki or SharePoint portal, using these semantic technologies. This generation of dynamic knowledge content within a limited domain will attempt to demonstrate the benefits of the semantic web to the knowledge age.
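    As a small, concrete illustration of the kind of semantic technologies referred to above, the sketch below describes a wiki page as RDF triples and queries it with SPARQL using rdflib; the example.org vocabulary and the page metadata are assumptions made for the example, not a description of any particular portal.

```python
# Minimal RDF/SPARQL sketch: a wiki page described as triples and queried.
# The http://example.org/ vocabulary and the metadata below are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

page = EX["WikiPage/SemanticWeb"]
g.add((page, RDF.type, EX.Article))
g.add((page, EX.topic, Literal("Semantic Web")))
g.add((page, EX.linksTo, EX["WikiPage/ArtificialIntelligence"]))

# Ask for every article and the pages it links to.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?article ?target
    WHERE { ?article a ex:Article ; ex:linksTo ?target . }
""")
for article, target in results:
    print(article, "->", target)
```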