
    Liquidity Risk and Investors' Mood: Linking the Financial Market Liquidity to Sentiment Analysis through Twitter in the S&P500 Index

    Microblogging services can enrich the information investors use to make financial decisions on the stock markets. As liquidity has immediate consequences for a trader's movements, this risk is an attractive area of interest for both academics and those who participate in the financial markets. This paper focuses on market liquidity and studies the impact of the popular Twitter microblogging service on liquidity and trading costs. Sentiment analysis extracted from Twitter and several popular liquidity measures were gathered to analyze the relationship between liquidity and investors' opinions. The results, based on the analysis of the S&P 500 Index, found that the investors' mood had little influence on the spread of the index.
    Guijarro, F.; Moya Clemente, I.; Saleemi, J. (2019). Liquidity Risk and Investors' Mood: Linking the Financial Market Liquidity to Sentiment Analysis through Twitter in the S&P500 Index. Sustainability. 11(24):1-13. https://doi.org/10.3390/su11247048
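The abstract relates a spread-based liquidity measure to a sentiment series. As a minimal sketch of that kind of analysis, the following computes a relative quoted spread from bid/ask quotes and its Pearson correlation with a daily sentiment score. The data, function names, and the choice of the quoted spread as the liquidity measure are illustrative assumptions, not the paper's actual specification.

```python
# Illustrative sketch: relative quoted spread vs. a sentiment series.
# All numbers below are invented for demonstration purposes.

def quoted_spread(bid, ask):
    """Relative quoted spread: (ask - bid) / midpoint."""
    mid = (ask + bid) / 2.0
    return (ask - bid) / mid

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical daily quotes and sentiment scores
bids = [99.8, 100.1, 99.9, 100.3]
asks = [100.0, 100.3, 100.2, 100.4]
sentiment = [0.2, -0.1, 0.3, 0.0]

spreads = [quoted_spread(b, a) for b, a in zip(bids, asks)]
r = pearson(spreads, sentiment)
```

A weak |r| on real data would correspond to the paper's finding that mood had little influence on the index spread.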

    Using Twitter trust network for stock market analysis

    Online social networks are now attracting a lot of attention, not only from their users but also from researchers in various fields. Many researchers believe that the public mood or sentiment expressed in social media is related to financial markets. We propose to use trust among users as a filtering and amplifying mechanism for social media to increase its correlation with financial data in the stock market. Therefore, we used real stock market data as ground truth for our trust management system. We collected stock-related data (tweets) from Twitter, a very popular microblogging platform, to examine the correlation between Twitter sentiment valence and abnormal stock returns for eight firms in the S&P 500. We developed a trust management framework to build a user-to-user trust network for Twitter users. Compared with existing works, in addition to analyzing and accumulating tweets' sentiment, we take into account the source of tweets: their authors. Authors are differentiated by their power or reputation in the whole community, where power is determined by the user-to-user trust network. To validate our trust management system, we performed the Pearson correlation test for an eight-month period (the trading days from 01/01/2015 through 08/31/2015). Compared with treating all authors as equally important, or weighting them by their number of followers, our trust network based reputation mechanism can amplify the correlation between a specific firm's Twitter sentiment valence and the firm's stock abnormal returns. To further consider the possible auto-correlation of abnormal stock returns, we constructed a linear regression model, which includes historical stock abnormal returns, to test the relation between Twitter sentiment valence and abnormal stock returns. Again, our results showed that by using our trust network power based method to weight tweets, Twitter sentiment valence reflects abnormal stock returns better than treating all authors as equally important or weighting them by their number of followers.
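The core comparison in this abstract is between three ways of aggregating tweet valence into a daily signal: equal author weights, follower counts, and trust-network power. A minimal sketch of that weighted aggregation follows; the author names, valence values, and weights are invented, and the actual trust scores would come from the paper's trust network rather than being hand-set.

```python
# Sketch of trust-weighted daily sentiment valence.
# Authors, valences, and weights are hypothetical examples.

def weighted_valence(tweets, weights):
    """tweets: list of (author, valence); weights: author -> weight.

    Returns the weight-normalized average valence for the day.
    """
    num = sum(weights.get(author, 0.0) * v for author, v in tweets)
    den = sum(weights.get(author, 0.0) for author, _ in tweets)
    return num / den if den else 0.0

tweets = [("alice", 0.8), ("bob", -0.2), ("carol", 0.5)]

# Baseline: every author counts the same.
equal = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
# Alternative: power derived from a user-to-user trust network.
trust = {"alice": 0.9, "bob": 0.1, "carol": 0.6}

v_equal = weighted_valence(tweets, equal)
v_trust = weighted_valence(tweets, trust)
```

The paper's claim is that correlating a series of `v_trust`-style values with abnormal returns yields a stronger relationship than the equal- or follower-weighted series.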

    A Trust Management Framework for Decision Support Systems

    In the era of information explosion, it is critical to develop a framework which can extract useful information and help people to make “educated” decisions. In our lives, whether or not we are aware of it, trust has turned out to be very helpful for us to make decisions. At the same time, cognitive trust, especially in large systems such as Facebook, Twitter, and so on, needs support from computer systems. Therefore, we need a framework that can effectively, but also intuitively, let people express their trust, and enable the system to automatically and securely summarize the massive amounts of trust information, so that a user of the system can make “educated” decisions, or at least not blind decisions. Inspired by the similarities between human trust and physical measurements, this dissertation proposes a measurement theory based trust management framework. It consists of three phases: trust modeling, trust inference, and decision making. Instead of proposing specific trust inference formulas, this dissertation proposes a fundamental framework which is flexible and can be adapted to many different inference formulas. Validation experiments are done on two data sets: the Epinions.com data set and the Twitter data set. This dissertation also adapts the measurement theory based trust management framework for two decision support applications. In the first application, real stock market data is used as ground truth for the measurement theory based trust management framework. Basically, the correlation between the sentiment expressed on Twitter and stock market data is measured. Compared with existing works which do not differentiate tweets' authors, this dissertation analyzes trust among stock investors on Twitter and uses the trust network to differentiate tweets' authors. The results show that by using the measurement theory based trust framework, Twitter sentiment valence is able to reflect abnormal stock returns better than treating all the authors as equally important or weighting them by their number of followers. In the second application, the measurement theory based trust management framework is used to help detect and prevent attacks in cloud computing scenarios. In this application, each single flow is treated as a measurement. The simulation results show that the measurement theory based trust management framework is able to provide guidance for cloud administrators and customers to make decisions, e.g. migrating tasks from suspect nodes to trustworthy nodes, dynamically allocating resources according to trust information, and managing the trade-off between the degree of redundancy and the cost of resources.
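The abstract describes deriving each author's "power or reputation" from a user-to-user trust network without giving the inference formula. One common way to turn pairwise trust into a global reputation score is iterated propagation (in the spirit of eigenvector centrality); the sketch below uses that approach purely as an illustration, not as the dissertation's actual measurement-theory formula, and the graph is invented.

```python
# Illustrative power iteration over a user-to-user trust graph.
# trust[u][v] is how much user u trusts user v (0..1); graph is hypothetical.

def reputation(trust, iters=50):
    """Propagate trust to a normalized global 'power' score per user."""
    users = list(trust)
    power = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        # Each user's new score is the trust-weighted sum of the scores
        # of everyone who trusts them.
        new = {v: sum(power[u] * trust[u].get(v, 0.0) for u in users)
               for v in users}
        total = sum(new.values()) or 1.0
        power = {v: s / total for v, s in new.items()}
    return power

g = {"alice": {"bob": 0.9},
     "bob": {"carol": 0.8},
     "carol": {"alice": 0.5}}
p = reputation(g)
```

The resulting scores could then serve as the per-author weights when aggregating tweet sentiment.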

    The Big Five: Addressing Recurrent Multimodal Learning Data Challenges

    The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handling multimodal data. In this paper, we describe and outline a solution for five recurrent challenges in the analysis of multimodal data: data collection, storage, annotation, processing and exploitation. For each of these challenges, we envision possible solutions. The prototypes for some of the proposed solutions will be discussed during the Multimodal Challenge of the fourth Learning Analytics & Knowledge Hackathon, a two-day hands-on workshop in which the authors will open up the prototypes for trials, validation and feedback.

    Multimodal Challenge: Analytics Beyond User-computer Interaction Data

    This contribution describes one of the challenges explored in the Fourth LAK Hackathon. This challenge aims at shifting the focus from learning situations which can be easily traced through user-computer interaction data and concentrating more on user-world interaction events, typical of co-located and practice-based learning experiences. This mission, pursued by the multimodal learning analytics (MMLA) community, seeks to bridge the gap between digital and physical learning spaces. The “multimodal” approach consists in combining learners’ motoric actions with physiological responses and data about the learning contexts. These data can be collected through multiple wearable sensors and Internet of Things (IoT) devices. This Hackathon table will confront three main challenges arising from the analysis and valorisation of multimodal datasets: 1) data collection and storing, 2) data annotation, 3) data processing and exploitation. Some research questions which will be considered in this Hackathon challenge are the following: How can the raw sensor data streams be processed to extract relevant features? Which data mining and machine learning techniques can be applied? How can we compare two action recordings? How can sensor data be combined with the Experience API (xAPI)? What are meaningful visualisations for these data?
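The first research question above, processing raw sensor streams into relevant features, is typically approached with windowed summary statistics. The following is a minimal sketch of that step, assuming a single numeric stream (e.g. accelerometer magnitude); the window size and the choice of mean and standard deviation as features are illustrative, not prescribed by the challenge.

```python
# Minimal sketch of windowed feature extraction from a raw sensor stream.
# The signal values and window size are hypothetical.

def window_features(stream, size):
    """Split a stream into non-overlapping windows of `size` samples.

    Returns a (mean, std) feature pair per complete window.
    """
    feats = []
    for i in range(0, len(stream) - size + 1, size):
        w = stream[i:i + size]
        mean = sum(w) / size
        var = sum((x - mean) ** 2 for x in w) / size
        feats.append((mean, var ** 0.5))
    return feats

# Hypothetical accelerometer magnitudes: a quiet phase, then movement.
signal = [0.1, 0.2, 0.1, 0.9, 1.1, 1.0]
features = window_features(signal, 3)
```

Feature vectors like these can then feed the data mining and machine learning techniques, or comparisons between two action recordings, that the other questions raise.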