92 research outputs found

    An Application of Sentiment Analysis Techniques to Determine Public Opinion in Social Media

    This paper describes a prototype application that gathers textual data from the microblogging platform Twitter and carries out sentiment analysis to determine polarity and subjectivity in relation to Brexit, the UK's exit from the European Union. The design, implementation and testing of the developed prototype are discussed and an experimental evaluation of the product is described. Specifically, we provide insight into how events affect public opinion and how sentiment and public mood may be gathered from textual Twitter data, and we propose this as an alternative to opinion polls. Traditional approaches to opinion polling face growing challenges in capturing the public mood. Small response samples and the time it takes to capture swings in public opinion make it difficult to provide accurate data for the political process. With over 500 million daily messages posted worldwide, the social media platform Twitter is an untapped resource of information. Users post short, real-time messages expressing views and opinions on many topics, often tagged with a ‘#hashtag’ to classify and document the subject under discussion. In this paper we apply automated sentiment analysis methods to tweets, giving a measure of public support for, or hostility to, a topic (‘Brexit’). The data were collected during several periods to determine changes in opinion. Using machine learning techniques, we show that changes in opinion were related to external events. Limitations of the method are that age, location and education are confounding factors, as Twitter users over-represent a young, urban public. However, the economic advantage of the method over real-time telephone polling is considerable.
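
    The abstract does not name the sentiment toolkit used. As a rough illustration of the polarity/subjectivity scoring it describes, a minimal sketch using the TextBlob library (an assumed stand-in, not the authors' implementation, with hypothetical sample tweets) might look like this:

        # Minimal sketch of tweet-level polarity/subjectivity scoring in the
        # spirit of the approach above. TextBlob is an assumed stand-in; the
        # paper does not name its sentiment toolkit, and the sample tweets
        # are hypothetical.
        from textblob import TextBlob

        def score_tweets(tweets):
            """Return mean polarity (-1..1) and mean subjectivity (0..1)."""
            scores = [TextBlob(t).sentiment for t in tweets]
            n = len(scores) or 1
            return (sum(s.polarity for s in scores) / n,
                    sum(s.subjectivity for s in scores) / n)

        sample = [
            "Brexit is a disaster for the economy #Brexit",
            "Finally taking back control! #Brexit",
        ]
        print(score_tweets(sample))

    Scoring batches collected in different periods in this way would yield the kind of opinion-over-time comparison the paper reports.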

    What tweets tell us about MOOC participation

    In this research paper, the authors analyze the Twitter data produced during MobiMOOC 2011. This six-week data stream includes all tweets containing the MOOC's hashtag (#mobiMOOC) and was analyzed using a qualitative methodology. The analysis sought to examine the emotive vocabulary used, to determine if there was content-sharing via tweets, and to analyze the folksonomic trends of the tweets. In addition, the authors sought a deeper understanding of what, and how, MOOC participants share on the MOOC's Twitter channel. The aim of this study is to provide more insight into MOOC learner behaviors on Twitter so that future MOOC designers and facilitators can better engage with their learners.
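
    The study itself uses qualitative coding, but the descriptive side of such an analysis (counting folksonomic hashtag trends and link-based content-sharing) can be sketched as follows; the tweets below are hypothetical:

        # Sketch of the descriptive side of such an analysis: counting
        # co-occurring hashtags (folksonomic trends) and shared links
        # (content-sharing). The tweets are hypothetical; the study itself
        # used qualitative coding rather than a script.
        import re
        from collections import Counter

        tweets = [
            "Loving week 2 of #mobiMOOC! Great read: http://example.org #mlearning",
            "Feeling a bit lost this week #mobiMOOC #help",
        ]

        hashtags = Counter(tag.lower() for t in tweets
                           for tag in re.findall(r"#\w+", t))
        links = [url for t in tweets for url in re.findall(r"https?://\S+", t)]

        print(hashtags.most_common(5))      # folksonomic trends
        print(len(links), "shared links")   # content-sharing via tweets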

    Governance in the age of social machines: the web observatory

    The World Wide Web has provided unprecedented access to information; as humans and machines increasingly interact with it, they provide more and more data. The challenge is how to analyse and interpret these data within the context in which they were created, and to present them in a way that both researchers and practitioners can more easily make sense of. The first step is to have access to open and interoperable data sets, which governments around the world are increasingly subscribing to. But having ‘open’ data is just the beginning and does not necessarily lead to better decision making or policy development. This is because data do not provide the answers: they need to be analysed, interpreted and understood within the context of their creation and the business imperative of the organisation using them. The major corporate entities, such as Google, Amazon, Microsoft, Apple and Facebook, have the capabilities to do this, but are driven by their own commercial imperatives, and their data are largely siloed and held within ‘walled gardens’ of information. All too often, governments and non-profit groups lack these capabilities and are driven by very different mandates. In addition, they have far more complex community relationships and must abide by regulatory constraints that dictate how they can use the data they hold. As such they struggle to maximise the value of this emerging ‘digital currency’ and are therefore largely beholden to commercial vendors. What has emerged is a public-private data ecosystem with huge policy implications (including the twin challenges of privacy and security). Many within the public sector lack the skills to address these challenges because they lack the literacy required within the digital context. This project seeks to address some of these problems by bringing together a safe and secure Australian-based data platform (facilitating the sharing of data, analytics and visualisation) with policy analysis and governance expertise in order to create a collaborative working model of a ‘Government Web Observatory’. This neutral space, hosted by an Australian university, can serve as a powerful complement to existing Open Data initiatives in Australia and enable research and education to combine to support the development of a more digitally literate public service. The project aims to explore where, and in which contexts, people, things, data and the Internet meet and result in evolving observable phenomena that can inform better government policy development and service delivery.

    Twitter and society


    Building Embodied Conversational Agents:Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many people’s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
    When we consider other products that have been created in history, it is sometimes striking to see that some of them have been inspired by things observed in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird produces to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, the airplane has made it possible for humans to cover long distances in a fast and smooth manner that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all share the same underlying question: how can parts of human behavior be captured, such that computers can use them to become more human-like? Each study differs in method, perspective and specific questions, but all aim to gain insights and directions that can help further the development of human-like computer behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies.

    Advances in crowdsourcing: Surveys, social media and geospatial analysis: Towards a big data toolkit

    The collection, mining and analysis of social media are arguably among the core examples of “big data” sets for the social sciences. The dynamic nature of the medium makes it a new and emerging base for the analysis of human behaviour and brings new opportunities to understand groups, movements and society. Analysing the results of billions of conversations has already revolutionised marketing and advertising. However, these datasets, by their very nature, are complex, time-consuming and computationally difficult to analyse. We present a series of examples that utilise such datasets, with a view to exploring non-complex workflows via the use of new toolkits, linking into data collection via the crowd and opening up systems for analysis.
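
    As a rough illustration of the kind of non-complex geospatial workflow the abstract alludes to, the sketch below bins hypothetical geotagged social-media posts into a coarse spatial grid to produce a density count; no specific toolkit from the paper is implied:

        # Illustrative sketch of a "non-complex workflow": binning
        # hypothetical geotagged social-media posts into a coarse grid to
        # get a spatial density, the kind of geospatial aggregation the
        # abstract alludes to. No toolkit from the paper is implied.
        from collections import Counter

        posts = [  # (latitude, longitude) of hypothetical posts
            (51.50, -0.12), (51.51, -0.13), (48.85, 2.35),
        ]

        def grid_cell(lat, lon, size=0.5):
            """Snap a coordinate to a size-degree grid cell."""
            return (round(lat / size) * size, round(lon / size) * size)

        density = Counter(grid_cell(lat, lon) for lat, lon in posts)
        for cell, count in density.most_common():
            print(cell, count)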