Impact of Biases in Big Data
The underlying paradigm of big-data-driven machine learning reflects the
desire to derive better conclusions simply by analyzing more data, without
the need for theory and models. But is simply having more data always
helpful? In 1936, The Literary Digest collected 2.3 million completed
questionnaires to predict the outcome of that year's US presidential
election. This big-data prediction proved entirely wrong, whereas George
Gallup needed only 3,000 handpicked respondents to make an accurate
prediction. In general, biases occur in machine learning whenever the
distributions of the training set and the test set differ. In this work, we
review different sorts of biases in (big) data sets in machine learning. We
define and discuss the biases that appear most commonly in machine learning:
class imbalance and covariate shift. We also show how these biases can be
quantified and corrected. This work is an introductory text intended to make
both researchers and practitioners more aware of this topic and thus help
them derive more reliable models for their learning problems.
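One standard way to correct the covariate shift the abstract mentions is importance weighting: train a classifier to distinguish training-set from test-set inputs and use its odds as per-sample weights. The sketch below is only illustrative, with synthetic Gaussian data standing in for real features; it is not the paper's own procedure.

```python
# Sketch: correcting covariate shift with importance weights.
# A domain classifier distinguishes "train" from "test" inputs; its
# odds estimate the density ratio p_test(x) / p_train(x), which can
# then weight each training sample when fitting the actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))  # training inputs
X_test = rng.normal(1.0, 1.0, size=(500, 2))   # shifted test inputs

# Domain classifier: label 0 = train, 1 = test.
X_dom = np.vstack([X_train, X_test])
y_dom = np.concatenate([np.zeros(500), np.ones(500)])
dom = LogisticRegression().fit(X_dom, y_dom)

# Importance weight for each training point: p(test|x) / p(train|x).
p = dom.predict_proba(X_train)
weights = p[:, 1] / p[:, 0]

# Training points that look like test points get up-weighted.
print(weights.min(), weights.max())
```

Training points lying in the region the test distribution favors receive the largest weights, so a weighted loss approximates the test-time risk.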
The pictures we like are our image: continuous mapping of favorite pictures into self-assessed and attributed personality traits
Flickr allows its users to tag the pictures they like as “favorite”. As a result, many users of the popular photo-sharing platform produce galleries of favorite pictures. This article proposes new approaches, based on Computational Aesthetics, capable of inferring the personality traits of Flickr users from these galleries. In particular, the approaches map low-level features extracted from the pictures into numerical scores corresponding to the Big-Five traits, both self-assessed and attributed. The experiments were performed over 60,000 pictures tagged as favorite by 300 users (the PsychoFlickr Corpus). The results show that it is possible to predict both self-assessed and attributed traits beyond chance. In line with the state of the art in Personality Computing, the latter are predicted with higher effectiveness (correlation up to 0.68 between actual and predicted traits).
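The mapping from picture features to trait scores, evaluated by the correlation between actual and predicted traits, can be sketched with a simple regression. Everything below is a synthetic stand-in (hypothetical features and scores), not the article's actual feature set or model.

```python
# Sketch: regressing a Big-Five trait score on per-user aggregated
# picture features and scoring by Pearson correlation, the metric the
# article reports. Data here is synthetic for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_users, n_feats = 300, 12
X = rng.normal(size=(n_users, n_feats))      # aggregated gallery features
true_w = rng.normal(size=n_feats)
y = X @ true_w + rng.normal(scale=2.0, size=n_users)  # trait score + noise

# Fit on the first 200 users, evaluate on the remaining 100.
model = Ridge(alpha=1.0).fit(X[:200], y[:200])
pred = model.predict(X[200:])
r = np.corrcoef(y[200:], pred)[0, 1]
print(f"correlation: {r:.2f}")
```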
Co-training for Demographic Classification Using Deep Learning from Label Proportions
Deep learning algorithms have recently produced state-of-the-art accuracy in
many classification tasks, but this success is typically dependent on access to
many annotated training examples. For domains without such data, an attractive
alternative is to train models with light, or distant supervision. In this
paper, we introduce a deep neural network for the Learning from Label
Proportion (LLP) setting, in which the training data consist of bags of
unlabeled instances with associated label distributions for each bag. We
introduce a new regularization layer, Batch Averager, that can be appended to
the last layer of any deep neural network to convert it from supervised
learning to LLP. This layer can be implemented readily with existing deep
learning packages. To further support domains in which the data consist of two
conditionally independent feature views (e.g. image and text), we propose a
co-training algorithm that iteratively generates pseudo bags and refits the
deep LLP model to improve classification accuracy. We demonstrate our models on
demographic attribute classification (gender and race/ethnicity), which has
many applications in social media analysis, public health, and marketing. We
conduct experiments to predict demographics of Twitter users based on their
tweets and profile image, without requiring any user-level annotations for
training. We find that the deep LLP approach outperforms baselines for both
text and image features separately. Additionally, we find that the
co-training algorithm improves image and text classification by 4% and 8%
absolute F1, respectively. Finally, an ensemble of text and image
classifiers further improves the absolute F1 measure by 4% on average.
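The core of the Learning-from-Label-Proportions setting described above is that the loss compares a bag's known label proportion with the bag-averaged predicted distribution. The toy sketch below shows that objective in isolation; in the paper an averaging layer of this kind sits on top of a deep network, so the numbers and shapes here are purely illustrative.

```python
# Sketch of a batch-averager-style LLP objective: per-instance class
# probabilities are averaged over a bag of unlabeled instances, and the
# loss is the cross-entropy between the bag's known label proportion
# and that averaged prediction.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def llp_loss(logits, bag_proportion, eps=1e-12):
    """Cross-entropy between a bag's label proportion and the
    bag-averaged predicted class distribution."""
    avg_pred = softmax(logits).mean(axis=0)  # the "batch average"
    return -np.sum(bag_proportion * np.log(avg_pred + eps))

# A bag of 4 unlabeled instances, known to be 75% class 0 / 25% class 1.
logits = np.array([[2.0, -1.0], [1.5, 0.0], [0.5, 0.2], [-1.0, 2.0]])
proportion = np.array([0.75, 0.25])
print(round(llp_loss(logits, proportion), 3))
```

Because only the bag-level average enters the loss, the network never needs instance-level labels, which is what makes the distant-supervision setup above possible.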
DOC: Deep Open Classification of Text Documents
Traditional supervised learning makes the closed-world assumption that the
classes appearing in the test data must have appeared in training. This also
applies to text learning or text classification. As learning is used
increasingly in dynamic open environments, where some new/test documents may
not belong to any of the training classes, identifying these novel documents
during classification is an important problem. This problem is called
open-world classification or open classification. This paper proposes a
novel deep-learning-based approach that dramatically outperforms existing
state-of-the-art techniques.

Comment: accepted at EMNLP 201
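Open classification of the kind described above typically scores each known class independently and rejects documents that no class claims. The sketch below uses one-vs-rest sigmoid scores with a fixed 0.5 threshold as a simplification; DOC itself derives per-class thresholds from the score distributions, and the logits here are hypothetical.

```python
# Sketch: per-class sigmoid scoring with rejection, in the spirit of
# open-world text classification. A document whose scores all fall
# below the threshold is flagged as belonging to a novel class.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def open_classify(class_logits, threshold=0.5):
    """Return the index of the best known class, or -1 for 'novel'."""
    scores = sigmoid(np.asarray(class_logits, dtype=float))
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else -1

print(open_classify([3.0, -2.0, -1.0]))   # confident in class 0
print(open_classify([-2.0, -3.0, -1.5]))  # all scores low -> novel (-1)
```

Using independent sigmoids instead of a softmax is what makes rejection possible: a softmax always sums to one, so it cannot express "none of the known classes".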
A Visual Analytics System for Making Sense of Real-Time Twitter Streams
Through social media platforms, massive amounts of data are being produced. Twitter, as one such platform, enables users to post “tweets” on an unprecedented scale. Once analyzed by machine learning (ML) techniques and in aggregate, Twitter data can be an invaluable resource for gaining insight. However, when applied to real-time data streams, due to covariate shifts in the data (i.e., changes in the distributions of the inputs of ML algorithms), existing ML approaches produce different types of biases and uncertain outputs. This thesis describes a visual analytics system (i.e., a tool that combines data visualization, human-data interaction, and ML) to help users make sense of real-time streams on Twitter. As proofs of concept, public-health and political discussions were analyzed. The system not only provides categorized and aggregate results but also enables stakeholders to diagnose and heuristically suggest fixes for errors in the outcome.
Audio-Visual Sentiment Analysis for Learning Emotional Arcs in Movies
Stories can have tremendous power -- not only useful for entertainment, they
can activate our interests and mobilize our actions. The degree to which a
story resonates with its audience may be in part reflected in the emotional
journey on which it takes the audience. In this paper, we use machine
learning methods to construct emotional arcs in movies, calculate families
of arcs, and demonstrate the ability of certain arcs to predict audience
engagement. The system is applied to Hollywood films and high-quality shorts
found on the web. We begin by using deep convolutional neural networks for
audio and visual sentiment analysis. These models are trained on both new
and existing large-scale datasets, after which they can be used to compute
separate audio and visual emotional arcs. We then crowdsource annotations
for 30-second video clips extracted from highs and lows in the arcs in order
to assess the micro-level precision of the system, with precision measured
in terms of agreement in polarity between the system's predictions and
annotators' ratings. These annotations are also used to combine the audio
and visual predictions. Next, we look at macro-level characterizations of
movies by investigating whether there exist `universal shapes' of emotional
arcs. In particular, we develop a clustering approach to discover distinct
classes of emotional arcs. Finally, we show on a sample corpus of short web
videos that certain emotional arcs are statistically significant predictors
of the number of comments a video receives. These results suggest that the
emotional arcs learned by our approach successfully represent macroscopic
aspects of a video story that drive audience engagement. Such machine
understanding could be used to predict audience reactions to video stories,
ultimately improving our ability as storytellers to communicate with each
other.

Comment: Data Mining (ICDM), 2017 IEEE 17th International Conference o
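The clustering step described above, discovering families of arc shapes, can be sketched by reducing each movie to a fixed-length sentiment curve and running k-means over the curves. The arcs below are synthetic rise/fall shapes for illustration; in the paper the curves come from the audio-visual sentiment models.

```python
# Sketch: grouping emotional arcs into families with k-means. Each
# "movie" is a 50-point valence-over-time curve; k-means clusters the
# curves by shape.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
rise = np.stack([t + 0.05 * rng.normal(size=50) for _ in range(20)])
fall = np.stack([1 - t + 0.05 * rng.normal(size=50) for _ in range(20)])
arcs = np.vstack([rise, fall])            # 40 movies x 50 segments

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(arcs)
labels = km.labels_
# The rising arcs should share one cluster and the falling arcs the other.
print(sorted(set(labels[:20])), sorted(set(labels[20:])))
```

Each cluster centroid is itself an arc and can be read as one of the "universal shapes" the paper investigates.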