
    Towards Hybrid Cloud-assisted Crowdsourced Live Streaming: Measurement and Analysis

    Crowdsourced Live Streaming (CLS), most notably Twitch.tv, has seen explosive growth in popularity in the past few years. In such systems, any user can broadcast live video content of interest to others, e.g., from a game player to many online viewers. To meet the demands of both massive and heterogeneous broadcasters and viewers, expensive server clusters have been deployed to provide video ingesting and transcoding services. Despite the existence of highly popular channels, a significant portion of channels is in fact unpopular. Yet, as our measurement shows, these broadcasters consume considerable system resources; in particular, 25% (resp. 30%) of bandwidth (resp. computation) resources are used by broadcasters who have no viewers at all. In this paper, we closely examine the challenge of handling unpopular live-broadcasting channels in CLS systems and present a comprehensive solution for service partitioning on a hybrid cloud. Trace-driven evaluation shows that our hybrid cloud-assisted design can smartly assign ingesting and transcoding tasks to elastic cloud virtual machines, providing flexible system deployment cost-effectively.
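    The abstract describes partitioning ingest and transcode work between a dedicated cluster and elastic cloud VMs based on channel popularity. The sketch below only illustrates that idea and is not the paper's actual algorithm; the Channel fields, VIEWER_THRESHOLD, and the greedy policy are assumptions.

```python
# Minimal sketch (not the paper's algorithm): a popularity-aware partitioner
# that keeps popular channels on the dedicated cluster and offloads
# zero-/low-viewer channels to elastic cloud VMs. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Channel:
    channel_id: str
    viewers: int
    ingest_mbps: float      # bandwidth needed to ingest the stream
    transcode_cores: float  # CPU cores needed to transcode it

VIEWER_THRESHOLD = 1  # assumed cutoff: channels below this go to the cloud tier

def partition(channels, cluster_mbps, cluster_cores):
    """Greedily assign channels to the private cluster or the elastic cloud."""
    on_cluster, on_cloud = [], []
    used_mbps = used_cores = 0.0
    # Serve the most popular channels locally first.
    for ch in sorted(channels, key=lambda c: c.viewers, reverse=True):
        fits = (used_mbps + ch.ingest_mbps <= cluster_mbps
                and used_cores + ch.transcode_cores <= cluster_cores)
        if ch.viewers >= VIEWER_THRESHOLD and fits:
            on_cluster.append(ch)
            used_mbps += ch.ingest_mbps
            used_cores += ch.transcode_cores
        else:
            on_cloud.append(ch)  # pay-as-you-go VMs absorb the long tail
    return on_cluster, on_cloud
```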

    An Image Is Worth More than a Thousand Favorites: Surfacing the Hidden Beauty of Flickr Pictures

    The dynamics of attention in social media tend to obey power laws: attention concentrates on a relatively small number of popular items, while the vast majority of content produced by the crowd is neglected. Although popularity can be an indication of the perceived value of an item within its community, previous research has hinted at the fact that popularity is distinct from intrinsic quality. As a result, content with low visibility but high quality lurks in the tail of the popularity distribution. This phenomenon can be particularly evident in photo-sharing communities, where valuable photographers who are not highly engaged in online social interactions contribute high-quality pictures that remain unseen. We propose to use a computer vision method to surface beautiful pictures from the immense pool of near-zero-popularity items, and we test it on a large dataset of Creative Commons photos on Flickr. By gathering a large crowdsourced ground truth of aesthetics scores for Flickr images, we show that our method retrieves photos whose median perceived beauty score is equal to that of the most popular ones, and whose average is lower by only 1.5%. Comment: ICWSM 201
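    As an illustration of the retrieval step described above, the sketch below ranks low-popularity photos by a predicted aesthetics score. This is not the paper's method: predict_beauty stands in for any image-aesthetics model, and the favorites cutoff is an assumed popularity criterion.

```python
# Illustrative sketch only: rank the long tail of unpopular photos by a
# predicted beauty score and return the top candidates.
def surface_hidden_gems(photos, predict_beauty, max_favorites=5, top_k=100):
    """Rank unpopular photos by predicted aesthetic score.

    photos: iterable of dicts with 'id', 'favorites', and 'pixels' keys.
    predict_beauty: callable mapping image pixels to a score in [0, 1].
    """
    tail = [p for p in photos if p["favorites"] < max_favorites]
    scored = [(predict_beauty(p["pixels"]), p["id"]) for p in tail]
    scored.sort(reverse=True)  # highest predicted beauty first
    return [photo_id for _, photo_id in scored[:top_k]]
```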

    Identifying Professional Photographers Through Image Quality and Aesthetics in Flickr

    In recent years, there has been an undoubted rise in the use of social media and, specifically, photo and video sharing platforms. These sites have proved their ability to yield rich data sets through users' interactions, which can be used to perform a data-driven evaluation of capabilities. Nevertheless, this study reveals the lack of suitable data sets on photo and video sharing platforms and of evaluation processes across them. Accordingly, our first contribution is the creation of one of the largest labelled data sets on Flickr, with multimodal data, which has been open sourced as part of this contribution. Based on these data, we explored machine learning models and concluded that it is feasible to properly predict whether a user is a professional photographer, based on self-reported occupation labels and several feature representations drawn from the user, photo and crowdsourced sets. We also examined the relationship between the aesthetics and technical quality of a picture and the social activity around that picture. Finally, we described which characteristics differentiate professional photographers from non-professionals. As far as we know, the results presented in this work represent an important novelty for user expertise identification, which researchers from various domains can use for different applications.
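    A minimal sketch of the kind of classifier the abstract describes, assuming a feature table with user-, photo- and crowd-level columns and a binary professional label derived from self-reported occupation. The column names, the synthetic data, and the choice of a random forest are illustrative, not the study's exact setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for the real user table (columns are illustrative).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "num_photos": rng.poisson(200, n),
    "mean_aesthetic_score": rng.uniform(0, 10, n),
    "mean_technical_quality": rng.uniform(0, 10, n),
    "followers": rng.poisson(50, n),
})
# Toy label: treat users with high combined photo quality as "professional".
df["is_professional"] = (df["mean_aesthetic_score"]
                         + df["mean_technical_quality"] > 11).astype(int)

X = df.drop(columns="is_professional")
y = df["is_professional"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```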

    A tag is worth a thousand pictures: A framework for an empirically grounded typology of relational values through social media

    Environmental values depend on social-ecological interactions and, in turn, influence the production of the underlying biophysical ecosystems. Understanding the nuanced nature of the values that humans ascribe to the environment is thus a key frontier for environmental science and planning. The development of many of these values depends on social-ecological interactions, such as outdoor recreation, landscape aesthetic appreciation or educational experiences with and within nature, which can be articulated through the framework of cultural ecosystem services (CES). However, the non-material and intangible nature of CES has challenged previous attempts to assess the multiple and subjective values that people attach to them. In particular, this study focuses on assessing relational values ascribed to CES, here defined as values resonating with core principles of justice, reciprocity, care, and responsibility towards humans and more-than-humans. Building on emerging approaches for inferring relational CES values through social media (SM) images, this research explores the additional potential of a combined analysis of both the visual and textual content of SM data. To do so, we developed an inductive, empirically grounded coding protocol as well as a values typology that could be iteratively tested and verified by three different researchers to improve the consistency and replicability of the assessment. As a case study, we collected images and texts shared on the photo-sharing platform Flickr between 2004 and 2017 that were geotagged within the peri-urban park of Collserola, on the outskirts of Barcelona, Spain. Results reveal a wide spectrum of nine CES values within the park boundaries that show positive and negative correlations with one another, providing useful information for landscape planning and management. Moreover, the study highlights the need for spatial, temporal and demographic analysis, as well as for supervised machine learning techniques, to further leverage SM data into contextual and just decision-making and planning.

    Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology

    Every culture and language is unique. Our work expressly focuses on the uniqueness of culture and language in relation to human affect, specifically sentiment and emotion semantics, and how they manifest in social multimedia. We develop sets of sentiment- and emotion-polarized visual concepts by adapting semantic structures called adjective-noun pairs, originally introduced by Borth et al. (2013), to a multilingual context. We propose a new language-dependent method for the automatic discovery of these adjective-noun constructs. We show how this pipeline can be applied on a social multimedia platform to create a large-scale Multilingual Visual Sentiment Ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our unified ontology is organized hierarchically into multilingual clusters of visually detectable nouns and subclusters of emotionally biased versions of these nouns. In addition, we present an image-based prediction task to show how generalizable language-specific models are in a multilingual context. A new, publicly available dataset of >15.6K sentiment-biased visual concepts across 12 languages with language-specific detector banks, >7.36M images and their metadata is also released. Comment: 11 pages, to appear at ACM MM'1
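    To make the adjective-noun pair (ANP) idea concrete, the sketch below mines ANP candidates from image captions. The paper's pipeline is language-dependent and considerably more involved; this English-only NLTK version, with an assumed minimum-count filter, only illustrates the basic construct.

```python
# Illustrative sketch of mining adjective-noun pair (ANP) candidates such as
# "happy dog" or "stormy sky" from captions. Assumes the NLTK 'punkt' and
# perceptron-tagger resources have been downloaded.
from collections import Counter
import nltk

def mine_anp_candidates(captions, min_count=3):
    """Count adjective-noun bigrams and keep those seen at least min_count times."""
    counts = Counter()
    for caption in captions:
        tokens = nltk.word_tokenize(caption.lower())
        tagged = nltk.pos_tag(tokens)
        for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
            if t1.startswith("JJ") and t2.startswith("NN"):
                counts[(w1, w2)] += 1
    return [anp for anp, c in counts.most_common() if c >= min_count]

print(mine_anp_candidates(["a beautiful sunset over the calm sea",
                           "beautiful sunset at the beach"], min_count=2))
```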