42 research outputs found

    Social media for crisis management: clustering approaches for sub-event detection

    Social media is becoming increasingly important for crisis management, as it enables the public to provide information in different forms (text, images, and video) that can be valuable for crisis response. Such information is usually spatially and temporally oriented, and is useful for understanding emergency needs, supporting decision making, and enabling learning/training after the emergency. Due to the huge amount of data gathered during a crisis, automatic processing of the data is needed to support crisis management. One way of automating the process is to uncover sub-events (i.e., special hotspots) in the data collected from social media to enable a better understanding of the crisis. In the present paper, we propose clustering approaches for sub-event detection that operate on Flickr and YouTube data, since multimedia data is of particular importance for understanding the situation. Different clustering algorithms are assessed using the textual annotations (i.e., title, tags, and description) and additional metadata, such as time and location. The empirical study shows in particular that social multimedia combined with clustering is worth using for detecting sub-events in the context of crisis management. It serves to integrate social media into crisis management without cumbersome manual monitoring.
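As a simplified illustration of how textual annotations such as tags can drive sub-event clustering, the sketch below groups media items by tag overlap. The items, fields, and similarity threshold are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal single-pass clustering of media items by tag overlap.
# The item structure and the 0.3 threshold are illustrative only.

def jaccard(a, b):
    """Jaccard similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_tags(items, threshold=0.3):
    """Assign each item to the first cluster whose accumulated tags
    are similar enough; otherwise start a new cluster (sub-event)."""
    clusters = []  # each cluster: {"tags": set, "items": list}
    for item in items:
        tags = set(item["tags"])
        for c in clusters:
            if jaccard(tags, c["tags"]) >= threshold:
                c["items"].append(item)
                c["tags"] |= tags  # grow the cluster vocabulary
                break
        else:
            clusters.append({"tags": set(tags), "items": [item]})
    return clusters

items = [
    {"id": 1, "tags": ["flood", "bridge", "water"]},
    {"id": 2, "tags": ["flood", "water", "street"]},
    {"id": 3, "tags": ["fire", "smoke"]},
]
result = cluster_by_tags(items)  # two sub-events: flooding vs. fire
```

In practice the paper assesses several clustering algorithms over richer features (title, description, time, location); this leader-style pass is only the simplest instance of the idea.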

    Online indexing and clustering of social media data for emergency management

    Social media has become a vital part of our daily communication practice, creating a huge amount of data that covers different real-world situations. Currently, there is a tendency to make use of social media during emergency management and response. Most of this effort is performed by a large number of volunteers browsing through social media data and preparing maps that can be used by professional first responders. Automatic analysis approaches are needed to directly support response teams in monitoring and understanding the evolution of facts in social media during an emergency situation. In this paper, we investigate the problem of real-time sub-event identification in social media data (i.e., Twitter, Flickr, and YouTube) during emergencies. A processing framework is presented that serves to generate situational reports/summaries from social media data. This framework relies in particular on online indexing and online clustering of media data streams. Online indexing aims at tracking the relevant vocabulary to capture the evolution of sub-events over time. Online clustering, on the other hand, is used to detect and update the set of sub-events using the indices built during online indexing. To evaluate the framework, social media data related to Hurricane Sandy 2012 was collected and used in a series of experiments. In particular, some online indexing methods have been tested against a proposed method to show their suitability. Moreover, the quality of the online clustering has been studied using standard clustering indices. Overall, the framework provides a great opportunity for supporting emergency responders, as demonstrated in real-world emergency exercises.
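To make the online clustering step concrete, here is a leader-style sketch: each arriving item joins the nearest existing cluster if it lies within a radius, and otherwise opens a new cluster. The 2-D points stand in for indexed media features, and the radius is an illustrative assumption, not a parameter from the paper.

```python
# Leader-style online clustering of a stream: assign each arriving
# point to the nearest cluster within `radius`, else open a new one.
# 2-D coordinates and radius=1.0 are illustrative stand-ins.
import math

def online_cluster(stream, radius=1.0):
    centroids, counts, assignments = [], [], []
    for x, y in stream:
        best, best_d = None, float("inf")
        for i, (cx, cy) in enumerate(centroids):
            d = math.hypot(x - cx, y - cy)
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= radius:
            # incremental centroid update (running mean)
            n = counts[best]
            cx, cy = centroids[best]
            centroids[best] = ((cx * n + x) / (n + 1),
                               (cy * n + y) / (n + 1))
            counts[best] += 1
            assignments.append(best)
        else:
            centroids.append((x, y))
            counts.append(1)
            assignments.append(len(centroids) - 1)
    return centroids, assignments

stream = [(0, 0), (0.5, 0), (5, 5), (5.2, 4.9), (0.2, 0.1)]
centroids, assignments = online_cluster(stream)
```

The framework in the paper additionally maintains online indices over the evolving vocabulary; this sketch shows only the stream-clustering half of that loop.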

    QoE-Assured 4K HTTP live streaming via transient segment holding at mobile edge

    HTTP-based live streaming has become increasingly popular in recent years, and more users have started generating 4K live streams from their devices (e.g., mobile phones) through social-media service providers like Facebook or YouTube. If the audience is located far from a live stream source across the global Internet, TCP throughput becomes substantially suboptimal due to slow-start and congestion control mechanisms. This is especially the case when the end-to-end content delivery path involves a radio access network (RAN) at the last mile. As a result, the data rate perceived by a mobile receiver may not meet the high requirement of 4K video streams, which causes a deteriorated Quality-of-Experience (QoE). In this paper, we propose a scheme named Edge-based Transient Holding of Live sEgment (ETHLE), which addresses this issue by performing context-aware transient holding of video segments at the mobile edge with virtualized content caching capability. By holding the minimum number of live video segments at the mobile edge cache in a context-aware manner, the ETHLE scheme is able to achieve seamless 4K live streaming experiences across the global Internet, eliminating buffering and substantially reducing initial startup delay and live stream latency. It has been deployed as a virtual network function in an LTE-A network, and its performance has been evaluated using real live stream sources distributed around the world. The significance of this paper is that, by leveraging virtualized caching resources at the mobile edge, we have addressed the conventional transport-layer bottleneck and enabled QoE-assured Internet-wide live streaming to support emerging live streaming services with high data rate requirements.
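The core mechanism, holding a small number of live segments at the edge before releasing them, can be sketched as a hold-and-release buffer. The hold count and segment names below are illustrative assumptions; the actual ETHLE policy chooses the hold depth in a context-aware way.

```python
# Sketch of transient segment holding: the edge cache buffers the first
# few live segments before releasing any to the client, so subsequent
# segments can be served from cache at RAN speed rather than over the
# long-haul path. hold_count=2 is illustrative, not the ETHLE policy.

class EdgeHolder:
    def __init__(self, hold_count=2):
        self.hold_count = hold_count
        self.buffer = []    # segments transiently held at the edge
        self.released = []  # segments made available to the client

    def on_segment_arrival(self, seg):
        self.buffer.append(seg)
        # Once the transient hold is filled, release the oldest segment
        # for every new arrival (steady-state pipeline).
        if len(self.buffer) > self.hold_count:
            self.released.append(self.buffer.pop(0))

edge = EdgeHolder(hold_count=2)
for seg in ["s1", "s2", "s3", "s4"]:
    edge.on_segment_arrival(seg)
# s1 is released when s3 arrives, s2 when s4 arrives; s3 and s4 are held
```

The trade-off made explicit here is that a deeper hold adds a fixed amount of live latency up front in exchange for smoother cache-fed playback afterwards.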

    Batch-based Active Learning: Application to Social Media Data for Crisis Management

    Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly, requiring the learning machinery to (self-)adapt by adjusting its model. However, for high-velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community, and it is applied to discriminate relevant from irrelevant social media items. An emergency management user is interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class, and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL's performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic datasets and social media datasets related to crises. The empirical results illustrate that OBAL has very good discrimination power.
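The budgeted boundary-item selection can be illustrated with a tiny uncertainty-sampling sketch. The 1-D confidence scores, uncertainty band, and budget below are invented for illustration; OBAL itself combines kNN and SVM and several uncertainty strategies.

```python
# Uncertainty-based labeling under a fixed budget: only items whose
# classifier confidence is close to the decision boundary (0.5) are
# sent to the human labeler, and no more than `budget` of them.
# Scores, band width, and budget are illustrative assumptions.

def select_queries(scores, uncertainty_band=0.2, budget=2):
    """scores: per-item confidence in [0, 1] that the item is relevant.
    Returns the indices of boundary items to query, within budget."""
    queries = []
    for i, s in enumerate(scores):
        if abs(s - 0.5) <= uncertainty_band and len(queries) < budget:
            queries.append(i)
    return queries

scores = [0.95, 0.52, 0.10, 0.45, 0.60, 0.49]
picked = select_queries(scores)  # items 1 and 3 fill the budget first
```

Items with confident scores (0.95, 0.10) never consume the budget, which is the point of restricting queries to the uncertain boundary region.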

    Active Online Learning for Social Media Analysis to Support Crisis Management

    People use social media (SM) to describe and discuss the different situations they are involved in, such as crises. It is therefore worthwhile to exploit SM content to support crisis management, in particular by revealing useful and unknown information about a crisis in real time. Hence, we propose a novel active online multiple-prototype classifier, called AOMPC, which identifies relevant data related to a crisis. AOMPC is an online learning algorithm that operates on data streams and is equipped with active learning mechanisms to query the labels of ambiguous unlabeled data. The number of queries is controlled by a fixed-budget strategy. Typically, AOMPC accommodates partly labeled data streams. AOMPC was evaluated using two types of data: (1) synthetic data and (2) SM data from Twitter related to two crises, the Colorado Floods and the Australia Bushfires. To provide a thorough evaluation, a whole set of known metrics was used to study the quality of the results. Moreover, a sensitivity analysis was conducted to show the effect of AOMPC's parameters on the accuracy of the results, and a comparative study of AOMPC against other available online learning algorithms was performed. The experiments showed the very good behavior of AOMPC in dealing with evolving, partly labeled data streams.
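The multiple-prototype idea can be sketched as follows: each class keeps one or more prototype vectors, and a labeled sample pulls the nearest prototype of its class toward it. The 2-D features, learning rate, and class names are illustrative assumptions; AOMPC adds budgeted active querying on top of updates like this.

```python
# Online multiple-prototype update sketch: the nearest prototype of the
# sample's class moves a fraction `lr` of the way toward the sample.
# Prototypes, features, and lr=0.5 are illustrative, not AOMPC's values.
import math

def update(prototypes, x, label, lr=0.5):
    """prototypes: list of ([fx, fy], class_label) pairs, mutated
    in place. Moves the nearest same-class prototype toward x."""
    same = [i for i, (_, c) in enumerate(prototypes) if c == label]
    i = min(same, key=lambda j: math.dist(prototypes[j][0], x))
    p, c = prototypes[i]
    prototypes[i] = ([p[0] + lr * (x[0] - p[0]),
                      p[1] + lr * (x[1] - p[1])], c)

protos = [([0.0, 0.0], "irrelevant"), ([4.0, 4.0], "relevant")]
update(protos, [2.0, 2.0], "relevant")
# the "relevant" prototype moves halfway toward the sample, to (3, 3)
```

Classification of an unlabeled item then reduces to reporting the class of its nearest prototype, with an active query issued when that nearest-prototype decision is ambiguous.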

    Context-aware hoarding of multimedia content in a large-scale tour guide scenario: a case study on scaling issues of a multimedia tour guide

    Abstract: This paper discusses scaling issues of a mobile multimedia tour guide. Making tourist information available in a substantially large geographical area (e.g., a federal state in Austria) raises new questions compared to providing similar information in a limited area (such as a museum). First, we have to assume a heterogeneous network infrastructure containing high- and low-bandwidth links and even total network loss; video streaming is therefore not possible everywhere. Second, the total amount of data grows linearly with the number of Points of Interest (POIs) that are augmented by the tour guide, so preloading all data onto a device with limited storage is not possible. A possible solution to these problems is hoarding, i.e., preloading an "appropriate" subset of the data. The crucial question is how to find the proper subset depending on the actual context. The paper discusses the questions o
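One simple way to frame the "appropriate subset" question is as a budgeted selection problem: rank POI content by context-derived relevance per megabyte and preload greedily until device storage is full. The POIs, sizes, and relevance scores below are hypothetical; the paper's context model is richer than a single score.

```python
# Greedy hoarding sketch: preload the POI content with the best
# relevance-per-megabyte ratio until device storage is exhausted.
# POI names, sizes, and relevance values are illustrative only.

def hoard(pois, capacity_mb):
    """pois: list of (name, size_mb, relevance). Returns the names of
    the items chosen for preloading, best value density first."""
    ranked = sorted(pois, key=lambda p: p[2] / p[1], reverse=True)
    chosen, used = [], 0.0
    for name, size, _ in ranked:
        if used + size <= capacity_mb:
            chosen.append(name)
            used += size
    return chosen

pois = [("castle_video", 80, 0.9),   # large video, high relevance
        ("museum_audio", 10, 0.8),   # small audio guide
        ("lake_images", 30, 0.6)]    # image gallery
selected = hoard(pois, capacity_mb=50)
```

With a 50 MB budget, the dense audio and image items are hoarded while the large video is skipped, matching the abstract's point that full preloading is infeasible and streaming cannot be assumed everywhere.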

    Fast adaptation decision taking for cross-modal multimedia content adaptation

    In order to enable transparent and convenient use of multimedia content across a wide range of networks and devices, content adaptation is an important issue within multimedia frameworks. The so-called Digital Item Adaptation (DIA) standard is one of the core concepts of the MPEG-21 framework and supports the adaptation of multimedia resources according to device capabilities, underlying network characteristics, and user preferences. Most multimedia adaptation engines for providing Universal Multimedia Access (UMA) scale the content with respect to terminal capabilities and resource constraints. This paper focuses on the cross-modal adaptation decision-taking process, considering the user environment and terminal capabilities as well as resource limitations on the server, network, and client side. This approach represents a step toward an increased Universal Multimedia Experience (UME). Based on four different algorithms for solving this optimization problem, we present an evaluation of results gained by running their implementations on different test networks.
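At its core, cross-modal decision taking is a constrained utility maximization: among candidate variants of the content in different modalities, pick the one with the best expected experience that still satisfies terminal and network limits. The variants, utilities, and constraint fields below are illustrative assumptions, not the paper's four algorithms.

```python
# Adaptation decision sketch: choose the feasible modality/bitrate
# variant with the highest utility. Variant list, utility values, and
# the two constraints are illustrative, not the MPEG-21 DIA model.

def decide(variants, max_kbps, supports_video):
    """variants: dicts with 'modality', 'kbps', 'utility'. Returns the
    best feasible variant, or None if no variant fits the constraints."""
    feasible = [v for v in variants
                if v["kbps"] <= max_kbps
                and (supports_video or v["modality"] != "video")]
    return max(feasible, key=lambda v: v["utility"]) if feasible else None

variants = [
    {"modality": "video",     "kbps": 800, "utility": 1.0},
    {"modality": "slideshow", "kbps": 200, "utility": 0.6},
    {"modality": "audio",     "kbps": 64,  "utility": 0.4},
]
# A 300 kbps link rules out the video variant even on a capable terminal.
choice = decide(variants, max_kbps=300, supports_video=True)
```

The cross-modal aspect is precisely that the fallback is a different modality (slideshow or audio) rather than merely a lower bitrate of the same video.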

    Knapsack problem and piece-picking algorithms for layered video streaming

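No abstract is available for this entry, but the title points at a classic formulation: picking pieces of layered video under a bandwidth budget as a 0/1 knapsack. The piece sizes, quality gains, and budget below are invented for illustration and say nothing about the paper's actual algorithms.

```python
# 0/1 knapsack sketch for layered-video piece picking: maximize total
# quality gain of selected pieces subject to an integer bandwidth
# budget. Sizes, gains, and budget are illustrative assumptions.

def pick_pieces(pieces, budget):
    """pieces: list of (size, gain) with integer sizes. Classic dynamic
    program over remaining budget; returns the best achievable gain."""
    best = [0] * (budget + 1)
    for size, gain in pieces:
        # iterate budget downward so each piece is used at most once
        for b in range(budget, size - 1, -1):
            best[b] = max(best[b], best[b - size] + gain)
    return best[budget]

pieces = [(4, 10), (3, 7), (2, 4)]  # (size_units, quality_gain)
total_gain = pick_pieces(pieces, budget=6)  # picks sizes 4 + 2
```

Greedy ratio-based picking would also take the size-4 piece here, but in general only the dynamic program guarantees the optimal layer selection.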