45,224 research outputs found

    Visual analytics of location-based social networks for decision support

    Recent advances in technology have enabled people to attach location information to their posts on social networks, known as Location-Based Social Networks (LBSNs), where people share their communication and whereabouts not only in daily life but also during abnormal situations such as crisis events. However, since the volume of this data exceeds the boundaries of human analytical capability, a straightforward qualitative analysis is almost impossible. The emerging field of visual analytics tackles such challenges by integrating approaches from statistical data analysis and human-computer interaction into highly interactive visual environments. Building on this idea, this research contributes knowledge-discovery techniques for social media data that provide comprehensive situational awareness. We extract valuable hidden information from huge volumes of unstructured social media data and model the extracted information to visualize meaningful patterns through user-centered interactive interfaces. We develop visual analytics techniques and systems for spatial decision support by coupling the modeling of spatiotemporal social media data with scalable, interactive visual environments. These systems allow analysts to detect and examine abnormal events within social media data by integrating automated analytical techniques with visual methods. We provide a comprehensive analysis of public behavior in disaster events by exploring the spatial and temporal distribution of LBSN data. We also propose trajectory-based visual analytics of LBSNs for analyzing anomalous human movement during crises, incorporating a novel classification technique. Finally, we introduce a visual analytics approach for forecasting the overall flow of human crowds.
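The trajectory-based anomaly analysis mentioned above can be illustrated with a minimal sketch. The thesis's actual classification technique is not described here, so this stand-in simply flags trajectories whose implied travel speed between consecutive geo-tagged posts is physically implausible; the point format and speed threshold are illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def max_speed_kmh(trajectory):
    """trajectory: time-ordered list of (lat, lon, unix_seconds)."""
    speeds = []
    for (la1, lo1, t1), (la2, lo2, t2) in zip(trajectory, trajectory[1:]):
        dt_h = (t2 - t1) / 3600.0
        if dt_h > 0:
            speeds.append(haversine_km((la1, lo1), (la2, lo2)) / dt_h)
    return max(speeds, default=0.0)

def is_anomalous(trajectory, threshold_kmh=150.0):
    """Flag a trajectory whose peak implied speed is implausible."""
    return max_speed_kmh(trajectory) > threshold_kmh

walk = [(40.71, -74.00, 0), (40.712, -74.001, 1800)]   # slow local movement
jump = [(40.71, -74.00, 0), (41.50, -75.00, 600)]      # implausibly fast hop
```

A real system would of course combine such kinematic features with the learned classifier the thesis proposes; this only shows the shape of the input and decision.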

    Visual Event Cueing in Linked Spatiotemporal Data

    The media disperses a large amount of information daily pertaining to political events, social movements, and societal conflicts. Media coverage of these topics, whatever the publication format, is framed in a particular way. Framing is used not just to guide audiences toward desired beliefs, but also to fuel societal change or to legitimize or delegitimize social movements. For this reason, tools that help clarify when changes in social discourse occur, and identify their causes, are of great use. This thesis presents a visual analytics framework for exploring and visualizing changes in social climate with respect to space and time. Focusing on the links between data from the Armed Conflict Location and Event Data Project (ACLED) and a streaming RSS news data set, users can be cued into interesting events, enabling them to form and explore hypotheses. The framework also focuses on improving intervention detection, allows users to hypothesize about correlations between events and happiness levels, and supports collaborative analysis.
    Dissertation/Thesis: Masters Thesis, Computer Science, 201

    Passive Visual Analytics of Social Media Data for Detection of Unusual Events

    Now that social media sites have gained substantial traction, huge amounts of unanalyzed, valuable data are being generated. Posts containing images and text carry spatiotemporal data as well, which has immense value for increasing situational awareness of local events, providing insights for investigations, and understanding the extent of incidents, their severity, their consequences, and their time-evolving nature. However, the large volume of unstructured social media data hinders exploration and examination. To analyze such social media data, the S.M.A.R.T system provides the analyst with an interactive visual spatiotemporal analysis and spatial decision support environment that assists in evacuation planning and disaster management. S.M.A.R.T fetches data from various social media sources and arranges it in a perceivable, visually clear manner, which greatly aids in finding and understanding abnormal events. A passive mode makes the tool more efficient: it automatically detects idle time and, as soon as the analyst resumes monitoring, presents a summary of all anomalies encountered during the inactive period. Using the tool, the analyst can first extract major topics from a set of selected messages and rank them probabilistically. Past case studies show improved situational awareness using these methods.
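The topic-extraction-and-ranking step can be sketched as follows. S.M.A.R.T's actual probabilistic model is not specified in the abstract, so this stand-in simply ranks terms by their empirical probability over the selected messages; the tokenizer and stopword list are illustrative assumptions:

```python
from collections import Counter
import re

# Tiny illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "in", "on", "of", "and", "to", "at"}

def rank_topics(messages, top_k=3):
    """Rank candidate topic terms by empirical probability across a
    set of selected messages (a stand-in for the probabilistic topic
    ranking the abstract alludes to)."""
    counts = Counter(
        tok
        for msg in messages
        for tok in re.findall(r"[a-z]+", msg.lower())
        if tok not in STOPWORDS
    )
    total = sum(counts.values()) or 1
    return [(term, n / total) for term, n in counts.most_common(top_k)]

msgs = [
    "Flooding reported downtown, roads closed",
    "Downtown flooding is getting worse",
    "Power outage near the flooding area",
]
```

Here `rank_topics(msgs)` surfaces "flooding" as the dominant topic; a full topic model (e.g. LDA) would group co-occurring terms rather than rank single words.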

    Web-Based Interactive Social Media Visual Analytics

    Real-time social media platforms enable quick information broadcasting and response during disasters and emergencies. Analyzing the massive amount of generated data to understand human behavior requires data collection and acquisition, parsing, filtering, augmentation, processing, and representation. Visual analytics approaches allow decision makers to observe trends and abnormalities, correlate them with other variables, and gain invaluable insight into these situations. In this paper, we propose a set of visual analytics tools for analyzing and understanding real-time social media data during crises and emergencies. First, we model the degree of risk of individuals' movement based on evacuation zones and post-event damaged areas. Identified movement patterns are extracted using clustering algorithms and represented in a visual, interactive manner. We use Twitter data posted in New York City during Hurricane Sandy in 2012 to demonstrate the efficacy of our approach. Second, we extend the Social Media Analytics and Reporting Toolkit (SMART) to support spatial clustering analysis and temporal visualization. Our work helps first responders enhance awareness and understand human behavior in emergencies, improving response times for future events and the ability to predict human reactions. Our findings show that today's high-resolution, geo-located social media platforms enable new types of human behavior analysis and comprehension, helping decision makers take advantage of social media.
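The extraction of movement patterns via clustering might look roughly like this grid-binning sketch. The paper does not name its clustering algorithm, so this is only a simple stand-in (density-based methods such as DBSCAN would be a more realistic choice); the cell size is an assumed parameter:

```python
from collections import defaultdict

def grid_cluster(points, cell_deg=0.01):
    """Bin (lat, lon) points into fixed-size grid cells (~1 km at
    mid latitudes) and return the resulting clusters ordered by
    size -- a simple stand-in for the clustering the paper applies
    to geo-located posts."""
    cells = defaultdict(list)
    for lat, lon in points:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells[key].append((lat, lon))
    return sorted(cells.values(), key=len, reverse=True)

# Three posts near one Manhattan location plus one outlier uptown.
pts = [(40.701, -74.012), (40.7012, -74.0118), (40.702, -74.013),
       (40.751, -73.981)]
```

The largest returned cluster then marks the hotspot an analyst would inspect first on the map view.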

    Web-based Visual Analytics for Social Media Data

    Social media data provides valuable information about events, trends, and happenings around the world. Visual data analysis tasks on social media data have large computation and storage requirements. Due to these constraints, subdividing data analysis tools into several layers, such as a data layer, a business-logic (algorithms) layer, and a presentation layer, is often necessary to make them accessible to a variety of clients. On the server side, social media analysis algorithms can be implemented and published as web services. The visual interface can then be implemented as a thin client that calls these web services for data querying, exploration, and analysis. In our work, we have implemented a web-based visual analytics tool for social media data analysis. First, we extended our existing desktop-based Twitter data analysis application, "ScatterBlog", with a web-services API that provides access to all of its data analysis algorithms. In the second phase, we are creating a web-based visual interface that consumes these web services. Major components of the visual interface include a map view, a content lens view, an abnormal-event-detection view, a tweet summary view, and a filtering / visual query module. The tool can then be used by parties from various fields of interest, requiring only a browser to perform social media data analysis tasks.
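The three-layer split described above can be sketched without any networking. All class and method names here are hypothetical (the ScatterBlog API is not documented in the abstract), and a real deployment would expose the business-logic layer over HTTP rather than as an in-process object:

```python
import json

class DataLayer:
    """Data layer: holds raw messages (stand-in for a tweet store)."""
    def __init__(self, messages):
        self.messages = messages

    def query(self, keyword):
        return [m for m in self.messages if keyword in m.lower()]

class AnalysisService:
    """Business-logic layer: published as a web service in the paper;
    here it wraps the data layer and returns a JSON string, mimicking
    an HTTP response body."""
    def __init__(self, data):
        self.data = data

    def search(self, keyword):
        hits = self.data.query(keyword)
        return json.dumps({"keyword": keyword, "count": len(hits), "hits": hits})

class ThinClient:
    """Presentation layer: parses the service's JSON, as a browser
    client would, and renders a one-line summary."""
    def __init__(self, service):
        self.service = service

    def render(self, keyword):
        resp = json.loads(self.service.search(keyword))
        return f"{resp['count']} messages match '{resp['keyword']}'"

client = ThinClient(AnalysisService(DataLayer(
    ["Flood warning issued", "flood waters rising", "sunny day"])))
```

Because only JSON crosses the layer boundary, the same `AnalysisService` contract can serve a browser, a dashboard, or a batch client.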

    Robust Image Sentiment Analysis Using Progressively Trained and Domain Transferred Deep Networks

    Sentiment analysis of online user-generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems that predict political elections, measure economic indicators, and so on. Recently, social media users have increasingly been using images and videos to express their opinions and share their experiences. Sentiment analysis of such large-scale visual content can help better extract user sentiment toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content complements textual sentiment analysis. Motivated by the need to leverage large-scale yet noisy training data for the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNN). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine-labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN achieves better performance in image sentiment analysis than competing algorithms.
    Comment: 9 pages, 5 figures, AAAI 201
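The progressive fine-tuning strategy can be illustrated by its core sample-selection step: after a training round on the noisy machine-labeled data, keep only samples on which the current model confidently agrees with the noisy label, then fine-tune on that cleaner subset. The sketch below is a simplified stand-in; the paper's CNN, loss, and training schedule are not reproduced, and the toy model and confidence threshold are assumptions:

```python
def progressive_filter(samples, predict_proba, threshold=0.8):
    """One round of progressive sample selection: keep samples whose
    current model prediction agrees with the noisy label with
    confidence >= threshold; the network would then be fine-tuned on
    this cleaner subset. (Simplified illustration only.)"""
    kept = []
    for features, noisy_label in samples:
        p_pos = predict_proba(features)          # P(label = 1 | features)
        pred = 1 if p_pos >= 0.5 else 0
        conf = p_pos if pred == 1 else 1 - p_pos
        if pred == noisy_label and conf >= threshold:
            kept.append((features, noisy_label))
    return kept

# Toy data: 1-D "images" with noisy binary sentiment labels, and a toy
# "model" whose predicted positive probability is just the feature value.
samples = [([0.9], 1), ([0.1], 0), ([0.6], 0), ([0.95], 1)]
toy_model = lambda x: x[0]
```

Repeating this filter-then-retrain cycle is what lets the network shed the label noise introduced by the baseline sentiment algorithm.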

    A Visual Analytic Environment to Co-locate Peoples' Tweets with City Factual Data

    Social media platforms (e.g., Twitter, Facebook) are used heavily by the public to share news, opinions, and reactions to events or topics. Integrating such data with factual data about the event or topic could provide a more comprehensive understanding of the underlying event or topic. To this end, we present our visual analytics tool, VC-FaT, which integrates people's tweets about crimes in San Francisco with the city's factual crime data. VC-FaT provides a number of interactive visualizations over both data sources for better understanding and exploration of crime activity in the city over a period of five years.
    Comment: 2 page
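At its simplest, co-locating tweet data with factual data reduces to an outer join on a shared spatial key. VC-FaT's actual schema is not given in the abstract, so the district keys and field names below are hypothetical:

```python
def co_locate(tweet_counts, factual_counts):
    """Outer-join per-district tweet mentions with official crime
    counts, filling zeros where one source has no data for a district.
    (Hypothetical keys and fields; VC-FaT's real schema is not given.)"""
    districts = sorted(set(tweet_counts) | set(factual_counts))
    return {
        d: {"tweets": tweet_counts.get(d, 0),
            "reported": factual_counts.get(d, 0)}
        for d in districts
    }

tweets = {"Mission": 42, "SoMa": 17}        # tweets mentioning crime
official = {"Mission": 30, "Tenderloin": 25}  # reported incidents
```

Districts where the two columns diverge sharply (many tweets, few reports, or the reverse) are exactly the ones such a tool would highlight for exploration.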

    Multimodal Video Annotation for Retrieval and Discovery of Newsworthy Video in a News Verification Scenario

    © 2019, Springer Nature Switzerland AG. This paper describes the combination of advanced technologies for social-media-based story detection, story-based video retrieval, and concept-based video (fragment) labeling under a novel approach to multimodal video annotation. The approach involves textual metadata, structural information, and visual concepts, together with a multimodal analytics dashboard that enables journalists to discover videos of news events posted to social networks in order to verify the details of the events shown. The paper outlines the characteristics of each individual method and describes how these techniques are blended to facilitate the content-based retrieval, discovery, and summarization of (parts of) news videos. A set of case-driven experiments conducted with the help of journalists indicates that the proposed multimodal video annotation mechanism, combined with a professional analytics dashboard that presents the collected and generated metadata about the news stories and their visual summaries, can support journalists in their content discovery and verification work.