Socializing in emergencies—A review of the use of social media in emergency situations
Social media tools are integrated in most parts of our daily lives, as citizens, netizens, researchers, or emergency responders. Lessons learnt from disasters and emergencies around the world in recent years have shown that social media tools can serve as an integral and significant component of crisis response. Communication is one of the fundamental tools of emergency management, and it becomes crucial when dozens of agencies and organizations are responding to a disaster. Regardless of the type of emergency, whether a terrorist attack, a hurricane, or an earthquake, communication lines may be overloaded and cellular networks overwhelmed as too many people attempt to use them to access information. Social scientists have shown that post-disaster active public participation is largely altruistic, including activities such as search and rescue, first aid treatment, victim evacuation, and online help. Social media provides opportunities for engaging citizens in emergency management by both disseminating information to the public and accessing information from them. During emergency events, individuals are exposed to large quantities of information without being aware of its validity or the risk of misinformation, but users are usually swift to issue corrections, thus making social media “self-regulating.”
Ranking for Scalable Information Extraction
Information extraction systems are complex software tools that discover structured information in natural language text. For instance, an information extraction system trained to extract tuples for an Occurs-in(Natural Disaster, Location) relation may extract the tuple ⟨tsunami, Hawaii⟩ from the sentence: "A tsunami swept the coast of Hawaii." Having information in structured form enables more sophisticated querying and data mining than what is possible over the natural language text. Unfortunately, information extraction is a time-consuming task. For example, a state-of-the-art information extraction system to extract Occurs-in tuples may take up to two hours to process only 1,000 text documents. Since document collections routinely contain millions of documents or more, improving the efficiency and scalability of the information extraction process over these collections is critical. As a significant step towards this goal, this dissertation presents approaches for (i) enabling the deployment of efficient information extraction systems and (ii) scaling the information extraction process to large volumes of text.
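The Occurs-in example above can be sketched as a naive pattern-based extractor. This is a hypothetical illustration only, not the dissertation's actual system; the disaster lexicon and the preposition-based matching rule are assumptions:

```python
import re

# Hypothetical sketch: a naive extractor for an
# Occurs-in(Natural Disaster, Location) relation.
DISASTERS = {"tsunami", "earthquake", "hurricane", "flood"}

def extract_occurs_in(sentence):
    """Return (disaster, location) tuples found in a sentence by pairing
    a known disaster word with a capitalized word after in/of/near."""
    tuples = []
    words = re.findall(r"\w+", sentence)
    for i, w in enumerate(words):
        if w.lower() in DISASTERS:
            for j in range(i + 1, len(words) - 1):
                if words[j].lower() in {"in", "of", "near"} and words[j + 1][0].isupper():
                    tuples.append((w.lower(), words[j + 1]))
    return tuples

print(extract_occurs_in("A tsunami swept the coast of Hawaii."))
# → [('tsunami', 'Hawaii')]
```

A real system would use trained sequence models rather than a fixed lexicon, but the sketch shows why extraction is costly: every sentence of every document must be scanned.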
To enable the deployment of efficient information extraction systems, we have developed two crucial building blocks for this task. As a first contribution, we have created REEL, a toolkit to easily implement, evaluate, and deploy full-fledged relation extraction systems. REEL, in contrast to existing toolkits, effectively modularizes the key components involved in relation extraction systems and can integrate other long-established text processing and machine learning toolkits. To define a relation extraction system for a new relation and text collection, users only need to specify the desired configuration, which makes REEL a powerful framework for both research and application building. As a second contribution, we have addressed the problem of building representative extraction task-specific document samples from collections, a step often required by approaches for efficient information extraction. Specifically, we devised fully automatic document sampling techniques for information extraction that can produce better-quality document samples than the state-of-the-art sampling strategies; furthermore, our techniques are substantially more efficient than the existing alternative approaches.
To scale the information extraction process to large volumes of text, we have developed approaches that address the efficiency and scalability of the extraction process by focusing the extraction effort on the collections, documents, and sentences worth processing for a given extraction task. For collections, we have studied both (adaptations of) state-of-the-art approaches for estimating the number of documents in a collection that lead to the extraction of tuples, as well as information extraction-specific approaches. Using these estimates, we can identify the collections worth processing and ignore the rest, for efficiency. For documents, we have developed an adaptive document ranking approach that relies on learning-to-rank techniques to prioritize the documents that are likely to produce tuples for an extraction task of choice. Our approach revises the (learned) ranking decisions periodically as the extraction process progresses and new characteristics of the useful documents are revealed. Finally, for sentences, we have developed an approach based on the sparse group selection problem that identifies sentences, modeled as groups of words, that best characterize the extraction task. Beyond identifying sentences worth processing, our approach aims at selecting sentences that lead to the extraction of unseen, novel tuples. Our approaches are lightweight and efficient, and dramatically improve the efficiency and scalability of the information extraction process. We can often complete the extraction task by focusing on just a very small fraction of the available text, namely, the text that contains relevant information for the extraction task at hand. Our approaches therefore constitute a substantial step towards efficient and scalable information extraction over large volumes of text.
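The adaptive document ranking idea can be illustrated with a minimal sketch. The dissertation uses learning-to-rank techniques; this toy loop only approximates the scheme with a word-overlap heuristic, and all names here are hypothetical:

```python
# Hypothetical sketch of adaptive document ranking (an illustration of the
# idea only; the actual approach relies on learning-to-rank, not this heuristic).
def adaptive_extract(docs, extract, batch_size=2):
    """Process documents in batches, periodically re-ranking the remaining
    documents by word overlap with documents that produced tuples so far."""
    useful_words = set()
    results, remaining = [], list(docs)
    while remaining:
        # Revise the ranking: prefer documents resembling productive ones.
        remaining.sort(key=lambda d: -len(set(d.split()) & useful_words))
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        for doc in batch:
            tuples = extract(doc)
            if tuples:
                results.extend(tuples)
                useful_words |= set(doc.split())  # learn from this document
    return results
```

The key property, shared with the dissertation's approach, is that ranking decisions are revised as extraction proceeds, so productive documents are reached early and unproductive ones can be skipped once a tuple budget is met.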
Bootstrapping Web Archive Collections From Micro-Collections in Social Media
In a Web plagued by disappearing resources, Web archive collections provide a valuable means of preserving Web resources important to the study of past events. These archived collections start with seed URIs (Uniform Resource Identifiers) hand-selected by curators. Curators produce high-quality seeds by removing non-relevant URIs and adding URIs from credible and authoritative sources, but this ability comes at a cost: collecting these seeds is time-consuming. The result is a shortage of curators, a lack of Web archive collections for various important news events, and a need for an automatic system for generating seeds.
We investigate the problem of generating seed URIs automatically, and explore the state of the art in collection building and seed selection. Attempts toward generating seeds automatically have mostly relied on scraping Web or social media Search Engine Result Pages (SERPs). In this work, we introduce a novel source for generating seeds from URIs in the threaded conversations of social media posts created by single or multiple users. Users on social media sites routinely create and share narratives about news events consisting of hand-selected URIs of news stories, tweets, videos, etc. In this work, we call these posts Micro-collections, whether shared on Reddit or Twitter, and we consider them an important source for seeds. This is because the effort taken to create Micro-collections is an indication of editorial activity and a demonstration of domain expertise. Therefore, we propose a model for generating seeds from Micro-collections. We begin by introducing a simple vocabulary, called post class, for describing social media posts across different platforms, and extract seeds from the Micro-collections post class. We further propose Quality Proxies for seeds by extending the idea of collection comparison to evaluation, and present our Micro-collection/Quality Proxy (MCQP) framework for bootstrapping Web archive collections from Micro-collections in social media.
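Extracting candidate seeds from a micro-collection can be sketched as follows. This is a hypothetical illustration; the post format (a list of text strings for one threaded conversation) and the two-URI threshold are assumptions, not the MCQP framework's actual rules:

```python
import re

# Hypothetical sketch: harvest seed URIs from posts that share several links,
# treating such posts as micro-collections (multiple URIs suggest curation effort).
URI_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def micro_collection_seeds(thread, min_uris=2):
    """Collect candidate seed URIs from posts in a threaded conversation
    that contain at least `min_uris` links."""
    seeds = []
    for post in thread:
        uris = URI_PATTERN.findall(post)
        if len(uris) >= min_uris:  # several hand-picked links: editorial signal
            seeds.extend(uris)
    # Deduplicate while preserving order of first appearance.
    return list(dict.fromkeys(seeds))
```

In the full framework, candidates like these would then be scored with Quality Proxies before being admitted as collection seeds.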
Dynamics of Identity Threats in Online Social Networks: Modelling Individual and Organizational Perspectives
This dissertation examines the identity threats perceived by individuals and organizations in Online Social Networks (OSNs). The research comprises two major studies. Using the concepts of Value Focused Thinking and the related methodology of Multiple Objectives Decision Analysis, the first study develops qualitative and quantitative value models to explain the social identity threats perceived by individuals in Online Social Networks. The qualitative value model defines the value hierarchy, i.e., the fundamental objectives for preventing social identity threats, and a taxonomy of user responses, referred to as Social Identity Protection Responses (SIPR), for averting those threats. The quantitative value model describes the utility of current social networking sites and SIPR in achieving the fundamental objectives for averting social identity threats in OSNs. The second study examines the threats to the external identity of organizations, i.e., Information Security Reputation (ISR), in the aftermath of a data breach. The threat analysis is undertaken by examining the discourses related to the data breaches at Home Depot and JPMorgan Chase on the popular microblogging website Twitter to identify: 1) the dimensions of information security discussed in the Twitter postings; 2) the attribution of data breach responsibility and the related sentiments expressed in the Twitter postings; and 3) the subsequent diffusion of the tweets that threaten organizational reputation.
A System for Recording Socio-Political Emergency Events
The purpose of this work is to create a web-based system for managing socio-political emergency data with an interactive map of Ukraine. The system gives regular users the opportunity to study data on emergency situations that occurred in different regions of Ukraine in different years. The administrator can create new markers on the map, enter information about new emergencies, and edit the values of existing markers.