
    Automatic summarization of narrative video

    The amount of digital video content available to users is rapidly increasing. Developments in computer, digital network, and storage technologies all contribute to broadening the supply of digital video. Only users’ attention and time remain scarce resources. Users face the problem of choosing the right content to watch among hundreds of potentially interesting offers. Video and audio have a dynamic nature: they cannot be properly perceived without considering their temporal dimension. This property makes it difficult to get a good idea of what a video item is about without watching it. Video previews aim to solve this issue by providing compact representations of video items that can help users make choices in massive content collections. This thesis is concerned with solving the problem of automatically creating video previews. To allow fast and convenient content selection, a video preview should take into consideration more than thirty requirements that we have collected by analyzing the related literature on video summarization and film production. The list was completed with additional requirements elicited by interviewing end-users, experts, and practitioners in the field of video editing and multimedia. This list represents our collection of user needs with respect to video previews. The requirements, presented from the point of view of the end-users, can be divided into seven categories: duration, continuity, priority, uniqueness, exclusion, structural, and temporal order. Duration requirements deal with the durations of the preview and its subparts. Continuity requirements demand that video previews be as continuous as possible. Priority requirements indicate which content should be included in the preview to convey as much information as possible in the shortest time. Uniqueness requirements aim to maximize the efficiency of the preview by minimizing redundancy. Exclusion requirements indicate which content should not be included in the preview. 
Structural requirements are concerned with the structural properties of video, while temporal order requirements set the order of the sequences included in the preview. Based on these requirements, we have introduced a formal model of video summarization specialized for the generation of video previews. The basic idea is to translate the requirements into score functions. Each score function is defined to have a non-positive value if a requirement is not met, and to increase with the degree of fulfillment of the requirement. A global objective function then combines all the score functions, and the problem of generating a preview is translated into the problem of finding the parts of the initial content that maximize the objective function. Our solution approach is based on two main steps: preparation and selection. In the preparation step, the raw audiovisual data is analyzed and segmented into basic elements that are suitable for inclusion in a preview. The segmentation of the raw data is based on a shot-cut detection algorithm. In the selection step, various content analysis algorithms are used to perform scene segmentation and advertisement detection, and to extract numerical descriptors of the content that, introduced into the objective function, make it possible to estimate the quality of a video preview. The core part of the selection step is the optimization step, which consists of searching the space of all possible previews for the set of segments that maximizes the objective function. Instead of solving the optimization problem exactly, an approximate solution is found by means of a local search algorithm using simulated annealing. We have performed a numerical evaluation of the quality of the solutions generated by our algorithm with respect to previews generated randomly or by selecting segments uniformly in time. The results on thirty content items have shown that the local search approach outperforms the other methods. 
However, based on this evaluation, we cannot conclude that the degree of fulfillment of the requirements achieved by our method satisfies the end-user needs completely. To validate our approach and assess end-user satisfaction, we conducted a user evaluation study in which we compared six aspects of previews generated using our algorithm to human-made previews and to previews generated by subsampling. The results have shown that previews generated using our optimization-based approach are not as good as manually made previews, but have higher quality than previews created by subsampling. The differences between the previews are statistically significant.
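The optimization scheme described above lends itself to a compact sketch. The following is a minimal illustration, not the thesis's actual implementation: the two score functions (duration and priority), their relative weight, the single-segment toggle move, and the annealing schedule are all simplifying assumptions.

```python
import math
import random

def objective(selection, durations, priorities, target):
    """Global objective: a weighted sum of per-requirement score functions
    (here only duration and priority, for illustration)."""
    total = sum(durations[i] for i in selection)
    duration_score = min(0.0, target - total)          # non-positive when the preview is too long
    priority_score = sum(priorities[i] for i in selection)  # reward informative segments
    return 10.0 * duration_score + priority_score

def anneal(durations, priorities, target, steps=5000, t0=1.0, cooling=0.995, seed=0):
    """Local search with simulated annealing over subsets of candidate segments."""
    rng = random.Random(seed)
    current, best = set(), set()
    t = t0
    for _ in range(steps):
        # Neighbor move: toggle the inclusion of one random segment.
        candidate = set(current)
        candidate ^= {rng.randrange(len(durations))}
        delta = (objective(candidate, durations, priorities, target)
                 - objective(current, durations, priorities, target))
        # Accept improvements always; accept worse moves with a
        # temperature-dependent probability to escape local optima.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = candidate
        if (objective(current, durations, priorities, target)
                > objective(best, durations, priorities, target)):
            best = set(current)
        t *= cooling
    return sorted(best)  # sorted segment indices preserve temporal order
```

Toggling one segment per step keeps each neighbor cheap to evaluate while still letting the search both grow and shrink the preview, which is what allows the annealer to trade priority against the duration penalty.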

    Integrated urban data visualising and decision-making framework

    The work package (WP) 2 on Basic Exploration, Stakeholder Studies and Requirement Analysis created the scientific foundation of the project and produced essential knowledge for the conceptualisation of UrbanData2Decide. Task 2.5 brought together the previous research results and elaborated an integrated research model as well as a stakeholder requirements catalogue with first use case scenarios. In this integrated deliverable, the previous results of WP2 were combined to define a first blueprint for the UrbanData2Decide system as it will be developed later in the project.

    The role of ICT in natural disaster management communication:a systematic literature review

    Abstract. The number and severity of natural hazards have increased in recent decades. These natural hazards cause billions in financial damage, as well as loss of life, every year. Fortunately, societies have learned to adapt to these phenomena and invested in managing and mitigating their effects. Communication plays a key role in managing these natural disasters and the effects they inflict upon communities. At the same time, information and communication technologies have become a ubiquitous and integral part of our lives. However, the available technologies and the ability to utilize these technologies vary. Thus, there is a need for an up-to-date review of the use of these technologies. In this thesis, the role of information and communication technologies in natural disaster management communication is examined. The purpose of this thesis is to aggregate scientific knowledge on the role of information and communication technologies in natural disaster management communication. To this end, this study used a systematic literature review as its research method. In addition, the aim is to identify possible best practices and discuss the findings of the systematic literature review. The results are used to inform future work on developing an open-source-based system for natural disaster management. The main contribution of this thesis is the summarization of the findings. These findings can be used as a knowledge base or to reflect upon new solutions in natural disaster management. The search strategy used in this study identified 584 studies in total, from which 24 primary studies were selected. Recommended future actions involve further studying the identified best practices and their application in practice. In addition, further developing the proposed artifact is recommended.

    Information Refinement Technologies for Crisis Informatics: User Expectations and Design Implications for Social Media and Mobile Apps in Crises

    In the past 20 years, mobile technologies and social media have not only been established in everyday life, but also in crises, disasters, and emergencies. Especially large-scale events, such as Hurricane Sandy in 2012 or the European floods of 2013, showed that citizens are not passive victims but active participants utilizing mobile and social information and communication technologies (ICT) for crisis response (Reuter, Hughes, et al., 2018). Accordingly, the research field of crisis informatics emerged as a multidisciplinary field which combines computing and social science knowledge of disasters and is rooted in disciplines such as human-computer interaction (HCI), computer science (CS), computer supported cooperative work (CSCW), and information systems (IS). While citizens use personal ICT to respond to a disaster to cope with uncertainty, emergency services such as fire and police departments started using available online data to increase situational awareness and improve decision making for a better crisis response (Palen & Anderson, 2016). When looking at even larger crises, such as the ongoing COVID-19 pandemic, it becomes apparent that the challenges of crisis informatics are amplified (Xie et al., 2020). Notably, information is often not available in perfect shape to assist crisis response: the dissemination of high-volume, heterogeneous and highly semantic data by citizens, often referred to as big social data (Olshannikova et al., 2017), poses challenges for emergency services in terms of access, quality and quantity of information. In order to achieve situational awareness or even actionable information, meaning the right information for the right person at the right time (Zade et al., 2018), information must be refined according to event-based factors, organizational requirements, societal boundary conditions and technical feasibility. 
In order to research the topic of information refinement, this dissertation combines the methodological framework of design case studies (Wulf et al., 2011) with principles of design science research (Hevner et al., 2004). These extended design case studies consist of four phases, each contributing to research with distinct results. This thesis first reviews existing research on use, role, and perception patterns in crisis informatics, emphasizing the increasing potentials of public participation in crisis response using social media. Then, empirical studies conducted with the German population reveal positive attitudes and increasing use of mobile and social technologies during crises, but also highlight barriers of use and expectations towards emergency services to monitor and interact in media. The findings led to the design of innovative ICT artefacts, including visual guidelines for citizens’ use of social media in emergencies (SMG), an emergency service web interface for aggregating mobile and social data (ESI), an efficient algorithm for detecting relevant information in social media (SMO), and a mobile app for bidirectional communication between emergency services and citizens (112.social). The evaluation of the artefacts involved the participation of end-users in the application field of crisis management, pointing out potentials for future improvements and further research. The thesis concludes with a framework on information refinement for crisis informatics, integrating event-based, organizational, societal, and technological perspectives.

    Enhancing disaster situational awareness through scalable curation of social media

    Online social media is today used during humanitarian disasters by victims, responders, journalists and others, to publicly exchange accounts of ongoing events, requests for help, aggregate reports, reflections and commentary. In many cases, incident reports become available on social media before being picked up by traditional information channels, and often include rich evidence such as photos and video recordings. However, individual messages are sparse in content and message inflow rates can reach hundreds of thousands of items per hour during large scale events. Current information management methods struggle to make sense of this vast body of knowledge, due to limitations in terms of accuracy and scalability of processing, summarization capabilities, organizational acceptance and even basic understanding of users’ needs. If solutions to these problems can be found, social media can be mined to offer disaster responders unprecedented levels of situational awareness. This thesis provides a first comprehensive overview of humanitarian disaster stakeholders and their information needs, against which the utility of the proposed and future information management solutions can be assessed. The research then shows how automated online text-clustering techniques can provide report de-duplication, timely event detection, ranking and summarization of content in rapid social media streams. To identify and filter out reports that correspond to the information needs of specific stakeholders, crowdsourced information extraction is combined with supervised classification techniques to generalize human annotation behaviour and scale up processing capacity by several orders of magnitude. These hybrid processing techniques are implemented in CrisisTracker, a novel software tool, and evaluated through deployment in a large-scale multi-language disaster information management setting. 
Evaluation shows that the proposed techniques can effectively make social media an accessible complement to currently relied-on information collection methods, enabling disaster analysts to detect and comprehend unfolding events more quickly, more deeply, and with greater coverage.
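The report de-duplication step described above can be illustrated with a single-pass online clustering sketch. This is a hedged approximation, not CrisisTracker's actual algorithm: the bag-of-words representation, cosine similarity, and fixed merge threshold are assumptions chosen for brevity.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_stream(messages, threshold=0.5):
    """Single-pass online clustering: each incoming message joins the most
    similar existing cluster, or starts a new one. Near-duplicate reports
    end up grouped together, so each cluster represents one story."""
    clusters = []  # each cluster: {"centroid": Counter, "members": [str]}
    for msg in messages:
        vec = Counter(msg.lower().split())
        best, best_sim = None, 0.0
        for c in clusters:
            sim = cosine(vec, c["centroid"])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["members"].append(msg)
            best["centroid"].update(vec)  # drift the centroid toward the new member
        else:
            clusters.append({"centroid": vec, "members": [msg]})
    return clusters
```

In a real high-volume stream, centroids would typically be decayed and inactive clusters expired over time to bound memory and keep event detection timely.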

    Digital tools in media studies: analysis and research. An overview

    Digital tools are increasingly used in media studies, opening up new perspectives for research and analysis, while creating new problems at the same time. In this volume, international media scholars and computer scientists present their projects, ranging from powerful film-historical databases to automatic video analysis software, discussing their application of digital tools and reporting on their results. This book is the first publication of its kind and a helpful guide to both media scholars and computer scientists who intend to use digital tools in their research, providing information on applications, standards, and problems.
