53 research outputs found

    AI in Production: Video Analysis and Machine Learning for Expanded Live Events Coverage

    In common with many industries, TV and video production is likely to be transformed by Artificial Intelligence (AI) and Machine Learning (ML), with software and algorithms assisting production tasks that, conventionally, could only be carried out by people. Expanded coverage of a diverse range of live events is particularly constrained by the relative scarcity of skilled people, and is a strong use case for AI-based automation. This paper describes recent BBC research into potential production benefits of AI algorithms, using visual analysis and other techniques. Rigging small, static UHD cameras, we have enabled a one-person crew to crop UHD footage in multiple ways and cut between the resulting shots, effectively creating multi-camera HD coverage of events that cannot accommodate a camera crew. By working with programme makers to develop simple deterministic rules and, increasingly, training systems using advanced video analysis, we are developing a system of algorithms to automatically frame, sequence and select shots, and construct acceptable multi-camera coverage of previously untelevised types of events.
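
    The abstract stops short of implementation detail, but the core idea, cropping HD "virtual camera" shots out of a static UHD frame and cutting between them with simple deterministic rules, can be sketched roughly as below. This is a minimal illustration only, not the BBC system: the upstream subject detector, the rule thresholds and every function name are assumptions.

    ```python
    """Hypothetical sketch: rule-based multi-camera coverage from one static UHD feed."""
    import numpy as np

    UHD_W, UHD_H = 3840, 2160   # static UHD source camera
    HD_W, HD_H = 1920, 1080     # each virtual camera delivers an HD crop

    def crop_virtual_camera(frame: np.ndarray, cx: int, cy: int) -> np.ndarray:
        """Cut an HD window out of a UHD frame, centred on (cx, cy) and clamped to the frame."""
        x0 = int(np.clip(cx - HD_W // 2, 0, UHD_W - HD_W))
        y0 = int(np.clip(cy - HD_H // 2, 0, UHD_H - HD_H))
        return frame[y0:y0 + HD_H, x0:x0 + HD_W]

    def choose_shot(subjects, current_shot, frames_held, min_hold=75):
        """Deterministic shot selection: hold the current shot for at least min_hold
        frames to avoid rapid cutting, fall back to a wide framing when zero or
        several subjects are detected, otherwise frame the single detected subject.
        `subjects` is a list of (cx, cy) centroids from an upstream detector."""
        if current_shot is not None and frames_held < min_hold:
            return current_shot
        if len(subjects) != 1:
            return ("wide", None)
        return ("close", subjects[0])

    # Usage per UHD frame: detect subject centroids, call choose_shot(); for a
    # "close" shot crop with crop_virtual_camera() at the chosen centre, for a
    # "wide" shot downscale the full UHD frame to HD instead.
    ```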

    Semantic Annotation of Digital Objects by Multiagent Computing: Applications in Digital Heritage

    Heritage organisations around the world are participating in broad-scale digitisation projects, where traditional forms of heritage materials are being transcribed into digital representations in order to assist with their long-term preservation, facilitate cataloguing, and increase their accessibility to researchers and the general public. These digital formats open up a new world of opportunities for applying computational information retrieval techniques to heritage collections, making it easier than ever before to explore and document these materials. One of the key benefits of being able to easily share digital heritage collections is the strengthening and support of community memory, where members of a community contribute their perceptions and recollections of historical and cultural events so that this knowledge is not forgotten and lost over time. With the ever-growing popularity of digitally-native media and the high level of computer literacy in modern society, this is set to become a critical area for preservation in the immediate future.

    Improving Collection Understanding for Web Archives with Storytelling: Shining Light Into Dark and Stormy Archives

    Collections are the tools that people use to make sense of an ever-increasing number of archived web pages. As collections themselves grow, we need tools to make sense of them. Tools that work on the general web, like search engines, are not a good fit for these collections because search engines do not currently represent multiple document versions well. Web archive collections are vast, some containing hundreds of thousands of documents. Thousands of collections exist, many of which cover the same topic. Few collections include standardized metadata. Too many documents from too many collections with insufficient metadata make collection understanding an expensive proposition. This dissertation establishes a five-process model to assist with web archive collection understanding. This model aims to produce a social media story – a visualization with which most web users are familiar. Each social media story contains surrogates, which are summaries of individual documents. These surrogates, when presented together, summarize the topic of the story. After applying our storytelling model, they summarize the topic of a web archive collection. We develop and test a framework to select the best exemplars that represent a collection. We establish that algorithms produced from these primitives select exemplars that are otherwise undiscoverable using conventional search engine methods. We generate story metadata to improve the information scent of a story so users can understand it better. After an analysis showing that existing platforms perform poorly for web archives and a user study establishing the best surrogate type, we generate document metadata for the exemplars with machine learning. We then visualize the story and document metadata together and distribute it to satisfy the information needs of multiple personas who benefit from our model. Our tools serve as a reference implementation of our Dark and Stormy Archives storytelling model. Hypercane selects exemplars and generates story metadata. MementoEmbed generates document metadata. Raintale visualizes and distributes the story based on the story metadata and the document metadata of these exemplars. By providing understanding immediately, our stories save users the time and effort of reading thousands of documents and, most importantly, help them understand web archive collections.
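
    The dissertation's selection primitives are not reproduced in the abstract, but one common way to pick exemplars that represent a collection, clustering document vectors and keeping the member nearest each cluster centroid, can be sketched as follows. This is an assumed illustration rather than Hypercane's actual algorithm; the TF-IDF features, the k-means clustering and the select_exemplars name are choices made only for the sketch.

    ```python
    """Hypothetical sketch: choose k exemplar documents for a large collection by
    clustering TF-IDF vectors and taking the document nearest each centroid."""
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def select_exemplars(texts, k=10, seed=0):
        """Return the indices of up to k documents, one representative per cluster."""
        vectors = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(texts)
        km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(vectors)
        exemplars = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            if members.size == 0:
                continue                      # empty cluster: nothing to represent
            # pick the member document closest to the cluster centroid
            dists = np.linalg.norm(vectors[members].toarray() - km.cluster_centers_[c], axis=1)
            exemplars.append(int(members[np.argmin(dists)]))
        return exemplars

    # Usage: exemplar_ids = select_exemplars(document_texts, k=20); the selected
    # documents then become the surrogates that a story is built from.
    ```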

    Museum Digitisations and Emerging Curatorial Agencies Online

    This open access book explores the multiple forms of curatorial agencies that develop when museum collection digitisations, narratives and new research findings circulate online. Focusing on Viking Age objects, it tracks the effects of antagonistic debates on discussion forums and the consequences of search engines, personalisation, and machine learning on American-based online platforms. Furthermore, it considers eco-systemic processes comprising computation, rare-earth minerals, electrical currents and data centres and cables as novel forms of curatorial actions. Thus, it explores curatorial agency as social constructivist, semiotic, algorithmic, and material. This book is of interest to scholars and students in the fields of museum studies, cultural heritage and media studies. It also appeals to museum practitioners concerned with curatorial innovation at the intersection of humanist interpretations and new materialist and more-than-human frameworks.

    Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations

    Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to be able to determine what events have taken place, when they occurred, who performed them, and how they were performed. In order for an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on the results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately, the set of currently available tools and techniques used in digital forensic investigations does a poor job of supporting these characteristics. Many tools used contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on accesses to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine. To improve efficiency, we developed a set of performance optimizations, and applied them to the Scalpel file carver, creating order-of-magnitude improvements to processing speed and storage requirements. Last, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data generated by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations conducted using them.
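
    The Scalpel optimizations themselves are not described in enough detail in the abstract to reproduce, but the underlying technique, header/footer file carving over a raw image, is easy to illustrate. The sketch below is a naive, single-threaded version with two assumed signatures; it shows what a carver computes, not how Scalpel or the dissertation's optimized tool implements it.

    ```python
    """Hypothetical sketch of header/footer file carving over a raw disk image."""

    SIGNATURES = {
        # file type: (header bytes, footer bytes, maximum carve size in bytes)
        "jpg": (b"\xff\xd8\xff", b"\xff\xd9", 20 * 1024 * 1024),
        "png": (b"\x89PNG\r\n\x1a\n", b"IEND\xaeB`\x82", 20 * 1024 * 1024),
    }

    def carve(image_path):
        """Yield (file_type, offset, data) for every header/footer pair found."""
        with open(image_path, "rb") as f:
            data = f.read()              # fine for a sketch; a real carver streams
        for ftype, (header, footer, max_len) in SIGNATURES.items():
            start = data.find(header)
            while start != -1:
                end = data.find(footer, start, start + max_len)
                if end != -1:
                    yield ftype, start, data[start:end + len(footer)]
                start = data.find(header, start + 1)

    # Usage: for ftype, offset, blob in carve("disk.img"), write blob out to a
    # recovered file named after its type and offset.
    ```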

    Deliverable D6.4 Scenario Demonstrators (v2)

    This deliverable describes the final LinkedTV scenario demonstrators, which have been implemented with the most recent versions of the LinkedTV technology set. The demonstrators use real broadcaster TV programming (news from RBB and cultural heritage from AVRO) and show the benefits of LinkedTV through providing seamless access during the programme to related information and content from the Internet. They also validate the maturity of the LinkedTV technologies which were used to implement the scenario demonstrators.

    The Spatial Historian: Creating a Spatially Aware Historical Research System

    The intent of this study is to design a geospatial information system capable of facilitating the extraction and analysis of the fragmentary snapshots of history contained in hand-written historical documents. This customized system necessarily bypasses off-the-shelf GIS in order to support these unstructured primary historical research materials and bring long-dormant spatial stories previously hidden in archives, libraries, and other documentary storage locations to life. The software platform discussed here integrates the tasks of information extraction, data management, and analysis while simultaneously giving primary emphasis to supporting the spatial and humanistic analysis and interpretation of the data contents. The premise of this research study is that by integrating the collection of data, the extraction of content, and the analysis of information, stages that have traditionally been separated into a post-collection analysis and research process, more efficient processing and more effective historical research can be achieved.
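
    The abstract does not give the system's data model, so the sketch below only illustrates the kind of record an integrated, spatially aware workflow might keep: a transcribed fragment of a handwritten source linked to the geocoded places it mentions, so that extraction, data management and spatial analysis share one structure. All class and field names here are assumptions.

    ```python
    """Hypothetical sketch of a record linking a transcription to geocoded places."""
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PlaceMention:
        name: str          # place name as written in the source
        latitude: float    # geocoded coordinates, possibly approximate
        longitude: float

    @dataclass
    class DocumentFragment:
        source_id: str                   # archival reference for the original document
        transcription: str               # text extracted from the handwritten source
        date: Optional[str] = None       # date as recorded, if legible
        places: List[PlaceMention] = field(default_factory=list)

    def fragments_near(fragments, lat, lon, radius_deg=0.5):
        """Crude spatial query: fragments mentioning a place inside a bounding box."""
        return [f for f in fragments
                if any(abs(p.latitude - lat) <= radius_deg and
                       abs(p.longitude - lon) <= radius_deg
                       for p in f.places)]
    ```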

    Introduction: Ways of Machine Seeing

    How do machines, and, in particular, computational technologies, change the way we see the world? This special issue brings together researchers from a wide range of disciplines to explore the entanglement of machines and their ways of seeing from new critical perspectives. This 'editorial' is for a special issue of AI & Society, which includes contributions from: María Jesús Schultz Abarca, Peter Bell, Tobias Blanke, Benjamin Bratton, Claudio Celis Bueno, Kate Crawford, Iain Emsley, Abelardo Gil-Fournier, Daniel Chávez Heras, Vladan Joler, Nicolas Malevé, Lev Manovich, Nicholas Mirzoeff, Perle Møhl, Bruno Moreschi, Fabian Offert, Trevor Paglen, Jussi Parikka, Luciana Parisi, Matteo Pasquinelli, Gabriel Pereira, Carloalberto Treccani, Rebecca Uliasz, and Manuel van der Veen.