53 research outputs found
AI in Production: Video Analysis and Machine Learning for Expanded Live Events Coverage
In common with many industries, TV and video production is likely to be
transformed by Artificial Intelligence (AI) and Machine Learning (ML), with
software and algorithms assisting production tasks that, conventionally,
could only be carried out by people. Expanded coverage of a diverse
range of live events is particularly constrained by the relative scarcity of
skilled people, and is a strong use case for AI-based automation.
This paper describes recent BBC research into potential production
benefits of AI algorithms, using visual analysis and other techniques.
Rigging small, static UHD cameras, we have enabled a one-person crew
to crop UHD footage in multiple ways and cut between the resulting shots,
effectively creating multi-camera HD coverage of events that cannot
accommodate a camera crew. By working with programme makers to
develop simple deterministic rules and, increasingly, training systems
using advanced video analysis, we are developing a system of algorithms
to automatically frame, sequence and select shots, and construct
acceptable multi-camera coverage of previously untelevised types of event.
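The core cropping idea, extracting HD "virtual camera" shots from a fixed UHD frame, amounts to array slicing. Below is a minimal sketch assuming NumPy-style frame arrays; the function name `crop_hd` and the framing rule are illustrative assumptions, not the BBC system's API:

```python
import numpy as np

UHD_W, UHD_H = 3840, 2160   # UHD source resolution
HD_W, HD_H = 1920, 1080     # target HD shot size

def crop_hd(frame, cx, cy):
    """Extract a 1920x1080 'virtual camera' shot centred on (cx, cy),
    clamped so the crop window stays inside the UHD frame."""
    x = min(max(cx - HD_W // 2, 0), UHD_W - HD_W)
    y = min(max(cy - HD_H // 2, 0), UHD_H - HD_H)
    return frame[y:y + HD_H, x:x + HD_W]

# Two deterministic framings of the same UHD frame: a centre shot and a
# left-of-frame shot, ready to be cut between as separate "cameras".
uhd_frame = np.zeros((UHD_H, UHD_W, 3), dtype=np.uint8)
wide = crop_hd(uhd_frame, UHD_W // 2, UHD_H // 2)
left = crop_hd(uhd_frame, 0, UHD_H // 2)
print(wide.shape, left.shape)  # (1080, 1920, 3) (1080, 1920, 3)
```

Because each crop is a view into the same UHD frame, several HD "cameras" can be derived from one static rig with no extra capture hardware.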
Semantic Annotation of Digital Objects by Multiagent Computing: Applications in Digital Heritage
Heritage organisations around the world are participating in broad-scale digitisation projects, where traditional forms of heritage materials are being transcribed into digital representations in order to assist with their long-term preservation, facilitate cataloguing, and increase their accessibility to researchers and the general public. These digital formats open up a new world of opportunities for applying computational information retrieval techniques to heritage collections, making it easier than ever before to explore and document these materials. One of the key benefits of being able to easily share digital heritage collections is the strengthening and support of community memory, where members of a community contribute their perceptions and recollections of historical and cultural events so that this knowledge is not forgotten and lost over time. With the ever-growing popularity of digitally-native media and the high level of computer literacy in modern society, this is set to become a critical area for preservation in the immediate future.
Improving Collection Understanding for Web Archives with Storytelling: Shining Light Into Dark and Stormy Archives
Collections are the tools that people use to make sense of an ever-increasing number of archived web pages. As collections themselves grow, we need tools to make sense of them. Tools that work on the general web, like search engines, are not a good fit for these collections because search engines do not currently represent multiple document versions well. Web archive collections are vast, some containing hundreds of thousands of documents. Thousands of collections exist, many of which cover the same topic. Few collections include standardized metadata. Too many documents from too many collections with insufficient metadata makes collection understanding an expensive proposition.
This dissertation establishes a five-process model to assist with web archive collection understanding. This model aims to produce a social media story – a visualization with which most web users are familiar. Each social media story contains surrogates which are summaries of individual documents. These surrogates, when presented together, summarize the topic of the story. After applying our storytelling model, they summarize the topic of a web archive collection.
We develop and test a framework to select the best exemplars that represent a collection. We establish that algorithms produced from these primitives select exemplars that are otherwise undiscoverable using conventional search engine methods. We generate story metadata to improve the information scent of a story so users can understand it better. After an analysis showing that existing platforms perform poorly for web archives and a user study establishing the best surrogate type, we generate document metadata for the exemplars with machine learning. We then visualize the story and document metadata together and distribute it to satisfy the information needs of multiple personas who benefit from our model.
Our tools serve as a reference implementation of our Dark and Stormy Archives storytelling model. Hypercane selects exemplars and generates story metadata. MementoEmbed generates document metadata. Raintale visualizes and distributes the story based on the story metadata and the document metadata of these exemplars. By providing understanding immediately, our stories save users the time and effort of reading thousands of documents and, most importantly, help them understand web archive collections.
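As an illustration of the exemplar-selection step, here is a toy sketch in pure Python. It is not Hypercane's actual algorithm, only a greedy heuristic over word-count vectors: pick the document closest to the collection centroid, then repeatedly add the document least similar to those already chosen.

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def select_exemplars(docs, k=2):
    """Pick the doc closest to the collection centroid, then greedily add
    the doc least similar to those already chosen, so the exemplars cover
    distinct aspects of the collection."""
    vecs = [vectorize(d) for d in docs]
    centroid = Counter()
    for v in vecs:
        centroid.update(v)
    chosen = [max(range(len(docs)), key=lambda i: cosine(vecs[i], centroid))]
    while len(chosen) < k:
        rest = [i for i in range(len(docs)) if i not in chosen]
        chosen.append(min(rest, key=lambda i: max(cosine(vecs[i], vecs[j])
                                                  for j in chosen)))
    return [docs[i] for i in chosen]

docs = ["web archive crawl pages", "web archive index pages",
        "storm hurricane weather news", "hurricane storm damage"]
print(select_exemplars(docs))  # one exemplar from each topic cluster
```

Real exemplar selection would work over archived mementos and richer similarity measures, but the coverage-versus-redundancy trade-off sketched here is the essence of choosing documents that are otherwise hard to surface with a search engine.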
Museum Digitisations and Emerging Curatorial Agencies Online
This open access book explores the multiple forms of curatorial agencies that develop when museum collection digitisations, narratives and new research findings circulate online. Focusing on Viking Age objects, it tracks the effects of antagonistic debates on discussion forums and the consequences of search engines, personalisation, and machine learning on American-based online platforms. Furthermore, it considers eco-systemic processes comprising computation, rare-earth minerals, electrical currents and data centres and cables as novel forms of curatorial actions. Thus, it explores curatorial agency as social constructivist, semiotic, algorithmic, and material. This book is of interest to scholars and students in the fields of museum studies, cultural heritage and media studies. It also appeals to museum practitioners concerned with curatorial innovation at the intersection of humanist interpretations and new materialist and more-than-human frameworks.
Selecting and tailoring of images for online news content: a mixed-methods investigation of the needs and behaviour of image users in online journalism
This mixed-methods investigation explores how image professionals in online journalism search for, select and use images from large online collections. Further, findings from this exploration are used to devise and evaluate a needs-based practical solution for improving image retrieval.
The exploratory stage included semi-structured interviews and observations in situ, and provided several important contributions to the current understanding of the needs and behaviour of image users in the fully disintermediated environment of the online newsroom. This study found that these image users are creative professionals and self-taught yet confident image searchers. When illustrating news content, they apply a shared knowledge of how a specific image function (e.g., dominant image) must be presented visually to reach its full communication potential. This common understanding of image communicative functions has two implications for how these professionals search for and select images. Firstly, they begin searches with clear image needs pre-defined on multiple levels of image description, including visual image features, and their behaviour is consistent with targeted searching. This contradicts the previously reported preference for browsing as the typical mode of searching in online image collections. Secondly, they do not easily compromise on image needs related to visual features. When searches prove ineffective, they resort to editing skills and tailor the available images to match their original needs.
Further, it was found that the choice of images for headline content can in fact be predicted by a set of 11 visual image features. The features were extracted from a collection of artefacts created in the observation sessions and described by means of the Visual Social Semiotics (VSS) framework. The feature set was implemented as a filtering mechanism in a prototype and evaluated with image professionals in a within-subjects experimental study. This experiment showed a significant positive change in the behaviour of users when interacting with images pre-filtered strictly to their visual needs, a change not observed in the baseline system. This was demonstrated through users’ ability to immediately engage in inspecting images at the level of detail, and to make straightforward selections. Images from the experimental sets required no or only minimal tailoring, as confirmed in the final VSS-based survey with independent image experts.
Other important contributions of this investigation include two updated models. Firstly, the illustration task process framework, originally proposed by Markkula and Sormunen (2000), has been refined to include an image tailoring phase, in which creative professionals apply editorial treatment before publication. Further, the observations revealed that the verifying of images, consistent with the corresponding feature in Ellis et al.’s model (Ellis et al., 1993), was an activity critical to making selection decisions in online journalism. Therefore, Conniss et al.’s model of the image searching process (Conniss et al., 2000) has been updated to include a verifying phase.
The investigation concludes that, in order to meet the needs of creative image professionals in online journalism, image retrieval systems must support targeted searching and facilitate direct access to required images that can be easily verified for authenticity. The proposed multi-feature filtering system, firmly rooted in image users’ needs, appears to be a step towards automating image retrieval.
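To make the filtering idea concrete, a minimal sketch follows. The thesis's 11 VSS-derived features are not enumerated in this abstract, so the attribute names below (`salience`, `gaze`, `shot_distance`) and the image records are hypothetical stand-ins; the point is strict attribute matching to pre-defined visual needs, rather than ranked retrieval.

```python
# Hypothetical image records: the 11 VSS-derived features from the study are
# not listed in the abstract, so these attribute names are stand-ins.
IMAGES = [
    {"id": "img-01", "salience": "single-subject", "gaze": "direct",  "shot_distance": "close"},
    {"id": "img-02", "salience": "crowd",          "gaze": "averted", "shot_distance": "wide"},
    {"id": "img-03", "salience": "single-subject", "gaze": "averted", "shot_distance": "close"},
]

def filter_by_needs(images, **needs):
    """Keep only images whose features match every stated need: strict
    pre-filtering to the user's visual needs, not ranked retrieval."""
    return [img for img in images
            if all(img.get(k) == v for k, v in needs.items())]

hits = filter_by_needs(IMAGES, salience="single-subject", shot_distance="close")
print([img["id"] for img in hits])  # ['img-01', 'img-03']
```

Because every stated need must match, results either satisfy the searcher's visual requirements outright or are excluded, which mirrors the finding that these users do not compromise on visual features.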
Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations
Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to be able to determine what events have taken place, when they occurred, who performed them, and how they were performed. In order for an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on the results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately, the set of currently available tools and techniques used in digital forensic investigations does a poor job of supporting these characteristics. Many tools used contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on accesses to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine.
To improve efficiency, we developed a set of performance optimizations and applied them to the Scalpel file carver, yielding order-of-magnitude improvements in processing speed and storage requirements. Last, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data generated by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations conducted using them.
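File carvers such as Scalpel locate candidate files in raw disk images by scanning for known header and footer byte patterns. The sketch below illustrates that general technique only; it is not Scalpel's implementation, and the two-byte magic values in the toy image are fabricated for the example.

```python
def carve(data, header, footer, max_len=1 << 20):
    """Scan a raw image for header bytes, then take everything up to the
    next footer (within max_len) as a candidate carved file."""
    carved, pos = [], 0
    while True:
        start = data.find(header, pos)
        if start == -1:
            break
        end = data.find(footer, start + len(header))
        if end != -1 and end - start <= max_len:
            carved.append(data[start:end + len(footer)])
            pos = end + len(footer)
        else:
            pos = start + len(header)
    return carved

# Toy disk image with two embedded "files" delimited by fabricated magic bytes.
image = b"junk\xff\xd8AAAA\xff\xd9noise\xff\xd8BB\xff\xd9tail"
print(carve(image, b"\xff\xd8", b"\xff\xd9"))
# [b'\xff\xd8AAAA\xff\xd9', b'\xff\xd8BB\xff\xd9']
```

A production carver applies many header/footer signatures at once and is dominated by sequential scanning of very large images, which is why the pattern-search and I/O optimizations described above yield such large speedups.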
Deliverable D6.4 Scenario Demonstrators (v2)
This deliverable describes the final LinkedTV scenario demonstrators, which have been implemented with the most recent versions of the LinkedTV technology set. The demonstrators use real broadcaster TV programming (news from RBB and cultural heritage from AVRO) and show the benefits of LinkedTV by providing seamless access during the programme to related information and content from the Internet. They also validate the maturity of the LinkedTV technologies which were used to implement the scenario demonstrators.
The Spatial Historian: Creating a Spatially Aware Historical Research System
The intent of this study is to design a geospatial information system capable of facilitating the extraction and analysis of the fragmentary snapshots of history contained in hand-written historical documents. This customized system necessarily bypasses off-the-shelf GIS in order to support these unstructured primary historical research materials and bring long-dormant spatial stories, previously hidden in archives, libraries, and other documentary storage locations, to life. The software platform discussed here integrates the tasks of information extraction, data management, and analysis, while giving primary emphasis to supporting the spatial and humanistic analysis and interpretation of the data contents. The premise of this research study is that by integrating the collection of data, the extraction of content, and the analysis of information, which have traditionally been separate post-collection research processes, more efficient processing and more effective historical research can be achieved.
Introduction: Ways of Machine Seeing
How do machines, and, in particular, computational technologies, change the way we see the world? This special issue brings together researchers from a wide range of disciplines to explore the entanglement of machines and their ways of seeing from new critical perspectives.
This 'editorial' is for a special issue of AI & Society, which includes contributions from: María Jesús Schultz Abarca, Peter Bell, Tobias Blanke, Benjamin Bratton, Claudio Celis Bueno, Kate Crawford, Iain Emsley, Abelardo Gil-Fournier, Daniel Chávez Heras, Vladan Joler, Nicolas Malevé, Lev Manovich, Nicholas Mirzoeff, Perle Møhl, Bruno Moreschi, Fabian Offert, Trevor Paglen, Jussi Parikka, Luciana Parisi, Matteo Pasquinelli, Gabriel Pereira, Carloalberto Treccani, Rebecca Uliasz, and Manuel van der Veen.