6,245 research outputs found

    Sound of Violent Images / Violence of Sound Images: Pulling apart Tom and Jerry

    Get PDF
    Violence permeates Tom and Jerry in the repetitive, physically violent gags and scenes of humiliation and mocking, yet, unarguably, there is comedic value in the onscreen violence. Scott Bradley's musical scoring of Tom and Jerry in the early William Hanna and Joseph Barbera period of production (pre-1958) played a key role in conveying the comedic impact of violent gags through the close synchronisation of music and sound with visual action. This scoring typifies the form of sound design characteristic of "zip crash" animation described by Paul Taberham (2012), in which sound actively participates in the humour and directly influences the viewer's interpretation of the visual action. This research investigates the sound-image relationships in Tom and Jerry through practice, by exploring how processes of decontextualisation and desynchronisation of the sound and image elements of violent gags unmask the underlying violent subtext of Tom and Jerry's slapstick comedy. The research addresses an undertheorised area in animation related to the role of sound-image synchronisation and presents new knowledge derived from the novel application of audiovisual analysis to Tom and Jerry source material and the production of audiovisual artworks. The findings are discussed from a pan-theoretical perspective, drawing on theorisations of film sound and cognitivist approaches to film music. This investigation through practice supports the notion that intrinsic and covert processes of sound-image synchronisation, as theorised by Kevin Donnelly (2014), play a key role in the reading of slapstick violence as comedic. This practice-based research can therefore be viewed as a case study that demonstrates the potential of a sampling-based creative practice to enable new readings to emerge from sampled source material. Novel artefacts were created in the form of audiovisual works that embody specific knowledge of factors related to the reconfiguration of sound-image relations and their impact in altering viewers' readings of the violence contained within Tom and Jerry. Critically, differences emerged between the artworks in the extent to which they unmasked underlying themes of violence; potential mediating factors are discussed, related to the influence of asynchrony on comical framing, the role of the unseen voice, perceived musicality, and perceptions of interiority in the audiovisual artworks. The research findings also yielded new knowledge regarding a potential gender-based bias in the perception of the human voice in the animated artworks produced. Finally, this research highlights the role of intra-animation dimensions pertaining to the use of the single frame, the use of blank spaces, and the relationship of sound-image synchronisation to the notion of the acousmatic imaginary. The PhD includes a portfolio of experimental audiovisual artworks produced during the testing and experimental phases of the research, on which the textual dissertation critically reflects.

    An Optimized Deep Learning Based Optimization Algorithm for the Detection of Colon Cancer Using Deep Recurrent Neural Networks

    Get PDF
    Colon cancer is the second leading cause of cancer-related death. The challenge in colon cancer detection is accurately identifying lesions at an early stage so that mortality and morbidity can be reduced. In this work, a colon cancer classification method is developed using a Dragonfly-based water wave optimization (DWWO) based deep recurrent neural network. Initially, the input cancer images undergo pre-processing, in which outer artifacts are removed. The pre-processed images are forwarded for segmentation, where they are partitioned into segments using generative adversarial networks (GANs). The obtained segments are passed to an attribute selection module, where statistical features, such as mean, variance, kurtosis and entropy, and textural features, such as LOOP features, are extracted. Finally, colon cancer classification is performed using the deep RNN, which is trained by the proposed Dragonfly-based water wave optimization algorithm. The DWWO algorithm is developed by integrating the Dragonfly algorithm and water wave optimization.
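
    The attribute-selection step above is conventional enough to sketch. Below is a minimal Python illustration, under the assumption of 8-bit grayscale segments, of how the named statistical features could be computed per segment; it is not the paper's implementation, and the function name and toy data are invented for the example.

```python
# A minimal sketch (not the paper's code) of the statistical attribute
# extraction step described above: mean, variance, kurtosis, and entropy
# per image segment. The 8-bit range and the toy segment are assumptions.
import numpy as np
from scipy.stats import kurtosis, entropy

def segment_features(segment):
    """Simple statistical attributes of one grayscale segment."""
    pixels = segment.ravel().astype(float)
    # Normalized histogram as a probability estimate for the entropy term.
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    probs = hist[hist > 0] / hist.sum()
    return {
        "mean": pixels.mean(),
        "variance": pixels.var(),
        "kurtosis": kurtosis(pixels),       # Fisher definition by default
        "entropy": entropy(probs, base=2),  # Shannon entropy in bits
    }

# Example on a random 64x64 "segment".
rng = np.random.default_rng(0)
print(segment_features(rng.integers(0, 256, size=(64, 64))))
```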

    TransEdge: Supporting Efficient Read Queries Across Untrusted Edge Nodes

    Full text link
    We propose Transactional Edge (TransEdge), a distributed transaction processing system for untrusted environments such as edge computing systems. What distinguishes TransEdge is its focus on efficient support for read-only transactions: it allows reading from different partitions consistently using one round in most cases and no more than two rounds in the worst case. TransEdge's design, including its consensus and transaction processing protocols, is centered around a dependency tracking scheme. Our performance evaluation shows that TransEdge's snapshot read-only transactions achieve a 9-24x speedup compared to current Byzantine systems.
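
    The abstract's one-round/two-round read pattern can be made concrete with a toy sketch. The following Python is an assumption-laden illustration, not TransEdge's protocol: hypothetical partitions attach dependency metadata to each version, a first optimistic round reads the latest visible versions, and a second round re-reads only the keys whose versions violate those dependencies.

```python
# Toy illustration only: TransEdge's actual consensus and transaction
# protocols are not reproduced here. All names (Version, Partition, deps)
# are hypothetical stand-ins for the dependency tracking idea.
from dataclasses import dataclass, field

@dataclass
class Version:
    version: int
    value: str
    deps: dict = field(default_factory=dict)  # key -> min version required

class Partition:
    def __init__(self, versions, visible=None):
        self.versions = sorted(versions, key=lambda v: v.version)
        # Simulate propagation lag: round-1 reads see only a prefix.
        self.visible = visible if visible is not None else len(self.versions)

    def read_latest(self):
        return self.versions[self.visible - 1]

    def read_at_least(self, min_version):
        # Second-round read honoring a dependency constraint.
        return next(v for v in self.versions if v.version >= min_version)

def read_only_txn(partitions, keys):
    # Round 1: optimistic latest-version reads from each partition.
    reads = {k: partitions[k].read_latest() for k in keys}
    # Collect the version each read requires of the other keys.
    required = {}
    for v in reads.values():
        for dep_key, dep_min in v.deps.items():
            if dep_key in reads:
                required[dep_key] = max(required.get(dep_key, 0), dep_min)
    stale = {k: m for k, m in required.items() if reads[k].version < m}
    # Round 2 (worst case): re-read only the stale keys.
    for k, min_version in stale.items():
        reads[k] = partitions[k].read_at_least(min_version)
    return {k: v.value for k, v in reads.items()}

# Example: y's latest write depends on x >= 2, but round 1 only sees x = 1.
parts = {
    "x": Partition([Version(1, "x1"), Version(2, "x2")], visible=1),
    "y": Partition([Version(3, "y3", deps={"x": 2})]),
}
print(read_only_txn(parts, ["x", "y"]))  # {'x': 'x2', 'y': 'y3'}
```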

    Normalizing Flow Ensembles for Rich Aleatoric and Epistemic Uncertainty Modeling

    Full text link
    In this work, we demonstrate how to reliably estimate epistemic uncertainty while maintaining the flexibility needed to capture complicated aleatoric distributions. To this end, we propose an ensemble of Normalizing Flows (NFs), which are state-of-the-art in modeling aleatoric uncertainty. The ensembles are created via sets of fixed dropout masks, making them less expensive than creating separate NF models. We demonstrate how to leverage the unique structure of NFs, their base distributions, to estimate aleatoric uncertainty without relying on samples, provide a comprehensive set of baselines, and derive unbiased estimates for differential entropy. The methods were applied to a variety of experiments commonly used to benchmark aleatoric and epistemic uncertainty estimation: 1D sinusoidal data, a 2D windy grid-world (Wet Chicken), Pendulum, and Hopper. In these experiments, we set up an active learning framework and evaluate each model's capability at measuring aleatoric and epistemic uncertainty. The results show the advantages of using NF ensembles in capturing complicated aleatoric distributions while maintaining accurate epistemic uncertainty estimates.
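
    To make the fixed-mask ensemble idea concrete, here is a minimal sketch, not the paper's implementation: a one-layer affine flow whose conditioner is shared across members that differ only in fixed dropout masks, with member disagreement on log-density serving as a crude epistemic signal. All sizes, names, and inputs are assumptions.

```python
# Sketch of an NF "ensemble" sharing one network, with members defined by
# fixed dropout masks (drawn once, never resampled). A single affine
# transform is used for brevity; real flows stack many such layers.
import torch

torch.manual_seed(0)
HIDDEN, MEMBERS = 64, 5

# Shared conditioner mapping a context feature to (shift, log_scale).
lin1 = torch.nn.Linear(1, HIDDEN)
lin2 = torch.nn.Linear(HIDDEN, 2)
# Fixed binary dropout masks, one per ensemble member (keep prob. 0.8).
masks = (torch.rand(MEMBERS, HIDDEN) < 0.8).float()

def member_log_prob(m, context, x):
    """log p_m(x | context) under member m's affine flow."""
    h = torch.relu(lin1(context)) * masks[m]   # fixed mask, never resampled
    shift, log_scale = lin2(h).unbind(-1)
    z = (x - shift) * torch.exp(-log_scale)    # invert the affine transform
    base = torch.distributions.Normal(0.0, 1.0)
    return base.log_prob(z) - log_scale        # change-of-variables term

context = torch.tensor([[0.3]])
x = torch.tensor([0.1])
log_ps = torch.stack([member_log_prob(m, context, x).squeeze()
                      for m in range(MEMBERS)])
print("mean log-density (aleatoric fit):    ", log_ps.mean().item())
print("log-density spread (epistemic proxy):", log_ps.var().item())
```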

    The Type 2 Diabetes Knowledge Portal: an open access genetic resource dedicated to type 2 diabetes and related traits

    Get PDF
    Associations between human genetic variation and clinical phenotypes have become a foundation of biomedical research. Most repositories of these data seek to be disease-agnostic and therefore lack disease-focused views. The Type 2 Diabetes Knowledge Portal (T2DKP) is a public resource of genetic datasets and genomic annotations dedicated to type 2 diabetes (T2D) and related traits. Here, we seek to make the T2DKP more accessible to prospective users and more useful to existing users. First, we evaluate the T2DKP's comprehensiveness by comparing its datasets with those of other repositories. Second, we describe how researchers unfamiliar with human genetic data can begin using and correctly interpreting them via the T2DKP. Third, we describe how existing users can extend their current workflows to use the full suite of tools offered by the T2DKP. We finally discuss the lessons offered by the T2DKP toward the goal of democratizing access to complex disease genetic results

    Modular lifelong machine learning

    Get PDF
    Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem, and the overall training cost increases further when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021); as a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems. First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired of an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improvements in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties which have not all been attained by past methods, and presents an LML algorithm which can achieve all of them apart from backward transfer.
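
    The library-reuse idea at the heart of this line of work can be shown schematically. The sketch below is a drastically simplified stand-in, not HOUDINI or PICLE: a library of frozen modules from earlier problems, an exhaustive search over combinations of reused and fresh modules, and a stubbed evaluation step. All module names and scores are invented.

```python
# Schematic sketch of modular lifelong learning: reuse frozen modules
# from a library where helpful, instantiate new ones where not, and keep
# the best-scoring combination for the new problem.
from itertools import product

library = {                     # frozen modules from earlier problems
    "enc": ["enc_mnist", "enc_speech"],
    "head": ["head_digits", "head_words"],
}

def evaluate(pipeline, problem):
    # Stand-in for "train any new modules, then measure validation score".
    score_table = {("enc_mnist", "head_digits"): 0.95}
    return score_table.get(pipeline, 0.5)

def search(problem):
    best, best_score = None, float("-inf")
    # Each slot may reuse a library module or instantiate a new one.
    for enc, head in product(library["enc"] + ["enc_new"],
                             library["head"] + ["head_new"]):
        score = evaluate((enc, head), problem)
        if score > best_score:
            best, best_score = (enc, head), score
    return best, best_score

print(search("digit_classification"))  # (('enc_mnist', 'head_digits'), 0.95)
```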

    On the importance of low-frequency signals in functional and molecular photoacoustic computed tomography

    Full text link
    In photoacoustic computed tomography (PACT) with short-pulsed laser excitation, wideband acoustic signals are generated in biological tissues with frequencies related to the effective shapes and sizes of the optically absorbing targets. Low-frequency photoacoustic signal components correspond to slowly varying spatial features and are often omitted during imaging due to the limited detection bandwidth of the ultrasound transducer, or during image reconstruction as undesired background that degrades image contrast. Here we demonstrate that low-frequency photoacoustic signals in fact contain functional and molecular information, and can be used to enhance structural visibility, improve quantitative accuracy, and reduce sparse-sampling artifacts. We provide an in-depth theoretical analysis of low-frequency signals in PACT and experimentally evaluate their impact on several representative PACT applications, such as mapping temperature in photothermal treatment, measuring blood oxygenation in a hypoxia challenge, and detecting photoswitchable molecular probes in deep organs. Our results strongly suggest that low-frequency signals are important for functional and molecular PACT.
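
    A small simulation makes the bandwidth argument tangible. The sketch below, under an idealized 1D assumption (it is not the paper's analysis or reconstruction code), shows that a broad absorber's slowly varying signal lies almost entirely below 1 MHz, so a detector that rejects low frequencies discards most of its energy and keeps mainly the edges.

```python
# Idealized 1D illustration: a large absorber produces a broad, slowly
# varying photoacoustic signal; a high-pass "transducer" removes most of
# its energy. Sampling rate and cutoff are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50e6                                    # 50 MHz sampling (assumed)
t = np.arange(0, 20e-6, 1 / fs)
# Broad Gaussian lobe standing in for a large absorber's signal.
signal = np.exp(-((t - 10e-6) / 2e-6) ** 2)

# Band-limited detection: 4th-order high-pass with a 1 MHz cutoff.
b, a = butter(4, 1e6 / (fs / 2), btype="highpass")
detected = filtfilt(b, a, signal)

# Energy removed by the high-pass, i.e. the low-frequency content.
discarded = np.sum((signal - detected) ** 2) / np.sum(signal ** 2)
print(f"fraction of signal energy below the cutoff: {discarded:.2f}")
```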

    Resilience and food security in a food systems context

    Get PDF
    This open access book compiles a series of chapters written by internationally recognized experts known for their in-depth but critical views on questions of resilience and food security. The book rigorously and critically assesses the contribution of the concept of resilience to advancing our understanding of, and our ability to design and implement, development interventions in relation to food security and humanitarian crises. To do so, the book departs from the well-worn tracks of agriculture and trade, which have shaped the mainstream debate on food security for nearly 60 years, and adopts instead a wider, more holistic perspective framed around food systems. The foundation for this new approach is the recognition that in the current post-globalization era, the food and nutritional security of the world's population no longer depends just on the performance of agriculture and policies on trade, but rather on the capacity of the entire (food) system to produce, process, transport and distribute safe, affordable and nutritious food for all, in ways that remain environmentally sustainable. In that context, a food system perspective provides a more appropriate frame, as it encourages broadening conventional thinking and acknowledging the systemic nature of the different processes and actors involved. This book is written for a broad audience, from academics and policymakers to students and practitioners.

    Blending the Material and Digital World for Hybrid Interfaces

    Get PDF
    The development of digital technologies in the 21st century is progressing continuously, and new device classes such as tablets, smartphones and smartwatches are finding their way into our everyday lives. However, this development also poses problems: the prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities and therefore demand their users' full attention. Compared to traditional tools and analog interfaces, the human skills of experiencing and manipulating material in its natural environment and context remain unexploited. To combine the best of both worlds, a key question is how the material and digital worlds can be blended meaningfully in the design and realization of novel hybrid interfaces. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed. This doctoral thesis therefore rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. The development of such hybrid interfaces raises overarching research questions about their design: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods for exploring different designs? And how can technology-enthusiast users be supported in prototyping? For a systematic investigation, the thesis builds on a design-oriented, exploratory and iterative development process using digital fabrication methods and novel materials. As its main contribution, four specific research projects are presented that apply and discuss different visual and interactive augmentation principles in real-world applications. The applications range from digitally enhanced paper and interactive cords to visual watch-strap extensions and novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, the thesis' extensive work of engineering versatile research platforms is accompanied by overarching conceptual work, user evaluations and technical experiments, as well as literature reviews.

    Plausibility Verification for 3D Object Detectors Using Energy-Based Optimization

    Get PDF
    Environmental perception obtained via object detectors has no predictable safety layer encoded into the model schema, which raises the question of how trustworthy the system's predictions are. As recent adversarial attacks show, most current object detection networks are vulnerable to input tampering, which in the real world could compromise the safety of autonomous vehicles. The problem is amplified further when uncertainty errors cannot propagate into submodules that are not part of an end-to-end system design. To address these concerns, a parallel module is required that verifies the predictions of the object proposals produced by deep neural networks. This work aims to verify 3D object proposals from the MonoRUn model by proposing a plausibility framework that leverages cross-sensor streams to reduce false positives. The proposed verification metric uses prior knowledge in the form of four different energy functions, each utilizing a certain prior to output an energy value, leading to a plausibility justification for the hypothesis under consideration. We also employ a novel two-step schema to improve the optimization of the composite energy function representing the energy model.
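
    The composite-energy idea can be sketched abstractly. In the Python below, the four energy terms are made-up stand-ins (the paper's actual priors and its two-step optimization are not reproduced): each maps a 3D box hypothesis to a scalar, and a low weighted sum is read as plausible.

```python
# Abstract sketch of a composite energy for plausibility checking. The
# four terms below are hypothetical placeholders for cross-sensor priors;
# real energies would inspect lidar points, class size statistics, etc.
import numpy as np

def e_lidar_support(box):   # stand-in: cross-sensor occupancy prior
    return 0.1
def e_size_prior(box):      # stand-in: class-conditional size prior
    return 0.4
def e_ground_contact(box):  # stand-in: geometric ground-plane prior
    return 0.2
def e_free_space(box):      # stand-in: free-space violation prior
    return 0.3

ENERGIES = [e_lidar_support, e_size_prior, e_ground_contact, e_free_space]

def composite_energy(box, weights):
    # Weighted sum of the individual energy terms.
    return float(np.dot(weights, [e(box) for e in ENERGIES]))

def is_plausible(box, weights=np.ones(4) / 4, threshold=0.5):
    # Low composite energy -> the proposal is judged plausible.
    return composite_energy(box, weights) < threshold

print(is_plausible({"xyz": (2.0, 0.0, 12.5), "lwh": (4.5, 1.8, 1.6)}))
```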