724 research outputs found

    Sound of Violent Images / Violence of Sound Images: Pulling apart Tom and Jerry

    Violence permeates Tom and Jerry in its repetitive, physically violent gags and scenes of humiliation and mocking, yet unarguably there is comedic value in the onscreen violence. The musical scoring of Tom and Jerry by Scott Bradley in the early William Hanna and Joseph Barbera period of production (pre-1958) played a key role in conveying the comedic impact of violent gags through the close synchronisation of music and sound with visual action. This scoring typifies the form of sound design characteristic of zip crash animation as described by Paul Taberham (2012), in which sound actively participates in the humour and directly influences the viewer’s interpretation of the visual action. This research investigates the sound-image relationships in Tom and Jerry through practice, exploring how processes of decontextualisation and desynchronisation of the sound and image elements of violent gags unmask the underlying violent subtext of Tom and Jerry’s slapstick comedy. It addresses an undertheorised area in animation related to the role of sound-image synchronisation, and presents new knowledge derived from the novel application of audiovisual analysis to Tom and Jerry source material and from the production of audiovisual artworks. The findings of this research are discussed from a pan-theoretical perspective, drawing on theorisation of film sound and cognitivist approaches to film music. This investigation through practice supports the notion that intrinsic and covert processes of sound-image synchronisation, as theorised by Kevin Donnelly (2014), play a key role in the reading of slapstick violence as comedic. This practice-based research can therefore be viewed as a case study demonstrating the potential of a sampling-based creative practice to enable new readings to emerge from sampled source material.
Novel artefacts were created in the form of audiovisual works that embody specific knowledge of factors related to the reconfiguration of sound-image relations and their impact in altering viewers’ readings of the violence contained within Tom and Jerry. Critically, differences emerged between the artworks in the extent to which they unmasked underlying themes of violence; potential mediating factors are discussed, related to the influence of asynchrony on comical framing, the role of the unseen voice, perceived musicality, and perceptions of interiority in the audiovisual artworks. The research findings yielded new knowledge regarding a potential gender-based bias in the perception of the human voice in the animated artworks produced. This research also highlights the role of intra-animation dimensions pertaining to the use of the single frame, the use of blank spaces, and the relationship of sound-image synchronisation to the notion of the acousmatic imaginary. The PhD includes a portfolio of experimental audiovisual artworks produced during the testing and experimental phases of the research, on which the textual dissertation critically reflects.

    Blockchain Large Language Models

    This paper presents a dynamic, real-time approach to detecting anomalous blockchain transactions. The proposed tool, BlockGPT, generates tracing representations of blockchain activity and trains a large language model from scratch to act as a real-time intrusion detection system. Unlike traditional methods, BlockGPT offers an unrestricted search space and does not rely on predefined rules or patterns, enabling it to detect a broader range of anomalies. We demonstrate the effectiveness of BlockGPT as an anomaly detection tool for Ethereum transactions. In our experiments, it effectively identifies abnormal transactions in a dataset of 68M transactions and has a batched throughput of 2284 transactions per second on average. Our results show that BlockGPT identifies abnormal transactions by ranking 49 out of 124 attacks among the top-3 most abnormal transactions interacting with their victim contracts. This work contributes to the field of blockchain transaction analysis by introducing a custom data encoding compatible with the transformer architecture, a domain-specific tokenization technique, and a tree encoding method specifically crafted for the Ethereum Virtual Machine (EVM) trace representation.
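The top-3 detection criterion above can be sketched in miniature: rank transactions by the likelihood the model assigns to their traces and check whether a known attack lands among the most surprising ones. The function names and scores below are illustrative, not BlockGPT's actual API.

```python
def rank_anomalies(scores):
    """Return transaction ids ordered from most to least anomalous.

    `scores` maps a transaction id to the model's log-likelihood for its
    trace; a lower likelihood means a more surprising, hence more
    anomalous, transaction.
    """
    return sorted(scores, key=lambda tx: scores[tx])

def detected_in_top_k(scores, attack_tx, k=3):
    """Check whether a known attack ranks among the top-k anomalies,
    mirroring the paper's top-3 detection criterion."""
    return attack_tx in rank_anomalies(scores)[:k]

# Toy example: the attack trace receives by far the lowest log-likelihood.
scores = {"tx_a": -2.1, "tx_b": -1.8, "tx_attack": -9.7, "tx_c": -3.0}
print(detected_in_top_k(scores, "tx_attack"))  # True
```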

    Wiring Circuits Is Easy as {0,1,ω}, or Is It...

    Quantitative Type-Systems support fine-grained reasoning about term usage in our programming languages. Hardware Design Languages are another style of language in which quantitative typing would be beneficial. When wiring components together we must ensure that there are no unused ports, dangling wires, or accidental fan-ins and fan-outs. Although many wire usage checks are detectable using static analysis tools, such as Verilator, quantitative typing supports making these extrinsic checks an intrinsic aspect of the type-system. With quantitative typing of bound terms, we can provide design-time checks that all wires and ports have been used, and ensure that all wiring decisions are explicitly made, and are neither implicit nor accidental. We showcase the use of quantitative types in hardware design languages by detailing how we can retrofit quantitative types onto SystemVerilog netlists, and the impact that such a quantitative type-system has when creating designs. Netlists are gate-level descriptions of hardware that are produced as the result of synthesis, and it is from these netlists that hardware is generated (fabless or fabbed). First, we present a simple structural type-system for a featherweight version of SystemVerilog netlists that demonstrates how we can type netlists using standard structural techniques, and what it means for netlists to be type-safe but still lead to ill-wired designs. We then detail how to retrofit the language with quantitative types, make the type-system sub-structural, and detail how our new type-safety result ensures that wires and ports are used once. Our ideas have been proven both practically and formally by realising our work in Idris2, through which we can construct a verified language implementation that can type-check existing designs. 
From this work, we look to promote quantitative typing back up the synthesis chain into a more comprehensive hardware description language, and to help develop new and better hardware description languages with quantitative typing.
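As a hedged illustration of the extrinsic checks the paper makes intrinsic, the sketch below (plain Python, not the Idris2 formalisation) counts wire usages in a toy netlist and reports the violations a quantitative type system would rule out at design time; the {0,1,ω} of the title correspond to unused, used-exactly-once, and unrestricted usages.

```python
from collections import Counter

def check_wiring(drivers, readers, wires):
    """Report wires that violate the usage discipline.

    `drivers` / `readers` list wire names as they appear on gate outputs /
    inputs; `wires` is the set of declared wires. A well-wired design drives
    every wire exactly once and reads it at least once.
    """
    drive = Counter(drivers)
    read = Counter(readers)
    errors = []
    for w in wires:
        if drive[w] == 0 and read[w] == 0:
            errors.append((w, "unused wire"))
        elif drive[w] == 0:
            errors.append((w, "dangling wire (read but never driven)"))
        elif drive[w] > 1:
            errors.append((w, "accidental fan-in (multiple drivers)"))
        elif read[w] == 0:
            errors.append((w, "driven but never read"))
    return sorted(errors)

# Toy netlist: `b` has two drivers; `c` is declared but never used.
print(check_wiring(drivers=["a", "b", "b"],
                   readers=["a"],
                   wires={"a", "b", "c"}))
```

A static analyser such as Verilator reports these after the fact; the point of the quantitative type system is that such a design fails to type-check in the first place.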

    Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology

    The great behavioral heterogeneity observed between individuals with the same psychiatric disorder, and even within one individual over time, complicates both clinical practice and biomedical research. However, modern technologies present an exciting opportunity to improve behavioral characterization. Existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at a greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometry, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof-of-concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture.

    Improving the census of open clusters in the Milky Way with data from Gaia

    For over a century, open clusters have been a key tool for understanding stellar and galactic evolution. Now, thanks to groundbreaking new astrometric and photometric data from the European Space Agency's Gaia satellite, it is possible to study open clusters to never-before-seen levels of accuracy and precision. In this thesis, I develop and apply new methodologies to improve the census of open clusters with data from Gaia. I focus on using modern, efficient, and statistically rigorous techniques, aiming to maximise the reliability and usefulness of the open cluster census despite the many challenges of working with the billion-star dataset of Gaia. Firstly, I conducted a comparative study of clustering algorithms for retrieving open clusters blindly from Gaia data, finding that a previously untrialled algorithm, HDBSCAN, is the most sensitive algorithm for open cluster recovery. Next, using this methodology, I used Gaia DR3 data to create the largest homogeneous catalogue of open clusters to date, recovering a total of 7167 clusters -- 2387 of which are candidate new objects. I developed an approximate Bayesian neural network for classifying the reliability of the colour-magnitude diagrams of the clusters in the census, and used a modification of this network to infer parameters such as the age and extinction of these clusters. Finally, since many of the objects in my catalogue appeared more compatible with moving groups, I measured accurate masses, Jacobi radii, and velocity dispersions for these clusters, thus creating the largest catalogue of these parameters for open clusters to date. Using these parameters, I showed that no more than 5619 of the clusters in my catalogue are compatible with bound open clusters. I used my mass estimates to derive an approximate completeness estimate for the Gaia DR3 open cluster census, finding that the approximate 100% completeness limit depends strongly on cluster mass.
The results of this thesis show that it is possible to reliably create a catalogue of open clusters with a single blind search, in addition to measuring parameters for these objects. The methods developed in this thesis will be applicable to future data releases from Gaia and other sources.
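As an illustration of one of the measured quantities, the Jacobi (tidal) radius of a cluster on a roughly circular orbit is commonly approximated by treating the Galactic mass enclosed within the orbit as a point mass. The formula is a standard textbook approximation, and the numbers below are illustrative, not values or code from the thesis.

```python
# Common approximation for the Jacobi radius of a cluster of mass M_c at
# galactocentric radius R_gc, with enclosed Galactic mass M_gal:
#     r_J = R_gc * (M_c / (3 * M_gal)) ** (1/3)

def jacobi_radius(r_gc_pc, m_cluster_msun, m_gal_msun):
    """Jacobi (tidal) radius in parsecs.

    r_gc_pc: galactocentric orbital radius in parsecs.
    m_cluster_msun, m_gal_msun: cluster mass and enclosed Galactic mass
    in solar masses. Stars beyond r_J are easily stripped by the tide.
    """
    return r_gc_pc * (m_cluster_msun / (3.0 * m_gal_msun)) ** (1.0 / 3.0)

# Illustrative numbers: a ~500 Msun cluster near the solar circle (~8.2 kpc),
# with roughly 9e10 Msun of Galactic mass enclosed -> a radius of order 10 pc.
print(round(jacobi_radius(8200.0, 500.0, 9e10), 1))
```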

    Webb Protocol: A cross-chain private application and governance protocol.

    In this paper, we present the Webb Protocol, a system for building and governing cross-chain applications that can support shared anonymity-set functionality across a set of identical bridged systems on compatible blockchains. The Webb Protocol is composed of two major protocols that deal with storing, updating, and validating the data and state changes that occur on a bridge and are relevant to replicate on each connected chain. State is efficiently verifiable through the use of Merkle trees, and privacy is provided using zero-knowledge proofs of membership. Together, these allow one to create applications leveraging distributed state with private property-testing capabilities. Both financial and non-financial applications are described as motivating examples within the paper.
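The efficient-verifiability claim rests on Merkle proofs: a verifier can recompute a state root from a single leaf plus logarithmically many sibling hashes. The sketch below is a generic binary Merkle tree in Python, not the Webb Protocol's exact encoding, and the real protocol hides the leaf behind a zero-knowledge proof of membership rather than revealing it as done here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of raw leaves up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))   # (hash, sibling-on-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from one leaf and its sibling path."""
    node = h(leaf)
    for sibling, on_left in proof:
        node = h(sibling + node) if on_left else h(node + sibling)
    return node == root

leaves = [b"deposit-1", b"deposit-2", b"deposit-3"]
root = merkle_root(leaves)
print(verify(root, b"deposit-2", merkle_proof(leaves, 1)))  # True
```

Only the root needs to be replicated on each connected chain; membership of any leaf can then be checked against it with a proof whose size grows logarithmically in the anonymity set.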

    Robust Distortion-free Watermarks for Language Models

    We propose a methodology for planting watermarks in text from an autoregressive language model that are robust to perturbations without changing the distribution over text up to a certain maximum generation budget. We generate watermarked text by mapping a sequence of random numbers -- which we compute using a randomized watermark key -- to a sample from the language model. To detect watermarked text, any party who knows the key can align the text to the random number sequence. We instantiate our watermark methodology with two sampling schemes: inverse transform sampling and exponential minimum sampling. We apply these watermarks to three language models -- OPT-1.3B, LLaMA-7B and Alpaca-7B -- to experimentally validate their statistical power and robustness to various paraphrasing attacks. Notably, for both the OPT-1.3B and LLaMA-7B models, we find we can reliably detect watermarked text (p ≤ 0.01) from 35 tokens even after corrupting between 40-50% of the tokens via random edits (i.e., substitutions, insertions or deletions). For the Alpaca-7B model, we conduct a case study on the feasibility of watermarking responses to typical user instructions. Due to the lower entropy of the responses, detection is more difficult: around 25% of the responses -- whose median length is around 100 tokens -- are detectable with p ≤ 0.01, and the watermark is also less robust to certain automated paraphrasing attacks we implement.
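The exponential minimum sampling scheme can be sketched in a few lines: each generation step draws one key-derived uniform per vocabulary token and emits argmin over i of -ln(r_i)/p_i, which is distributed exactly as a sample from p yet is reproducible by anyone holding the key; a detector sums -ln(1 - r) at the emitted tokens, which is unusually large for watermarked text. The fixed toy distribution and the detector statistic below are a simplified sketch, not the paper's exact implementation.

```python
import math
import random

def watermarked_token(probs, uniforms):
    """Exponential minimum sampling: emit argmin_i -ln(r_i) / p_i."""
    scores = [-math.log(r) / p for r, p in zip(uniforms, probs)]
    return min(range(len(probs)), key=scores.__getitem__)

def detect_score(tokens, uniform_stream):
    """Sum -ln(1 - r) at the observed tokens. Watermarked text aligns
    with large r values at its chosen tokens, inflating the score;
    unrelated text scores about 1 per token on average."""
    return sum(-math.log(1.0 - us[t]) for t, us in zip(tokens, uniform_stream))

rng = random.Random(0)                 # stands in for the shared watermark key
probs = [0.5, 0.3, 0.2]                # toy fixed next-token distribution
stream = [[rng.random() for _ in probs] for _ in range(200)]

watermarked = [watermarked_token(probs, us) for us in stream]
unrelated = [rng.randrange(len(probs)) for _ in range(200)]

# Watermarked text scores far above unrelated text of the same length.
print(detect_score(watermarked, stream) > detect_score(unrelated, stream))
```

The distribution-preserving property comes from the fact that -ln(r_i)/p_i is exponentially distributed with rate p_i, so token i wins the argmin with probability exactly p_i.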