
    What skills pay more? The changing demand and return to skills for professional workers

    Technology is disrupting labor markets. We analyze the demand for, and returns to, skills at the occupation and state level across two time periods using job postings. First, we use principal components analysis to derive nine skill groups: ‘collaborative leader’, ‘interpersonal & organized’, ‘big data’, ‘cloud computing’, ‘programming’, ‘machine learning’, ‘research’, ‘math’ and ‘analytical’. Second, we comment on changes in the price of, and demand for, skills over time. Third, we analyze non-linear returns to all skill groups and their interactions. We find that ‘collaborative leader’ skills become significant over time and that legacy data skills are displaced by innovative ones.
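
    As a rough illustration of the first step described above, the sketch below derives latent skill groups from a postings-by-skills indicator matrix with principal components analysis. This is a minimal sketch, not the authors' code; the data, dimensions and variable names are all hypothetical.

```python
# Minimal sketch of PCA-based skill grouping (illustrative data only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical data: 1,000 postings x 12 skill keywords (1 = skill listed).
X = rng.integers(0, 2, size=(1000, 12)).astype(float)

pca = PCA(n_components=9)        # nine components, one per skill group
scores = pca.fit_transform(X)    # posting-level score on each component

# Skills that load heavily on the same component form one named group,
# e.g. 'machine learning' or 'collaborative leader'.
loadings = pca.components_       # shape (9, 12): component-by-skill weights
print(pca.explained_variance_ratio_)
```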

    Computational Analyses of Metagenomic Data

    Metagenomics studies the collective microbial genomes extracted from a particular environment without requiring the culturing or isolation of individual genomes, addressing questions revolving around the composition, functionality, and dynamics of microbial communities. The intrinsic complexity of metagenomic data and the diversity of applications call for efficient and accurate computational methods in data handling. In this thesis, I present three primary projects that collectively focus on the computational analysis of metagenomic data, each addressing a distinct topic. In the first project, I designed and implemented an algorithm named Mapbin for reference-free genomic binning of metagenomic assemblies. Binning aims to group a mixture of genomic fragments based on their genome of origin. Mapbin enhances binning results by building a multilayer network that combines the initial binning, the assembly graph, and read-pairing information from paired-end sequencing data. The network is then partitioned by the community-detection algorithm Infomap to yield a new binning result. Mapbin was tested on multiple simulated and real datasets, and the results indicated an overall improvement in common binning quality metrics. The second and third projects are both derived from ImMiGeNe, a collaborative and multidisciplinary study investigating the interplay between gut microbiota, host genetics, and immunity in stem-cell transplantation (SCT) patients. In the second project, I conducted microbiome analyses of the metagenomic data. The workflow included the removal of contaminant reads and multiple taxonomic and functional profiling analyses. The results revealed that the SCT recipients' samples yielded significantly fewer reads, with heavy contamination by host DNA, and that their microbiomes displayed evident signs of dysbiosis. Finally, I discussed several inherent challenges posed by extremely low levels of target DNA and high levels of contamination in the recipient samples, which cannot be rectified solely through bioinformatics approaches. The primary goal of the third project was to design a set of primers covering the bacterial flagellin genes present in the human gut microbiota. Given the notable diversity of flagellins, I incorporated a method to select representative bacterial flagellin gene sequences, a heuristic approach based on established primer-design methods to generate a degenerate primer set, and a selection method to filter out genes unlikely to occur in the human gut microbiome. As a result, I successfully curated a reduced yet representative set of primers that would be practical for experimental implementation.
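
    The following sketch (not Mapbin itself) illustrates the multilayer-network strategy described above: evidence from the initial binning, the assembly graph and read pairing is merged into one weighted contig network, which Infomap then partitions into bins. It assumes the infomap Python package; all contig IDs and weights are invented.

```python
# Illustrative multilayer-network binning sketch (not the Mapbin code).
from collections import defaultdict
from infomap import Infomap

# (contig_u, contig_v) evidence from three layers, with per-layer weights.
same_initial_bin = {(0, 1), (2, 3)}   # contigs the initial binning grouped
assembly_edges   = {(0, 1), (1, 4)}   # adjacency in the assembly graph
read_pair_links  = {(1, 4), (2, 3)}   # paired-end read linkage

weights = defaultdict(float)
for layer, w in ((same_initial_bin, 1.0),
                 (assembly_edges, 0.5),
                 (read_pair_links, 0.5)):
    for u, v in layer:
        weights[(u, v)] += w          # evidence accumulates across layers

im = Infomap(silent=True)
for (u, v), w in weights.items():
    im.add_link(u, v, w)
im.run()                              # community detection = new binning
print(im.get_modules())               # {contig_id: bin_id}
```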

    Meta-learning algorithms and applications

    Meta-learning in the broader context concerns how an agent learns about its own learning, allowing it to improve its learning process. Learning how to learn is not only beneficial for humans; it has also shown vast benefits for improving how machines learn. In the context of machine learning, meta-learning enables models to improve their learning process by selecting suitable meta-parameters that influence the learning. For deep learning specifically, the meta-parameters typically describe details of the training of the model but can also include a description of the model itself: the architecture. Meta-learning is usually done with specific goals in mind, for example improving the ability to generalize or to learn new concepts from only a few examples. Meta-learning can be powerful, but it comes with a key downside: it is often computationally costly. If these costs were alleviated, meta-learning could be more accessible to developers of new artificial intelligence models, allowing them to achieve greater goals or save resources. As a result, one key focus of our research is on significantly improving the efficiency of meta-learning. We develop two approaches, EvoGrad and PASHA, both of which significantly improve meta-learning efficiency in two common scenarios. EvoGrad allows us to efficiently optimize the value of a large number of differentiable meta-parameters, while PASHA enables us to efficiently optimize any type of meta-parameter, but fewer in number. Meta-learning is a tool that can be applied to solve various problems. Most commonly it is applied to learning new concepts from only a small number of examples (few-shot learning), but other applications exist too. To showcase the practical impact that meta-learning can make in the context of neural networks, we use meta-learning as a novel solution for two selected problems: more accurate uncertainty quantification (calibration) and general-purpose few-shot learning. Both are practically important problems, and using meta-learning approaches we can obtain better solutions than those obtained using existing approaches. Calibration is important for safety-critical applications of neural networks, while general-purpose few-shot learning tests a model's ability to generalize its few-shot learning across diverse tasks such as recognition, segmentation and keypoint estimation. More efficient algorithms as well as novel applications enable the field of meta-learning to make a more significant impact on the broader area of deep learning and potentially solve problems that were too challenging before. Ultimately, both allow us to better utilize the opportunities that artificial intelligence presents.
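
    As a toy illustration of evolutionary meta-parameter optimization, in the spirit of (though far simpler than) methods like EvoGrad, the sketch below tunes a single meta-parameter, a regularization strength, by perturbing it, scoring each candidate on validation data and following the estimated gradient. The task and all numbers are invented.

```python
# Toy evolutionary meta-parameter search (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
Xtr, ytr = rng.normal(size=(80, 5)), rng.normal(size=80)
Xva, yva = rng.normal(size=(40, 5)), rng.normal(size=40)

def val_loss(log_lam):
    """Validation loss of ridge regression for a given log-regularizer."""
    lam = np.exp(log_lam)
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(5), Xtr.T @ ytr)
    return np.mean((Xva @ w - yva) ** 2)

log_lam, sigma, lr = 0.0, 0.1, 0.5
for step in range(50):
    eps = rng.normal(size=8)          # Gaussian perturbations of the meta-parameter
    losses = np.array([val_loss(log_lam + sigma * e) for e in eps])
    # Evolution-strategies gradient estimate, with a mean baseline.
    grad = np.mean((losses - losses.mean()) * eps) / sigma
    log_lam -= lr * grad              # meta-parameter update
print("selected lambda:", np.exp(log_lam))
```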

    Sound of Violent Images / Violence of Sound Images: Pulling apart Tom and Jerry

    Violence permeates Tom and Jerry in the repetitive, physically violent gags and scenes of humiliation and mocking, yet, unarguably, there is comedic value in the onscreen violence. The musical scoring of Tom and Jerry by Scott Bradley in the early William Hanna and Joseph Barbera period of production (pre-1958) played a key role in conveying the comedic impact of violent gags, owing to the close synchronisation of music and sound with visual action, and is typified by a form of sound design characteristic of zip crash animation as described by Paul Taberham (2012), in which sound actively participates in the humour and directly influences the viewer’s interpretation of the visual action. This research investigates the sound-image relationships in Tom and Jerry through practice, by exploring how processes of decontextualisation and desynchronisation of the sound and image elements of violent gags unmask the underlying violent subtext of Tom and Jerry’s slapstick comedy. The research addresses an undertheorised area in animation related to the role of sound-image synchronisation and presents new knowledge derived from the novel application of audiovisual analysis to Tom and Jerry source material and the production of audiovisual artworks. The findings are discussed from a pan-theoretical perspective, drawing on theorisation of film sound and cognitivist approaches to film music. This investigation through practice supports the notion that intrinsic and covert processes of sound-image synchronisation, as theorised by Kevin Donnelly (2014), play a key role in the reading of slapstick violence as comedic. This practice-based research can therefore be viewed as a case study demonstrating the potential of a sampling-based creative practice to enable new readings to emerge from sampled source material. Novel artefacts were created in the form of audiovisual works that embody specific knowledge of factors related to the reconfiguration of sound-image relations and their impact in altering viewers’ readings of the violence contained within Tom and Jerry. Critically, differences emerged between the artworks in the extent to which they unmasked underlying themes of violence, and potential mediating factors are discussed relating to the influence of asynchrony on comical framing, the role of the unseen voice, perceived musicality and perceptions of interiority in the audiovisual artworks. The findings also yielded new knowledge regarding a potential gender-based bias in the perception of the human voice in the animated artworks produced, and highlight the role of intra-animation dimensions pertaining to the use of the single frame, the use of blank spaces and the relationship of sound-image synchronisation to the notion of the acousmatic imaginary. The PhD includes a portfolio of experimental audiovisual artworks produced during the testing and experimental phases of the research, on which the textual dissertation critically reflects.

    Accessibility at Film Festivals: Guidelines for Inclusive Subtitling

    In today's media-dominated world, the imperative for accessibility has never been greater, and ensuring that audiovisual experiences cater to individuals with sensory disabilities has become a pressing concern. One of the key initiatives in this endeavour is inclusive subtitling (IS), a practice rooted in the broader contexts of subtitling for the deaf and hard of hearing (SDH/CC), audiovisual translation studies (AVTS), media accessibility studies (MAS), and the evolving field of Deaf studies (DS). This study offers a comprehensive exploration of how inclusive subtitling contributes to fostering accessible and inclusive audiovisual experiences, with a particular focus on its implications within the unique environment of film festivals. To gain a holistic perspective on inclusive subtitling, it is essential to examine its lineage in relation to analogous practices, which is the focus of the first chapter. Inclusive subtitling is an extension of SDH/CC, designed for individuals with hearing impairments, and SDH/CC is, in turn, a nuanced variation of traditional subtitling extensively explored within the realm of AVTS. To encapsulate the diverse techniques and modalities aimed at making audiovisual content universally accessible, the study adopts the term "Audiovisual Accessibility" (AVA). The second chapter explores the interconnection of accessibility studies (AS), AVTS, and MAS, highlighting their symbiotic relationship and their role in framing inclusive subtitles within these fields. These interconnections are pivotal in shaping a framework for the practice of inclusive subtitling, enabling a comprehensive examination of its applicability and research implications. The third chapter delves into Deaf studies and the evolution of Deafhood, which hinges on the history and culture of Deaf individuals. This chapter elucidates the distinction between ‘deafness’ as a medical construct and ‘Deafhood’ as a cultural identity, which is crucial to the understanding of audiovisual accessibility and its intersection with the Deaf community's perspectives. In the fourth chapter, the focus turns to film festivals, with a specific emphasis on the crucial role of subtitles in enhancing accessibility, particularly when films are presented in their original languages. The chapter marks a critical point, highlighting the inherent connection between subtitles and the immersive nature of film festivals that aspire to promote inclusivity in the cinematic experience. The emphasis on inclusivity extends to the evolution of film festivals, giving rise to more advanced forms, including accessible film festivals and Deaf film festivals. At the core of the chapter is a thorough examination of the corpus: the SDH/CC of films spanning the 2020 to 2023 editions of two highly significant film festivals, BFI Flare and the London Film Festival. The corpus serves as the foundation on which my research unfolds, providing a nuanced understanding of the role subtitles play in film festival contexts. The main chapter, chapter five, thoroughly analyses the technical and linguistic aspects of inclusive subtitling, drawing insights from the Inclusive Subtitling Guidelines, a two-version document I devised, and offering real-world applications supported by a case study at an Italian film festival and a second case study of the short film Pure, with the relevant inclusive subtitle file annexed.
    In conclusion, the research sets the stage for a comprehensive exploration of inclusive subtitling's role in ensuring accessible and inclusive audiovisual experiences, particularly within film festivals. It underscores the importance of accessibility in audiovisual media and the need for inclusive practices that cater to diverse audiences.
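
    Purely by way of illustration (the thesis's Inclusive Subtitling Guidelines are the authoritative reference), the sketch below shows the kind of information an SDH/inclusive cue typically carries beyond dialogue, such as speaker labels and non-speech sound descriptions. Timings and text are invented.

```python
# Illustrative SRT-style cue formatting; not taken from the Guidelines.
def srt_cue(index, start, end, lines):
    """Format one SubRip (.srt) cue: index, timing line, then text lines."""
    return f"{index}\n{start} --> {end}\n" + "\n".join(lines) + "\n"

# Non-speech information in brackets; speaker identified by name label.
print(srt_cue(1, "00:01:04,000", "00:01:06,500",
              ["[door slams]", "MARIA: Who's there?"]))
# Music conventionally marked with note symbols and a short description.
print(srt_cue(2, "00:01:07,000", "00:01:09,000",
              ["♪ tense strings ♪"]))
```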

    Modern computing: Vision and challenges

    Over the past six decades, the field of computing systems has undergone significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI. Trends emerge when one traces technological trajectories, including the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    Forest planning utilizing high spatial resolution data

    This thesis presents planning approaches adapted for high spatial resolution data from remote sensing and evaluates whether such approaches can enhance the provision of ecosystem services from forests. The presented methods are compared with conventional, stand-level methods. The main focus lies on the planning concept of dynamic treatment units (DTU), in which treatments in small units used for modelling ecosystem processes and forest management are clustered spatiotemporally to form treatment units realistic in practical forestry. The methodological foundation of the thesis comprises airborne laser scanning data (raster cells of 12.5 × 12.5 m), different optimization methods and the forest decision support system Heureka. Paper I demonstrates a mixed-integer programming model for DTU planning, and the results highlight the economic advantages of clustering harvests. Papers II and III present an addition to a DTU heuristic from the literature and further evaluate its performance. The results show that directly modelling the fixed costs of harvest operations can improve plans and that DTU planning enhances the economic outcome of forestry. The higher spatial resolution of data in the DTU approach enables the planning model to assign management with higher precision than stand-based planning. Paper IV evaluates whether this also holds for ecological values. Here, an approach adapted for cell-level data is compared with a schematic approach based on stand-level data for the purpose of allocating retention patches. The evaluation of economic and ecological values indicates that high spatial resolution data and an adapted planning approach increased the ecological values, while differences in economy were small. In conclusion, the studies in this thesis demonstrate how forest planning can utilize high spatial resolution data from remote sensing, and the results suggest that there is potential to increase the overall provision of ecosystem services if such methods are applied.
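
    As a deliberately simplified illustration of the fixed-cost idea discussed above (not the thesis's model), the sketch below builds a tiny mixed-integer program in which a fixed operation cost is paid whenever any cell of a candidate treatment unit is harvested, which pushes optimal plans toward clustered harvests. It assumes the PuLP package; all data are invented.

```python
# Minimal MIP sketch: fixed costs per harvest operation encourage clustering.
import pulp

revenue = {0: 120.0, 1: 90.0, 2: 15.0, 3: 110.0}  # net revenue per cell
units = {"A": [0, 1], "B": [2, 3]}                 # candidate treatment units
FIXED_COST = 100.0                                 # cost per opened operation

m = pulp.LpProblem("dtu_sketch", pulp.LpMaximize)
x = pulp.LpVariable.dicts("harvest", revenue, cat="Binary")  # cell harvested?
y = pulp.LpVariable.dicts("open", units, cat="Binary")       # unit operated?

# Objective: harvest revenue minus fixed costs of opened operations.
m += (pulp.lpSum(revenue[i] * x[i] for i in revenue)
      - FIXED_COST * pulp.lpSum(y[u] for u in units))
for u, cells in units.items():
    for i in cells:
        m += x[i] <= y[u]          # harvesting a cell opens its unit

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: int(x[i].value()) for i in revenue},
      {u: int(y[u].value()) for u in units})
```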

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume