
    The challenges of data in future pandemics

    This work was supported by Engineering and Physical Sciences Research Council (EPSRC) grant no. EP/R014604/1. M.C. was funded by EPSRC grant no. EP/V054236/1. G.M. is supported by the Scottish Government’s Rural and Environment Science and Analytical Services Division (RESAS). J.P-G’s work is supported by funding from the UK Health Security Agency and the UK Department of Health and Social Care. L.P. is funded by the Wellcome Trust, UK and the Royal Society, UK (grant no. 202562/Z/16/Z). L.P. is supported by UK Research and Innovation (UKRI) through the JUNIPER modelling consortium (grant no. MR/V038613/1). L.P. is also supported by The Alan Turing Institute for Data Science and Artificial Intelligence, UK. R.R. was supported by Natural Environment Research Council (NERC) grants no. NE/T004193/1 and NE/T010355/1, EPSRC grant no. EP/V054236/1 and Science and Technology Facilities Council (STFC) grant no. ST/V006126/1.
    The use of data has been essential throughout the unfolding COVID-19 pandemic. We have needed it to populate our models, inform our understanding, and shape our responses to the disease. However, data has not always been easy to find and access; it has varied in quality and coverage, and has been difficult to reuse or repurpose. This paper reviews these and other challenges and recommends steps to develop a data ecosystem better able to deal with future pandemics by better supporting preparedness, prevention, detection and response.
    Peer reviewed

    (N)NLO+NLL’ accurate predictions for plain and groomed 1-jettiness in neutral current DIS

    The possibility of reanalysing data taken by the HERA experiments offers the chance to study modern QCD jet and event-shape observables in deep-inelastic scattering. To address this, we compute resummed and matched predictions for the 1-jettiness distribution in neutral current DIS with and without grooming the hadronic final state using the soft-drop technique. Our theoretical predictions also account for non-perturbative corrections from hadronisation through parton-to-hadron level transfer matrices extracted from dedicated Monte Carlo simulations with Sherpa. To estimate parameter uncertainties, in particular for the beam-fragmentation modelling, we derive a family of replica tunes to data from the HERA experiments. While NNLO QCD normalisation corrections to the NLO+NLL’ prediction are numerically small, hadronisation corrections turn out to be quite sizeable. However, soft-drop grooming significantly reduces the impact of non-perturbative contributions. We supplement our study with hadron-level predictions from Sherpa based on the matching of NLO QCD matrix elements with the parton shower. Good agreement between the predictions from the two calculational methods is observed.

    Monte Carlo Simulations for BSM Physics and Precision Higgs Physics at the LHC

    Monte Carlo event generators are indispensable tools for the interpretation of data taken at particle collider experiments like the Large Hadron Collider (LHC), the most powerful particle collider to date. In this thesis, the general purpose Monte Carlo event generator Sherpa is used to implement a new simulation framework for models that go beyond the Standard Model of particle physics. This is achieved by means of a newly designed interface to a universal format for generic models and by extending existing functionalities in such a way as to handle a generic class of coupling structures that appear in many extensions of the Standard Model. Furthermore, an improved modelling of the dominant LHC Higgs production mechanism in the Standard Model is described and the effects of the improvements are quantified. The improved simulation that is implemented in Sherpa supplements the description of Higgs production at the LHC in terms of an effective Higgs-gluon interaction with finite top quark mass effects that restore a reliable description of the kinematics in events with large momentum transfers. Using this improved description of Higgs production at the LHC, this work demonstrates how the transverse momentum spectrum of the Higgs boson can be used to constrain models that modify the Higgs-gluon coupling. In addition, state-of-the-art Monte Carlo event generation techniques are used to assess the sensitivity of analysis strategies in the search for invisibly decaying Higgs bosons. In this analysis, previously neglected loop-induced contributions were found to have a significant impact, and it is demonstrated how multi-jet merging techniques can be used to obtain a reliable description of these contributions. Furthermore, the work presented in the last chapter of this thesis shows how jet substructure techniques can be used to search for rare Higgs decays into light resonances that decay further into hadrons.
    This analysis closes with a demonstration of how such an analysis can be used to constrain extensions of the Standard Model that feature multiple Higgs bosons.

    Hierarchical categorisation of web tags for Delicious

    In the scenario of social bookmarking, a user browsing the Web bookmarks web pages and assigns free-text labels (i.e., tags) to them according to their personal preferences. The benefits of social tagging are clear – tags enhance Web content browsing and search. However, since these tags may be publicly available to any Internet user, a privacy attacker may collect this information and extract an accurate snapshot of users’ interests or user profiles, containing sensitive information such as health-related information, political preferences, salary or religion. In order to hinder attackers in their efforts to profile users, this report focuses on the practical aspects of capturing user interests from their tagging activity. More specifically, we study how to categorise a collection of tags posted by users in one of the most popular bookmarking services, Delicious (http://delicious.com).
    Preprint

    Hierarchical categorisation of tags for delicious

    In the scenario of social bookmarking, a user browsing the Web bookmarks web pages and assigns free-text labels (i.e., tags) to them according to their personal preferences. In this technical report, we approach one of the practical aspects of representing users' interests from their tagging activity, namely the categorisation of tags into high-level categories of interest. The reason is that representing user profiles on the basis of the myriad of tags available on the Web is certainly unfeasible from various practical perspectives: mainly the unavailability of data to reliably and accurately measure interests across such a fine-grained categorisation and, should the data be available, its overwhelming computational intractability. Motivated by this, our study presents the results of a categorisation process whereby a collection of tags posted at Delicious (http://delicious.com) is classified into 200 subcategories of interest.
    Preprint
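    The core idea described in this abstract can be sketched very simply. The following is my own minimal illustration (not the report's actual method, and the tags and categories are invented): mapping low-level free-text tags onto a small set of high-level interest categories via a lookup table.

```python
# Hypothetical mapping from low-level tags to broad interest categories.
# A real system would cover thousands of tags and ~200 subcategories.
CATEGORY_MAP = {
    "python": "programming",
    "javascript": "programming",
    "yoga": "health",
    "nutrition": "health",
    "election": "politics",
}

def categorise(tags):
    """Count how many of a user's tags fall into each broad category;
    tags with no known category are simply skipped."""
    counts = {}
    for tag in tags:
        category = CATEGORY_MAP.get(tag.lower())
        if category is not None:
            counts[category] = counts.get(category, 0) + 1
    return counts

profile = categorise(["Python", "yoga", "javascript", "unknown-tag"])
print(profile)  # {'programming': 2, 'health': 1}
```

The resulting category counts form a coarse user-interest profile of exactly the kind the report argues is both more tractable and more measurable than a profile over raw tags.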

    Survey of Template-Based Code Generation

    A critical step in model-driven engineering (MDE) is the automatic synthesis of a textual artifact from models. This is a very useful model transformation, used to generate application code, serialize models into persistent storage, and produce documentation or reports. Among the various model-to-text transformation paradigms, Template-Based Code Generation (TBCG) is the most popular in MDE.
    TBCG is a synthesis technique that produces code from high-level specifications, called templates. It is a popular technique in MDE given that they both emphasize abstraction and automation. Given the diversity of tools and approaches, it is necessary to classify and compare existing TBCG techniques to provide appropriate support to developers. The goal of this thesis is to better understand the characteristics of TBCG techniques, identify research trends, and assess the importance of the role of MDE in this code synthesis approach. We also evaluate the expressiveness, performance and scalability of the associated tools based on a range of models that implement critical patterns. To this end, we conduct a systematic mapping study of the literature that paints an interesting overview of TBCG, and a comparative study of TBCG tools to better guide developers in their choices. This study shows that model-based tools offer more expressiveness whereas code-based tools perform much faster. Xtend2 offers the best compromise between expressiveness and performance.
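    As a toy illustration of the template-based approach this abstract describes (my own sketch, not an example from the thesis), Python's standard `string.Template` can act as a minimal TBCG engine: a simple "model" (a dict describing one class field) is expanded into source code by substituting placeholders in a template. The Java-getter template and the field model are invented for the example.

```python
from string import Template

# A toy template: placeholders ($type, $name, $Name) stand for model data.
getter_template = Template(
    "public $type get$Name() {\n"
    "    return this.$name;\n"
    "}"
)

def generate_getter(model: dict) -> str:
    """Expand the template using a minimal model of one class field."""
    return getter_template.substitute(
        type=model["type"],
        name=model["name"],
        Name=model["name"].capitalize(),
    )

code = generate_getter({"type": "int", "name": "width"})
print(code)
```

Real TBCG tools (Acceleo, Xtend2, and the like) add template control flow, model navigation and traceability on top of this basic expand-placeholders idea, but the split between high-level template and input model is the same.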

    A cognitive linguistic approach to describing the communication of mental illness in comics

    This thesis examines the ways in which subjective experience is communicated through comics about mental illness and how such communication can be described and analysed. I chose to focus on comics about mental illness to draw on my own lived experience and because of their common thematic focus on subjectivity. I applied a mixed methods approach, using personal reflection, qualitative analysis of discussion group data and intuitive linguistic analysis. The central analysis focuses on three contemporary comics that tell stories about experiences of mental illness: Lighter than My Shadow by Katie Green, Tangles by Sarah Leavitt, and The Nao of Brown by Glyn Dillon. I propose a means of describing comics based on aspects of cognitive linguistics, including Text World Theory and cognitive grammar. Given its grounding in aspects of cognitive psychology such as attention, focusing, scanning and construal, cognitive grammar provides a common rubric for engaging with the text, images, and composition of comics. I supplement this approach with aspects of Text World Theory, which provides a framework for describing the interface between the content of comics, the context of their production and reading, and how these two aspects of communication relate to one another. In carrying out this analysis, I used data from reading group discussions of the three comics to guide the focus of my analysis and to supplement my own interpretations with the more naturalistic reading experiences of reading group participants. This led me to focus on aspects of reading including discourse structure, agency, multimodal metaphor, archetypal roles, perspective, event structuring, and reality conceptions.
    As well as providing developments in the application of cognitive linguistics to multimodal communication, my overall findings point to the importance of alternatives to verbal communication, such as comics, as means of differently framing the conceptualisation of experiences of mental illness.

    Media Space: an analysis of spatial practices in planar pictorial media.

    The thesis analyses the visual space displayed in pictures, film, television and digital interactive media. The argument is developed that depictions are informed by the objectives of the artefact as much as by any simple visual correspondence to the observed world. The simple concept of ‘realism’ is therefore anatomised and a more pragmatic theory proposed which resolves some of the traditional controversies concerning the relation between depiction and vision. This is then applied to the special problems of digital interactive media. An introductory chapter outlines the topic area and the main argument and provides an initial definition of terms. To provide a foundation for the ensuing arguments, a brief account is given of two existing and contrasted approaches to the notion of space: that of perception science which gives priority to acultural aspects, and that of visual culture which emphasises aspects which are culturally contingent. An existing approach to spatial perception (that of JJ Gibson originating in the 1940s and 50s) is applied to spatial depiction in order to explore the differences between seeing and picturing, and also to emphasise the many different cues for spatial perception beyond those commonly considered (such as binocularity and linear perspective). At this stage a simple framework of depiction is introduced which identifies five components or phases: the objectives of the picture, the idea chosen to embody the objectives, the model (essentially, the visual ‘subject matter’), the characteristics of the view and finally the substantive picture or depiction itself. This framework draws attention to the way in which each of the five phases presents an opportunity for decision-making about representation. The framework is used and refined throughout the thesis. 
    Since pictures are considered in some everyday sense to be ‘realistic’ (otherwise, in terms of this thesis, they would not count as depictions), the nature of realism is considered at some length. The apparently unitary concept is broken down into several different types of realism and it is argued that, like the different spatial cues, each lends itself to particular objectives intended for the artefact. From these several types, two approaches to realism are identified: one prioritising the creation of a true illusion (that the picture is in fact a scene), and the other (of which there are innumerably more examples, both across cultures and over historical time) evoking aspects of vision without aiming to imitate exactly the optical stimulus of the scene. Various reasons for the latter approach, and the variety of spatial practices to which it leads, are discussed. In addition to analysing traditional pictures, computer graphics images are discussed in conjunction with the claims for realism offered by their authors. In the process, informational and affective aspects of picture-making are distinguished, a distinction which it is argued is useful and too seldom made. Discussion of still pictures identifies the evocation of movement (and other aspects of time) as one of the principal motives for departing from attempts at straightforward optical matching. The discussion proceeds to the subject of film where, perhaps surprisingly now that the depiction of movement is possible, the lack of straightforward imitation of the optical is noteworthy again. This is especially true of the relationship between shots rather than within them; the reasons for this are analysed. This reinforces the argument that the spatial form of the fiction film, like that of other kinds of depiction, arises from its objectives, presenting realism once again as a contingent concept.
The separation of depiction into two broad classes – one which aims to negate its own mediation, to seem transparent to what it depicts, and one which presents the fact of depiction ostensively to the viewer – is carried through from still pictures, via film, into a discussion of factual television and finally of digital interactive media. The example of factual television is chosen to emphasise how, despite the similarities between the technologies of film and television, spatial practices within some television genres contrast strongly with those of the mainstream fiction film. By considering historic examples, it is shown that many of the spatial practices now familiar in factual television were gradually expunged from the classical film when the latter became centred on the concerns of narrative fiction. By situating the spaces of interactive media in the context of other kinds of pictorial space, questions are addressed concerning the transferability of spatial usages from traditional media to those which are interactive. During the thesis the spatial practices of still-picture-making, film and television are characterised as ‘mature’ and ‘expressive’ (terms which are defined in the text). By contrast the spatial practices of digital interactive media are seen to be immature and inexpressive. It is argued that this is to some degree inevitable given the context in which interactive media artefacts are made and experienced – the lack of a shared ‘language’ or languages in any new media. Some of the difficult spatial problems which digital interactive media need to overcome are identified, especially where, as is currently normal, interaction is based on the relation between a pointer and visible objects within a depiction. 
    The range of existing practice in digital interactive media is classified in a seven-part taxonomy, which again makes use of the objective-idea-model-view-picture framework, and again draws out the difference between self-concealing approaches to depiction and those which offer awareness of depiction as a significant component of the experience. The analysis indicates promising lines of enquiry for the future and emphasises the need for further innovation. Finally the main arguments are summarised and the thesis concludes with a short discussion of the implications for design arising from the key concepts identified – expressivity and maturity, pragmatism and realism.