    Results of the IGEC-2 search for gravitational wave bursts during 2005

    The network of resonant bar detectors of gravitational waves resumed coordinated observations within the International Gravitational Event Collaboration (IGEC-2). Four detectors are taking part in this collaboration: ALLEGRO, AURIGA, EXPLORER and NAUTILUS. We present here the results of the search for gravitational wave bursts over 6 months during 2005, when IGEC-2 was the only gravitational wave observatory in operation. The network data analysis is based on a time coincidence search among AURIGA, EXPLORER and NAUTILUS, keeping the data from ALLEGRO for follow-up studies. With respect to the previous IGEC 1997-2000 observations, the amplitude sensitivity of the detectors to bursts improved by a factor of about 3 and the sensitivity bandwidths are wider, so the data analysis was tuned for a larger class of detectable waveforms. Thanks to the higher duty cycles of the single detectors, we decided to focus the analysis on three-fold observations, so as to ensure the identification of any single gravitational wave (GW) candidate with high statistical confidence. The achieved false detection rate is as low as 1 per century. No candidates were found. Comment: 10 pages, to be submitted to Phys. Rev.
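    As a rough illustration of the coincidence step described above, the sketch below matches candidate event times from three detectors within a common time window. The detector names, event times and the 0.1 s window are invented for illustration and are not the IGEC-2 analysis parameters.

```python
from itertools import product

def threefold_coincidences(t_a, t_b, t_c, window=0.1):
    """Return (ta, tb, tc) triples whose arrival times all lie within
    `window` seconds of each other. Brute-force O(n^3) scan, adequate
    for short illustrative event lists."""
    coincidences = []
    for ta, tb, tc in product(t_a, t_b, t_c):
        if max(ta, tb, tc) - min(ta, tb, tc) <= window:
            coincidences.append((ta, tb, tc))
    return coincidences

# Hypothetical event times (in seconds) from three bar detectors
auriga   = [10.02, 250.40, 812.77]
explorer = [10.05, 399.10, 812.80]
nautilus = [10.03, 500.00, 812.75]

print(threefold_coincidences(auriga, explorer, nautilus))
# two triple coincidences: around t = 10.0 s and t = 812.8 s
```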

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: the keynote "Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control" is © 2019 Crown copyright and is therefore licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information or by including it in their own product or application. Where you do any of the above, you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/. This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference held on 17th April 2019 at the University of Hertfordshire, Hatfield, UK. The conference is a local event aimed at bringing together research students, staff and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was organised around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.

    Last millennium northern hemisphere summer temperatures from tree rings: Part I: The long term context

    Large-scale millennial-length Northern Hemisphere (NH) temperature reconstructions have been progressively improved over the last 20 years as new datasets have been developed. This paper, and its companion (Part II, Anchukaitis et al. in prep), details the latest tree-ring (TR) based NH land air temperature reconstruction from a temporal and spatial perspective. This work is the first product of a consortium called N-TREND (Northern Hemisphere Tree-Ring Network Development), which brings together dendroclimatologists to identify a collective strategy for improving large-scale summer temperature reconstructions. The new reconstruction, N-TREND2015, utilises 54 records, a significant expansion compared with previous TR studies, and yields an improved reconstruction with stronger statistical calibration metrics. N-TREND2015 is relatively insensitive to the compositing method and spatial weighting used, and validation metrics indicate that the new record shows reasonable coherence with large-scale summer temperatures and is robust at all time-scales from 918 to 2004, where at least 3 TR records exist from each major continental mass. N-TREND2015 indicates a longer and warmer medieval period (∼900–1170) than portrayed by previous TR NH reconstructions and by the CMIP5 model ensemble, but with better overall agreement between records for the last 600 years. Future dendroclimatic projects should focus on developing new long records from data-sparse regions such as North America and eastern Eurasia, as well as ensuring the measurement of parameters related to latewood density to complement ring-width records, which can substantially improve local calibration.
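    To make the calibration and validation language concrete, here is a minimal sketch of the kind of split-period check commonly used in dendroclimatology: a composite proxy series is regressed against instrumental temperatures over a calibration window, then scored on a withheld verification window with the reduction-of-error (RE) statistic. The data are synthetic and the procedure is illustrative, not the N-TREND2015 methodology.

```python
import numpy as np

def calibrate_and_verify(proxy, temp, split):
    """Regress temperature on the tree-ring composite over the calibration
    period, reconstruct the withheld verification period, and return the
    regression coefficients plus the reduction-of-error (RE) statistic."""
    p_cal, p_ver = proxy[:split], proxy[split:]
    t_cal, t_ver = temp[:split], temp[split:]
    slope, intercept = np.polyfit(p_cal, t_cal, 1)   # linear calibration
    t_hat = slope * p_ver + intercept                # reconstructed temperatures
    re = 1 - np.sum((t_ver - t_hat) ** 2) / np.sum((t_ver - t_cal.mean()) ** 2)
    return slope, intercept, re

rng = np.random.default_rng(0)
proxy = rng.normal(size=100)                          # standardized composite (synthetic)
temp = 0.6 * proxy + rng.normal(scale=0.5, size=100)  # synthetic summer temperatures

print(calibrate_and_verify(proxy, temp, split=70))
```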

    A management architecture for active networks

    In this paper we present an architecture for network and applications management, which is based on the Active Networks paradigm and shows the advantages of network programmability. The stimulus to develop this architecture arises from an actual need to manage a cluster of active nodes, where it is often required to redeploy network assets and modify node connectivity. In our architecture, a remote front-end of the managing entity allows the operator to design new network topologies, to check the status of the nodes and to configure them. Moreover, the proposed framework makes it possible to explore an active network, to monitor the active applications, to query each node and to install programmable traps. In order to take advantage of the Active Networks technology, we introduce active SNMP-like MIBs and agents, which are dynamic and programmable. The programmable management agents make tracing distributed applications a feasible task. We propose a general framework that can inter-operate with any active execution environment. In this framework, both the manager and the monitor front-ends communicate with an active node (the Active Network Access Point) through the XML language. A gateway service translates the queries from XML to an active packet language and injects the code into the network. We demonstrate the implementation of an active network gateway for PLAN (Packet Language for Active Networks) in a testbed of forty active nodes. Finally, we discuss how the active management architecture can be applied to detect the causes of network failures by tracing network events in time.
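    As a toy illustration of the gateway idea, the sketch below parses a management query expressed in XML and emits an active-packet program string that could be injected into the network. The tag names and the PLAN-like output are invented for illustration and do not reproduce the authors' actual schema or code.

```python
import xml.etree.ElementTree as ET

def xml_query_to_plan(xml_query):
    """Translate a management query from XML into an active-packet program
    string, in the spirit of the Active Network Access Point gateway.
    The XML schema and the PLAN-like syntax below are hypothetical."""
    root = ET.fromstring(xml_query)
    node = root.findtext("node")          # target active node
    variable = root.findtext("variable")  # SNMP-like MIB variable to read
    # A PLAN-style code string that the gateway would inject into the network
    return f'OnRemote(|getMIB|("{variable}"), "{node}", defaultRB, retransOnRemote)'

query = """
<query>
  <node>active-node-7</node>
  <variable>ifInOctets</variable>
</query>
"""
print(xml_query_to_plan(query))
```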

    Playing Tag with ANN: Boosted Top Identification with Pattern Recognition

    Many searches for physics beyond the Standard Model at the Large Hadron Collider (LHC) rely on top tagging algorithms, which discriminate between boosted hadronic top quarks and the much more common jets initiated by light quarks and gluons. We note that the hadronic calorimeter (HCAL) effectively takes a "digital image" of each jet, with pixel intensities given by energy deposits in individual HCAL cells. Viewed in this way, top tagging becomes a canonical pattern recognition problem. With this motivation, we present a novel top tagging algorithm based on an Artificial Neural Network (ANN), one of the most popular approaches to pattern recognition. The ANN is trained on a large sample of boosted tops and light quark/gluon jets, and is then applied to independent test samples. The ANN tagger demonstrated excellent performance in a Monte Carlo study: for example, for jets with p_T in the 1100-1200 GeV range, 60% top-tag efficiency can be achieved with a 4% mis-tag rate. We discuss the physical features of the jets identified by the ANN tagger as the most important for classification, as well as correlations between the ANN tagger and some of the familiar top-tagging observables and algorithms. Comment: 20 pages, 9 figures.
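    The sketch below is a minimal stand-in for the approach described: jets are represented as coarse "images" of calorimeter energy deposits and a small feed-forward ANN is trained to separate the two classes. The image generator, network size and training settings are invented for illustration and bear no relation to the samples or architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_jet_images(n, top=False, size=8):
    """Toy stand-in for HCAL jet images: 'top-like' jets get three energy
    clusters, 'QCD-like' jets get one. Everything here is invented."""
    imgs = np.zeros((n, size, size))
    n_clusters = 3 if top else 1
    for img in imgs:
        for _ in range(n_clusters):
            i, j = rng.integers(1, size - 1, 2)
            img[i, j] += rng.exponential(1.0)
    return imgs.reshape(n, -1)          # flatten pixels into a feature vector

# Labelled training set: 1 = top jet, 0 = light quark/gluon jet
X = np.vstack([synthetic_jet_images(500, top=True),
               synthetic_jet_images(500, top=False)])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Single hidden-layer ANN trained by plain gradient descent on cross-entropy
d, h = X.shape[1], 16
W1, b1 = rng.normal(scale=0.1, size=(d, h)), np.zeros(h)
W2, b2 = rng.normal(scale=0.1, size=h), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(H @ W2 + b2)            # top-tag probability per jet
    grad_out = (p - y) / len(y)         # cross-entropy gradient at the output
    W2 -= 0.5 * H.T @ grad_out
    b2 -= 0.5 * grad_out.sum()
    grad_H = np.outer(grad_out, W2) * (1 - H ** 2)
    W1 -= 0.5 * X.T @ grad_H
    b1 -= 0.5 * grad_H.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```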

    Adverse event reporting and patient safety at a University Hospital: Mapping, correlating and associating events for a data-based patient risk management

    BACKGROUND: Reporting adverse events (AE) with a bearing on patient safety is fundamentally important to the identification and mitigation of potential clinical risks. OBJECTIVE: The aim of this study was to analyze the AE reporting systems adopted at a university hospital for the purpose of enhancing the learning potential afforded by these systems. RESEARCH DESIGN: Retrospective cohort study. METHODS: Data were collected from different information flows (reports of incidents and falls, patients' claims and complaints, and cases of hospital-acquired infection [HAI]) at a university hospital. A composite risk indicator was developed to combine the data from the different flows. Spearman's nonparametric test was applied to investigate the correlations between AE rates, and a Poisson regression analysis was used to verify the association between ward characteristics and AE rates. SUBJECTS: Sixty-four wards at a university hospital. RESULTS: There was marked variability in AE rates among wards. Correlations emerged between patients' claims and complaints and the number of incidents reported. Falls were positively associated with average length of hospital stay, number of beds, patients' mean age, and type of ward, and they were negatively associated with the average cost weight of the Diagnosis-Related Group (DRG) of patients on a given ward. Claims and complaints were associated directly with the average DRG weight of a ward's patient admissions. CONCLUSIONS: This study attempted to learn something useful from an analysis of the mandatory (but often little used) data flows generated on adverse events occurring at a university hospital, with a view to managing the associated clinical risk to patients.
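    A minimal sketch of the two statistical steps named above, using synthetic ward-level data: Spearman's nonparametric correlation between two reporting flows, and a Poisson regression of fall counts on ward characteristics. All numbers are invented, and the variables only loosely mirror those in the study.

```python
import numpy as np
import scipy.stats as stats
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic ward-level data standing in for the study's information flows
n_wards = 64
beds = rng.integers(10, 40, n_wards)
mean_age = rng.normal(65, 10, n_wards)
falls = rng.poisson(0.2 * beds)               # reported falls per ward
claims = rng.poisson(3, n_wards)              # patients' claims and complaints
incidents = claims + rng.poisson(2, n_wards)  # incident reports

# Spearman's nonparametric correlation between two AE flows
rho, pval = stats.spearmanr(claims, incidents)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")

# Poisson regression of fall counts on ward characteristics
X = sm.add_constant(np.column_stack([beds, mean_age]))
model = sm.GLM(falls, X, family=sm.families.Poisson()).fit()
print(model.summary())
```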

    A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures

    This deliverable is a survey of the IT techniques that are relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and have tried to explain every concept referred to, the deliverable may still seem rather technical to readers who are not yet familiar with the techniques it describes.

    Unpacking constructs: a network approach for studying war exposure, daily stressors and post-traumatic stress disorder

    Conflict-affected populations are exposed to stressful events during and after war, and it is well established that both take a substantial toll on individuals' mental health. Exactly how exposure to events during and after war affects mental health is a topic of considerable debate. Various hypotheses have been put forward on the relation between stressful war exposure (SWE), daily stressors (DS) and the development of post-traumatic stress disorder (PTSD). This paper seeks to contribute to this debate by critically reflecting upon conventional modeling approaches and by advancing an alternative model for studying the interrelationships among SWE, DS, and PTSD variables. The network model is proposed as an innovative and comprehensive modeling approach in the field of mental health in the context of war. It involves a conceptualization and representation of variables and relationships that better approximates reality, thereby improving methodological rigor. It also promises utility in programming and delivering mental health support for war-affected populations.
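    As an illustration of what a network model often looks like in this literature, the sketch below estimates a partial-correlation network over a handful of synthetic SWE, DS and PTSD variables. The choice of partial correlations as the estimator and all of the data are assumptions made for illustration; the paper itself argues for the network approach conceptually rather than prescribing this particular procedure.

```python
import numpy as np

def partial_correlations(data):
    """Partial-correlation network: edge (i, j) is the correlation between
    variables i and j after conditioning on all the others, obtained from
    the inverse covariance (precision) matrix."""
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Synthetic scores for illustration: one war-exposure item (SWE),
# one daily-stressor item (DS) and two PTSD symptom items
rng = np.random.default_rng(2)
swe = rng.normal(size=300)
ds = 0.5 * swe + rng.normal(scale=0.8, size=300)
ptsd1 = 0.4 * ds + rng.normal(scale=0.8, size=300)
ptsd2 = 0.6 * ptsd1 + rng.normal(scale=0.8, size=300)
data = np.column_stack([swe, ds, ptsd1, ptsd2])

print(np.round(partial_correlations(data), 2))
```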