
    Tools and Procedures for the CTA Array Calibration

    The Cherenkov Telescope Array (CTA) is an international initiative to build the next-generation ground-based very-high-energy gamma-ray observatory. Full-sky coverage will be assured by two arrays, one in the northern and one in the southern hemisphere. Three different sizes of telescopes will cover a wide energy range from tens of GeV up to hundreds of TeV. These telescopes, of which prototypes are currently under construction or completion, will have different mirror sizes and fields of view designed to access different energy regimes. Additionally, there will be groups of telescopes with different optical systems, camera and electronics designs. Given this diversity of instruments, an overall coherent calibration of the full array is a challenging task. Moreover, the CTA requirements on calibration accuracy are much more stringent than those achieved with current Imaging Atmospheric Cherenkov Telescopes; for instance, the systematic error in the energy scale must not exceed 10%. In this contribution we present both the methods that, applied directly to the acquired observational CTA data, will ensure that the calibration is performed to the stringent required precision, and the calibration equipment, external to the telescopes, that is currently under development and testing. Notes on the operational procedures to be followed with both methods and instruments are also given. The methods applied to the observational CTA data include the analysis of muon ring images, of carefully selected cosmic-ray air-shower images, of the reconstructed electron spectrum and of known gamma-ray sources, as well as the possible use of hardware-independent stereo techniques. These methods will be complemented by calibrated light sources located on the ground or on board unmanned aerial vehicles. Comment: All CTA contributions at arXiv:1709.0348
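
    One of the data-driven methods listed above, the muon ring analysis, can be illustrated with a minimal sketch: a charge-weighted circle fit to a camera image yields the ring parameters, and the ratio of detected to expected muon light tracks the optical throughput of the telescope over time. The Python/NumPy code below is only a toy illustration under assumed inputs (hypothetical pixel coordinates, charges and expected photoelectron yield); it is not the CTA calibration pipeline.

    import numpy as np

    def fit_muon_ring(x, y, q):
        # Charge-weighted algebraic (Kasa) circle fit to pixel positions.
        # x, y: pixel coordinates on the camera plane; q: integrated pixel charges.
        x, y, q = (np.asarray(v, dtype=float) for v in (x, y, q))
        w = np.sqrt(q)
        A = np.column_stack([x, y, np.ones_like(x)]) * w[:, None]
        b = (x**2 + y**2) * w
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        xc, yc = p[0] / 2.0, p[1] / 2.0
        radius = np.sqrt(p[2] + xc**2 + yc**2)
        return xc, yc, radius

    def optical_throughput(q, expected_photoelectrons):
        # Hypothetical efficiency metric: detected vs. expected muon light.
        # A slow drift of this ratio would flag degradation of the optical system.
        return np.asarray(q, dtype=float).sum() / expected_photoelectrons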

    Predicting abnormal respiratory patterns in older adults using supervised machine learning on Internet of medical things respiratory frequency data

    Wearable Internet of Medical Things (IoMT) technology, designed for non-invasive respiratory monitoring, has demonstrated considerable promise in the early detection of severe diseases. This paper introduces the application of supervised machine learning techniques to predict respiratory abnormalities through frequency data analysis. The principal aim is to identify respiratory-related health risks in older adults using data collected from non-invasive wearable devices. This article presents the development, assessment, and comparison of three machine learning models, underscoring their potential for accurately predicting respiratory-related health issues in older adults. The convergence of wearable IoMT technology and machine learning holds immense potential for proactive and personalized healthcare among older adults, ultimately enhancing their quality of life.
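
    As a rough illustration of the kind of supervised model described above, the sketch below classifies fixed-length windows of respiratory-rate readings (breaths per minute) as normal or abnormal. The window length, summary-statistic features, random-forest choice and synthetic data are all illustrative assumptions; the paper's three models and its IoMT dataset are not reproduced here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    def window_features(rate_window):
        # Summary statistics of one window of breaths-per-minute samples.
        w = np.asarray(rate_window, dtype=float)
        return [w.mean(), w.std(), w.min(), w.max(), np.ptp(w)]

    # Synthetic placeholder windows: "normal" around 16 bpm, "abnormal" around 26 bpm.
    # A real study would use annotated windows from the wearable IoMT devices.
    rng = np.random.default_rng(0)
    X_windows = [rng.normal(16, 2, 60) for _ in range(200)] + \
                [rng.normal(26, 6, 60) for _ in range(200)]
    y = np.array([0] * 200 + [1] * 200)

    X = np.array([window_features(w) for w in X_windows])
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))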

    MRPR: a MapReduce solution for prototype reduction in big data classification

    In the era of big data, analyzing and extracting knowledge from large-scale data sets is a very interesting and challenging task. The application of standard data mining tools to such data sets is not straightforward. Hence, a new class of scalable mining methods that embrace the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These methods aim at representing original training data sets as a reduced number of instances. Their main purposes are to speed up the classification process and to reduce the storage requirements and sensitivity to noise of the nearest neighbor rule. However, standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework to distribute the functioning of these algorithms across a cluster of computing elements, proposing several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied to big data classification problems without significant accuracy loss. We test the speed-up capabilities of our model with data sets of up to 5.7 million instances. The results show that this model is a suitable tool for enhancing the performance of the nearest neighbor classifier with big data.
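
    The distributed scheme described above can be sketched as a map step that applies a prototype reduction technique to each data partition and a reduce step that fuses the partial reduced sets before handing them to a nearest neighbor classifier. In the sketch below, a simple class-wise k-means condensation stands in for the prototype reduction techniques studied in the paper, and plain concatenation stands in for its fusion strategies; both are assumptions for illustration only.

    import numpy as np
    from sklearn.cluster import KMeans

    def reduce_partition(X, y, prototypes_per_class=10):
        # Map step: condense one data partition to a few prototypes per class.
        protos_X, protos_y = [], []
        for label in np.unique(y):
            Xc = X[y == label]
            k = min(prototypes_per_class, len(Xc))
            km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(Xc)
            protos_X.append(km.cluster_centers_)
            protos_y.append(np.full(k, label))
        return np.vstack(protos_X), np.concatenate(protos_y)

    def mrpr_like(X, y, n_partitions=8, prototypes_per_class=10, seed=0):
        # Reduce step (simple "join" strategy): concatenate the partial reduced sets.
        rng = np.random.default_rng(seed)
        parts = np.array_split(rng.permutation(len(X)), n_partitions)
        partials = [reduce_partition(X[i], y[i], prototypes_per_class) for i in parts]
        Xr = np.vstack([p[0] for p in partials])
        yr = np.concatenate([p[1] for p in partials])
        return Xr, yr  # use in place of the full training set for a 1-NN classifier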

    The working group on the analysis and management of accidents (WGAMA): A historical review of major contributions

    The Working Group on the Analysis and Management of Accidents (WGAMA) was created on December 31st, 1999 to assess and strengthen the technical basis needed for the prevention, mitigation and management of potential accidents in nuclear power plants (NPPs) and to facilitate international convergence on safety issues and accident management analyses and strategies. WGAMA addresses reactor coolant system thermal-hydraulics, in-vessel behaviour of degraded cores and in-vessel protection, containment behaviour and containment protection, and fission product (FP) release, transport, deposition and retention, for both current and advanced reactors. WGAMA's contributions in thermal-hydraulics, computational fluid dynamics (CFD) and severe accidents over the first two decades of the 21st century have been outstanding and are summarized in this paper. Beyond any doubt, the Fukushima-Daiichi accident heavily influenced WGAMA activities, and the substantial outcomes produced in the accident's aftermath are clearly identified in the paper. Most importantly, around 50 technical reports have become reference material in the different fields covered by the group; they are gathered in the reference section of the paper. A common outstanding feature of most of these reports is the recommendations they include for further research, some of which have eventually given rise to projects conducted or underway within the OECD framework. Far from declining, ongoing WGAMA activities are numerous, and a number of them are already planned to be launched in the near future; a short overview of them is also included in this paper.

    Computer graphics application in the engineering design integration system

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary designs of aerospace vehicles: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled, low-cost storage-tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector-graphics input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

    Framework, principles and recommendations for utilising participatory methodologies in the co-creation and evaluation of public health interventions

    Background: Due to the chronic disease burden on society, there is a need for preventive public health interventions to stimulate society towards a healthier lifestyle. To deal with the complex variability between individual lifestyles and settings, collaborating with end-users to develop interventions tailored to their unique circumstances has been suggested as a potential way to improve effectiveness and adherence. Co-creation of public health interventions using participatory methodologies has shown promise but lacks a framework to make this process systematic. The aim of this paper was to identify and set key principles and recommendations for systematically applying participatory methodologies to co-create and evaluate public health interventions. Methods: These principles and recommendations were derived using an iterative reflection process, combining key learning from the published literature with critical reflection on three case studies conducted by research groups in three European institutions, all of which have expertise in co-creating public health interventions using different participatory methodologies. Results: Key principles and recommendations for using participatory methodologies in public health intervention co-creation are presented for the stages of Planning (framing the aim of the study and identifying the appropriate sampling strategy), Conducting (defining the procedure and manifesting ownership), Evaluating (the process and the effectiveness) and Reporting (providing guidelines to report the findings). Three scaling models are proposed to demonstrate how to scale locally developed interventions to a population level. Conclusions: These recommendations aim to facilitate public health intervention co-creation and evaluation using participatory methodologies by ensuring the process is systematic and reproducible.

    Validation data for models of contaminant dispersal: scaling laws and data needs


    Shared Arrangements: practical inter-query sharing for streaming dataflows

    Current systems for data-parallel, incremental processing and view maintenance over high-rate streams isolate the execution of independent queries. This creates unwanted redundancy and overhead in the presence of concurrent incrementally maintained queries: each query must independently maintain the same indexed state over the same input streams, and new queries must build this state from scratch before they can begin to emit their first results. This paper introduces shared arrangements: indexed views of maintained state that allow concurrent queries to reuse the same in-memory state without compromising data-parallel performance and scaling. We implement shared arrangements in a modern stream processor and show order-of-magnitude improvements in query response time and resource consumption for interactive queries against high-throughput streams, while also significantly improving performance in other domains including business analytics, graph processing, and program analysis.
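
    A toy, single-process analogue of a shared arrangement is sketched below: one incrementally maintained index over an input stream that several concurrent queries read, instead of each query building and maintaining its own copy. The class and update format are illustrative assumptions; the actual system maintains multi-versioned, data-parallel indexes inside a modern stream processor.

    from collections import defaultdict

    class SharedIndex:
        # An index from key -> set of values, maintained once and read by many queries.
        def __init__(self):
            self.index = defaultdict(set)

        def update(self, batch):
            # Apply a batch of (key, value, diff) updates: diff = +1 insert, -1 delete.
            for key, value, diff in batch:
                if diff > 0:
                    self.index[key].add(value)
                else:
                    self.index[key].discard(value)

        def lookup(self, key):
            return self.index.get(key, set())

    # Two queries share the same maintained state; neither re-indexes the input stream.
    edges = SharedIndex()
    edges.update([("a", "b", +1), ("a", "c", +1), ("b", "c", +1)])
    out_degree = {k: len(v) for k, v in edges.index.items()}   # query 1
    neighbors_of_a = edges.lookup("a")                         # query 2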

    Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

    Decentralized systems are a subset of distributed systems in which multiple authorities control different components and no single authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.