
    Practical guidelines for the registration and monitoring of serious traffic injuries, D7.1 of the H2020 project SafetyCube

    BACKGROUND AND OBJECTIVES Besides fatalities, crashes cause numerous serious traffic injuries, resulting in considerable economic and human costs. Given this burden of injury, using fatalities as the only indicator for monitoring road safety captures just a small part of the health impact of traffic crashes, the tip of the iceberg. Moreover, in recent years the number of serious traffic injuries has in several countries not been decreasing as fast as the number of fatalities, and in some countries it has even been increasing (Berecki-Gisolf et al., 2013; IRTAD Working Group on Serious Road Traffic Casualties, 2010; Weijermars et al., 2015). Serious traffic injuries are therefore increasingly being adopted by policy makers as an additional indicator of road safety, and reducing their number is one of the key priorities in the road safety programme 2011-2020 of the European Commission (EC, 2010). To compare performance and monitor developments in serious traffic injuries across Europe, a common definition of a serious road injury was necessary. In January 2013, the High Level Group on Road Safety, representing all EU Member States, defined serious traffic injuries as road casualties with an injury level of MAIS ≥ 3, where the Maximum AIS (MAIS) is the most severe injury sustained by a casualty according to the Abbreviated Injury Scale (AIS).
    Traditionally, the main source of information on traffic crashes and injuries has been the police registration, which provides the official data for statistics at national and European level (CARE database). Police-reported data are usually very detailed about the circumstances of the crash, particularly when people are injured or killed. The police, however, lacking medical training, cannot assess the severity of injuries reliably; police-based data therefore typically classify people involved in a crash as killed, seriously injured (hospitalised for more than 24 hours) or slightly injured (not hospitalised). Moreover, even a definition as clear as that of a fatality is not always well reported, producing underreporting, for instance because the police were not present at the scene or because deaths in hospital were not followed up by the police (Amoros et al., 2006; Broughton et al., 2007; Pérez et al., 2006). Hospital records of patients with road traffic injuries usually include very little information on the circumstances of the crash, but they do contain data about the person and the hospitalisation (dates of admission and discharge, medical diagnoses, mechanism or external cause of injury, and interventions). The Hospital Discharge Register (HDR) therefore offers an opportunity to complement police data on road traffic injuries: the medical diagnoses can be used to derive the severity of injuries, for example on the Abbreviated Injury Scale (AIS).
    The High Level Group identified three main ways in which Member States can collect data on serious traffic injuries (MAIS ≥ 3): 1) by applying a correction to police data, 2) by using hospital data, and 3) by using linked police and hospital data. Once one of these three ways is selected, several additional choices need to be made, and a number of questions arise: How should the correction factors applied to police data be determined? How should road traffic casualties be selected in the hospital data, and how can MAIS ≥ 3 casualties be derived from it? How should police and hospital data be linked, and how can the number of MAIS ≥ 3 casualties be determined from the linked sources? Currently, EU Member States use different procedures to determine the number of MAIS ≥ 3 traffic injuries, depending on the available data. Given the major differences in the procedures applied, the quality of the data differs considerably and the numbers are not yet fully comparable between countries. To compare injury data across countries, it is therefore important to understand how these methodological choices affect the estimated numbers of serious traffic injuries.
    Work Package 7 of the SafetyCube project is dedicated to serious traffic injuries, their health impacts and their costs. One of its aims is to assess and improve the estimation of the number of serious traffic injuries. The aim of this deliverable (D7.1) is to report practices in Europe concerning the reporting of serious traffic injuries and to provide guidelines and recommendations for each of the three main ways to estimate the number of serious road traffic injuries. The specific objectives are to: 1) describe the current state of the collection of data on serious traffic injuries across Europe; 2) provide practical guidelines for estimating the number of serious traffic injuries for each of the three ways identified by the High Level Group; and 3) examine how the estimated number of serious traffic injuries is affected by differences in methodology.
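    To make the hospital-data route concrete, the following minimal sketch (in Python, with a hypothetical record layout, not SafetyCube's actual procedure) derives each casualty's MAIS as the maximum of the AIS severity scores of their coded injuries and counts casualties with MAIS ≥ 3:

        # Minimal sketch: each casualty is a list of AIS severity scores
        # (one per coded injury); MAIS is the maximum of those scores.
        def mais(ais_scores):
            """Maximum AIS over all injuries of one casualty."""
            return max(ais_scores) if ais_scores else 0

        def count_mais3plus(casualties):
            """Number of casualties with MAIS >= 3 (serious injuries)."""
            return sum(1 for scores in casualties if mais(scores) >= 3)

        # Hypothetical example: three hospitalised road casualties.
        casualties = [[2, 1], [3, 2, 1], [5]]
        print(count_mais3plus(casualties))   # -> 2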

    Expectations, deflation traps and macroeconomic policy

    We examine global economic dynamics under infinite-horizon learning in a New Keynesian model in which the interest-rate rule is subject to the zero lower bound. As in Evans, Guse and Honkapohja, European Economic Review (2008), we find that under normal monetary and fiscal policy the intended steady state is locally but not globally stable. Unstable deflationary paths can arise after large pessimistic shocks to expectations. For large expectation shocks that push interest rates to the zero lower bound, temporary increases in government spending can effectively insulate the economy from deflation traps.
    Keywords: adaptive learning; monetary policy; fiscal policy; zero interest rate lower bound
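    For orientation, a stylized form of the truncated interest-rate rule behind such deflation traps (an illustrative textbook specification, not necessarily the paper's exact rule) is a Taylor rule subject to the zero lower bound:

        i_t = \max\{\, 0,\ \bar{\imath} + \phi_\pi (\pi_t - \pi^*) \,\}, \qquad \phi_\pi > 1

    With an active rule (phi_pi > 1), the intended steady state at pi = pi^* coexists with an unintended low-inflation steady state in which the bound binds at i_t = 0, which is why sufficiently pessimistic expectation shocks can put the economy on a deflationary path.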

    The IPAC Image Subtraction and Discovery Pipeline for the intermediate Palomar Transient Factory

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, "bogus" candidates from processing artifacts and imperfect image subtractions outnumber real transients by ~ 10:1. This ratio can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ~ 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.
    Comment: 66 pages, 21 figures, 7 tables, accepted by PASP
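    As an aside on the operating point quoted above, a threshold on a real-bogus score is typically chosen from a labeled validation set; the minimal sketch below (Python with scikit-learn and synthetic scores, not the actual IDE code) picks the score cut whose false-positive rate stays at or below 1% and reports the resulting efficiency:

        import numpy as np
        from sklearn.metrics import roc_curve

        def threshold_for_fpr(labels, scores, max_fpr=0.01):
            """Score cut with the best completeness among FPR-compliant cuts."""
            fpr, tpr, thresholds = roc_curve(labels, scores)
            ok = fpr <= max_fpr                  # allowed operating points
            i = np.argmax(tpr[ok])               # best efficiency among them
            return thresholds[ok][i], tpr[ok][i], fpr[ok][i]

        # Synthetic stand-in: 1 = real transient, 0 = bogus, ~10:1 bogus-to-real.
        rng = np.random.default_rng(0)
        labels = np.r_[np.ones(500), np.zeros(5000)]
        scores = np.r_[rng.beta(8, 2, 500), rng.beta(2, 8, 5000)]
        cut, eff, fpr = threshold_for_fpr(labels, scores)
        print(f"score cut {cut:.3f}: efficiency {eff:.1%} at FPR {fpr:.1%}")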

    Multidimensional random sampling for Fourier transform estimation

    This research considers Fourier transform calculations for multidimensional signals. The calculations are based on random sampling, where the sampling points are nonuniformly distributed according to strategically selected probability functions, providing opportunities that are unavailable in the uniform-sampling setting. Uniform sampling demands a sampling density of at least the Nyquist density; otherwise alias frequencies occur in the processed bandwidth, which can lead to irresolvable processing problems. Random sampling can overcome the Nyquist limit that classical uniform-sampling-based approaches are bound by, enabling direct Fourier analysis (with no prefiltering or downconversion) of high-frequency signals with unknown spectral support at low sampling density. Lowering the sampling density while achieving the same signal processing objective can be an efficient, if not essential, way of exploiting system resources in terms of power, hardware complexity and acquisition-processing time. In this research we investigate and devise novel random sampling estimation schemes for the multidimensional Fourier transform. The main focus of the investigation and development is the quality of the estimated Fourier transform as a function of the sampling density; this aspect is crucial because it serves the central objective of random sampling, namely lowering the sampling density. This research was motivated by the applicability of random-sampling-based approaches to determining the Fourier transform in multidimensional Nuclear Magnetic Resonance (NMR) spectroscopy, to resolve the critical issue of its long experimental time.
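    The estimator family such schemes build on can be illustrated in one dimension (a minimal Python sketch under assumed notation, not the thesis' exact scheme): with sample instants t_n drawn i.i.d. from a density p(t), the importance-weighted sum below is an unbiased Monte Carlo estimate of X(f) = ∫ x(t) exp(-j2πft) dt, with no Nyquist constraint on the average sampling density.

        import numpy as np

        def random_sampling_ft(x, p, t_samples, freqs):
            """Estimate X(f) at each frequency from nonuniform random samples."""
            t = np.asarray(t_samples)
            w = x(t) / p(t)                                # importance weights
            kernel = np.exp(-2j * np.pi * np.outer(freqs, t))
            return kernel @ w / t.size                     # Monte Carlo average

        # Hypothetical test: a 50 Hz tone on [0, 1] s, sampled at 64 random
        # instants drawn uniformly (p = 1 on the interval) -- far below the
        # uniform-sampling density needed to probe the 1 kHz bin alias-free.
        rng = np.random.default_rng(1)
        t = rng.uniform(0.0, 1.0, 64)
        x = lambda u: np.cos(2 * np.pi * 50.0 * u)
        p = lambda u: np.ones_like(u)
        print(np.abs(random_sampling_ft(x, p, t, [25.0, 50.0, 1000.0])))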

    An investigation of the information propagation and entropy transport aspects of Stirling machine numerical simulation

    Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality
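    The report's generalized correction algorithm is not reproduced here, but the classical modified-equation estimate for first-order upwind differencing of one-dimensional advection (a standard textbook result, quoted for orientation only) indicates where the spurious diffusion originates:

        D_{\mathrm{false}} = \frac{u\,\Delta x}{2}\left(1 - \frac{u\,\Delta t}{\Delta x}\right)

    so the artificial diffusivity shrinks as the mesh is refined or as the Courant number u Δt/Δx approaches one, consistent with false diffusion masking the oscillating-flow diffusion enhancement the simulation is trying to resolve.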

    CP Violation in Top Physics

    CP violation in top physics is reviewed. The Standard Model predicts negligible effects; consequently, CP violation searches involving the top quark may constitute the best way to look for physics beyond the Standard Model. Non-standard sources of CP violation due to an extended Higgs sector, with and without natural flavor conservation, and supersymmetric theories are discussed. The experimental feasibility of detecting CP violation effects in top quark production and decays at high energy e+ e-, gamma-gamma, mu+ mu-, pp and p-bar p colliders is surveyed. Searches for the electric, electro-weak and chromo-electric dipole moments of the top quark in e+ e- -> t-bar t and in p p -> t-bar t X are described. In addition, other mechanisms that appear promising for experiments, e.g., tree-level CP violation in e+ e- -> t-bar t h, t-bar t Z, t-bar t nu_e-bar nu_e and in the top decay t -> b tau nu_tau, and CP violation driven by s-channel Higgs exchanges in p p, gamma gamma, mu+ mu- -> t-bar t, etc., are also discussed.
    Comment: 253 pages, 70 figures. A 2-up version of this postscript file may be obtained at http://thy.phy.bnl.gov/~soni/topreview.htm
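    For reference, the CP-violating dipole couplings that such searches constrain are conventionally parameterized by effective-Lagrangian terms of the form (standard notation, not taken from this review):

        \mathcal{L}_{\mathrm{CPV}} = -\frac{i}{2}\, d_t^{\gamma}\, \bar{t}\,\sigma^{\mu\nu}\gamma_5\, t\, F_{\mu\nu}
        \;-\; \frac{i}{2}\, g_s\, \tilde{d}_t\, \bar{t}\,\sigma^{\mu\nu}\gamma_5\, T^a t\, G^a_{\mu\nu}

    where d_t^gamma is the electric and d~_t the chromo-electric dipole moment of the top quark; since the Standard Model values are negligibly small, any observable effect would signal new physics, as the review emphasizes.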

    Workflows For X-ray And Neutron Interferometry/Tomography As Applied To Additive Manufacturing

    Grating-based interferometry/tomography is being rapidly developed for non-destructive evaluation of additively manufactured test articles, and an efficient workflow is essential for processing stress- and fatigue-testing samples. Scientific workflows now play an important role in computational experiments in additive manufacturing (3D printing) and in interferometry/tomography imaging analysis: a clear workflow template allows scientists to process experiments more easily and quickly. The library of workflows keeps growing, but finding an appropriate workflow for a given task remains challenging. Our workflow has three main portions: interferometry analysis, image reconstruction and 3D visualization. The hierarchy of tools in our interferometry/tomography projects is Mathematica, TomoPy/ASTRA/Jupyter notebooks, VisTrails and Dragonfly. In the first portion, two methods of interferometry analysis are used: single-shot interferometry and stepped-grating interferometry. In the second portion, driven from a Jupyter notebook, the reconstruction methods 'Gridrec' in TomoPy and 'SIRT' (Simultaneous Iterative Reconstruction Technique) in ASTRA generate reconstruction volumes for absorption projections and dark-field projections separately. In the last portion, Dragonfly, developed by ORS (Object Research Systems), provides 3D visualization with powerful scripting capabilities implemented in Python macros. Meanwhile, VisTrails incorporates both the interferometry-analysis and image-reconstruction portions as VisTrails modules; workflows in VisTrails hide much of the complexity of Mathematica or Python programming from users, so that with a simple GUI users can build their interferometry/tomography workflows from VisTrails modules. Finally, for DPC (differential phase contrast) images in grating-based interferometry/tomography, we address the phase-unwrapping issue by generating phase images through 2D integration; we have demonstrated that the 2D-integrated phase images show clearer contrast than the DPC images.
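    The reconstruction portion can be sketched as follows (a minimal example assuming TomoPy is installed; the projection stack and geometry are placeholders, not data from this project). 'gridrec' is TomoPy's built-in Fourier-grid reconstruction algorithm named above:

        import numpy as np
        import tomopy

        # Placeholder stack: 180 projections, 1 slice, 256 detector pixels.
        proj = np.random.rand(180, 1, 256).astype(np.float32)
        theta = tomopy.angles(proj.shape[0])           # angles over [0, pi)
        center = tomopy.find_center(proj, theta)       # rotation-center estimate
        recon = tomopy.recon(proj, theta, center=center, algorithm='gridrec')
        recon = tomopy.circ_mask(recon, axis=0, ratio=0.95)  # trim edge artifacts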

    Transform coding techniques and their application in JPEG scheme.

    by Chun-tat See. Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. Includes bibliographical references.
    Contents: Acknowledgements; Abstract; Notations; Table of Contents.
    1. Introduction: A Basic Transform Coding System; Thesis Organization
    2. Dyadic Matrices and Their Application: Theory of Dyadic Matrix (Basic Definitions; Maximum Size of Dyadic Matrix); Application of Dyadic Matrix in Generating Orthogonal Transforms (Transform Performance Criteria; [T1] = [P]Diag([DM2(4)],[A(4)])[Q]; [T2] = [P]Diag([DM2(4)],[DM2(4)])[Q]); Discussion and Conclusion
    3. Low Sequency Coefficient Truncation (LSCT) Coding Scheme: DC Coefficient Estimation Schemes (Element Estimation; Row Estimation; Plane Estimation); LSCT Coding Scheme 1 and Results; LSCT Coding Scheme 2 and Results; Discussions and Conclusions
    4. Variable Block Size (VBS) Coding Scheme: Chen's VBS Coding Scheme and Its Limitation; VBS Coding Scheme with Block Size Determined Using an Edge Discriminator; Simulation Results; Discussions and Conclusions
    5. Enhancement of JPEG International Standard: The Basic JPEG International Standard (Level Shift and Discrete Cosine Transform; Uniform Quantization; Coefficient Coding); Efficient DC Coefficient Encoding (The Minimum Edge Difference (MED) Predictor; Simulation Results; Pixel Domain Predictors; Discussion and Conclusion); JPEG Scheme Using Variable Block Size Technique (Schemes 1-5; Discussions and Conclusions); Conclusions
    6. Conclusions: Summary of Research Work; Contributions of Work; Suggestions for Further Research
    7. References
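    The basic JPEG pipeline stages named in Chapter 5 can be sketched in a few lines (a minimal Python illustration, not the thesis' exact scheme; the flat quantization step is a placeholder rather than the standard JPEG luminance table):

        import numpy as np
        from scipy.fft import dctn, idctn

        def jpeg_forward(block, q_step=16.0):
            """Level-shift, 2-D DCT-II, and uniformly quantize one 8x8 block."""
            shifted = block.astype(np.float64) - 128.0      # level shift
            coeffs = dctn(shifted, type=2, norm='ortho')    # 2-D DCT
            return np.round(coeffs / q_step)                # uniform quantizer

        def jpeg_inverse(qcoeffs, q_step=16.0):
            """Dequantize, inverse-transform, and undo the level shift."""
            return idctn(qcoeffs * q_step, type=2, norm='ortho') + 128.0

        block = np.tile(np.arange(8) * 16.0, (8, 1))        # toy 8x8 ramp
        print(np.abs(jpeg_inverse(jpeg_forward(block)) - block).max())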