38,090 research outputs found

    Stellar contributions to the hard X-ray galactic ridge

    The number density of serendipitous sources in galactic plane Einstein Observatory IPC fields is compared with predictions based on the intensity of the HEAO-1 A2 unresolved hard X-ray galactic ridge emission. It is concluded that theoretically predicted X-ray source populations of luminosity 8 x 10^32 to 3 x 10^34 ergs s^-1 have 2 keV to 10 keV local surface densities of less than approximately 0.0008 L(32) pc^-2 and are unlikely to be the dominant contributors to the hard X-ray ridge. An estimate for Be/neutron star binary systems, such as X Persei, gives a 2 keV to 10 keV local surface density of approximately 26 x 10^-5 L(32) pc^-2. Stellar systems of low luminosity are more likely contributors. Both RS CVn and cataclysmic variable systems contribute 43% ± 18% of the ridge. A more sensitive measurement of the ridge's hard X-ray spectrum should reveal Fe-line emission. We speculate that dM stars are further major contributors.

    Minimum entropy restoration using FPGAs and high-level techniques

    One of the greatest perceived barriers to the widespread use of FPGAs in image processing is the difficulty for application specialists of developing algorithms on reconfigurable hardware. Minimum entropy deconvolution (MED) techniques have been shown to be effective in the restoration of star-field images. This paper reports on an attempt to implement a MED algorithm using simulated annealing, first on a microprocessor, then on an FPGA. The FPGA implementation uses DIME-C, a C-to-gates compiler, coupled with a low-level core library to simplify the design task. Analysis of the C code and output from the DIME-C compiler guided the code optimisation. The paper reports on the design effort that this entailed and the resultant performance improvements.
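
    The abstract above includes no code; as a rough illustration of the approach it describes, the Python sketch below performs minimum entropy deconvolution by simulated annealing on a general-purpose processor. It is not the authors' DIME-C/FPGA implementation: the entropy measure, the quadratic data-fidelity term, the weight `lam`, the proposal step and the cooling schedule are all illustrative assumptions, and a known point-spread function `psf` is taken as given.

    # Minimal sketch of MED restoration via simulated annealing (illustrative,
    # not the paper's implementation). Assumes a known blur kernel `psf`.
    import numpy as np
    from scipy.signal import fftconvolve

    def entropy(img, eps=1e-12):
        """Shannon entropy of the normalised pixel intensities."""
        p = np.clip(img, 0, None)
        p = p / (p.sum() + eps)
        return -np.sum(p * np.log(p + eps))

    def cost(estimate, observed, psf, lam=10.0):
        """Entropy plus a weighted data-fidelity (residual) term."""
        model = fftconvolve(estimate, psf, mode="same")
        return entropy(estimate) + lam * np.mean((model - observed) ** 2)

    def anneal(observed, psf, steps=20000, t0=1.0, cooling=0.9995, seed=None):
        rng = np.random.default_rng(seed)
        x = observed.copy()                  # start from the blurred image
        c = cost(x, observed, psf)
        t = t0
        for _ in range(steps):
            # Propose a small non-negative perturbation of one pixel.
            i, j = rng.integers(x.shape[0]), rng.integers(x.shape[1])
            trial = x.copy()
            trial[i, j] = max(0.0, trial[i, j] + rng.normal(0, 0.1))
            c_trial = cost(trial, observed, psf)
            # Metropolis acceptance: always accept improvements, occasionally
            # accept worse states while the temperature is still high.
            if c_trial < c or rng.random() < np.exp((c - c_trial) / t):
                x, c = trial, c_trial
            t *= cooling
        return x

    The paper's own implementation expresses this accept/reject loop in DIME-C for the FPGA; the sketch only conveys the structure of the search.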

    Performance of Particle Flow Calorimetry at CLIC

    The particle flow approach to calorimetry can provide unprecedented jet energy resolution at a future high energy collider, such as the International Linear Collider (ILC). However, the use of particle flow calorimetry at the proposed multi-TeV Compact Linear Collider (CLIC) poses a number of significant new challenges. At higher jet energies, detector occupancies increase, and it becomes increasingly difficult to resolve energy deposits from individual particles. The experimental conditions at CLIC are also significantly more challenging than those at previous electron-positron colliders, with increased levels of beam-induced backgrounds combined with a bunch spacing of only 0.5 ns. This paper describes the modifications made to the PandoraPFA particle flow algorithm to improve the jet energy reconstruction for jet energies above 250 GeV. It then introduces a combination of timing and p_T cuts that can be applied to reconstructed particles in order to significantly reduce the background. A systematic study is performed to understand the dependence of the jet energy resolution on the jet energy and angle, and the physics performance is assessed via a study of the energy and mass resolution of W and Z particles in the presence of background at CLIC. Finally, the missing transverse momentum resolution is presented, and the fake missing momentum is quantified. The results presented in this paper demonstrate that high granularity particle flow calorimetry leads to a robust and high resolution reconstruction of jet energies and di-jet masses at CLIC.
    Comment: 14 pages, 11 figures
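
    As a rough, hypothetical illustration of the kind of combined timing and p_T selection the abstract refers to, the sketch below keeps a reconstructed particle if it is either in time with the physics bunch crossing or carries enough transverse momentum. The `Particle` record, the 2 ns window and the 0.75 GeV threshold are placeholders, not the PandoraPFA/CLIC cut values.

    # Illustrative timing + p_T selection for reconstructed particles
    # (placeholder thresholds, not the actual CLIC reconstruction cuts).
    from dataclasses import dataclass

    @dataclass
    class Particle:
        pt_gev: float           # transverse momentum in GeV
        cluster_time_ns: float  # calorimeter cluster time relative to the bunch crossing

    def keep_particle(p: Particle, t_window_ns: float = 2.0,
                      pt_min_gev: float = 0.75) -> bool:
        """Reject particles that are both out of time and soft, since these are
        dominated by beam-induced background; keep everything else."""
        in_time = abs(p.cluster_time_ns) < t_window_ns
        hard = p.pt_gev > pt_min_gev
        return in_time or hard

    particles = [Particle(0.3, 5.1), Particle(12.0, 3.4), Particle(0.2, 0.1)]
    selected = [p for p in particles if keep_particle(p)]

    Tightening the time window or raising the p_T threshold trades stronger background rejection against efficiency for genuine low-p_T particles.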

    Causal inference for continuous-time processes when covariates are observed only at discrete times

    Most of the work on the structural nested model and g-estimation for causal inference in longitudinal data assumes a discrete-time underlying data generating process. However, in some observational studies, it is more reasonable to assume that the data are generated from a continuous-time process and are only observable at discrete time points. When these circumstances arise, the sequential randomization assumption in the observed discrete-time data, which is essential in justifying discrete-time g-estimation, may not be reasonable. Under a deterministic model, we discuss other useful assumptions that guarantee the consistency of discrete-time g-estimation. In more general cases, when those assumptions are violated, we propose a controlling-the-future method that performs at least as well as g-estimation in most scenarios and provides consistent estimation in some cases where g-estimation is severely inconsistent. We apply the methods discussed in this paper to simulated data, as well as to a data set collected following a massive flood in Bangladesh, estimating the effect of diarrhea on children's height. Results from different methods are compared in both the simulation and the real application.
    Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/10-AOS830 by the Institute of Mathematical Statistics (http://www.imstat.org).
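
    The following sketch is a minimal, single-time-point illustration of the g-estimation idea that the discrete-time methods above build on, assuming a linear structural nested mean model and a known propensity score; the simulated data, the grid search and the effect size are purely illustrative and are not taken from the paper or the Bangladesh study.

    # Point-treatment g-estimation sketch (illustrative only): find psi such
    # that the "blipped-down" outcome Y - psi*A is uncorrelated with the
    # treatment residual A - e(L), given a known propensity score e(L).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    L = rng.normal(size=n)                      # baseline covariate
    e = 1.0 / (1.0 + np.exp(-0.8 * L))          # known treatment probability given L
    A = rng.binomial(1, e)                      # binary treatment
    Y = 2.0 * L + 1.5 * A + rng.normal(size=n)  # outcome; simulated causal effect psi = 1.5

    def g_estimating_score(psi):
        """Empirical covariance of the treatment residual with the blipped-down
        outcome; zero in expectation at the true psi under sequential
        randomization."""
        H = Y - psi * A
        return np.mean((A - e) * H)

    # Grid search for the psi that solves the estimating equation.
    grid = np.linspace(0.0, 3.0, 601)
    psi_hat = grid[np.argmin(np.abs([g_estimating_score(p) for p in grid]))]
    print(f"g-estimate of the causal effect: {psi_hat:.2f}")

    With these settings the grid search recovers an estimate close to the simulated effect of 1.5; the controlling-the-future method proposed in the paper modifies this estimating equation and is not sketched here.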

    Restoration of star-field images using high-level languages and core libraries

    Research into the use of FPGAs in image processing began in earnest at the beginning of the 1990s. Since then, many thousands of publications have pointed to the computational capabilities of FPGAs. During this time, FPGAs have seen the application space to which they are applicable grow in tandem with their logic densities. When investigating a particular application, researchers compare FPGAs with alternative technologies such as Digital Signal Processors (DSPs), Application-Specific Integrated Circuits (ASICs), microprocessors and vector processors. The metrics for comparison depend on the needs of the application, and include such measurements as: raw performance, power consumption, unit cost, board footprint, non-recurring engineering cost, design time and design cost. The key metrics for a particular application may also include ratios of these metrics, e.g. power/performance or performance/unit cost. The work detailed in this paper compares a 90nm-process commodity microprocessor with a platform based around a 90nm-process FPGA, focussing on design time and raw performance. The application chosen for implementation was a minimum entropy restoration of star-field images (see [1] for an introduction), with simulated annealing used to converge towards the globally-optimum solution. This application was not chosen in the belief that it would particularly suit one technology over another, but was instead selected as being representative of a computationally intense image-processing application.
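
    As a small, self-contained illustration (not taken from the paper) of why a minimum entropy criterion favours restored star fields, the snippet below compares the Shannon entropy of a field containing two point sources with that of the same flux smeared by a Gaussian point-spread function; the image size, source positions and blur width are arbitrary.

    # Illustrative check: concentrating flux into point-like sources lowers the
    # Shannon entropy of the normalised image, so minimum entropy favours
    # sharp star fields over blurred ones.
    import numpy as np

    def image_entropy(img, eps=1e-12):
        p = img / (img.sum() + eps)
        return -np.sum(p * np.log(p + eps))

    sharp = np.zeros((64, 64))
    sharp[10, 12] = sharp[40, 50] = 100.0        # two point sources

    y, x = np.mgrid[0:64, 0:64]
    blurred = (100.0 * np.exp(-((y - 10) ** 2 + (x - 12) ** 2) / 18.0)
               + 100.0 * np.exp(-((y - 40) ** 2 + (x - 50) ** 2) / 18.0))

    print("entropy of sharp field:  ", image_entropy(sharp))
    print("entropy of blurred field:", image_entropy(blurred))

    The simulated-annealing search sketched after the MED abstract above minimises this kind of entropy term while staying consistent with the observed image.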

    Modelling and simulating unplanned and urgent healthcare: the contribution of scenarios of future healthcare systems.

    The current financial challenges being faced by the UK economy have meant that the NHS will have to make £20 billion of savings between 2010 and 2014, requiring it to be innovative about how it delivers healthcare. This paper presents the methodology of a research project that is simulating the whole healthcare system with the aim of reducing waste within urgent, unscheduled care streams whilst understanding the impact of such changes on the whole system. The research is aimed at care commissioners who could use such simulation in their decision-making practice, and the paper presents the findings from early stakeholder discussions about the scope and focus of the research and the relevance of stakeholder consultation and scenarios in the development of a valid decision-support tool that is fit for purpose.