
    Performance of the Micromegas detector in the CAST experiment

    The gaseous Micromegas detector designed for the CERN axion search experiment CAST operated smoothly during Phase I, which included the 2003 and 2004 running periods. It exhibited linear response in the energy range of interest (1-10 keV), good spatial sensitivity, and energy resolution (15-19% FWHM at 5.9 keV), as well as remarkable stability. The detector's upgrade for the 2004 run, supported by the development of advanced offline analysis tools, improved the background rejection capability, leading to an average rate of 5x10^-5 counts/sec/cm^2/keV with 94% cut efficiency. The origin of the detected background was also studied with a Monte Carlo simulation using the GEANT4 package.
    Comment: Prepared for PSD7: The Seventh International Conference on Position Sensitive Detectors, Liverpool, United Kingdom, 12-16 Sep. 200
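    As a rough illustration of how a normalized background rate like the one quoted above is obtained, the Python sketch below divides a raw count by live time, fiducial area, and the width of the energy window. All numbers are invented for the example and are not CAST data; the quoted 94% cut efficiency refers to the fraction of signal retained by the offline cuts and is reported alongside the rate, not applied to it here.

        # Illustrative normalization of a detector background rate (sketch only).
        # All numbers below are invented for the example; they are not CAST data.
        counts_after_cuts = 4500        # background events surviving the offline cuts
        live_time_s = 1.0e6             # detector live time in seconds
        area_cm2 = 10.0                 # fiducial detector area in cm^2
        energy_window_kev = 9.0         # width of the 1-10 keV range of interest

        rate = counts_after_cuts / (live_time_s * area_cm2 * energy_window_kev)
        print(f"background rate: {rate:.1e} counts/sec/cm^2/keV")  # -> 5.0e-05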

    Bioinformatics Solutions for Image Data Processing

    In recent years, the increasing use of medical devices has led to the generation of large amounts of data, including image data. Bioinformatics solutions provide an effective approach to image data processing, making it possible to retrieve information of interest and to integrate several data sources for knowledge extraction; furthermore, image processing techniques support scientists and physicians in diagnosis and therapy. Bioinformatics image analysis also extends to other scenarios: in cyber-security, for instance, biometric recognition systems are used to unlock devices and restricted areas and to control access to sensitive data. In medicine, computational platforms generate large amounts of data from devices such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scanners; this chapter surveys bioinformatics solutions and toolkits for medical imaging, providing an overview of techniques and methods that can be applied to image analysis in medicine.

    Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives

    We consider a discrete optimization formulation for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features. Recent work has shown that mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems at scales much larger than what was conventionally considered possible. Despite their usefulness, MIP-based global optimization approaches are significantly slower compared to the relatively mature algorithms for $\ell_1$-regularization and heuristics for nonconvex regularized problems. We aim to bridge this gap in computation times by developing new MIP-based algorithms for $\ell_0$-regularized classification. We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50{,}000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$ in times comparable to the fast $\ell_1$-based algorithms. Our exact algorithm is based on the novel idea of integrality generation, which solves the original problem (with $p$ binary variables) via a sequence of mixed integer programs that involve a small number of binary variables. Our approximate algorithms are based on coordinate descent and local combinatorial search. In addition, we present new estimation error bounds for a class of $\ell_0$-regularized estimators. Experiments on real and synthetic data demonstrate that our approach leads to models with considerably improved statistical performance (especially, variable selection) when compared to competing methods.
    Comment: To appear in JML
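    The abstract describes the approximate algorithms only at a high level. As a minimal sketch of the general idea of sparsity-enforcing iterations for classification, the Python snippet below implements plain iterative hard thresholding for $\ell_0$-constrained logistic regression. It is a generic illustration, not the authors' coordinate descent or local combinatorial search procedure; the function name, step size, and synthetic data are assumptions made for the example.

        import numpy as np
        from scipy.special import expit

        def iht_logistic(X, y, k, step=0.1, iters=300):
            """Iterative hard thresholding for l0-constrained logistic regression.

            Keeps at most k nonzero coefficients; labels y must be in {-1, +1}.
            A generic sketch, not the algorithm proposed in the paper.
            """
            n, p = X.shape
            beta = np.zeros(p)
            for _ in range(iters):
                margins = y * (X @ beta)
                grad = -(X.T @ (y * expit(-margins))) / n        # gradient of the logistic loss
                beta -= step * grad                              # plain gradient step
                keep = np.argpartition(np.abs(beta), p - k)[p - k:]  # indices of k largest |beta_j|
                mask = np.zeros(p, dtype=bool)
                mask[keep] = True
                beta[~mask] = 0.0                                # hard-threshold the rest to zero
            return beta

        # Toy usage on synthetic data with 5 truly relevant features.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 100))
        true_beta = np.zeros(100)
        true_beta[:5] = 2.0
        y = np.sign(X @ true_beta + 0.1 * rng.standard_normal(500))
        print("selected features:", np.flatnonzero(iht_logistic(X, y, k=5)))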

    HEP Community White Paper on Software trigger and event reconstruction

    Realizing the physics programs of the planned and upgraded high-energy physics (HEP) experiments over the next 10 years will require the HEP community to address a number of challenges in the area of software and computing. For this reason, the HEP software community has engaged in a planning process over the past two years, with the objective of identifying and prioritizing the research and development required to enable the next generation of HEP detectors to fulfill their full physics potential. The aim is to produce a Community White Paper which will describe the community strategy and a roadmap for software and computing research and development in HEP for the 2020s. The topics of event reconstruction and software triggers were considered by a joint working group and are summarized together in this document.
    Comment: Editors Vladimir Vava Gligorov and David Lang
