
    Absorption of fermionic dark matter by nuclear targets

    Absorption of fermionic dark matter leads to a range of distinct and novel signatures at dark matter direct detection and neutrino experiments. We study the possible signals from fermionic absorption by nuclear targets, which we divide into two classes of four-Fermi operators: neutral and charged current. In the neutral current signal, dark matter is absorbed by a target nucleus and a neutrino is emitted. This results in a nuclear recoil energy spectrum characteristically different from that of elastic scattering. The charged current channel leads to induced β decays in isotopes which are stable in vacuum, as well as shifts of the kinematic endpoint of β spectra in unstable isotopes. To confirm the possibility of observing these signals in light of other constraints, we introduce UV completions of example higher-dimensional operators that lead to fermionic absorption signals and study their phenomenology. Most prominently, dark matter which exhibits fermionic absorption signals is necessarily unstable, leading to stringent bounds from indirect detection searches. Nevertheless, we find a large viable parameter space in which dark matter is sufficiently long-lived and detectable in current and future experiments.
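
    A minimal kinematic sketch (not taken from the paper's text; it assumes the dark matter particle χ is absorbed essentially at rest and is much lighter than the target nucleus) illustrates why the neutral current signal is so distinctive: the full dark matter mass is deposited, so the recoil is approximately mono-energetic rather than the falling spectrum of elastic scattering.

```latex
% Neutral current absorption \chi + N \to \nu + N, with \chi non-relativistic.
% The nucleus recoils against the emitted neutrino (the incoming momentum is
% negligible), so for m_\chi \ll m_N:
\[
  E_\nu \simeq m_\chi,
  \qquad
  E_R = \frac{|\vec{p}_N|^2}{2 m_N} \simeq \frac{m_\chi^2}{2 m_N},
\]
% i.e. a sharp recoil feature set by the dark matter mass, in contrast to
% the broad recoil spectrum produced by elastic nuclear scattering.
```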

    Feature Selection via Coalitional Game Theory

    We present and study the contribution-selection algorithm (CSA), a novel algorithm for feature selection. The algorithm is based on the multi-perturbation Shapley analysis (MSA), a framework that relies on game theory to estimate usefulness. The algorithm iteratively estimates the usefulness of features and selects them accordingly, using either forward selection or backward elimination. It can optimize various performance measures over unseen data, such as accuracy, balanced error rate, and area under the receiver operating characteristic (ROC) curve. Empirical comparison with several other existing feature selection methods shows that the backward elimination variant of CSA leads to the most accurate classification results on an array of data sets.
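
    A rough sketch of the idea in Python (hypothetical code, not the authors' implementation: the classifier, the permutation-sampling estimator, and the stopping rule are placeholder choices):

```python
# Sketch of contribution-based backward elimination: estimate each feature's
# Shapley-style contribution from random feature orderings, then repeatedly
# drop the feature with the smallest estimated contribution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def subset_score(X, y, features):
    """Cross-validated accuracy of a model trained on the given feature subset."""
    if not features:
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, features], y, cv=3).mean()

def estimate_contributions(X, y, features, n_perm=10, rng=None):
    """Approximate each feature's Shapley value by averaging its marginal
    contribution over random orderings of the candidate feature set."""
    rng = rng or np.random.default_rng(0)
    contrib = {f: 0.0 for f in features}
    for _ in range(n_perm):
        order = list(rng.permutation(features))
        prefix, prev = [], subset_score(X, y, [])
        for f in order:
            prefix.append(f)
            cur = subset_score(X, y, prefix)
            contrib[f] += (cur - prev) / n_perm
            prev = cur
    return contrib

# Backward elimination on a toy data set.
X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)
selected = list(range(X.shape[1]))
while len(selected) > 3:
    contrib = estimate_contributions(X, y, selected)
    selected.remove(min(contrib, key=contrib.get))
print("selected features:", selected)
```

    Forward selection follows the same pattern, except that the highest-contribution feature is added to an initially empty set at each step.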

    Brown representability for space-valued functors

    In this paper we prove two theorems which resemble the classical cohomological and homological Brown representability theorems. The main difference is that our results classify small contravariant functors from spaces to spaces up to weak equivalence of functors. In more detail, we show that every small contravariant functor from spaces to spaces which takes coproducts to products up to homotopy and takes homotopy pushouts to homotopy pullbacks is naturally weakly equivalent to a representable functor. The second representability theorem states: every contravariant continuous functor from the category of finite simplicial sets to simplicial sets taking homotopy pushouts to homotopy pullbacks is equivalent to the restriction of a representable functor. This theorem may be considered as a contravariant analog of Goodwillie's classification of linear functors. Comment: 19 pages, final version, accepted by the Israel Journal of Mathematics.

    Comparing the Efficacy of Drug Regimens for Pulmonary Tuberculosis: Meta-analysis of Endpoints in Early-Phase Clinical Trials

    Background: A systematic review of early clinical outcomes in tuberculosis was undertaken to determine a ranking of the efficacy of drugs and combinations, define the variability of these measures on different endpoints, and establish the relationships between them. Methods: Studies were identified by searching PubMed, Medline, Embase, LILACS (Latin American and Caribbean Health Sciences Literature), and the reference lists of included studies. Outcomes were early bactericidal activity results over 2, 7, and 14 days, and the proportion of patients with negative culture at 8 weeks. Results: One hundred thirty-three trials reporting phase 2A (early bactericidal activity) and phase 2B (culture conversion at 2 months) outcomes were identified. Only 9 drug combinations were assessed on more than one phase 2A endpoint, and only 3 were assessed in both phase 2A and 2B trials. Conclusions: The existing evidence base supporting phase 2 methodology in tuberculosis is highly incomplete. In the future, a broader range of drugs and combinations should be studied more consistently across a greater range of phase 2 endpoints.

    Random Filters for Compressive Sampling and Reconstruction

    We propose and study a new technique for efficiently acquiring and reconstructing signals, based on convolution with a fixed FIR filter having random taps. The method is designed for sparse and compressible signals, i.e., ones that are well approximated by a short linear combination of vectors from an orthonormal basis. Signal reconstruction involves a nonlinear Orthogonal Matching Pursuit algorithm that we implement efficiently by exploiting the nonadaptive, time-invariant structure of the measurement process. While simpler and more efficient than other random acquisition techniques like Compressed Sensing, random filtering is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals. Extensive numerical experiments demonstrate its efficacy for acquiring and reconstructing signals sparse in the time, frequency, and wavelet domains, as well as piecewise smooth signals and Poisson processes.
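
    A toy illustration of the acquisition model and a generic OMP reconstruction (hypothetical code: the signal length, filter length, decimation factor, and the explicitly formed measurement matrix are illustrative choices; the paper's implementation exploits the filter structure rather than forming the matrix):

```python
# Measure a sparse signal by convolving it with a short random-tap FIR filter
# and keeping every d-th sample, then reconstruct with plain Orthogonal
# Matching Pursuit on the equivalent measurement matrix.
import numpy as np

rng = np.random.default_rng(0)
n, taps, d, k = 256, 16, 4, 5          # signal length, filter length, decimation, sparsity

x = np.zeros(n)                         # k-sparse signal in the time domain
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

h = rng.standard_normal(taps)           # random filter taps
y = np.convolve(x, h)[::d]              # filter then downsample: the compressive samples

# Equivalent measurement matrix: each column is the downsampled filter response
# to a unit impulse at that position.
Phi = np.stack([np.convolve(np.eye(n)[:, j], h)[::d] for j in range(n)], axis=1)

def omp(Phi, y, k):
    """Greedily pick the column most correlated with the residual, then
    re-fit by least squares on the chosen support."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```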

    Using a neural network approach for muon reconstruction and triggering

    The extremely high rate of events that will be produced in the future Large Hadron Collider requires the triggering mechanism to take precise decisions in a few nanoseconds. We present a study which used an artificial neural network triggering algorithm and compared it to the performance of a dedicated electronic muon triggering system. A relatively simple architecture was used to solve a complicated inverse problem. A comparison with a realistic example of the ATLAS first-level trigger simulation was in favour of the neural network. A similar architecture trained after the simulation of the electronics first trigger stage showed further background rejection. Comment: A talk given at ACAT03, KEK, Japan, November 2003. Submitted to Nuclear Instruments and Methods in Physics Research, Section
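
    For readers unfamiliar with the approach, a generic sketch of a small feedforward network making a binary trigger decision (entirely illustrative: the features, labels, and network size are made up and are not the architecture studied in the paper):

```python
# Toy trigger: a small feedforward classifier deciding "accept/reject" from a
# handful of detector-hit features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_events, n_features = 5000, 6                    # stand-ins for hit coordinates/angles

X = rng.standard_normal((n_events, n_features))
# Toy label: "interesting" events defined by a nonlinear rule on the features.
y = (X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

trigger = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
trigger.fit(X_train, y_train)
print("trigger accuracy on held-out events:", trigger.score(X_test, y_test))
```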

    Measurement Bounds for Sparse Signal Ensembles via Graphical Models

    In compressive sensing, a small collection of linear projections of a sparse signal contains enough information to permit signal recovery. Distributed compressive sensing (DCS) extends this framework by defining ensemble sparsity models, allowing a correlated ensemble of sparse signals to be jointly recovered from a collection of separately acquired compressive measurements. In this paper, we introduce a framework for modeling sparse signal ensembles that quantifies the intra- and inter-signal dependencies within and among the signals. This framework is based on a novel bipartite graph representation that links the sparse signal coefficients with the measurements obtained for each signal. Using our framework, we provide fundamental bounds on the number of noiseless measurements that each sensor must collect to ensure that the signals are jointly recoverable. Comment: 11 pages, 2 figures.
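
    A toy construction of such an ensemble (hypothetical code: the common-plus-innovation sparsity model and all sizes are illustrative choices, and the edge list is only one way to encode the coefficient-measurement links described above):

```python
# J sensors observe correlated sparse signals that share a common sparse component
# plus small per-signal "innovations"; each sensor takes its own compressive
# measurements. The bipartite graph links nonzero signal coefficients (one side)
# to the measurements that observe them (other side).
import numpy as np

rng = np.random.default_rng(0)
n, J, m = 64, 3, 20                  # signal length, number of sensors, measurements per sensor
k_common, k_innov = 4, 2             # sparsity of the shared part and of each innovation

z_common = np.zeros(n)
z_common[rng.choice(n, k_common, replace=False)] = rng.standard_normal(k_common)

signals, measurements = [], []
for j in range(J):
    innovation = np.zeros(n)
    innovation[rng.choice(n, k_innov, replace=False)] = rng.standard_normal(k_innov)
    x_j = z_common + innovation                  # signal j = common component + innovation
    Phi_j = rng.standard_normal((m, n))          # sensor j's private random measurement matrix
    signals.append(x_j)
    measurements.append(Phi_j @ x_j)             # separately acquired compressive measurements

# One possible encoding of the bipartite graph: an edge for every (nonzero coefficient,
# measurement) pair within a sensor, since each dense measurement row touches every coefficient.
edges = [((j, int(i)), (j, r))
         for j in range(J)
         for i in np.flatnonzero(signals[j])
         for r in range(m)]
print("nonzeros per signal:", [int(np.count_nonzero(s)) for s in signals])
print("edges in the coefficient-measurement graph:", len(edges))
```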

    A review of High Performance Computing foundations for scientists

    The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between the experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which would otherwise not be accessible, helps to improve experiments, and provides new insights into the systems being analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues rather than on technological aspects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way that is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, discuss distributed computing, and cover the essential aspects to take into account when running scientific calculations on computers. Comment: 33 pages.
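
    As a minimal, stand-alone illustration of the kind of parallelism such a review discusses (not an example from the document; the work function is a placeholder), an embarrassingly parallel calculation can be spread over the cores of a single machine with Python's standard library, and the same pattern extends conceptually to distributed computing, where work units go to different nodes instead of different processes:

```python
# Split independent work items across worker processes with multiprocessing.
from multiprocessing import Pool
import math

def energy_of_configuration(seed):
    """Stand-in for an expensive per-sample scientific calculation."""
    return sum(math.sin(seed * k) ** 2 for k in range(1, 50_000))

if __name__ == "__main__":
    seeds = range(32)
    with Pool() as pool:                       # one worker process per available core
        energies = pool.map(energy_of_configuration, seeds)
    print("mean energy:", sum(energies) / len(energies))
```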