576 research outputs found

    Determination of the efficiency of a ⌀100×200 mm plastic-scintillator neutron detector

    The efficiency of the detector for neutrons of very high energies (tens and hundreds of MeV) is calculated and verified experimentally.

    A Bisognano-Wichmann-like Theorem in a Certain Case of a Non-Bifurcate Event Horizon related to an Extreme Reissner-Nordström Black Hole

    Thermal Wightman functions of a massless scalar field are studied within the framework of a "near horizon" static background model of an extremal R-N black hole. This model is built up by using global Carter-like coordinates over an infinite set of Bertotti-Robinson submanifolds glued together. Analytical extendibility beyond the horizon is imposed as a constraint on the (thermal) Wightman functions defined on a Bertotti-Robinson submanifold. It turns out that only the Bertotti-Robinson vacuum state, i.e. T = 0, satisfies this requirement. Furthermore, the extension of this state onto the whole manifold is proved to coincide exactly with the vacuum state in the global Carter-like coordinates. Hence a theorem similar to the Bisognano-Wichmann theorem for Minkowski space-time, in terms of Wightman functions, holds with vanishing "Unruh-Rindler temperature". Furthermore, the Carter-like vacuum restricted to a Bertotti-Robinson region, which is a pure state there, has vanishing entropy despite the presence of event horizons. Some comments on the real extreme R-N black hole are given.
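
    For context, a standard fact of black-hole thermodynamics (stated here for orientation, not taken from this abstract): the temperature associated with a horizon is set by its surface gravity, which vanishes in the extremal case, so a vanishing "Unruh-Rindler temperature" is the expected result. In units G = c = ħ = k_B = 1:

        % Reissner-Nordström horizons: r_pm = M ± sqrt(M^2 - Q^2)
        \kappa = \frac{r_+ - r_-}{2 r_+^2}, \qquad T_H = \frac{\kappa}{2\pi}
        % Extremal case Q = M gives r_+ = r_-, hence \kappa = 0 and T_H = 0.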

    Improved limits on ν̄_e emission from μ⁺ decay

    We investigated μ⁺ decays at rest produced at the ISIS beam stop target. Lepton flavor (LF) conservation has been tested by searching for ν̄_e via the detection reaction p(ν̄_e, e⁺)n. No ν̄_e signal from LF-violating μ⁺ decays was identified. We extract upper limits on the branching ratio for the LF-violating decay μ⁺ → e⁺ ν̄_e ν relative to the Standard Model (SM) decay μ⁺ → e⁺ ν_e ν̄_μ: BR < 0.9 (1.7) × 10⁻³ (90% CL), depending on the spectral distribution of ν̄_e as characterized by the Michel parameter ρ = 0.75 (0.0). These results improve earlier limits by one order of magnitude and restrict extensions of the SM in which ν̄_e emission from μ⁺ decay is allowed with considerable strength. The decay μ⁺ → e⁺ ν̄_e ν as the source of the ν̄_e signal observed in the LSND experiment can be excluded.

    The KATRIN Pre-Spectrometer at reduced Filter Energy

    The KArlsruhe TRItium Neutrino experiment, KATRIN, will determine the mass of the electron neutrino with a sensitivity of 0.2 eV (90% C.L.) via a measurement of the beta spectrum of gaseous tritium near its endpoint of E_0 = 18.57 keV. An ultra-low background of about b = 10 mHz is among the requirements for reaching this sensitivity. In the KATRIN main beam-line, two spectrometers of MAC-E filter type are used in a tandem configuration. This setup, however, produces a Penning trap which could lead to increased background. We have performed test measurements showing that the filter energy of the pre-spectrometer can be reduced by several keV in order to diminish this trap. These measurements were analyzed with the help of a complex computer simulation, modeling multiple electron reflections both from the detector and from the photoelectric electron source used in our test setup.
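
    For orientation, the "filter energy" qU of a MAC-E filter enters through its transmission function, which follows from adiabatic collimation (sin²θ of an electron's pitch angle scales with the magnetic field). The sketch below evaluates this standard textbook formula; the function name and the default field values are illustrative (of the order of the KATRIN design values) and are not taken from this paper.

        import numpy as np

        def mace_transmission(E, qU, B_S=3.6, B_A=3e-4, B_max=6.0):
            """Transmitted fraction of an isotropic, forward-emitted electron
            flux in a MAC-E filter with retarding energy qU (energies in eV,
            fields in tesla).  Illustrative defaults, not this paper's values."""
            E = np.asarray(E, dtype=float)
            surplus = np.clip((E - qU) / E, 0.0, None)  # relative surplus energy
            # sin^2 of the largest transmitted pitch angle, capped by the
            # magnetic mirror at B_max:
            s = np.minimum(surplus * B_S / B_A, B_S / B_max)
            return 1.0 - np.sqrt(1.0 - s)

        # Example: a ~0.93 eV wide transmission edge at qU = 18570 eV.
        print(mace_transmission([18569.5, 18570.2, 18572.0], 18570.0))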

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
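
    To make item (iii) concrete, here is a minimal sketch of the forward-backward proximal splitting scheme applied to ℓ¹-regularized least squares (the classical ISTA iteration). It is a generic instance of the scheme reviewed above, not code from the chapter; all names and parameter values are illustrative.

        import numpy as np

        def soft_threshold(v, t):
            """Proximal operator of t * ||.||_1 (soft thresholding)."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def forward_backward_l1(A, y, lam, n_iter=500):
            """Minimize (1/2)||A x - y||_2^2 + lam * ||x||_1 by alternating a
            gradient (forward) step on the smooth term with a proximal
            (backward) step on the l1 term."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)                         # forward step
                x = soft_threshold(x - step * grad, step * lam)  # backward step
            return x

        # Usage: recover a sparse vector from noisy random measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
        y = A @ x_true + 0.01 * rng.standard_normal(40)
        x_hat = forward_backward_l1(A, y, lam=0.1)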