293 research outputs found

    A Bisognano-Wichmann-like Theorem in a Certain Case of a Non Bifurcate Event Horizon related to an Extreme Reissner-Nordström Black Hole

    Thermal Wightman functions of a massless scalar field are studied within the framework of a "near horizon" static background model of an extremal Reissner-Nordström black hole. This model is built up using global Carter-like coordinates over an infinite set of Bertotti-Robinson submanifolds glued together. Analytical extendibility beyond the horizon is imposed as a constraint on the thermal Wightman functions defined on a Bertotti-Robinson submanifold. It turns out that only the Bertotti-Robinson vacuum state, i.e. T=0, satisfies this requirement. Furthermore, the extension of this state onto the whole manifold is proved to coincide exactly with the vacuum state in the global Carter-like coordinates. Hence a theorem similar to the Bisognano-Wichmann theorem for Minkowski space-time, formulated in terms of Wightman functions, holds with vanishing "Unruh-Rindler temperature". Furthermore, the Carter-like vacuum restricted to a Bertotti-Robinson region, which remains a pure state there, has vanishing entropy despite the presence of event horizons. Some comments on the real extreme Reissner-Nordström black hole are given.
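
    For orientation, two standard formulas behind this setup (a sketch using textbook conventions, not taken from the paper): the near-horizon Bertotti-Robinson form of the extreme Reissner-Nordström metric, and the KMS condition defining a Wightman function at temperature T = 1/β. The result above states that only the β → ∞ (T = 0) case extends analytically across the horizon.

```latex
% Near-horizon limit of the extreme Reissner-Nordstrom metric (M = Q, horizon
% at r = Q): writing r = Q + \epsilon and keeping leading order in \epsilon
% gives the Bertotti-Robinson geometry AdS_2 \times S^2:
ds^2 \simeq -\frac{\epsilon^2}{Q^2}\,dt^2
            + \frac{Q^2}{\epsilon^2}\,d\epsilon^2 + Q^2\,d\Omega^2 .
% KMS (thermality) condition at inverse temperature \beta for the Wightman
% function of a scalar field (units with k_B = \hbar = 1):
W_\beta(t + i\beta,\mathbf{x};\,t',\mathbf{x}') = W_\beta(t',\mathbf{x}';\,t,\mathbf{x}) .
```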

    Characterization of Silicon Drift Detectors with Electrons for the TRISTAN Project

    Sterile neutrinos are a minimal extension of the Standard Model of particle physics. A promising model-independent way to search for them is via high-precision beta spectroscopy. The Karlsruhe Tritium Neutrino (KATRIN) experiment, equipped with a novel multi-pixel silicon drift detector focal-plane array and read-out system, named the TRISTAN detector, has the potential to surpass the sensitivity of previous laboratory-based searches. In this work we present the characterization of the first silicon drift detector prototypes with electrons and investigate the impact of uncertainties in the detector's response to electrons on the final sterile neutrino sensitivity. (18 pages, 8 figures; J. Phys. G: Nucl. Part. Phys. 48 01500)
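
    To illustrate what is at stake, a minimal sketch (our own illustration, not the paper's analysis; Fermi function, final-state effects, and overall constants omitted) of the kink a keV-scale sterile neutrino imprints on the tritium beta spectrum, the spectral feature the detector's electron response must not wash out:

```python
import numpy as np

E0 = 18.57        # tritium beta endpoint energy [keV]
M_E = 511.0       # electron mass [keV]

def beta_spectrum(E, m_nu):
    """Phase-space factor of the tritium beta spectrum for a neutrino
    of mass m_nu [keV]; Fermi function and constants are omitted."""
    eps = E0 - E                              # energy carried by the neutrino
    p = np.sqrt(E**2 + 2.0 * E * M_E)         # electron momentum [keV]
    rate = p * (E + M_E) * eps * np.sqrt(np.clip(eps**2 - m_nu**2, 0.0, None))
    return np.where(eps > m_nu, rate, 0.0)

def spectrum_with_sterile(E, m_s, sin2_theta):
    """Active spectrum plus a sterile branch with mixing sin^2(theta);
    the sterile branch imprints a kink at E = E0 - m_s."""
    return ((1.0 - sin2_theta) * beta_spectrum(E, 0.0)
            + sin2_theta * beta_spectrum(E, m_s))

# Usage: a 10 keV sterile neutrino with sin^2(theta) = 1e-3 (assumed values).
E = np.linspace(0.1, E0, 2000)
model = spectrum_with_sterile(E, m_s=10.0, sin2_theta=1e-3)
```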

    Commissioning of the vacuum system of the KATRIN Main Spectrometer

    The KATRIN experiment will probe the neutrino mass by measuring the beta-electron energy spectrum near the endpoint of tritium beta-decay. An integral energy analysis will be performed by an electrostatic spectrometer (the Main Spectrometer), an ultra-high-vacuum vessel with a length of 23.2 m, a volume of 1240 m^3, and a complex inner electrode system with about 120000 individual parts. The strong magnetic field that guides the beta-electrons is provided by superconducting solenoids at both ends of the spectrometer; its influence on turbomolecular pumps and vacuum gauges had to be considered. A system consisting of 6 turbomolecular pumps and 3 km of non-evaporable getter (NEG) strips has been deployed and was tested during the commissioning of the spectrometer. In this paper the configuration, the commissioning with bake-out at 300 °C, and the performance of this system are presented in detail. The vacuum system has to maintain a pressure in the 10^{-11} mbar range. It is demonstrated that the performance of the system is already close to these stringent functional requirements for the KATRIN experiment, which will start at the end of 2016. (Submitted for publication in JINST; 39 pages, 15 figures)
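
    The quoted 10^{-11} mbar regime follows from a zero-order gas balance, ultimate pressure = outgassing load / effective pumping speed; a minimal sketch with assumed order-of-magnitude numbers, not the paper's actual budget:

```python
# Zero-order vacuum balance for a large baked vessel: p = Q / S,
# total outgassing load divided by effective pumping speed.
# All numbers below are illustrative assumptions, not KATRIN's published values.

AREA = 650.0          # inner surface area of the vessel [m^2] (assumed)
Q_SPECIFIC = 1e-12    # H2 outgassing rate after bakeout [mbar*l/(s*cm^2)] (assumed)
S_NEG = 1e6           # pumping speed of the NEG strips for H2 [l/s] (assumed)
S_TMP = 6 * 2400.0    # six turbomolecular pumps [l/s] (assumed)

q_total = Q_SPECIFIC * AREA * 1e4          # total gas load [mbar*l/s] (m^2 -> cm^2)
p_ultimate = q_total / (S_NEG + S_TMP)     # equilibrium pressure [mbar]
print(f"ultimate pressure ~ {p_ultimate:.1e} mbar")   # ~6e-12 mbar here
```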

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of ℓ^2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
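
    To make item (iii) concrete, a minimal sketch (our illustration, using the ℓ^1 norm as the low-complexity prior) of forward-backward proximal splitting, i.e. ISTA, for min_x 0.5·||Ax − y||^2 + λ·||x||_1:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=500):
    """Forward-backward splitting (ISTA) for
    min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # step <= 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)              # forward (gradient) step, smooth part
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step, l1 prior
    return x

# Usage: recover a sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = forward_backward_l1(A, y, lam=0.1)
```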

    Derivation of consistent hard rock (1000 < Vs < 3000 m/s) GMPEs from surface and down-hole recordings: Analysis of KiK-net data

    A key component in seismic hazard assessment is the estimation of ground motion for hard-rock sites, either for applications to installations built on this site category or as input motion for site response computation. Empirical ground motion prediction equations (GMPEs) are the traditional basis for estimating ground motion, while VS30 is the basis for accounting for site conditions. As current GMPEs are poorly constrained for VS30 larger than 1000 m/s, the approach presently used for estimating hazard on hard-rock sites consists of "host-to-target" adjustment techniques based on VS30 and κ0 values. The present study investigates alternative methods on the basis of a KiK-net dataset corresponding to stiff and rocky sites with 500 < VS30 < 1350 m/s. The existence of sensor pairs (one at the surface and one at depth) and the availability of P- and S-wave velocity profiles allow deriving two "virtual" datasets associated with outcropping hard-rock sites with VS in the range [1000, 3000] m/s, via two independent corrections: (1) down-hole recordings converted from within motion to outcropping motion with a depth correction factor; (2) surface recordings deconvolved from their site-specific response derived through 1D simulation. GMPEs with simple functional forms are then developed, including a VS30 site term. They lead to consistent and robust hard-rock motion estimates, which prove to be significantly lower than host-to-target adjustment predictions. The difference can reach a factor of 3–4 above 5 Hz for very hard rock, but decreases with decreasing frequency until vanishing below 2 Hz.
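
    The abstract does not spell out the functional form; as an illustration, a minimal sketch of the kind of simple GMPE with a VS30 site term it describes (the form, names, and fitting choice here are hypothetical, not taken from the paper):

```python
import numpy as np

def gmpe_mean_ln(M, R, vs30, coeffs):
    """Hypothetical simple GMPE: ln(Y) = c0 + c1*M + c2*ln(R) + c3*R
    + c4*ln(vs30/1000), with Y a ground-motion amplitude, M magnitude,
    R distance [km], and a VS30 site term referenced to 1000 m/s."""
    c0, c1, c2, c3, c4 = coeffs
    return c0 + c1 * M + c2 * np.log(R) + c3 * R + c4 * np.log(vs30 / 1000.0)

def fit_gmpe(M, R, vs30, ln_y):
    """Least-squares fit of the five coefficients to a set of records
    (M, R, vs30, observed ln amplitudes), e.g. a 'virtual' hard-rock dataset."""
    X = np.column_stack([np.ones_like(M), M, np.log(R), R,
                         np.log(vs30 / 1000.0)])
    coeffs, *_ = np.linalg.lstsq(X, ln_y, rcond=None)
    return coeffs
```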

    Genomic and gene expression profiling of minute alterations of chromosome arm 1p in small-cell lung carcinoma cells

    Genetic alterations on human chromosome arm 1p are common in many types of cancer, including lung, breast, and colorectal cancers, neuroblastoma, and pheochromocytoma. The identification of tumour suppressors and oncogenes on this arm has been limited by the low resolution of current technologies for fine mapping. In order to identify genetic alterations on 1p in small-cell lung carcinoma, we developed a new resource for fine-mapping segmental DNA copy number alterations. We constructed an array of 642 ordered and fingerprint-verified bacterial artificial chromosome (BAC) clones spanning the 120 megabase (Mb) 1p arm from 1p11.2 to p36.33. The 1p arm of 15 small-cell lung cancer cell lines was analysed at sub-Mb resolution using this arm-specific array. Among the genetic alterations identified, two regions of recurrent amplification emerged, each detected in at least 45% of the samples: a 580 kb region at 1p34.2–p34.3 and a 270 kb region at 1p11.2. We further assessed the potential importance of these genomic amplifications by analysing the RNA expression of the genes in these regions with Affymetrix oligonucleotide arrays and semiquantitative reverse transcriptase–polymerase chain reaction. Our data revealed overexpression of the genes HEYL, HPCAL4, BMP8, IPT, and RLF, coinciding with genomic amplification.
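
    For readers unfamiliar with array CGH, a minimal sketch (our illustration, not the paper's pipeline) of how segmental gains are called from such a two-channel BAC array: each clone yields a tumour/reference intensity ratio, and clones whose normalized log2 ratio exceeds a threshold are flagged as amplified:

```python
import numpy as np

def call_amplifications(tumour, reference, threshold=0.5):
    """Return a boolean mask of clones called as gained/amplified.
    tumour, reference: per-clone hybridization intensities, ordered along
    the chromosome arm; a threshold of ~0.5 in log2 corresponds to a
    ~1.4x copy-number gain (threshold choice is an assumption)."""
    log2_ratio = np.log2(tumour / reference)
    log2_ratio -= np.median(log2_ratio)     # normalize to the diploid baseline
    return log2_ratio > threshold
```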