980 research outputs found

    The protohistoric briquetage at Puntone (Tuscany, Italy): A multidisciplinary attempt to unravel its age and role in the salt supply of Early States in Tyrrhenian Central Italy

    While the processes involved in the protohistoric briquetage at Puntone (Tuscany, Italy) have been reconstructed in detail, the age of this industry remained uncertain because materials suited for traditional dating (14C dating of charcoal and typological dating of ceramics) were very scarce. We attempted to assess its age by radiocarbon dating organic matter and carbonates in strata directly linked to the industry. Microbial DNA and C isotope analyses showed that the organic matter is dominated by labile organic matter whose age is coeval with the briquetage industry. The carbonates had a complex origin and were overall unsuited for radiocarbon dating: shells in the process residues exhibited a large, uncertain 'marine reservoir effect', hampering their use for dating the industry, and the secondary carbonates in these residues had a quite varied composition, including much more recent carbonate that precipitated from infiltrating lateral run-off, as concluded from C and Sr isotope analyses. The dates deemed reliable (c. 1000–100 cal BCE) show that this ancient industry, which started in the Late Bronze Age to Early Iron Age (1107–841 cal BCE), extended into the Roman Republican period and was contemporary with the saltern-based, larger-scale salt industry in Central Lazio.
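
    For orientation, the conventional radiocarbon ages behind such dates follow the standard Libby relation (a textbook formula, not a result of this study); marine shells additionally appear older by a reservoir age R that must be accounted for before calibration, and it is this poorly constrained offset that makes the shell dates unreliable here:

        $t_{^{14}\mathrm{C}} = -8033\,\ln\!\big(F^{14}\mathrm{C}\big)\ \mathrm{yr\ BP}$

    where $F^{14}\mathrm{C}$ is the measured 14C activity relative to the modern standard and 8033 yr is the Libby mean life.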

    Atom trapping and two-dimensional Bose-Einstein condensates in field-induced adiabatic potentials

    We discuss a method to create two-dimensional traps as well as atomic shell, or bubble, states for a Bose-Einstein condensate initially prepared in a conventional magnetic trap. The scheme relies on time-dependent, radio-frequency-induced adiabatic potentials, which are shown to form a versatile and robust tool for generating novel trapping potentials. Our shell states take the form of thin, highly stable matter-wave bubbles and can serve as stepping stones to prepare atoms in highly excited trap eigenstates or to study 'collapse and revival' phenomena. Their creation requires gravitational effects to be compensated by additional optical dipole potentials. However, in our scheme gravitation can also be exploited to provide a route to two-dimensional atom trapping. We demonstrate the loading process for such a trap and examine the experimental conditions under which a 2D condensate may be prepared. Comment: 16 pages, 10 figures
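
    As background, radio-frequency-dressed adiabatic potentials of the kind used here are commonly written in the following generic textbook form (not an expression quoted from this paper), with the gravitational term $Mgz$ that must be compensated or exploited shown explicitly:

        $V_{\mathrm{ad}}(\mathbf{r}) = \bar m_F\,\hbar\sqrt{\delta(\mathbf{r})^{2} + \Omega_R(\mathbf{r})^{2}} + Mgz, \qquad \hbar\,\delta(\mathbf{r}) = |g_F|\,\mu_B\,|\mathbf{B}(\mathbf{r})| - \hbar\,\omega_{\mathrm{rf}}$

    Here $\bar m_F$ labels the dressed state and $\Omega_R$ is the rf Rabi frequency; the potential minimum lies on the shell where the local Larmor frequency matches $\omega_{\mathrm{rf}}$, which is what produces bubble-shaped traps.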

    Slepton and Neutralino/Chargino Coannihilations in MSSM

    Within the low-energy effective Minimal Supersymmetric extension of the Standard Model (effMSSM), we calculated the neutralino relic density taking into account slepton-neutralino and neutralino-chargino/neutralino coannihilation channels. A comparative study of these channels shows that both give sizable contributions to the reduction of the relic density. Due to these coannihilation processes some models (mostly with large neutralino masses) enter the cosmologically interesting region for the relic density, while other models leave it. Nevertheless, in general, the predictions for direct and indirect dark matter detection rates are not strongly affected by these coannihilation channels in the effMSSM. Comment: 12 pages, 9 figures, revtex
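
    For reference, the standard coannihilation treatment (Griest-Seckel), generic rather than specific to this paper, replaces the neutralino annihilation cross section by a thermally weighted effective one, and the relic density scales inversely with its thermal average:

        $\sigma_{\mathrm{eff}} = \sum_{ij}\sigma_{ij}\,r_i r_j, \qquad r_i = \frac{g_i(1+\Delta_i)^{3/2}e^{-x\Delta_i}}{\sum_k g_k(1+\Delta_k)^{3/2}e^{-x\Delta_k}}, \qquad \Delta_i = \frac{m_i-m_\chi}{m_\chi}, \qquad \Omega_\chi h^2 \approx \frac{3\times10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle\sigma_{\mathrm{eff}}\,v\rangle}$

    Sleptons or charginos nearly degenerate with the neutralino ($\Delta_i \ll 1$) therefore enhance $\sigma_{\mathrm{eff}}$ and reduce $\Omega_\chi h^2$, which is the effect studied here.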

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter reviews recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity or low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problems.
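
    As a concrete illustration of the forward-backward proximal splitting scheme mentioned in point (iii), the sketch below solves the l1-regularized least-squares (Lasso) instance of these low-complexity problems; it is a minimal example written for this summary, not code from the chapter, and the function names and parameters (soft_threshold, forward_backward_l1, lam, n_iter) are illustrative:

        import numpy as np

        def soft_threshold(x, t):
            # Proximal operator of t * ||.||_1 (entrywise soft-thresholding).
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def forward_backward_l1(A, y, lam, n_iter=500):
            # Minimize 0.5*||A x - y||^2 + lam*||x||_1 by forward-backward splitting (ISTA).
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the Lipschitz constant of the gradient
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)                              # forward (gradient) step on the smooth term
                x = soft_threshold(x - step * grad, step * lam)       # backward (proximal) step on the nonsmooth prior
            return x

        # Usage: recover a sparse vector from noisy random measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256))
        x_true = np.zeros(256)
        x_true[rng.choice(256, size=8, replace=False)] = rng.standard_normal(8)
        y = A @ x_true + 0.01 * rng.standard_normal(64)
        x_hat = forward_backward_l1(A, y, lam=0.1)

    Swapping the soft-thresholding step for the proximal operator of another regularizer (group soft-thresholding, total-variation denoising, singular-value thresholding) yields the corresponding group-sparse, piecewise-regular or low-rank recovery schemes discussed in the chapter.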

    Search for the standard model Higgs boson in tau final states

    We present a search for the standard model Higgs boson using hadronically decaying tau leptons, in 1 inverse femtobarn of data collected with the D0 detector at the Fermilab Tevatron ppbar collider. We select two final states: tau plus missing transverse energy and b jets, and tau+ tau- plus jets. These final states are sensitive to a combination of associated W/Z boson plus Higgs boson, vector boson fusion, and gluon-gluon fusion production processes. The observed ratio of the combined limit on the Higgs boson production cross section at the 95% C.L. to the standard model expectation is 29 for a Higgs boson mass of 115 GeV. Comment: publication version
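
    Stated as a formula (an interpretation of the quoted number, not additional information from the paper), the result means that at $m_H = 115$ GeV the observed 95% C.L. cross-section limit sits a factor of 29 above the standard model prediction:

        $R_{95} = \sigma^{\mathrm{obs}}_{95\%\,\mathrm{C.L.}} \,/\, \sigma_{\mathrm{SM}} = 29$

    so only a Higgs boson produced at 29 or more times the standard model rate in these channels would have been excluded by this dataset.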

    Measurement of the p-pbar -> Wgamma + X cross section at sqrt(s) = 1.96 TeV and WWgamma anomalous coupling limits

    The WWgamma triple gauge boson coupling parameters are studied using p-pbar -> l nu gamma + X (l = e, mu) events at sqrt(s) = 1.96 TeV. The data were collected with the D0 detector from an integrated luminosity of 162 pb^{-1} delivered by the Fermilab Tevatron Collider. The cross section times branching fraction for p-pbar -> W(gamma) + X -> l nu gamma + X with E_T^{gamma} > 8 GeV and Delta R_{l gamma} > 0.7 is 14.8 +/- 1.6 (stat) +/- 1.0 (syst) +/- 1.0 (lum) pb. The one-dimensional 95% confidence level limits on anomalous couplings are -0.88 < Delta kappa_{gamma} < 0.96 and -0.20 < lambda_{gamma} < 0.20. Comment: Submitted to Phys. Rev. D Rapid Communications
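
    Assuming the three quoted uncertainties are independent, they combine in quadrature to a single total uncertainty (a simple arithmetic illustration, not a number quoted in the paper):

        $\Delta\sigma_{\mathrm{tot}} = \sqrt{1.6^2 + 1.0^2 + 1.0^2}\ \mathrm{pb} \approx 2.1\ \mathrm{pb}, \qquad \sigma \cdot B \approx 14.8 \pm 2.1\ \mathrm{pb}$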

    Measurement of the ttbar Production Cross Section in ppbar Collisions at sqrt{s} = 1.96 TeV using Kinematic Characteristics of Lepton + Jets Events

    We present a measurement of the top quark pair ttbar production cross section in ppbar collisions at a center-of-mass energy of 1.96 TeV using 230 pb^{-1} of data collected by the D0 detector at the Fermilab Tevatron Collider. We select events with one charged lepton (electron or muon), large missing transverse energy, and at least four jets, and extract the ttbar content of the sample based on the kinematic characteristics of the events. For a top quark mass of 175 GeV, we measure sigma(ttbar) = 6.7 +1.4/-1.3 (stat) +1.6/-1.1 (syst) +/- 0.4 (lumi) pb, in good agreement with the standard model prediction. Comment: submitted to Phys. Rev. Lett.

    Measurement of the ttbar Production Cross Section in ppbar Collisions at sqrt(s)=1.96 TeV using Lepton + Jets Events with Lifetime b-tagging

    We present a measurement of the top quark pair ($t\bar{t}$) production cross section ($\sigma_{t\bar{t}}$) in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV using 230 pb$^{-1}$ of data collected by the D0 experiment at the Fermilab Tevatron Collider. We select events with one charged lepton (electron or muon), missing transverse energy, and jets in the final state. We employ lifetime-based b-jet identification techniques to further enhance the $t\bar{t}$ purity of the selected sample. For a top quark mass of 175 GeV, we measure $\sigma_{t\bar{t}}=8.6^{+1.6}_{-1.5}\,(\mathrm{stat.+syst.})\pm 0.6\,(\mathrm{lumi.})$ pb, in agreement with the standard model expectation. Comment: 7 pages, 2 figures, 3 tables. Submitted to Phys. Rev. Lett.

    Search for W' bosons decaying to an electron and a neutrino with the D0 detector

    This Letter describes a search for a new heavy charged gauge boson W' decaying into an electron and a neutrino. The data were collected with the D0 detector at the Fermilab Tevatron proton-antiproton collider at a center-of-mass energy of 1.96 TeV, and correspond to an integrated luminosity of about 1 inverse femtobarn. Finding no significant excess in the data over known processes, we set an upper limit on the production cross section times branching fraction; a W' boson with mass below 1.00 TeV is excluded at the 95% C.L., assuming standard-model-like couplings to fermions. This result significantly improves upon previous limits and is the most stringent to date. Comment: submitted to Phys. Rev. Lett.