Convex recovery of a structured signal from independent random linear measurements
This chapter develops a theoretical analysis of the convex programming method
for recovering a structured signal from independent random linear measurements.
This technique delivers bounds for the sampling complexity that are comparable
to recent results for standard Gaussian measurements, but the argument
applies to a much wider class of measurement ensembles. To demonstrate the
power of this approach, the paper presents a short analysis of phase retrieval
by trace-norm minimization. The key technical tool is a framework, due to
Mendelson and coauthors, for bounding a nonnegative empirical process.
Comment: 18 pages, 1 figure. To appear in "Sampling Theory, a Renaissance."
v2: minor corrections. v3: updated citations and increased emphasis on
Mendelson's contribution
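The phase-retrieval application mentioned above can be illustrated numerically. The following is a minimal sketch, not the chapter's own construction: the signal is lifted to a rank-one matrix X = x x^T, the phaseless measurements become linear in X, and a trace-norm-penalized least-squares problem over the PSD cone is solved by proximal gradient. All dimensions, the step-size rule, and the regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 80                       # signal dimension, number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))    # rows a_i: independent Gaussian measurement vectors
b = (A @ x_true) ** 2              # phaseless data  b_i = <a_i, x>^2

# Lift x to X = x x^T and solve the convex program
#   minimize  (1/2m) * sum_i (a_i^T X a_i - b_i)^2 + lam * trace(X)
#   subject to X positive semidefinite,
# by proximal gradient; the prox of the trace norm over the PSD cone is
# soft-thresholding of the eigenvalues.
lam = 1e-3
step = m / np.sum(np.linalg.norm(A, axis=1) ** 4)   # conservative 1/L step
X = (A.T * b) @ A / m              # spectral initialization (1/m) sum_i b_i a_i a_i^T

for _ in range(1000):
    r = np.einsum("ij,jk,ik->i", A, X, A) - b       # residuals a_i^T X a_i - b_i
    G = (A.T * r) @ A / m                           # gradient of the data term
    w, V = np.linalg.eigh(X - step * G)             # forward step, then eigen-prox
    X = (V * np.maximum(w - step * lam, 0.0)) @ V.T

# A rank-one factor of X recovers x up to a global sign.
w, V = np.linalg.eigh(X)
x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
corr = abs(x_hat @ x_true) / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
print(f"alignment |<x_hat, x_true>| = {corr:.3f}")
```

The global sign is unrecoverable from magnitude-only data, so the sketch reports alignment rather than a signed error.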
A statistical interpretation of the correlation between intermediate mass fragment multiplicity and transverse energy
Multifragment emission following Xe+Au collisions at 30, 40, 50 and 60 AMeV
has been studied with multidetector systems covering nearly 4-pi in solid
angle. The correlations of both the intermediate mass fragment and light
charged particle multiplicities with the transverse energy are explored. A
comparison is made with results from a similar system, Xe+Bi at 28 AMeV. The
experimental trends are compared to statistical model predictions.
Comment: 7 pages, submitted to Phys. Rev.
Isotopic composition of fragments in multifragmentation of very large nuclear systems: effects of the chemical equilibrium
Studies on the isospin of fragments resulting from the disassembly of highly
excited large thermal-like nuclear emitting sources, formed in the ^{197}Au +
^{197}Au reaction at 35 MeV/nucleon beam energy, are presented. Two different
decay systems (the quasiprojectile formed in midperipheral reactions and the
unique source coming from the incomplete fusion of projectile and target in the
most central collisions) were considered; these emitting sources have the same
initial N/Z ratio and excitation energy (E^* ~= 5--6 MeV/nucleon), but
different size. Their charge yields and isotopic content of the fragments show
different distributions. It is observed that the neutron content of
intermediate mass fragments increases with the size of the source. This
evidence is consistent with chemical equilibrium being reached in the systems,
a conclusion confirmed by analysis with the statistical multifragmentation
model.
Comment: 9 pages, 4 ps figures
Relation Between Chiral Susceptibility and Solutions of Gap Equation in Nambu--Jona-Lasinio Model
We study the solutions of the gap equation, the thermodynamic potential and
the chiral susceptibility in and beyond the chiral limit at finite chemical
potential in the Nambu--Jona-Lasinio (NJL) model. We give an explicit relation
between the chiral susceptibility and the thermodynamic potential in the NJL
model. We find that the chiral susceptibility is a quantity able to signal
the bifurcation of the solutions of the gap equation and the
concavity or convexity of the thermodynamic potential in the NJL model. This
indicates that the chiral susceptibility can identify the stable state and the
possibility of a chiral phase transition in the NJL model.
Comment: 21 pages, 6 figures, misprints are corrected
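The relation invoked above can be sketched from textbook definitions (conventions and overall signs differ between references, so this is an assumption rather than the paper's exact formula): with current quark mass m and thermodynamic potential Omega,

```latex
\chi \;=\; \frac{\partial \langle \bar{q} q \rangle}{\partial m},
\qquad
\langle \bar{q} q \rangle \;=\; \frac{\partial \Omega}{\partial m}
\quad\Longrightarrow\quad
\chi \;=\; \frac{\partial^{2} \Omega}{\partial m^{2}} .
```

A divergent or sign-changing susceptibility thus flags an inflection of Omega, i.e. the boundary between its convex (stable) and concave (unstable) branches, which is how chi can distinguish among the solutions of the gap equation.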
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of stability bounds and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
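The forward-backward scheme mentioned in (iii) can be sketched on the simplest low-complexity prior, the l1 (sparsity) regularizer, where the backward (prox) step is soft-thresholding. The problem sizes, regularization weight, and iteration count below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 50, 100, 5                # measurements, dimension, sparsity (illustrative)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((n, p)) / np.sqrt(n)
y = Phi @ x_true                    # noiseless indirect measurements

# Forward-backward splitting for  min 0.5*||y - Phi x||^2 + lam*||x||_1 :
# a gradient (forward) step on the smooth data-fit term, followed by the
# prox (backward) step of the l1 norm, i.e. soft-thresholding.
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # 1/L with L the gradient's Lipschitz constant
x = np.zeros(p)
for _ in range(2000):
    x = x - step * Phi.T @ (Phi @ x - y)                       # forward step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # backward step

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```

The same template covers the other priors in the review by swapping the prox: group soft-thresholding for group sparsity, singular-value thresholding for the nuclear norm.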
Stream and slope weathering effects on organic-rich mudstone geochemistry and implications for hydrocarbon source rock assessment: a Bowland Shale case study
This study contributes to the exploration and quantification of the weathering of organic-rich mudstones under temperate climatic conditions. Bowland Shales, exposed by a stream and slope, were sampled in order to develop a model for the effects of weathering on the mudstone geochemistry, including major and trace element geochemistry, Rock-Eval pyrolysis and δ13Corg. Four weathering grades (I-IV) are defined using a visual classification scheme: visually fresh and unaltered (I), chemically altered (II, III) and "paper shale" that typifies weathered mudstone on slopes (IV). Bedload abrasion in the stream exposes visually fresh and geochemically unaltered mudstone. Natural fractures are conduits for oxidising meteoric waters that promote leaching at the millimetre scale and/or precipitation of iron oxide coatings along fracture surfaces. On the slope, bedding-parallel fractures formed (and may continue to form) in response to chemical and/or physical weathering processes. These fractures develop along planes of weakness, typically along laminae comprising detrital grains, and exhibit millimetre- and centimetre-scale leached layers and iron oxide coatings. Fracture surfaces are progressively exposed to physical weathering processes towards the outcrop surface, which results in disintegration of the altered material along fracture surfaces. Grade IV "paper shale" mudstone is chemically unaltered but represents a biased record driven by initial heterogeneity in the sedimentary fabric. Chemically weathered outcrop samples exhibit lower concentrations of both "free" (S1) (by up to 0.6 mgHC/g rock) and "bound" (S2) (by up to 3.2 mgHC/g rock) hydrocarbons, reduced total organic carbon content (by up to 0.34 wt%), reduced hydrogen index (by up to 58 mgHC/gTOC), increased oxygen index (by up to 19 mg(CO+CO2)/gTOC) and increased Tmax (by up to 11 °C) compared with unaltered samples.
If analysis of chemically weathered samples is unavoidable, back-extrapolation of Rock-Eval parameters can assist in the estimation of pre-weathering organic compositions. Combining Cs/Cu with oxygen index provides a proxy for identifying the weathering progression from fresh material (I) to "paper shale" (IV). This study demonstrates that outcrop samples in temperate climates can provide information for assessing the hydrocarbon potential of organic-rich mudstones.
Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector
A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb⁻¹ of proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.
Jet size dependence of single jet suppression in lead-lead collisions at sqrt(s(NN)) = 2.76 TeV with the ATLAS detector at the LHC
Measurements of inclusive jet suppression in heavy ion collisions at the LHC
provide direct sensitivity to the physics of jet quenching. In a sample of
lead-lead collisions at sqrt(s(NN)) = 2.76 TeV corresponding to an integrated
luminosity of approximately 7 inverse microbarns, ATLAS has measured jets with
a calorimeter over the pseudorapidity interval |eta| < 2.1 and over the
transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the
anti-kt algorithm with values for the distance parameter that determines the
nominal jet radius of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of
the jet yield is characterized by the jet "central-to-peripheral ratio," Rcp.
Jet production is found to be suppressed by approximately a factor of two in
the 10% most central collisions relative to peripheral collisions. Rcp varies
smoothly with centrality as characterized by the number of participating
nucleons. The observed suppression is only weakly dependent on jet radius and
transverse momentum. These results provide the first direct measurement of
inclusive jet suppression in heavy ion collisions and complement previous
measurements of dijet transverse energy imbalance at the LHC.
Comment: 15 pages plus author list (30 pages total), 8 figures, 2 tables,
submitted to Physics Letters B. All figures including auxiliary figures are
available at
http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HION-2011-02
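The central-to-peripheral ratio quoted above is conventionally built from per-event jet yields scaled by the mean number of binary nucleon-nucleon collisions (Ncoll) in each centrality class. A minimal sketch of that definition follows; the numbers fed in are purely illustrative placeholders, not ATLAS data.

```python
def r_cp(yield_central, ncoll_central, yield_peripheral, ncoll_peripheral):
    """Central-to-peripheral ratio Rcp: per-event jet yields, each scaled
    by the mean number of binary nucleon-nucleon collisions (Ncoll)."""
    return (yield_central / ncoll_central) / (yield_peripheral / ncoll_peripheral)

# Illustrative numbers only: an Rcp near 0.5 corresponds to the
# factor-of-two suppression reported for the most central collisions.
print(r_cp(yield_central=1700.0, ncoll_central=1683.0,
           yield_peripheral=2.0, ncoll_peripheral=1.0))
```

The Ncoll scaling is what makes Rcp sensitive to medium effects: in the absence of jet quenching, hard-process yields scale with Ncoll and Rcp would be unity.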
Search for the standard model Higgs boson decaying into two photons in pp collisions at sqrt(s)=7 TeV
A search for a Higgs boson decaying into two photons is described. The
analysis is performed using a dataset recorded by the CMS experiment at the LHC
from pp collisions at a centre-of-mass energy of 7 TeV, which corresponds to an
integrated luminosity of 4.8 inverse femtobarns. Limits are set on the cross
section of the standard model Higgs boson decaying to two photons. The expected
exclusion limit at 95% confidence level is between 1.4 and 2.4 times the
standard model cross section in the mass range between 110 and 150 GeV. The
analysis of the data excludes, at 95% confidence level, the standard model
Higgs boson decaying into two photons in the mass range 128 to 132 GeV. The
largest excess of events above the expected standard model background is
observed for a Higgs boson mass hypothesis of 124 GeV with a local significance
of 3.1 sigma. The global significance of observing an excess with a local
significance greater than 3.1 sigma anywhere in the search range 110-150 GeV is
estimated to be 1.8 sigma. More data are required to ascertain the origin of
this excess.
Comment: Submitted to Physics Letters
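The local and global significances quoted above translate into one-sided Gaussian tail probabilities via the standard conversion; the reduction from local to global is the look-elsewhere effect over the 110-150 GeV search range. This small sketch shows only that textbook conversion, not the experiment's statistical machinery.

```python
from math import erfc, sqrt

def one_sided_p(z_sigma):
    """One-sided Gaussian tail probability for a significance of z sigma."""
    return 0.5 * erfc(z_sigma / sqrt(2.0))

# 3.1 sigma local and 1.8 sigma global, as quoted in the abstract:
print(f"local  p ~ {one_sided_p(3.1):.2e}")   # roughly 1e-3
print(f"global p ~ {one_sided_p(1.8):.2e}")   # a few percent
```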
Search for new physics in events with opposite-sign leptons, jets, and missing transverse energy in pp collisions at sqrt(s) = 7 TeV
A search is presented for physics beyond the standard model (BSM) in final
states with a pair of opposite-sign isolated leptons accompanied by jets and
missing transverse energy. The search uses LHC data recorded at a
center-of-mass energy sqrt(s) = 7 TeV with the CMS detector, corresponding to
an integrated luminosity of approximately 5 inverse femtobarns. Two
complementary search strategies are employed. The first probes models with a
specific dilepton production mechanism that leads to a characteristic kinematic
edge in the dilepton mass distribution. The second strategy probes models of
dilepton production with heavy, colored objects that decay to final states
including invisible particles, leading to very large hadronic activity and
missing transverse energy. No evidence for an event yield in excess of the
standard model expectations is found. Upper limits on the BSM contributions to
the signal regions are deduced from the results, which are used to exclude a
region of the parameter space of the constrained minimal supersymmetric
extension of the standard model. Additional information related to detector
efficiencies and response is provided to allow testing of specific models of BSM
physics not considered in this paper.
Comment: Replaced with published version. Added journal reference and DOI