Complication prevalence following use of tutoplast-derived human acellular dermal matrix in prosthetic breast reconstruction: A retrospective review of 203 patients
Summary: Use of human acellular dermal matrix (ADM) during prosthetic breast reconstruction has increased. Several ADM products, produced by differing manufacturing techniques, are available, and it is not known whether outcomes vary between products. This study reports the complication prevalence following use of a tutoplast-derived ADM (T-ADM) in prosthetic breast reconstruction. We performed a retrospective chart review of 203 patients (mean follow-up time 12.2 months) who underwent mastectomy and immediate prosthetic breast reconstruction using T-ADM, recording demographic data, surgical indications and complications (infection, seroma, hematoma, wound healing exceeding three weeks and reconstruction failure). During a four-year period, 348 breast reconstructions were performed. Complications occurred in 16.4% of reconstructed breasts. Infection occurred in 6.6% of breast reconstructions (3.7% major infection, requiring intravenous antibiotics, and 2.9% minor infection, requiring oral antibiotics only). Seromas occurred in 3.4% and reconstruction failure in 0.6% of breast reconstructions. Analysis suggested that complication prevalence was significantly higher in patients with a BMI >30 (p = 0.03). The complication profile following T-ADM use in this series is comparable to that reported for other ADM products. T-ADM appears to be a safe and acceptable option for use in ADM-assisted breast reconstruction.
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
This paper develops a general framework for solving a variety of convex cone
problems that frequently arise in signal processing, machine learning,
statistics, and other fields. The approach works as follows: first, determine a
conic formulation of the problem; second, determine its dual; third, apply
smoothing; and fourth, solve using an optimal first-order method. A merit of
this approach is its flexibility: for example, all compressed sensing problems
can be solved via this approach. These include models with objective
functionals such as the total-variation norm, ||Wx||_1 where W is arbitrary, or
a combination thereof. In addition, the paper introduces a number of
technical contributions such as a novel continuation scheme, a novel approach
for controlling the step size, and some new results showing that the smooth and
unsmoothed problems are sometimes formally equivalent. Combined with our
framework, these lead to novel, stable and computationally efficient
algorithms. For instance, our general implementation is competitive with
state-of-the-art methods for solving intensively studied problems such as the
LASSO. Further, numerical experiments show that one can solve the Dantzig
selector problem, for which no efficient large-scale solvers exist, in a few
hundred iterations. Finally, the paper is accompanied by a software release.
This software is not a single, monolithic solver; rather, it is a suite of
programs and routines designed to serve as building blocks for constructing
complete algorithms.
Comment: The TFOCS software is available at http://tfocs.stanford.edu. This version has updated references.
The clinical profile of moderate amblyopia in children younger than 7 years
Objective To describe the demographic and clinical characteristics of a cohort of children with moderate amblyopia participating in the Amblyopia Treatment Study 1, a randomized trial comparing atropine and patching. Methods The children enrolled were younger than 7 years and had strabismic, anisometropic, or combined strabismic and anisometropic amblyopia. Visual acuity, measured with a standardized testing protocol using single-surround HOTV optotypes, was 20/40 to 20/100 in the amblyopic eye, with an intereye acuity difference of 3 or more logMAR lines. There were 419 children enrolled, 409 of whom met these criteria and were included in the analyses. Results The mean age of the 409 children was 5.3 years. The cause of the amblyopia was strabismus in 38%, anisometropia in 37%, and both strabismus and anisometropia in 24%. The mean visual acuity of the amblyopic eyes (approximately 20/60) was similar among the strabismic, anisometropic, and combined groups (P = .24), but visual acuity of the sound eyes was worse in the strabismic group compared with the anisometropic group (P<.001). For the patients randomized into the patching group, 43% were initially treated for 6 hours per day, whereas 17% underwent full-time patching. Patients with poorer visual acuity in the amblyopic eye were prescribed more hours of patching than patients with better acuity (P = .003). Conclusions In the Amblyopia Treatment Study 1, there were nearly equal proportions of patients with strabismic and anisometropic amblyopia. A similar level of visual impairment was found irrespective of the cause of amblyopia. There was considerable variation in treatment practices with regard to the number of hours of initial patching prescribed
Impact of Patching and Atropine Treatment on the Child and Family in the Amblyopia Treatment Study
Objective To assess the psychosocial impact on the child and family of patching and atropine as treatments for moderate amblyopia in children younger than 7 years. Methods In a randomized, controlled clinical trial, 419 children younger than 7 years with amblyopic eye visual acuity in the range of 20/40 to 20/100 were assigned to receive treatment with either patching or atropine at 47 clinical sites. After 5 weeks of treatment, a parental quality-of-life questionnaire was completed for 364 (87%) of the 419 patients. Main Outcome Measure Overall and subscale scores on the Amblyopia Treatment Index. Results High internal validity and reliability were demonstrated for the Amblyopia Treatment Index questionnaire. The overall Amblyopia Treatment Index scores and the 3 subscale scores were consistently higher (worse) in the patching group compared with the atropine-treated group (overall mean, 2.52 vs 2.02, P<.001; adverse effects of treatment: mean, 2.35 vs 2.11, P = .002; difficulty with compliance: mean, 2.46 vs 1.99, P<.001; and social stigma: mean, 3.09 vs 1.84, P<.001, respectively). Conclusion Although the Amblyopia Treatment Index questionnaire results indicated that both atropine and patching treatments were well tolerated by the child and family, atropine received more favorable scores overall and on all 3 questionnaire subscales
Prox-regularity of rank constraint sets and implications for algorithms
We present an analysis of sets of matrices with rank less than or equal to a
specified number k. We provide a simple formula for the normal cone to such
sets, and use this to show that these sets are prox-regular at all points with
rank exactly equal to k. The normal cone formula appears to be new. This
allows for easy application of prior results guaranteeing local linear
convergence of the fundamental alternating projection algorithm between sets,
one of which is a rank constraint set. We apply this to show local linear
convergence of another fundamental algorithm, approximate steepest descent. Our
results apply not only to linear systems with rank constraints, as has been
treated extensively in the literature, but also to nonconvex systems with rank
constraints.
Comment: 12 pages, 24 references. Revised manuscript to appear in the Journal of Mathematical Imaging and Vision
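The alternating projection algorithm discussed above is easy to state concretely. The sketch below (an illustration under assumptions of my choosing, not code from the paper) alternates the metric projection onto the rank-k set, computed by truncated SVD, with the projection onto an affine set of matrices agreeing with observed entries, a matrix-completion-style instance:

```python
import numpy as np

def project_rank(X, k):
    """Metric projection onto {X : rank(X) <= k} via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[k:] = 0.0
    return (U * s) @ Vt

def project_affine(X, M, mask):
    """Projection onto the affine set {X : X[mask] = M[mask]}."""
    Y = X.copy()
    Y[mask] = M[mask]
    return Y

# Alternating projections for a small matrix-completion-style problem.
rng = np.random.default_rng(1)
k = 2
M = rng.standard_normal((20, k)) @ rng.standard_normal((k, 20))  # rank-2 target
mask = rng.random((20, 20)) < 0.6                                # observed entries
X = np.zeros((20, 20))
for _ in range(300):
    X = project_rank(project_affine(X, M, mask), k)
```

The local linear convergence guaranteed by the prox-regularity analysis is exactly what one observes empirically for such iterations near a solution.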
Incremental proximal methods for large scale convex optimization
Laboratory for Information and Decision Systems Report LIDS-P-2847
We consider the minimization of a sum ∑_{i=1}^m f_i(x) consisting of a large
number of convex component functions f_i. For this problem, incremental methods
consisting of gradient or subgradient iterations applied to single components have
proved very effective. We propose new incremental methods, consisting of proximal
iterations applied to single components, as well as combinations of gradient, subgradient,
and proximal iterations. We provide a convergence and rate of convergence
analysis of a variety of such methods, including some that involve randomization in
the selection of components. We also discuss applications in a few contexts, including
signal processing and inference/machine learning.
United States. Air Force Office of Scientific Research (grant FA9550-10-1-0412)
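For quadratic components the single-component proximal step has a closed form, which makes the randomized incremental proximal idea easy to sketch. The example below (my illustration, with function names and step size chosen for the demo) minimizes ∑_i 0.5*(a_i·x - b_i)^2 by applying the exact prox of one randomly selected component per iteration:

```python
import numpy as np

def incremental_proximal_lsq(A, b, alpha=1.0, n_epochs=50, seed=0):
    """Minimize sum_i 0.5*(a_i.x - b_i)^2 by incremental proximal iterations.

    Each step applies the exact proximal operator of one component
    f_i(x) = 0.5*(a_i.x - b_i)^2, which has the closed form
        x+ = x - alpha * a_i * (a_i.x - b_i) / (1 + alpha*||a_i||^2).
    Components are sampled uniformly at random (the randomized variant).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_epochs * m):
        i = rng.integers(m)
        a, r = A[i], A[i] @ x - b[i]
        x = x - alpha * a * r / (1.0 + alpha * (a @ a))
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true                      # consistent system: the optimum is x_true
x_hat = incremental_proximal_lsq(A, b)
```

Unlike an incremental gradient step, the proximal step is unconditionally stable in the step size alpha, which is one of the attractions of the proximal variants analyzed in the paper.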
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solve the
corresponding large-scale regularized optimization problem.
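Each low-complexity prior named above comes with a proximal operator that plugs directly into the forward-backward scheme x_{k+1} = prox_{γJ}(x_k - γ∇F(x_k)). As a minimal sketch (my own illustration, not code from the chapter), here are the three standard proxes for sparsity, group sparsity, and low rank:

```python
import numpy as np

def prox_l1(x, t):
    """Prox of t*||x||_1: entrywise soft-thresholding (sparsity prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_group_l1(x, t, groups):
    """Prox of t*sum_g ||x_g||_2: block soft-thresholding (group sparsity)."""
    y = np.zeros_like(x)
    for g in groups:
        ng = np.linalg.norm(x[g])
        if ng > t:
            y[g] = (1.0 - t / ng) * x[g]   # shrink the whole group toward zero
    return y

def prox_nuclear(X, t):
    """Prox of t*||X||_*: soft-thresholding of singular values (low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt
```

All three are instances of the same pattern: shrink toward the low-dimensional model manifold (sparse supports, group supports, low-rank matrices) that partial smoothness encodes.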
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at √s_NN = 2.76 TeV
The elliptic, v_2, triangular, v_3, and quadrangular, v_4, azimuthal
anisotropic flow coefficients are measured for unidentified charged particles,
pions and (anti-)protons in Pb-Pb collisions at √s_NN = 2.76 TeV
with the ALICE detector at the Large Hadron Collider. Results obtained with the
event plane and four-particle cumulant methods are reported for the
pseudo-rapidity range |η| < 0.8 at different collision centralities and as a
function of transverse momentum, p_T, out to p_T = 20 GeV/c.
The observed non-zero elliptic and triangular flow depends only weakly on
transverse momentum for p_T > 8 GeV/c. The small p_T dependence
of the difference between elliptic flow results obtained from the event plane
and four-particle cumulant methods suggests a common origin of flow
fluctuations up to p_T = 8 GeV/c. The magnitude of the (anti-)proton
elliptic and triangular flow is larger than that of pions out to at least
p_T = 8 GeV/c, indicating that the particle type dependence persists out
to high p_T.
Comment: 16 pages, 5 captioned figures, authors from page 11, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/186
Centrality dependence of charged particle production at large transverse momentum in Pb-Pb collisions at √s_NN = 2.76 TeV
The inclusive transverse momentum (p_T) distributions of primary
charged particles are measured in the pseudo-rapidity range |η| < 0.8 as a
function of event centrality in Pb-Pb collisions at √s_NN = 2.76
TeV with ALICE at the LHC. The data are presented in the range
0.15 < p_T < 50 GeV/c for nine centrality intervals from 70-80% to 0-5%.
The Pb-Pb spectra are presented in terms of the nuclear modification factor
R_AA using a pp reference spectrum measured at the same collision
energy. We observe that the suppression of high-p_T particles strongly
depends on event centrality. In central collisions (0-5%) the yield is most
suppressed with R_AA ≈ 0.13 at p_T = 6-7 GeV/c. Above p_T = 7
GeV/c, there is a significant rise in the nuclear modification
factor, which reaches R_AA ≈ 0.4 for p_T > 30 GeV/c. In
peripheral collisions (70-80%), the suppression is weaker with R_AA ≈ 0.7
almost independently of p_T. The measured nuclear
modification factors are compared to other measurements and model calculations.
Comment: 17 pages, 4 captioned figures, 2 tables, authors from page 12, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/284
Measurement of charm production at central rapidity in proton-proton collisions at √s = 2.76 TeV
The p_T-differential production cross sections of the prompt (B
feed-down subtracted) charmed mesons D^0, D^+, and D*^+ in the rapidity
range |y| < 0.5, and for transverse momentum 1 < p_T < 12 GeV/c, were
measured in proton-proton collisions at √s = 2.76 TeV with the ALICE
detector at the Large Hadron Collider. The analysis exploited the hadronic
decays D^0 → K^-π^+, D^+ → K^-π^+π^+, D*^+ → D^0π^+, and their charge
conjugates, and was performed on a L_int = 1.1 nb^-1 event sample collected in 2011 with a
minimum-bias trigger. The total charm production cross section at √s = 2.76
TeV and at 7 TeV was evaluated by extrapolating to the full phase space
the p_T-differential production cross sections at √s = 2.76 TeV
and our previous measurements at √s = 7 TeV. The results were compared
to existing measurements and to perturbative-QCD calculations. The fraction of
cd̄ D mesons produced in a vector state was also determined.
Comment: 20 pages, 5 captioned figures, 4 tables, authors from page 15, published version, figures at http://aliceinfo.cern.ch/ArtSubmission/node/307