226 research outputs found

    International consensus recommendations on key outcome measures for organ preservation after (chemo)radiotherapy in patients with rectal cancer

    Multimodal treatment strategies for patients with rectal cancer increasingly include the possibility of organ preservation, through nonoperative management or local excision. Organ preservation strategies can enable patients with a complete or near-complete clinical response after radiotherapy, with or without concomitant chemotherapy, to safely avoid the morbidities associated with radical surgery and thus to maintain anorectal function and quality of life. However, standardization of the key outcome measures of organ preservation strategies is currently lacking; this includes a lack of consensus on the optimal definitions and selection of primary end points according to trial phase and design; the optimal time points for response assessment; response-based decision-making; follow-up schedules; use of specific anorectal function tests; and quality of life and patient-reported outcomes. Thus, a consensus statement on outcome measures is necessary to ensure consistency and facilitate more accurate comparisons of data from ongoing and future trials. Here, we have convened an international group of experts with extensive experience in the management of patients with rectal cancer, including organ preservation approaches, and used a Delphi process to establish the first international consensus recommendations for key outcome measures of organ preservation, in an attempt to standardize the reporting of data from both trials and routine practice in this emerging area.

    Patients with early-stage rectal cancer might benefit from treatment with an organ-sparing approach, which preserves quality of life by avoiding the need for a permanent colostomy. Trials conducted to investigate this approach have so far been hampered by considerable inter-trial heterogeneity in several key features. In this Consensus Statement, the authors provide guidance on the optimal end points, response assessment time points, follow-up procedures and quality of life measures in an attempt to improve the comparability of clinical research in this area.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field in which the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all of the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of ℓ^2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
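
    As a concrete, minimal illustration of the kind of scheme covered in point (iii), the sketch below applies forward-backward proximal splitting (ISTA) to the ℓ^1-regularized least-squares problem min_x (1/2)||y - Ax||_2^2 + λ||x||_1; the random operator, noise level, regularization weight and iteration count are illustrative choices, not taken from the chapter.

```python
# Illustrative sketch: forward-backward splitting (ISTA) for the l1-regularized
# inverse problem  min_x 0.5*||y - A x||_2^2 + lam*||x||_1.
# Problem sizes, operator A and regularization weight are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 64, 256, 8                     # measurements, signal size, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)   # partial, noisy measurements

lam = 0.05                                # regularization weight (hand-tuned here)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size <= 1 / Lipschitz constant of the gradient

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(p)
for _ in range(500):
    grad = A.T @ (A @ x - y)                          # forward (gradient) step on the data term
    x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the prior

print("support recovered:", np.nonzero(np.abs(x) > 1e-3)[0])
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

    Replacing the soft-thresholding step by the proximal operator of another low-complexity prior (group sparsity, total variation, nuclear norm) yields the corresponding regularized scheme.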

    Quantum walks: a comprehensive review

    Quantum walks, the quantum-mechanical counterpart of classical random walks, are an advanced tool for building quantum algorithms and have recently been shown to constitute a universal model of quantum computation. Quantum walks are now a solid field of research in quantum computation, full of exciting open problems for physicists, computer scientists, mathematicians and engineers. In this paper we review theoretical advances on the foundations of both discrete- and continuous-time quantum walks, together with the role that randomness plays in quantum walks, the connections between the mathematical models of coined discrete quantum walks and continuous quantum walks, the quantumness of quantum walks, a summary of papers published on discrete quantum walks and entanglement, as well as a succinct review of experimental proposals and realizations of discrete-time quantum walks. Furthermore, we review several algorithms based on both discrete- and continuous-time quantum walks, as well as a most important result: the computational universality of both continuous- and discrete-time quantum walks. Comment: Paper accepted for publication in Quantum Information Processing Journal
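
    As a concrete illustration of the discrete-time coined model reviewed here, the sketch below simulates a Hadamard walk on the integer line starting from the origin with a symmetric initial coin state; the number of steps and the initial state are arbitrary choices made for illustration.

```python
# Illustrative sketch: discrete-time coined quantum walk (Hadamard walk) on a line.
# The state is psi[position, coin]; one step = coin flip, then coin-conditioned shift.
import numpy as np

steps = 100
n_pos = 2 * steps + 1                          # positions -steps .. +steps
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

psi = np.zeros((n_pos, 2), dtype=complex)
psi[steps, 0] = 1 / np.sqrt(2)                 # symmetric initial coin state
psi[steps, 1] = 1j / np.sqrt(2)                # (|0> + i|1>)/sqrt(2) at the origin

for _ in range(steps):
    psi = psi @ H.T                            # apply the coin at every position
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]               # coin |0>: move right
    shifted[:-1, 1] = psi[1:, 1]               # coin |1>: move left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)          # position distribution
positions = np.arange(n_pos) - steps
print("total probability:", prob.sum())        # stays 1 (unitarity)
print("standard deviation:", np.sqrt((positions ** 2 * prob).sum()))
```

    The characteristic feature visible in this simulation is ballistic spreading: the standard deviation grows linearly with the number of steps, in contrast to the square-root scaling of the classical random walk.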

    Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector

    A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb−1 of proton–proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.
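
    The 95% confidence-level limits quoted here come from the full ATLAS statistical machinery; as a much simpler illustration of the underlying idea, the sketch below computes a CLs-style upper limit on the signal yield for a single counting experiment with a known expected background and no systematic uncertainties. The observed and background counts are made-up numbers, not values from the paper.

```python
# Illustrative sketch: CLs upper limit on a signal yield for a one-bin counting
# experiment with known background and no systematic uncertainties.
# n_obs and b are hypothetical numbers, not taken from the ATLAS analysis.
from scipy.stats import poisson
from scipy.optimize import brentq

n_obs = 4      # observed events in the signal region (hypothetical)
b = 3.2        # expected Standard Model background (hypothetical)

def cls(s):
    """CLs = CL_{s+b} / CL_b for the observed count n_obs."""
    cl_sb = poisson.cdf(n_obs, s + b)   # p-value of the signal+background hypothesis
    cl_b = poisson.cdf(n_obs, b)        # p-value of the background-only hypothesis
    return cl_sb / cl_b

# 95% CL upper limit: the signal yield at which CLs drops to 0.05.
s_up = brentq(lambda s: cls(s) - 0.05, 0.0, 100.0)
print(f"95% CL upper limit on the signal yield: {s_up:.2f} events")
```

    Dividing such a limit on the event yield by the integrated luminosity and the signal efficiency converts it into a limit on the visible cross-section, which can then be compared with model predictions.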

    Search for displaced vertices arising from decays of new heavy particles in 7 TeV pp collisions at ATLAS

    We present the results of a search for new, heavy particles that decay at a significant distance from their production point into a final state containing charged hadrons in association with a high-momentum muon. The search is conducted in a pp-collision data sample with a center-of-mass energy of 7 TeV and an integrated luminosity of 33 pb^-1 collected in 2010 by the ATLAS detector operating at the Large Hadron Collider. Production of such particles is expected in various scenarios of physics beyond the standard model. We observe no signal and place limits on the production cross-section of supersymmetric particles in an R-parity-violating scenario as a function of the neutralino lifetime. Limits are presented for different squark and neutralino masses, enabling extension of the limits to a variety of other models. Comment: 8 pages plus author list (20 pages total), 8 figures, 1 table, final version to appear in Physics Letters
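
    To illustrate why the limits depend on the assumed lifetime, the sketch below estimates the fraction of long-lived particles whose decay falls inside a radial fiducial region, as a function of the proper decay length. The boost and the fiducial radii are invented numbers, not parameters of the ATLAS analysis.

```python
# Illustrative sketch: fraction of long-lived particles decaying inside a radial
# fiducial region, as a function of proper lifetime (expressed as c*tau).
# The boost and the fiducial radii are hypothetical values.
import math

def decay_fraction(ctau_mm, beta_gamma, r_min_mm, r_max_mm):
    """P(r_min < decay length < r_max) for a mean lab-frame decay length beta*gamma*c*tau."""
    lab_length = beta_gamma * ctau_mm
    return math.exp(-r_min_mm / lab_length) - math.exp(-r_max_mm / lab_length)

beta_gamma = 2.0                  # hypothetical average boost of the long-lived particle
r_min, r_max = 4.0, 180.0         # hypothetical fiducial decay radii in mm

for ctau in (1.0, 10.0, 100.0, 1000.0):   # proper decay lengths c*tau in mm
    frac = decay_fraction(ctau, beta_gamma, r_min, r_max)
    print(f"c*tau = {ctau:7.1f} mm -> fraction decaying in the fiducial region = {frac:.3f}")
```

    Because this fraction peaks at intermediate lifetimes, cross-section limits in displaced-vertex searches are typically strongest for intermediate lifetimes and weaken for both very short and very long ones.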

    Measurement of the polarisation of W bosons produced with large transverse momentum in pp collisions at sqrt(s) = 7 TeV with the ATLAS experiment

    This paper describes an analysis of the angular distribution of W->enu and W->munu decays, using data from pp collisions at sqrt(s) = 7 TeV recorded with the ATLAS detector at the LHC in 2010, corresponding to an integrated luminosity of about 35 pb^-1. Using the decay lepton transverse momentum and the missing transverse energy, the W decay angular distribution projected onto the transverse plane is obtained and analysed in terms of helicity fractions f0, fL and fR over two ranges of W transverse momentum (ptw): 35 < ptw < 50 GeV and ptw > 50 GeV. Good agreement is found with theoretical predictions. For ptw > 50 GeV, the values of f0 and fL-fR, averaged over charge and lepton flavour, are measured to be f0 = 0.127 +/- 0.030 +/- 0.108 and fL-fR = 0.252 +/- 0.017 +/- 0.030, where the first uncertainties are statistical and the second include all systematic effects. Comment: 19 pages plus author list (34 pages total), 9 figures, 11 tables, revised author list, matches European Physical Journal C version
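
    For reference, the helicity fractions f0, fL and fR parametrize the standard decomposition of the lepton decay-angle distribution shown below; the paper analyses a transverse-plane projection of this distribution rather than this three-dimensional form, so the expression is quoted here only as the textbook starting point.

```latex
% Standard helicity decomposition of the W -> l nu decay-angle distribution
% (theta = lepton helicity angle in the W rest frame; upper signs for W+, lower for W-).
\frac{1}{\sigma}\frac{d\sigma}{d\cos\theta}
  = \frac{3}{8}\, f_L \,(1 \mp \cos\theta)^2
  + \frac{3}{8}\, f_R \,(1 \pm \cos\theta)^2
  + \frac{3}{4}\, f_0 \,\sin^2\theta ,
\qquad f_L + f_R + f_0 = 1 .
```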

    Observation of a new chi_b state in radiative transitions to Upsilon(1S) and Upsilon(2S) at ATLAS

    The chi_b(nP) quarkonium states are produced in proton-proton collisions at the Large Hadron Collider (LHC) at sqrt(s) = 7 TeV and recorded by the ATLAS detector. Using a data sample corresponding to an integrated luminosity of 4.4 fb^-1, these states are reconstructed through their radiative decays to Upsilon(1S,2S) with Upsilon->mu+mu-. In addition to the mass peaks corresponding to the decay modes chi_b(1P,2P)->Upsilon(1S)gamma, a new structure centered at a mass of 10.530+/-0.005 (stat.)+/-0.009 (syst.) GeV is also observed, in both the Upsilon(1S)gamma and Upsilon(2S)gamma decay modes. This is interpreted as the chi_b(3P) system.Comment: 5 pages plus author list (18 pages total), 2 figures, 1 table, corrected author list, matches final version in Physical Review Letter
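
    To illustrate the kind of reconstruction described above, the sketch below computes the invariant mass of a mu+ mu- gamma candidate from its constituent four-momenta; the kinematic values are invented and the functions are generic four-vector arithmetic, not ATLAS reconstruction code.

```python
# Illustrative sketch: invariant mass of a mu+ mu- gamma candidate built from
# (pT, eta, phi, mass) four-vectors. Kinematic values are hypothetical.
import math

def four_vector(pt, eta, phi, m):
    """Return (E, px, py, pz) from transverse momentum, pseudorapidity, phi and mass."""
    px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
    E = math.sqrt(px**2 + py**2 + pz**2 + m**2)
    return E, px, py, pz

def invariant_mass(*vectors):
    """Invariant mass of the summed four-momentum."""
    E, px, py, pz = (sum(v[i] for v in vectors) for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

MU_MASS = 0.105658  # GeV
mu_plus  = four_vector(20.0,  0.3,  1.2, MU_MASS)   # hypothetical muon kinematics
mu_minus = four_vector(15.0, -0.1, -1.9, MU_MASS)
photon   = four_vector( 5.0,  0.8,  0.4, 0.0)

m_mumu = invariant_mass(mu_plus, mu_minus)               # Upsilon candidate mass
m_mumugamma = invariant_mass(mu_plus, mu_minus, photon)  # chi_b candidate mass
print(f"m(mumu) = {m_mumu:.3f} GeV, m(mumu gamma) = {m_mumugamma:.3f} GeV")
```

    Searches of this type commonly histogram the mass difference m(mumu gamma) - m(mumu), shifted by the world-average Upsilon mass, to reduce the contribution of the dimuon resolution to the observed peak width.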

    Measurement of the inclusive isolated prompt photon cross-section in pp collisions at sqrt(s)= 7 TeV using 35 pb-1 of ATLAS data

    A measurement of the differential cross-section for the inclusive production of isolated prompt photons in pp collisions at a center-of-mass energy sqrt(s) = 7 TeV is presented. The measurement covers the pseudorapidity ranges |eta| < 1.37 and 1.52 <= |eta| < 2.37 in the transverse energy range 45 <= E_T < 400 GeV. The results are based on an integrated luminosity of 35 pb-1, collected with the ATLAS detector at the LHC. The yields of the signal photons are measured using a data-driven technique, based on the observed distribution of the hadronic energy in a narrow cone around the photon candidate and the photon selection criteria. The results are compared with next-to-leading-order perturbative QCD calculations and found to be in good agreement over four orders of magnitude in cross-section. Comment: 7 pages plus author list (18 pages total), 2 figures, 4 tables, final version published in Physics Letters
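
    Schematically, a binned differential cross-section of this kind is obtained from the extracted signal yield as shown below; this is the generic form of such a measurement, with epsilon standing for the combined trigger, reconstruction and identification efficiency (including any unfolding correction), rather than the exact expression used in the paper.

```latex
% Generic form of a binned differential cross-section measurement:
% N_sig    = background-subtracted signal yield in the bin,
% epsilon  = combined efficiency and unfolding correction,
% int L dt = integrated luminosity, Delta E_T = bin width.
\frac{d\sigma}{dE_T}\bigg|_{\text{bin}}
  = \frac{N_{\text{sig}}}{\varepsilon \;\int\! L\,dt \;\; \Delta E_T}
```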

    Reducing heterotic M-theory to five dimensional supergravity on a manifold with boundary

    This paper constructs the reduction of heterotic M-theory in eleven dimensions to a supergravity model on a manifold with boundary in five dimensions using a Calabi-Yau three-fold. New results are presented for the boundary terms in the action and for the boundary conditions on the bulk fields. Some general features of dualisation on a manifold with boundary are used to explain the origin of some topological terms in the action. The effect of gaugino condensation on the fermion boundary conditions leads to a `twist' in the chirality of the gravitino, which can provide an uplifting mechanism in the vacuum energy to cancel the cosmological constant after moduli stabilisation. Comment: 16 pages, RevTeX