
    Carotid Intima-Media Thickness Progression in HIV-Infected Adults Occurs Preferentially at the Carotid Bifurcation and Is Predicted by Inflammation.

    Background: Shear stress gradients and inflammation have been causally associated with atherosclerosis development in carotid bifurcation regions. The mechanism underlying the higher levels of carotid intima-media thickness observed among HIV-infected individuals remains unknown.
    Methods and results: We measured carotid intima-media thickness progression and development of plaque in the common carotid, bifurcation region, and internal carotid artery in 300 HIV-infected persons and 47 controls. The median duration of follow-up was 2.4 years. When all segments were included, the rate of intima-media thickness progression was greater in HIV-infected subjects than in controls after adjustment for traditional risk factors (0.055 vs. 0.024 mm/year, P=0.016). The rate of progression was also greater in the bifurcation region (0.067 vs. 0.025 mm/year, P=0.042), whereas differences were smaller in the common and internal regions. HIV-infected individuals had a greater incidence of plaque than controls in the internal (23% vs. 6.4%, P=0.0037) and bifurcation regions (34% vs. 17%, P=0.014). Among HIV-infected individuals, progression in the bifurcation region was more rapid than in the common carotid, the internal carotid, or the mean intima-media thickness; in contrast, progression rates among controls were similar at all sites. Baseline hsCRP was elevated in HIV-infected persons and was a predictor of progression in the bifurcation region.
    Conclusions: Atherosclerosis progresses preferentially in the carotid bifurcation region in HIV-infected individuals. hsCRP, a marker of inflammation, is elevated in HIV and is associated with progression in the bifurcation region. These data are consistent with a model in which the interplay between hemodynamic shear stresses and HIV-associated inflammation contributes to accelerated atherosclerosis. (J Am Heart Assoc. 2012;1:jah3-e000422 doi: 10.1161/JAHA.111.000422.)
    Clinical trial registration: URL: http://clinicaltrials.gov. Unique identifier: NCT01519141

    Regularization of Linear Ill-posed Problems by the Augmented Lagrangian Method and Variational Inequalities

    We study the application of the Augmented Lagrangian Method to the solution of linear ill-posed problems. Previously, linear convergence rates with respect to the Bregman distance have been derived under the classical assumption of a standard source condition. Using the method of variational inequalities, we extend these results in this paper to convergence rates of lower order, both for the case of an a priori parameter choice and an a posteriori choice based on Morozov's discrepancy principle. In addition, our approach allows the derivation of convergence rates with respect to distance measures different from the Bregman distance. As a particular application, we consider sparsity promoting regularization, where we derive a range of convergence rates with respect to the norm under the assumption of restricted injectivity in conjunction with generalized source conditions of Hölder type.
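    As a concrete illustration of the iteration studied here, the constrained problem min ½||x||² subject to Ax = y can be solved by the augmented Lagrangian method. The sketch below is not the paper's construction: it uses a quadratic regularizer in place of the Bregman/sparsity functionals, a synthetic ill-conditioned matrix, and assumed values for the dimensions, ρ, and the iteration count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned forward operator A with rapidly decaying
# singular values, and exact data y = A @ x_true (toy dimensions).
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -4, n)
A = U @ np.diag(s) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]   # concentrated on well-determined modes
y = A @ x_true

# Augmented Lagrangian iteration for  min 0.5*||x||^2  s.t.  A x = y.
# The quadratic regularizer is a simple stand-in for the sparsity-
# promoting functionals analysed in the paper.
rho = 1.0
lam = np.zeros(n)        # Lagrange multiplier estimate
x = np.zeros(n)
I = np.eye(n)
for _ in range(200):
    # x-update: minimise 0.5*||x||^2 + lam.(Ax - y) + 0.5*rho*||Ax - y||^2
    x = np.linalg.solve(I + rho * A.T @ A, rho * A.T @ y - A.T @ lam)
    # multiplier ascent driven by the constraint residual
    lam = lam + rho * (A @ x - y)

residual = np.linalg.norm(A @ x - y)
```

    The multiplier update λ ← λ + ρ(Ax − y) is the defining step of the method; the regularization effect for noisy data comes from stopping the iteration early, which is where the convergence-rate analysis of the paper enters.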

    Sparse Regularization with l^q Penalty Term

    We consider the stable approximation of sparse solutions to non-linear operator equations by means of Tikhonov regularization with a subquadratic penalty term. Imposing certain assumptions, which for a linear operator are equivalent to the standard range condition, we derive the usual convergence rate O(√δ) of the regularized solutions in dependence of the noise level δ. Particular emphasis lies on the case where the true solution is known to have a sparse representation in a given basis. In this case, if the differential of the operator satisfies a certain injectivity condition, we can show that the actual convergence rate improves up to O(δ). Comment: 15 pages
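    For the q = 1 instance of the penalty and a linear operator, the Tikhonov functional min ½||Ax − y||² + α||x||₁ can be minimized by iterative soft thresholding (ISTA). This is one standard solver for that functional, not a method from the paper; the operator, noise level, and parameter values below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear inverse problem with a sparse true solution (assumed sizes).
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
delta = 1e-3                         # noise level
y = A @ x_true + delta * rng.standard_normal(m)

# Tikhonov functional with an l^1 penalty (the q = 1 case), minimised by
# ISTA: a gradient step on the data term followed by soft thresholding.
alpha = 0.01                         # regularisation parameter (assumed)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data term
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L    # gradient step on 0.5*||Ax - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
```

    With the noise level and α chosen this small, the iterate recovers the support and signs of the sparse solution; the paper's rates quantify how the reconstruction error behaves as δ → 0 under an appropriate choice of α.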

    Necessary conditions for variational regularization schemes

    We study variational regularization methods in a general framework, more precisely those methods that use a discrepancy and a regularization functional. While several sets of sufficient conditions are known to obtain a regularization method, we start with an investigation of the converse question: What could necessary conditions for a variational method to provide a regularization method look like? To this end, we formalize the notion of a variational scheme and start with a comparison of three different instances of variational methods. Then we focus on the data space model and investigate the role and interplay of the topological structure, the convergence notion, and the discrepancy functional. In particular, we deduce necessary conditions for the discrepancy functional to fulfill usual continuity assumptions. The results are applied to discrepancy functionals given by Bregman distances, and especially to the Kullback-Leibler divergence. Comment: To appear in Inverse Problems
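    The Kullback-Leibler divergence mentioned above, D(y, z) = Σᵢ yᵢ log(yᵢ/zᵢ) − yᵢ + zᵢ for positive data, shows why discrepancy functionals need separate treatment from norms: it vanishes only at y = z but is not symmetric. A minimal check (the vectors are arbitrary positive examples):

```python
import numpy as np

def kl(y, z):
    """Kullback-Leibler discrepancy sum(y*log(y/z) - y + z) for y, z > 0."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    return float(np.sum(y * np.log(y / z) - y + z))

y = np.array([1.0, 2.0, 3.0])
z = np.array([1.5, 1.5, 3.5])
d_yz, d_zy = kl(y, z), kl(z, y)   # both positive, but not equal
```

    The asymmetry d(y, z) ≠ d(z, y) is one reason the continuity assumptions examined in the paper are not automatic for such discrepancies.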

    Ptychographic electron microscopy using high-angle dark-field scattering for sub-nanometre resolution imaging

    Diffractive imaging, in which image-forming optics are replaced by an inverse computation using scattered intensity data, could, in principle, realize wavelength-scale resolution in a transmission electron microscope. However, to date all implementations of this approach have suffered from various experimental restrictions. Here we demonstrate a form of diffractive imaging that unshackles the image formation process from the constraints of electron optics, improving resolution over that of the lens used by a factor of five and showing for the first time that it is possible to recover the complex exit wave (in modulus and phase) at atomic resolution, over an unlimited field of view, using low-energy (30 keV) electrons. Our method, called electron ptychography, has no fundamental experimental boundaries: further development of this proof-of-principle could revolutionize sub-atomic scale transmission imaging.
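    The "inverse computation" at the heart of diffractive imaging can be sketched with the simplest member of this algorithm family: error-reduction phase retrieval, which alternates between the measured Fourier modulus and an object-domain constraint. This toy 1-D version is only a distant relative of ptychography (which additionally exploits many overlapping probe positions); all sizes and constraints below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D problem: recover a nonnegative object with known support from
# the modulus of its Fourier transform alone.
n = 64
support = np.zeros(n, dtype=bool)
support[:16] = True
obj = np.zeros(n)
obj[:16] = rng.uniform(0.5, 1.0, 16)
modulus = np.abs(np.fft.fft(obj))          # "measured" diffraction data

def fourier_error(g):
    return np.linalg.norm(np.abs(np.fft.fft(g)) - modulus)

g = rng.uniform(0.0, 1.0, n) * support     # random start on the support
err_start = fourier_error(g)
for _ in range(200):
    G = np.fft.fft(g)
    G = modulus * np.exp(1j * np.angle(G))     # impose measured modulus
    g = np.fft.ifft(G).real                    # back to object space
    g = np.where(support & (g > 0), g, 0.0)    # impose support/positivity
err_end = fourier_error(g)
```

    Each pass projects onto the data constraint and then the object constraint, so the mismatch with the measured diffraction amplitudes cannot increase; ptychographic reconstructions replace the single diffraction pattern with a redundant set from overlapping illumination spots.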

    Towards Innovative Solutions through Integrative Futures Analysis - Preliminary qualitative scenarios

    This report presents preliminary results of developing qualitative global water scenarios. The water scenarios are developed to be consistent with the underlying Shared Socio-Economic Pathways (SSPs). In this way, different stakeholders in different contexts (climate, water) can be presented with a consistent set of scenarios, avoiding confusion and increasing policy impact. The water scenarios are based on a conceptual framework that has been developed specifically for this effort. The framework provides a clear representation of important dimensions in the areas of Nature, Economy, and Society, and of the Water dimensions embedded in them. These critical dimensions are used to describe future changes in a consistent way for all scenarios. Three scenarios are presented, based on SSP1, SSP2, and SSP3 respectively. Hydro-economic classes are introduced to further differentiate within scenarios based on economic and water conditions for specific regions and/or countries. In the process of building the preliminary water scenario assumptions presented in this report, a number of challenges were encountered. In the conclusions section these challenges are summarized and possible ways of tackling them are described.

    Analysis of optical flow models in the framework of calculus of variations

    In image sequence analysis, variational optical flow computations require the solution of a parameter-dependent optimization problem with a data term and a regularizer. In this paper we study existence and uniqueness of the optimizers. Our studies rely on quasiconvex functionals on the spaces W^{1,p}(Ω, ℝ^d) with p > 1, BV(Ω, ℝ^d), and BD(Ω). The methods that are covered by our results include several existing techniques. Experiments are presented that illustrate the behavior of these approaches.
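    The classical Horn-Schunck model is the simplest quadratic instance of the data-term-plus-regularizer functionals analyzed here: it penalizes (Ix·u + Iy·v + It)² plus the squared flow gradients, and can be minimized by Jacobi-type updates. The sketch below recovers a known 0.4-pixel translation of a synthetic periodic pattern; the image size, regularization weight α², and iteration count are illustrative assumptions.

```python
import numpy as np

# Two synthetic frames: a smooth periodic pattern translated by 0.4 px in x.
N, shift = 64, 0.4
xx, yy = np.meshgrid(np.arange(N), np.arange(N), indexing="xy")
def pattern(x, y):
    return np.sin(2 * np.pi * x / 16) + np.cos(2 * np.pi * y / 16)
I1 = pattern(xx, yy)
I2 = pattern(xx - shift, yy)          # frame 2: content moved +0.4 px in x

# Horn-Schunck: quadratic data term (Ix*u + Iy*v + It)^2 plus a quadratic
# regulariser on the flow gradients, minimised by Jacobi-type iterations.
Iy, Ix = np.gradient((I1 + I2) / 2)   # np.gradient returns (d/d_row, d/d_col)
It = I2 - I1
alpha2 = 1.0                          # regularisation weight (assumed)
u = np.zeros_like(I1)
v = np.zeros_like(I1)

def navg(f):
    # 4-neighbour average with periodic boundary (the pattern is periodic)
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                   + np.roll(f, 1, 1) + np.roll(f, -1, 1))

for _ in range(500):
    ubar, vbar = navg(u), navg(v)
    common = (Ix * ubar + Iy * vbar + It) / (alpha2 + Ix**2 + Iy**2)
    u = ubar - Ix * common
    v = vbar - Iy * common

mean_u, mean_v = float(u.mean()), float(v.mean())
```

    The quadratic regularizer places this model in the W^{1,2} setting; the BV and BD spaces of the paper correspond to non-smooth regularizers that preserve motion discontinuities, for which the simple Jacobi update above no longer applies.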

    On regularization methods of EM-Kaczmarz type

    We consider regularization methods of Kaczmarz type in connection with the expectation-maximization (EM) algorithm for solving ill-posed equations. For noisy data, our methods are stabilized extensions of the well-established ordered-subsets expectation-maximization iteration (OS-EM). We show monotonicity properties of the methods and present a numerical experiment which indicates that the extended OS-EM methods we propose are much faster than the standard EM algorithm. Comment: 18 pages, 6 figures; On regularization methods of EM-Kaczmarz type
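    For orientation, the OS-EM baseline that these Kaczmarz-type methods extend applies one multiplicative EM step per subset of the data rows. The sketch below runs plain OS-EM on a toy system with exact data; the matrix sizes, subset count, and sweep count are assumptions, and the paper's stabilized variants for noisy data are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy emission-tomography-like system: strictly positive A and exact
# nonnegative data y = A @ x_true (sizes and values are assumptions).
m, n = 60, 30
A = rng.uniform(0.1, 1.0, size=(m, n))
x_true = rng.uniform(0.5, 2.0, size=n)
y = A @ x_true

# Ordered-subsets EM (OS-EM): one multiplicative EM step per row subset.
subsets = np.array_split(np.arange(m), 4)
x = np.ones(n)                        # positive starting point
for _ in range(200):                  # sweeps over all subsets
    for idx in subsets:
        As, ys = A[idx], y[idx]
        # EM update restricted to this subset: x stays componentwise positive
        x = x * (As.T @ (ys / (As @ x))) / As.sum(axis=0)

rel_residual = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

    The multiplicative form keeps the iterates nonnegative automatically; for noisy data the raw iteration eventually fits the noise, which is the instability the regularized Kaczmarz-type extensions in the paper address.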

    A combined first and second order variational approach for image reconstruction

    In this paper we study a variational problem in the space of functions of bounded Hessian. Our model constitutes a straightforward higher-order extension of the well-known ROF functional (total variation minimisation), to which we add a non-smooth second-order regulariser. It combines convex functions of the total variation and the total variation of the first derivatives. In what follows, we prove existence and uniqueness of minimisers of the combined model and present the numerical solution of the corresponding discretised problem by employing the split Bregman method. The paper is furnished with applications of our model to image denoising, deblurring, and image inpainting. The obtained numerical results are compared with results obtained from total generalised variation (TGV), infimal convolution, and Euler's elastica, three other state-of-the-art higher-order models. The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and block-like structures in the reconstructed images -- a known disadvantage of the ROF model -- while being simple and efficient to solve numerically. Comment: 34 pages, 89 figures
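    A 1-D discrete version of such a combined first- and second-order model is min_u ½||u − f||² + a·TV(u) + b·TV(Du). The sketch below denoises a piecewise-affine signal by gradient descent on an ε-smoothed version of this energy; the paper instead uses the split Bregman method, and the weights a, b, the smoothing ε, and the step size τ are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy 1-D triangle signal: piecewise affine, so penalising the second
# derivative as well as the first is natural for this example.
n = 128
t = np.linspace(0.0, 1.0, n)
f_clean = 2.0 * np.where(t < 0.5, t, 1.0 - t)
f = f_clean + 0.05 * rng.standard_normal(n)

# Smoothed combined energy 0.5||u-f||^2 + a*sum(sqrt((Du)^2 + eps))
#                                       + b*sum(sqrt((D^2 u)^2 + eps))
a, b, eps = 0.02, 0.02, 1e-3
D = np.diff(np.eye(n), axis=0)            # first-difference matrix
D2 = np.diff(np.eye(n), n=2, axis=0)      # second-difference matrix

def grad_energy(u):
    du, d2u = D @ u, D2 @ u
    return (u - f
            + a * D.T @ (du / np.sqrt(du**2 + eps))
            + b * D2.T @ (d2u / np.sqrt(d2u**2 + eps)))

u = f.copy()
tau = 0.05                                # step size, stable for these weights
for _ in range(3000):
    u = u - tau * grad_energy(u)

err_noisy = np.linalg.norm(f - f_clean)
err_denoised = np.linalg.norm(u - f_clean)
```

    The second-order term is what lets the model keep smooth ramps without the staircasing that pure TV produces; split Bregman handles the non-smooth (ε = 0) energy directly and far more efficiently than this illustrative descent.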

    Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics

    A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.