
    The Price of Anarchy for Selfish Ring Routing is Two

    We analyze the network congestion game with atomic players, asymmetric strategies, and the maximum latency among all players as social cost. This important social cost function is much less understood than the average latency. We show that the price of anarchy is at most two when the network is a ring and the link latencies are linear. Our bound is tight. This is the first sharp bound for the maximum latency objective. Comment: Full version of WINE 2012 paper, 24 pages.
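The setting can be made concrete with a small brute-force sketch (not the paper's proof technique): for a hypothetical ring instance with atomic players, linear link latencies, and the maximum-latency social cost, enumerate all pure strategy profiles, find the pure Nash equilibria, and compare the worst equilibrium cost to the optimum. The instance and all names below are illustrative:

```python
from itertools import product

def ring_poa(m, players, a):
    """Brute-force price of anarchy (max-latency social cost) on a ring.
    m: number of links (edge i joins node i to i+1 mod m),
    players: list of (source, sink) pairs, each choosing one of two arcs,
    a: linear latency coefficients, latency of link e = a[e] * load."""
    def arcs(s, t):
        cw, i = [], s
        while i != t:                      # clockwise arc from s to t
            cw.append(i)
            i = (i + 1) % m
        ccw = [e for e in range(m) if e not in cw]   # complementary arc
        return [cw, ccw]

    paths = [arcs(s, t) for s, t in players]
    n = len(players)

    def latencies(profile):
        load = [0] * m
        for p, choice in enumerate(profile):
            for e in paths[p][choice]:
                load[e] += 1
        return [sum(a[e] * load[e] for e in paths[p][profile[p]])
                for p in range(n)]

    best, worst_nash = float("inf"), 0.0
    for profile in product((0, 1), repeat=n):
        lat = latencies(profile)
        best = min(best, max(lat))
        # pure Nash: no player can lower its own latency by switching arcs
        is_nash = all(
            latencies(profile[:p] + (1 - profile[p],) + profile[p + 1:])[p] >= lat[p]
            for p in range(n))
        if is_nash:
            worst_nash = max(worst_nash, max(lat))
    return worst_nash / best
```

On a symmetric two-player instance (both routing between opposite nodes of a 4-link ring) the only equilibria split the players across the two arcs, so the ratio is 1; the paper's result bounds this ratio by 2 for every ring instance with linear latencies.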

    Evaluating distribution of foveal avascular zone parameters corrected by lateral magnification and their associations with retinal thickness

    Purpose To examine the distribution of foveal avascular zone (FAZ) parameters, with and without correction for lateral magnification, in a large cohort of healthy young adults. Design Cross-sectional, observational cohort study. Participants A total of 504 healthy adults, 27 to 30 years of age. Methods Participants underwent a comprehensive ophthalmic examination including axial length measurement and OCT angiography (OCTA) imaging of the macula. OCT angiography images of combined superficial and deep retinal vessel plexuses were processed via custom software to extract foveal avascular zone area (FAZA) and foveal density-300 (FD-300), the vessel density in a 300-μm wide annulus surrounding the FAZ, with and without correction for lateral magnification. Bland–Altman analyses were performed to examine the effect of lateral magnification on FAZA and FD-300, as well as to evaluate the interocular agreement in both parameters. Linear mixed-effects models were used to examine the relationship between retinal thicknesses and OCTA parameters. Main Outcome Measures The FAZA and FD-300, corrected for lateral magnification. Results The mean (standard deviation [SD]) of laterally corrected FAZA and FD-300 was 0.22 mm2 (0.10 mm2) and 51.9% (3.2%), respectively. Relative to uncorrected data, 55.6% of corrected FAZA showed a relative change > 5%, whereas all FD-300 changes were within 5%. There was good interocular symmetry (mean right eye–left eye difference, 95% limits of agreement [LoA]) in both FAZA (0.006 mm2; -0.05 to 0.07 mm2) and FD-300 (-0.05%; -5.39% to 5.30%). There were significant negative associations between central retinal thickness and FAZA (β = -0.0029), as well as between central retinal thickness and FD-300 (β = -0.044), with the relationships driven by inner, not outer, retina. Conclusions We reported lateral magnification-adjusted normative values for FAZA and FD-300 in a large cohort of young, healthy eyes. Clinicians should strongly consider accounting for lateral magnification when evaluating FAZA. Good interocular agreement in FAZA and FD-300 suggests the contralateral eye can be used as control data.
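The two analyses at the core of the abstract can be sketched in a few lines. The Bennett-type magnification constants and the assumed device axial length below are common defaults, not necessarily the study's exact choices:

```python
import numpy as np

def bland_altman(x, y):
    """Bias (mean difference) and 95% limits of agreement between paired
    measurements of the same quantity, e.g. right-eye vs left-eye FAZA."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def magnification_corrected_area(area_mm2, axial_len_mm, assumed_len_mm=23.95):
    """Rescale a measured area for ocular lateral magnification using
    Bennett's approximation q = 0.01306 * (AL - 1.82): linear sizes scale
    with q_true / q_assumed, areas with the square of that ratio. The
    23.95 mm device default is an assumption for illustration."""
    q_true = 0.01306 * (axial_len_mm - 1.82)
    q_assumed = 0.01306 * (assumed_len_mm - 1.82)
    return area_mm2 * (q_true / q_assumed) ** 2
```

Because area scales with the square of the magnification ratio, even modest axial-length deviations from the device default shift FAZA noticeably, which is why the abstract finds >5% changes in most corrected values.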

    Quasinormal modes of a Schwarzschild black hole surrounded by free static spherically symmetric quintessence: Electromagnetic perturbations

    In this paper, we evaluate the quasinormal modes of electromagnetic perturbations of a Schwarzschild black hole surrounded by static spherically symmetric quintessence, using the third-order WKB approximation, for a quintessential state parameter $w_q$ in the range $-1/3 < w_q < 0$. Due to the presence of quintessence, the Maxwell field damps more slowly. For $-1 < w_q < -1/3$, the solution is similar to a black hole in dS/AdS spacetime, and the appropriate boundary conditions need to be modified. Comment: 6 pages, 3 figures.
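For orientation, the electromagnetic effective potential that feeds the WKB computation can be sketched with a Kiselev-type metric function; the normalization constant c and its sign convention are assumptions here and should be checked against the paper:

```python
import numpy as np

def V_em(r, M=1.0, c=0.0, wq=-2.0 / 3.0, ell=1):
    """Effective potential V = f(r) * l(l+1) / r^2 for electromagnetic
    perturbations, with an assumed quintessence-surrounded (Kiselev-type)
    metric function f(r) = 1 - 2M/r - c / r**(3*wq + 1). The third-order
    WKB quasinormal frequencies are built from the peak value of V and
    its derivatives there (not implemented in this sketch)."""
    f = 1.0 - 2.0 * M / r - c / r ** (3.0 * wq + 1.0)
    return f * ell * (ell + 1) / r ** 2
```

In the Schwarzschild limit c = 0 the potential peaks at r = 3M, the photon-sphere radius; switching on the quintessence term lowers the barrier, consistent with the slower damping the abstract reports.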

    Air fluorescence measurements in the spectral range 300-420 nm using a 28.5 GeV electron beam

    Measurements are reported of the yield and spectrum of fluorescence, excited by a 28.5 GeV electron beam, in air at a range of pressures of interest to ultra-high energy cosmic ray detectors. The wavelength range was 300-420 nm. System calibration has been performed using Rayleigh scattering of a nitrogen laser beam. In atmospheric pressure dry air at 304 K the yield is 20.8 +/- 1.6 photons per MeV. Comment: 29 pages, 10 figures. Submitted to Astroparticle Physics.

    Energy and Flux Measurements of Ultra-High Energy Cosmic Rays Observed During the First ANITA Flight

    The first flight of the Antarctic Impulsive Transient Antenna (ANITA) experiment recorded 16 radio signals that were emitted by cosmic-ray-induced air showers. For 14 of these events, this radiation was reflected from the ice. The dominant contribution to the radiation comes from the deflection of positrons and electrons in the geomagnetic field, which is beamed in the direction of motion of the air shower. This radiation is reflected from the ice and subsequently detected by the ANITA experiment at a flight altitude of 36 km. In this paper, we estimate the energy of the 14 individual events and find that the mean energy of the cosmic-ray sample is 2.9 EeV. By simulating the ANITA flight, we calculate its exposure for ultra-high energy cosmic rays. We estimate for the first time the cosmic-ray flux derived only from radio observations. In addition, we find that the Monte Carlo simulation of the ANITA data set is in agreement with the total number of observed events and with the properties of those events. Comment: Added more explanation of the experimental setup and textual improvements.

    Observational constraint on generalized Chaplygin gas model

    We investigate observational constraints on the generalized Chaplygin gas (GCG) model as a unification of dark matter and dark energy, using the latest observational data: the Union SNe Ia data, the observational Hubble data, the SDSS baryon acoustic peak, and the five-year WMAP shift parameter. The best-fit values of the GCG model parameters with their confidence levels are $A_{s}=0.73^{+0.06}_{-0.06}$ ($1\sigma$) $^{+0.09}_{-0.09}$ ($2\sigma$) and $\alpha=-0.09^{+0.15}_{-0.12}$ ($1\sigma$) $^{+0.26}_{-0.19}$ ($2\sigma$). Furthermore, in this model the evolution of the equation of state (EOS) for dark energy is similar to quiessence, and its current best-fit value is $w_{0de}=-0.96$, with $1\sigma$ range $-0.91\geq w_{0de}\geq-1.00$. Comment: 9 pages, 5 figures.
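The GCG equation of state implied by p = -A/ρ^α has a closed form in the scale factor; here is a minimal sketch using the abstract's best-fit parameters. Note that this w is the EOS of the total GCG fluid, which today equals -A_s = -0.73; the quoted w_0de = -0.96 refers to the decomposed dark-energy component, which this sketch does not reproduce:

```python
def w_gcg(a, A_s=0.73, alpha=-0.09):
    """Equation of state w = p/rho of the generalized Chaplygin gas
    p = -A / rho**alpha, in terms of the scale factor a (a = 1 today)
    and the standard parameters A_s and alpha."""
    return -A_s / (A_s + (1.0 - A_s) * a ** (-3.0 * (1.0 + alpha)))
```

The fluid interpolates between dust-like behavior (w → 0) at early times and a cosmological-constant-like w → -1 in the far future, which is what lets a single component play the roles of both dark matter and dark energy.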

    Probabilistic models to evaluate effectiveness of steel bridge weld fatigue retrofitting by peening

    The purpose of this study was to evaluate, with two probabilistic analytical models, the effectiveness of several alternative fatigue management strategies for steel bridge welds. The investigated strategies employed, in various combinations, magnetic particle inspection, gouging and rewelding, and postweld treatment by peening. The analytical models included a probabilistic strain-based fracture mechanics model and a Markov chain model. To compare the results obtained with the two models, the fatigue life was divided into a small, fixed number of condition states based on crack depth, similar to those often used by bridge management systems to model deterioration due to other processes, such as corrosion and road surface wear. The probabilistic strain-based fracture mechanics model was verified first by comparison with design S-N curves and test data for untreated welds. Next, the verified model was used to determine the probability that untreated and treated welds would be in each condition state in a given year; the probabilities were then used to calibrate transition probabilities for a much simpler Markov chain fatigue model. Both models were then used to simulate a number of fatigue management strategies. From the results of these simulations, the performance of the different strategies was compared, and the accuracy of the simpler Markov chain fatigue model was evaluated. In general, peening was more effective if preceded by inspection of the weld. The Markov chain fatigue model did a reasonable job of predicting the general trends and relative effectiveness of the different investigated strategies.
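The Markov chain side of such a comparison reduces to multiplying a condition-state distribution by a one-year transition matrix; the four states and probabilities below are hypothetical stand-ins for the values the study calibrates from its fracture mechanics model:

```python
import numpy as np

# Hypothetical one-year transition matrix over four crack-depth condition
# states (state 3 absorbing, representing a failed weld); in the study
# these entries are calibrated from the strain-based fracture mechanics model.
P = np.array([[0.95, 0.05, 0.00, 0.00],
              [0.00, 0.90, 0.10, 0.00],
              [0.00, 0.00, 0.85, 0.15],
              [0.00, 0.00, 0.00, 1.00]])

def state_probabilities(p0, years):
    """Propagate the condition-state distribution `years` steps forward."""
    p = np.asarray(p0, float)
    for _ in range(years):
        p = p @ P
    return p
```

A retrofit such as peening would then be modeled by resetting the distribution toward the best state, or by swapping in a slower transition matrix, at the intervention year.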

    Glyphosate, Other Herbicides, And Transformation Products In Midwestern Streams, 2002

    The use of glyphosate has increased rapidly, and there is limited understanding of its environmental fate. The objective of this study was to document the occurrence of glyphosate and its transformation product aminomethylphosphonic acid (AMPA) in Midwestern streams and to compare their occurrence with that of more commonly measured herbicides such as acetochlor, atrazine, and metolachlor. Water samples were collected at sites on 51 streams in nine Midwestern states in 2002 during three runoff events: after the application of pre-emergence herbicides, after the application of post-emergence herbicides, and during harvest season. All samples were analyzed for glyphosate and 20 other herbicides using gas chromatography/mass spectrometry or high-performance liquid chromatography/mass spectrometry. The frequency of glyphosate and AMPA detection, range of concentrations in runoff samples, and ratios of AMPA to glyphosate concentrations did not vary throughout the growing season as substantially as for other herbicides like atrazine, probably because of different seasonal use patterns. Glyphosate was detected at or above 0.1 μg/l in 35 percent of pre-emergence, 40 percent of post-emergence, and 31 percent of harvest season samples, with a maximum concentration of 8.7 μg/l. AMPA was detected at or above 0.1 μg/l in 53 percent of pre-emergence, 83 percent of post-emergence, and 73 percent of harvest season samples, with a maximum concentration of 3.6 μg/l. Glyphosate was not detected at a concentration at or above the U.S. Environmental Protection Agency’s maximum contaminant level (MCL) of 700 μg/l in any sample. Atrazine was detected at or above 0.1 μg/l in 94 percent of pre-emergence, 96 percent of post-emergence, and 57 percent of harvest season samples, with a maximum concentration of 55 μg/l. Atrazine was detected at or above its MCL (3 μg/l) in 57 percent of pre-emergence and 33 percent of post-emergence samples.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
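A minimal sketch of item (iii), the forward-backward proximal splitting scheme, specialized to the ℓ1 prior, where the backward (proximal) step is soft-thresholding; the step size and problem are illustrative:

```python
import numpy as np

def forward_backward_l1(A, y, lam, step, iters=500):
    """Proximal gradient (ISTA) iterations for the l1-regularized
    least squares problem min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    `step` should not exceed 1 / ||A||^2 (reciprocal Lipschitz constant
    of the smooth part's gradient)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)    # forward step: gradient of the smooth part
        z = x - step * grad
        # backward step: proximal operator of lam*||.||_1 (soft-thresholding)
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

With A the identity, the iteration converges in one step to the soft-thresholded data, i.e. to the prox of the ℓ1 norm itself, which makes the scheme easy to sanity-check.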