
    Valuation equilibrium

    We introduce a new solution concept for games in extensive form with perfect information, valuation equilibrium, which is based on a partition of each player's moves into similarity classes. A valuation of a player is a real-valued function on the set of her similarity classes. In this equilibrium, each player's strategy is optimal in the sense that at each of her nodes the player chooses a move that belongs to a class with maximum valuation. The valuation of each player is consistent with the strategy profile in the sense that the valuation of a similarity class is the player's expected payoff given that the path (induced by the strategy profile) intersects the similarity class. The solution concept is applied to decision problems and multi-player extensive form games, and is contrasted with existing solution concepts. The valuation approach is next applied to stopping games, in which non-terminal moves form a single similarity class, and we note that the behaviors obtained echo some biases observed experimentally. Finally, we tentatively suggest a way of endogenizing the similarity partitions in which moves are categorized according to how well they perform relative to the expected equilibrium value, interpreted as the aspiration level.
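
    As an illustration of the consistency condition, the following minimal Python sketch estimates, for a toy stopping problem, the valuation of each similarity class under a fixed strategy (the expected payoff given that the induced path intersects the class). The number of nodes, payoffs, and strategy are invented for this example and are not taken from the paper.

        import random

        # Toy stopping problem used only to illustrate the consistency condition of a
        # valuation equilibrium.  The number of nodes, payoffs and strategy below are a
        # made-up example.  As in the stopping-game application, all "continue" moves
        # form one similarity class and all "stop" moves another, so the move name
        # doubles as its class here.
        T = 3                                    # decision nodes 0, 1, 2
        stop_mean = [1.0, 2.0, 0.5]              # mean (noisy) payoff of stopping at node t
        continue_payoff = 1.2                    # payoff if the player never stops

        def play(strategy):
            """Simulate one play; return the moves on the path and the realized payoff."""
            path = []
            for t in range(T):
                path.append(strategy[t])
                if strategy[t] == "stop":
                    return path, random.gauss(stop_mean[t], 0.1)
            return path, continue_payoff

        def consistent_valuation(strategy, n_sims=20000):
            """Valuation of a class = expected payoff given that the path induced by
            the strategy intersects the class (estimated here by simulation)."""
            totals, counts = {}, {}
            for _ in range(n_sims):
                path, payoff = play(strategy)
                for cls in set(path):
                    totals[cls] = totals.get(cls, 0.0) + payoff
                    counts[cls] = counts.get(cls, 0) + 1
            return {c: totals[c] / counts[c] for c in counts}

        strategy = ["continue", "continue", "stop"]
        print(consistent_valuation(strategy))
        # In a valuation equilibrium each chosen move must also lie in a class of
        # maximal valuation at its node; classes the path never meets are not pinned
        # down by this simulation.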

    Sr0.9K0.1Zn1.8Mn0.2As2: a ferromagnetic semiconductor with colossal magnetoresistance

    A bulk diluted magnetic semiconductor, (Sr,K)(Zn,Mn)2As2, was synthesized with decoupled charge and spin doping. It has a hexagonal CaAl2Si2-type structure in which the (Zn,Mn)2As2 layer forms a honeycomb-like network. Magnetization measurements show that the sample undergoes a ferromagnetic transition with a Curie temperature of 12 K, and the magnetic moment reaches about 1.5 μB/Mn at μ0H = 5 T and T = 2 K. Surprisingly, a colossal negative magnetoresistance, defined as [ρ(H) - ρ(0)]/ρ(0), of up to -38% under a low field of μ0H = 0.1 T and up to -99.8% under μ0H = 5 T was observed at T = 2 K. The colossal magnetoresistance can be explained on the basis of Anderson localization theory. Comment: Accepted for publication in EP

    Multi-Step Processing of Spatial Joins

    Spatial joins are one of the most important operations for combining spatial objects of several relations. In this paper, spatial join processing is studied in detail for extended spatial objects in two-dimensional data space. We present an approach for spatial join processing that is based on three steps. First, a spatial join is performed on the minimum bounding rectangles of the objects, returning a set of candidates. Various approaches for accelerating this step of join processing were examined at last year's conference [BKS 93a]. In this paper, we focus on the problem of how to compute the answers from the set of candidates, which is handled by the following two steps. In the second step, sophisticated approximations are used to identify answers as well as to filter out false hits from the set of candidates. For this purpose, we investigate various types of conservative and progressive approximations. In the last step, the exact geometry of the remaining candidates has to be tested against the join predicate. The time required for computing spatial join predicates can be substantially reduced when objects are adequately organized in main memory. In our approach, objects are first decomposed into simple components which are exclusively organized by a main-memory resident spatial data structure. Overall, we present a complete approach to spatial join processing on complex spatial objects. The performance of the individual steps of our approach is evaluated with data sets from real cartographic applications. The results show that our approach reduces the total execution time of the spatial join several-fold.
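
    The three-step pipeline can be pictured as a filter-and-refine cascade. The Python sketch below is a minimal illustration under simplifying assumptions: the approximations are plain rectangles, step 1 uses a nested loop instead of an index-based MBR join, and the names (SpatialObject, exact_test, ...) are invented for this example rather than taken from the paper.

        from dataclasses import dataclass
        from typing import Callable, List, Tuple

        Rect = Tuple[float, float, float, float]   # (xmin, ymin, xmax, ymax)

        def mbr_intersects(a: Rect, b: Rect) -> bool:
            return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

        @dataclass
        class SpatialObject:
            mbr: Rect              # minimum bounding rectangle
            conservative: Rect     # superset approximation (false hits possible)
            progressive: Rect      # subset approximation (intersections are certain)
            geometry: object       # exact representation, only used in step 3

        def spatial_join(R: List[SpatialObject], S: List[SpatialObject],
                         exact_test: Callable[[object, object], bool]):
            # Step 1: MBR join produces candidate pairs (brute force here for brevity).
            candidates = [(r, s) for r in R for s in S if mbr_intersects(r.mbr, s.mbr)]
            answers, remaining = [], []
            for r, s in candidates:
                # Step 2a: progressive approximations identify definite answers early.
                if mbr_intersects(r.progressive, s.progressive):
                    answers.append((r, s))
                # Step 2b: conservative approximations filter out definite false hits.
                elif not mbr_intersects(r.conservative, s.conservative):
                    continue
                else:
                    remaining.append((r, s))
            # Step 3: exact geometry test on whatever survived the filters.
            answers.extend((r, s) for r, s in remaining
                           if exact_test(r.geometry, s.geometry))
            return answers

    In a real system the candidate step would be driven by spatial indexes rather than the nested loop shown here.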

    Querying Probabilistic Neighborhoods in Spatial Data Sets Efficiently

    In this paper we define the notion of a probabilistic neighborhood in spatial data: let a set P of n points in R^d, a query point q ∈ R^d, a distance metric dist, and a monotonically decreasing function f : R^+ → [0,1] be given. Then a point p ∈ P belongs to the probabilistic neighborhood N(q, f) of q with respect to f with probability f(dist(p, q)). We envision applications in facility location, sensor networks, and other scenarios where a connection between two entities becomes less likely with increasing distance. A straightforward query algorithm would determine a probabilistic neighborhood in Θ(n·d) time by probing each point in P. To answer the query in sublinear time for the planar case, we augment a quadtree suitably and design a corresponding query algorithm. Our theoretical analysis shows that -- for certain distributions of planar P -- our algorithm answers a query in O((|N(q,f)| + √n) log n) time with high probability (whp). This matches, up to a logarithmic factor, the cost induced by quadtree-based algorithms for deterministic queries and is asymptotically faster than the straightforward approach whenever |N(q,f)| ∈ o(n / log n). As practical proofs of concept we use two applications, one in the Euclidean and one in the hyperbolic plane. In particular, our results yield the first generator for random hyperbolic graphs with arbitrary temperatures in subquadratic time. Moreover, our experimental data show the usefulness of our algorithm even if the point distribution is unknown or not uniform: the running time savings over the pairwise probing approach constitute at least one order of magnitude already for a modest number of points and queries. Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-44543-4_3
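
    For reference, here is a minimal sketch of the straightforward Θ(n·d) probing algorithm mentioned above (not the paper's quadtree-based method); the function names and the example decay function are invented for this illustration.

        import math
        import random

        def dist(p, q):
            """Euclidean distance; any metric could be plugged in here."""
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

        def probabilistic_neighborhood(points, q, f, rng=random.random):
            """Naive Theta(n*d) query: probe every point p and include it with
            probability f(dist(p, q)).  The paper's quadtree-based algorithm avoids
            probing points that are almost surely excluded."""
            return [p for p in points if rng() < f(dist(p, q))]

        # Example: inclusion probability decays exponentially with distance.
        pts = [(random.random(), random.random()) for _ in range(1000)]
        neigh = probabilistic_neighborhood(pts, (0.5, 0.5), lambda d: math.exp(-5 * d))
        print(len(neigh))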

    Potential Role of Ultrafine Particles in Associations between Airborne Particle Mass and Cardiovascular Health

    Numerous epidemiologic time-series studies have shown generally consistent associations of cardiovascular hospital admissions and mortality with outdoor air pollution, particularly mass concentrations of particulate matter (PM) ≤2.5 or ≤10 μm in diameter (PM(2.5), PM(10)). Panel studies with repeated measures have supported the time-series results, showing associations between PM and risk of cardiac ischemia and arrhythmias, increased blood pressure, decreased heart rate variability, and increased circulating markers of inflammation and thrombosis. The causal components driving the PM associations remain to be identified. Epidemiologic data using pollutant gases and particle characteristics such as particle number concentration and elemental carbon have provided indirect evidence that products of fossil fuel combustion are important. Ultrafine particles <0.1 μm (UFPs) dominate particle number concentrations and surface area and are therefore capable of carrying large concentrations of adsorbed or condensed toxic air pollutants. It is likely that redox-active components in UFPs from fossil fuel combustion reach cardiovascular target sites. High UFP exposures may lead to systemic inflammation through oxidative stress responses to reactive oxygen species and thereby promote the progression of atherosclerosis and precipitate acute cardiovascular responses ranging from increased blood pressure to myocardial infarction. The next steps in epidemiologic research are to identify more clearly the putative PM causal components and size fractions linked to their sources. To advance this, we discuss in a companion article (Sioutas C, Delfino RJ, Singh M. 2005. Environ Health Perspect 113:947–955) the need for and methods of UFP exposure assessment.

    Sampling-based Algorithms for Optimal Motion Planning

    During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely PRM* and RRT*, which are provably asymptotically optimal, i.e., such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs. Comment: 76 pages, 26 figures, to appear in the International Journal of Robotics Research
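
    For intuition, here is a heavily simplified, obstacle-free 2D Python sketch of the two ingredients that distinguish RRT* from RRT: connecting each new sample to the cheapest nearby parent and rewiring nearby nodes through it. The step size, the radius constant, and the absence of collision checking are simplifications for illustration; this is not the paper's full algorithm or analysis.

        import math
        import random

        class Node:
            def __init__(self, x, y, parent=None):
                self.x, self.y, self.parent = x, y, parent

        def dist(a, b):
            return math.hypot(a.x - b.x, a.y - b.y)

        def cost(n):
            # Cost-to-come along the parent chain; recomputing it keeps rewired
            # subtrees consistent without explicit cost propagation.
            c = 0.0
            while n.parent is not None:
                c += dist(n, n.parent)
                n = n.parent
            return c

        def steer(src, target_xy, eta):
            dx, dy = target_xy[0] - src.x, target_xy[1] - src.y
            d = math.hypot(dx, dy) or 1e-12
            s = min(1.0, eta / d)
            return Node(src.x + s * dx, src.y + s * dy)

        def rrt_star(start=(0.0, 0.0), iters=2000, gamma=1.5, eta=0.2):
            tree = [Node(*start)]
            for _ in range(iters):
                x_rand = (random.random(), random.random())   # sample the unit square
                nearest = min(tree, key=lambda n: math.hypot(n.x - x_rand[0],
                                                             n.y - x_rand[1]))
                new = steer(nearest, x_rand, eta)
                # Shrinking connection radius, echoing the asymptotic-optimality analysis.
                r = min(gamma * math.sqrt(math.log(len(tree) + 1) / (len(tree) + 1)), eta)
                near = [n for n in tree if dist(n, new) <= r] or [nearest]
                new.parent = min(near, key=lambda n: cost(n) + dist(n, new))  # choose parent
                tree.append(new)
                for n in near:                                 # rewire through the new node
                    if cost(new) + dist(new, n) < cost(n):
                        n.parent = new
            return tree

        tree = rrt_star()
        goal = Node(0.9, 0.9)
        best = min(tree, key=lambda n: cost(n) + dist(n, goal))
        print("approximate cost to (0.9, 0.9):", round(cost(best) + dist(best, goal), 3))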

    Seasonal Analyses of Air Pollution and Mortality in 100 U.S. Cities

    Time series models relating short-term changes in air pollution levels to daily mortality counts typically assume that the effects of air pollution on the log relative rate of mortality do not vary with time. However, these short-term effects might plausibly vary by season. Changes in the sources of air pollution and meteorology can result in changes in characteristics of the air pollution mixture across seasons. The authors develop Bayesian semi-parametric hierarchical models for estimating time-varying effects of pollution on mortality in multi-site time series studies. The methods are applied to the updated National Morbidity and Mortality Air Pollution Study database for the period 1987–2000, which includes data for 100 U.S. cities. At the national level, a 10 μg/m³ increase in PM(10) at lag 1 is associated with a 0.15 (95% posterior interval: -0.08, 0.39), 0.14 (-0.14, 0.42), 0.36 (0.11, 0.61), and 0.14 (-0.06, 0.34) percent increase in mortality for winter, spring, summer, and fall, respectively. An analysis by geographical regions finds a strong seasonal pattern in the northeast (with a peak in summer) and little seasonal variation in the southern regions of the country. These results provide useful information for understanding particle toxicity and guiding future analyses of particle constituent data.
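
    The percent increases quoted above follow from the usual log-linear reading of time-series coefficients: a change of Δ in PM(10) multiplies the mortality rate by exp(β·Δ). A minimal sketch of that conversion follows, with a made-up coefficient chosen only to reproduce the scale of the summer estimate (it is not a value reported in the paper).

        import math

        # Hypothetical log relative-rate coefficient per 1 microgram/m^3 of PM10 at lag 1
        # (illustrative value; the paper reports season-specific estimates).
        beta = 0.00036
        delta = 10.0  # increment of interest, micrograms/m^3

        percent_increase = 100.0 * (math.exp(beta * delta) - 1.0)
        print(f"{percent_increase:.2f}% increase in mortality per {delta:.0f} microgram/m^3")
        # -> 0.36%, the same scale as the summer estimate quoted in the abstract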

    A well-separated pairs decomposition algorithm for k-d trees implemented on multi-core architectures

    Variations of k-d trees represent a fundamental data structure used in Computational Geometry, with numerous applications in science: for example, particle track fitting in the software of the LHC experiments, and simulations of N-body systems in the study of the dynamics of interacting galaxies, particle beam physics, and molecular dynamics in biochemistry. The many-body tree methods devised by Barnes and Hut in the 1980s and the Fast Multipole Method introduced in 1987 by Greengard and Rokhlin use variants of k-d trees to reduce the upper bounds on computation time from O(n²) to O(n log n) and even O(n). We present an algorithm that uses the principle of well-separated pairs decomposition to always produce compressed trees in O(n log n) work. We present and evaluate parallel implementations of the algorithm that take advantage of multi-core architectures. This work was supported by the Science and Technology Facilities Council, UK.
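
    As background, the decomposition rests on a simple geometric test. The Python sketch below shows one common form of the well-separatedness check (enclosing balls derived from bounding boxes); the construction and names are illustrative, and the paper's compressed-tree algorithm itself is not reproduced here.

        import math
        from typing import List, Tuple

        Point = Tuple[float, ...]

        def enclosing_ball(points: List[Point]):
            """A simple (not minimal) enclosing ball: bounding-box center and half diagonal."""
            d = len(points[0])
            lo = [min(p[i] for p in points) for i in range(d)]
            hi = [max(p[i] for p in points) for i in range(d)]
            center = tuple((lo[i] + hi[i]) / 2 for i in range(d))
            radius = math.sqrt(sum((hi[i] - lo[i]) ** 2 for i in range(d))) / 2
            return center, radius

        def well_separated(A: List[Point], B: List[Point], s: float) -> bool:
            """A and B are s-well-separated if balls of a common radius r that enclose
            them lie at distance at least s * r from each other."""
            (ca, ra), (cb, rb) = enclosing_ball(A), enclosing_ball(B)
            r = max(ra, rb)
            return math.dist(ca, cb) - 2 * r >= s * r

        print(well_separated([(0, 0), (1, 1)], [(10, 10), (11, 12)], s=2.0))  # True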

    A distance for partially labeled trees

    In a number of practical situations, data have structure, and the relations among their component parts need to be encoded with suitable data models. Trees are usually used for representing data for which hierarchical relations can be defined. This is the case in a number of fields such as image analysis, natural language processing, protein structure, or music retrieval, to name a few. In those cases, procedures for comparing trees are very relevant. An approximate tree edit distance algorithm has been introduced for working with trees labeled only at the leaves. In this paper, it is applied to handwritten character recognition, providing accuracies comparable to those of the most comprehensive search method while being as efficient as the fastest. This work is supported by the Spanish Ministry projects DRIMS (TIN2009-14247-C02) and Consolider Ingenio 2010 (MIPRCV, CSD2007-00018), partially supported by the EU ERDF and the Pascal Network of Excellence.
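
    The abstract does not spell out the approximate distance itself, so the Python sketch below is only a stand-in in the same spirit: it compares two leaf-labeled trees via the edit distance between their left-to-right leaf sequences. All names are hypothetical, and this is not the paper's algorithm.

        from functools import lru_cache

        # Illustrative only: compare trees labeled at the leaves by the edit distance
        # between their left-to-right leaf sequences.

        def leaves(tree):
            """Trees are nested tuples; leaves are plain strings."""
            if isinstance(tree, str):
                return [tree]
            return [leaf for child in tree for leaf in leaves(child)]

        def edit_distance(a, b):
            @lru_cache(maxsize=None)
            def d(i, j):
                if i == 0:
                    return j
                if j == 0:
                    return i
                sub = 0 if a[i - 1] == b[j - 1] else 1
                return min(d(i - 1, j) + 1, d(i, j - 1) + 1, d(i - 1, j - 1) + sub)
            return d(len(a), len(b))

        def leaf_sequence_distance(t1, t2):
            return edit_distance(tuple(leaves(t1)), tuple(leaves(t2)))

        t1 = (("a", "b"), ("c",))
        t2 = (("a",), ("b", "d"))
        print(leaf_sequence_distance(t1, t2))   # -> 1 (substitute c with d)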