
    Information Variability Impacts in Auctions

    A wide variety of auction models exhibit close relationships between the winner's expected profit and the expected difference between the highest and second-highest order statistics of bidders' information, and between expected revenue and the second-highest order statistic of bidders' expected asset values. We use stochastic orderings to determine when greater environmental variability of bidders' information enhances expected profit and expected revenue.
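    The canonical setting where these relationships hold exactly is a second-price auction with independent private values: the winner's profit is the gap between the top two order statistics, and revenue is the second-highest value. A minimal Monte Carlo sketch (hypothetical uniform value distributions chosen for illustration, not the paper's model) shows how a mean-preserving spread of bidders' values raises both quantities:

```python
import random

def second_price_stats(draw, n_bidders=5, trials=20000, seed=0):
    """Estimate winner's expected profit and seller's expected revenue
    in a second-price auction with IID private values (toy setup)."""
    rng = random.Random(seed)
    profit = revenue = 0.0
    for _ in range(trials):
        vals = sorted(draw(rng) for _ in range(n_bidders))
        highest, second = vals[-1], vals[-2]
        profit += highest - second   # winner pays the second-highest value
        revenue += second
    return profit / trials, revenue / trials

# Mean-preserving spread: wider uniform support = more variable information.
low_var = lambda rng: rng.uniform(0.25, 0.75)
high_var = lambda rng: rng.uniform(0.0, 1.0)

p_lo, r_lo = second_price_stats(low_var)
p_hi, r_hi = second_price_stats(high_var)
print(p_hi > p_lo, r_hi > r_lo)  # both quantities grow with variability here
```

    For five uniform bidders the expected top-two gap is (b - a)/(n + 1), so doubling the support width doubles the winner's expected profit in this toy case.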

    Display blindness? Looking again at the visibility of situated displays using eye tracking

    Observational studies of situated displays have suggested that they are rarely looked at, and when they are, it is typically only for a short period of time. Using a mobile eye tracker during a realistic shopping task in a shopping center, we show that people look at displays more often than these observational studies would predict, but still only in short glances and often from quite far away. We characterize the patterns of eye movements that precede looking at a display and discuss some of the implications for the design of situated display technologies deployed in public space.

    A Survey on Approximation Mechanism Design without Money for Facility Games

    In a facility game, one or more facilities are placed in a metric space to serve a set of selfish agents whose addresses are their private information. In a classical facility game, each agent wants to be as close to a facility as possible, and the cost of an agent can be defined as the distance between her location and the closest facility. In an obnoxious facility game, each agent wants to be far away from all facilities, and her utility is the distance from her location to the facility set. The objective of each agent is to minimize her cost or maximize her utility. An agent may lie if, by doing so, she can obtain more benefit. We are interested in social choice mechanisms that do not utilize payments. The game designer aims at a mechanism that is strategy-proof, in the sense that no agent can benefit by misreporting her address, or, even better, group strategy-proof, in the sense that no coalition of agents can all benefit by lying. Meanwhile, it is desirable for the mechanism to be approximately optimal with respect to a chosen objective function. Several models for such approximation mechanism design without money for facility games have been proposed. In this paper we briefly review these models and related results for both deterministic and randomized mechanisms, and present a general framework for approximation mechanism design without money for facility games.
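    The classical single-facility game on a line admits a well-known strategy-proof mechanism: place the facility at the median of the reported locations. A minimal sketch with toy locations (the numbers are illustrative, not from the survey) shows why a misreport cannot help the reporting agent:

```python
import statistics

def median_mechanism(reports):
    """Place one facility at the median of reported locations on a line.
    The median mechanism is strategy-proof and minimizes total cost."""
    return statistics.median_low(reports)

def agent_cost(location, facility):
    """Classical-game cost: distance from the agent to the facility."""
    return abs(location - facility)

true_locs = [1.0, 4.0, 9.0]
honest = median_mechanism(true_locs)        # facility lands at 4.0

# The agent at 9.0 cannot pull the facility closer by misreporting:
# reporting above the median leaves it at 4.0, and reporting below the
# median (e.g. 2.0) drags it even farther from her true location.
lie = median_mechanism([1.0, 4.0, 6.0])
print(agent_cost(9.0, lie) >= agent_cost(9.0, honest))  # True
```

    Group strategy-proofness holds for the same reason: moving the median requires some coalition member to push it away from her own location.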

    Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics

    Dozens of new models of fixation prediction are published every year and compared on open benchmarks such as MIT300 and LSUN. However, progress in the field can be difficult to judge because models are compared using a variety of inconsistent metrics. Here we show that no single saliency map can perform well under all metrics. Instead, we propose a principled approach to solve the benchmarking problem by separating the notions of saliency models, maps, and metrics. Inspired by Bayesian decision theory, we define a saliency model to be a probabilistic model of fixation density prediction and a saliency map to be a metric-specific prediction derived from the model density which maximizes the expected performance on that metric given the model density. We derive these optimal saliency maps for the most commonly used saliency metrics (AUC, sAUC, NSS, CC, SIM, KL-Div) and show that they can be computed analytically or approximated with high precision. We show that this leads to consistent rankings in all metrics and avoids the penalties of using one saliency map for all metrics. Our method allows researchers to have their model compete on many different metrics with the state of the art on those metrics: "good" models will perform well in all metrics. Comment: published at ECCV 201
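    As a concrete example of a metric-specific prediction: NSS scores the z-scored saliency map at fixated pixels, and because z-scoring removes any affine transform, a map proportional to the model's fixation density already maximizes expected NSS. A toy sketch, using a hypothetical Gaussian density and hand-picked fixations rather than any benchmark data:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: mean z-scored map value at fixations."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[y, x] for y, x in fixations]))

# Toy fixation density (stand-in for a model's output), peaked at the centre.
yy, xx = np.mgrid[0:9, 0:9]
density = np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / 4.0)
density /= density.sum()

fixations = [(4, 4), (4, 5), (3, 4)]  # (row, col) pairs near the peak

# For NSS the density itself serves as the saliency map, so we can score it
# directly; fixations near the density peak yield a strongly positive NSS.
print(nss(density, fixations))
```

    Other metrics call for different transforms of the same density (e.g. AUC depends only on the map's pixel ranking), which is exactly why one fixed map cannot win everywhere.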

    Welcome to Pikettyville? Mapping London's Alpha Territories

    This paper considers the influence of the burgeoning global ‘super-rich’ on contemporary socio-spatialization processes in London in the light of a contemporary re-reading of Pahl’s classic volume, Whose City? It explores whether a turn to ‘big data’ – in the form of commercial geodemographic classifications – can offer any additional insights to a sociological approach to the study of the ‘super-rich’ that extends the ‘spatialization of class’ thesis further ‘up’ the class structure.

    Heavy quarkonium: progress, puzzles, and opportunities

    A golden age for heavy quarkonium physics dawned a decade ago, initiated by the confluence of exciting advances in quantum chromodynamics (QCD) and an explosion of related experimental activity. The early years of this period were chronicled in the Quarkonium Working Group (QWG) CERN Yellow Report (YR) in 2004, which presented a comprehensive review of the status of the field at that time and provided specific recommendations for further progress. However, the broad spectrum of subsequent breakthroughs, surprises, and continuing puzzles could only be partially anticipated. Since the release of the YR, the BESII program concluded only to give birth to BESIII; the B-factories and CLEO-c flourished; quarkonium production and polarization measurements at HERA and the Tevatron matured; and heavy-ion collisions at RHIC have opened a window on the deconfinement regime. All these experiments leave legacies of quality, precision, and unsolved mysteries for quarkonium physics, and therefore beg for continuing investigations. The plethora of newly found quarkonium-like states unleashed a flood of theoretical investigations into new forms of matter such as quark-gluon hybrids, mesonic molecules, and tetraquarks. Measurements of the spectroscopy, decays, production, and in-medium behavior of c\bar{c}, b\bar{b}, and b\bar{c} bound states have been shown to validate some theoretical approaches to QCD and highlight the lack of quantitative success of others. The intriguing details of quarkonium suppression in heavy-ion collisions that have emerged from RHIC have elevated the importance of separating hot- and cold-nuclear-matter effects in quark-gluon plasma studies. This review systematically addresses all these matters and concludes by prioritizing directions for ongoing and future efforts. Comment: 182 pages, 112 figures. Editors: N. Brambilla, S. Eidelman, B. K. Heltsley, R. Vogt. Section Coordinators: G. T. Bodwin, E. Eichten, A. D. Frawley, A. B. Meyer, R. E. Mitchell, V. Papadimitriou, P. Petreczky, A. A. Petrov, P. Robbe, A. Vair

    The meta-crisis of secular capitalism

    The current global economic crisis concerns the way in which contemporary capitalism has turned to financialisation as a double cure for both a falling rate of profit and a deficiency of demand. Although this turning is by no means unprecedented, policies of financialisation have depressed demand (in part as a result of the long-term stagnation of average wages) while at the same time not proving adequate to restore profits and growth. This paper argues that the current crisis is not simply the ‘normal’ kind, which stems from the constitutive need to balance growth of abstract wealth with demand for concrete commodities. Rather, it marks a meta-crisis of capitalism that has to do with the difficulties of sustaining abstract growth as such. This meta-crisis is the tendency at once to abstract from the real economy of productive activities and to reduce everything to its bare materiality. By contrast with a market economy that binds material value to symbolic meaning, a capitalist economy tends to separate matter from symbol and reduce materiality to calculable numbers representing ‘wealth’. Such a conception of wealth rests on the aggregation of abstract numbers that cuts out all the relational goods and the ‘commons’ on which shared prosperity depends.

    Self-Control of Traffic Lights and Vehicle Flows in Urban Road Networks

    Based on fluid-dynamic and many-particle (car-following) simulations of traffic flows in (urban) networks, we study the problem of coordinating incompatible traffic flows at intersections. Inspired by the observation of self-organized oscillations of pedestrian flows at bottlenecks [D. Helbing and P. Molnár, Phys. Rev. E 51 (1995) 4282–4286], we propose a self-organization approach to traffic light control. The problem can be treated as a multi-agent problem with interactions between vehicles and traffic lights. Specifically, our approach assumes a priority-based control of traffic lights by the vehicle flows themselves, taking into account short-sighted anticipation of vehicle flows and platoons. The considered local interactions lead to emergent coordination patterns such as "green waves" and achieve an efficient, decentralized traffic light control. While the proposed self-control adapts flexibly to local flow conditions and often leads to non-cyclical switching patterns with changing service sequences of different traffic flows, an almost periodic service may evolve under certain conditions, suggesting a spontaneous synchronization of traffic lights despite the varying delays due to variable vehicle queues and travel times. The self-organized traffic light control is based on an optimization and a stabilization rule, each of which performs poorly at high utilizations of the road network, while their proper combination reaches a superior performance. The result is a considerable reduction not only in average travel times but also in their variation. Similar control approaches could be applied to the coordination of logistic and production processes.
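    The priority-based idea can be caricatured in a few lines: give green to whichever flow has the largest anticipated demand, combining its current queue with a short-sighted forecast of arriving platoons. This is a deliberately simplified sketch with made-up flow names and rates, not the paper's actual controller (which couples this optimization rule with a stabilization rule):

```python
def choose_green(queues, arrival_rates, horizon=10.0):
    """Serve the traffic flow with the largest anticipated demand:
    current queue length plus forecast arrivals over a short horizon."""
    pressure = {flow: q + arrival_rates[flow] * horizon
                for flow, q in queues.items()}
    return max(pressure, key=pressure.get)

queues = {"north-south": 12, "east-west": 4}          # vehicles waiting now
rates = {"north-south": 0.2, "east-west": 1.2}        # vehicles per second

# East-west wins despite its shorter queue, because an approaching platoon
# dominates its anticipated demand: 4 + 12 > 12 + 2.
print(choose_green(queues, rates))  # east-west
```

    Because the decision is re-evaluated from local conditions rather than a fixed cycle, service sequences can change from one switching decision to the next, which is how non-cyclical patterns arise.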

    Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning

    Get PDF
    Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises of how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities, and can even do so when reliabilities change. We also consider the case of causal inference, where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
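    The target behavior the learner converges to is the standard Bayes-optimal rule for independent Gaussian cues: weight each cue by its reliability, i.e. its inverse variance. A minimal sketch with toy numbers (a reliable "visual" cue and a noisy "auditory" cue about the same location; the labels are illustrative):

```python
def integrate_cues(cues):
    """Reliability-weighted combination of independent Gaussian cues,
    given as (mean, variance) pairs. Returns the Bayes-optimal estimate
    and its variance; weights are proportional to 1/variance."""
    weights = [1.0 / var for _, var in cues]
    estimate = sum(w * x for w, (x, _) in zip(weights, cues)) / sum(weights)
    variance = 1.0 / sum(weights)        # fused estimate is more reliable
    return estimate, variance

# Visual cue (reliable, variance 0.5) vs. auditory cue (noisy, variance 2.0):
est, var = integrate_cues([(2.0, 0.5), (4.0, 2.0)])
print(est)  # 2.4: pulled mostly toward the reliable visual cue
```

    Note that the fused variance (0.4) is smaller than either cue's alone, which is the behavioral signature of integration; under causal inference, signals judged to come from separate objects would instead be left unfused.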