
    The Wait-and-See Option in Ascending Price Auctions

    Cake-cutting protocols aim at dividing a ``cake'' (i.e., a divisible resource) and assigning the resulting portions to several players in such a way that each player feels they have received a ``fair'' amount of the cake. An important notion of fairness is envy-freeness: no player wishes to switch their portion of the cake with another player's portion. Despite intense efforts in the past, it is still an open question whether there is a \emph{finite bounded} envy-free cake-cutting protocol for an arbitrary number of players, and even for just four players. We introduce the notion of degree of guaranteed envy-freeness (DGEF) as a measure of how well a cake-cutting protocol can approximate the ideal of envy-freeness while keeping the protocol finite bounded (trading being disregarded). We propose a new finite bounded proportional protocol for any number n \geq 3 of players, and show that this protocol has a DGEF of 1 + \lceil n^2/2 \rceil. This is currently the best DGEF among known finite bounded cake-cutting protocols for an arbitrary number of players. We make the case that improving the DGEF even further is a tough challenge, and determine, for comparison, the DGEF of selected known finite bounded cake-cutting protocols. Comment: 37 pages, 4 figures
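    As a minimal illustration of the envy-freeness notion used above (not of the protocol proposed in the paper), the following Python sketch checks whether a finished allocation of cake pieces is envy-free under hypothetical additive valuations; the valuation numbers and piece structure are invented for the example.

```python
# Illustrative envy-freeness check for a finished allocation (hypothetical
# additive valuations; this is not the DGEF protocol from the abstract).

def is_envy_free(valuations, allocation):
    """valuations[i][j]: player i's value for piece j (additive).
    allocation[i]: set of piece indices assigned to player i."""
    def value(i, pieces):
        return sum(valuations[i][j] for j in pieces)
    n = len(allocation)
    return all(
        value(i, allocation[i]) >= value(i, allocation[k])
        for i in range(n) for k in range(n) if k != i
    )

# Hypothetical example: three players, four pieces.
valuations = [
    [0.4, 0.3, 0.2, 0.1],
    [0.25, 0.25, 0.25, 0.25],
    [0.1, 0.2, 0.3, 0.4],
]
allocation = [{0}, {1, 2}, {3}]
print(is_envy_free(valuations, allocation))  # False: player 0 values {1, 2} at 0.5 > 0.4
```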

    A Discrete and Bounded Envy-free Cake Cutting Protocol for Four Agents

    We consider the well-studied cake cutting problem in which the goal is to identify a fair allocation based on a minimal number of queries from the agents. The problem has attracted considerable attention within various branches of computer science, mathematics, and economics. Although the elegant Selfridge-Conway envy-free protocol for three agents has been known since 1960, it has been a major open problem for the last fifty years to obtain a bounded envy-free protocol for more than three agents. We propose a discrete and bounded envy-free protocol for four agents.
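    The query-based view of cake cutting mentioned above is usually formalised through cut and eval queries (the Robertson-Webb model; that name is my addition, the abstract does not spell it out). A minimal Python sketch of that interface, driving the classic two-agent cut-and-choose procedure rather than the four-agent protocol of the paper, might look as follows.

```python
# Sketch of the cut/eval query interface used to count queries in cake
# cutting, exercised by two-agent cut-and-choose; the four-agent protocol in
# the abstract is far more involved and is not reproduced here.

class UniformAgent:
    """Toy agent that values an interval by its length (an assumption)."""
    def eval(self, x, y):
        return y - x                      # value of the piece [x, y]
    def cut(self, x, alpha):
        return x + alpha                  # leftmost y with eval(x, y) == alpha

def cut_and_choose(a, b):
    """Cake = [0, 1]: agent a cuts into two pieces she values equally, b picks."""
    mid = a.cut(0.0, a.eval(0.0, 1.0) / 2)          # one eval + one cut query
    left, right = (0.0, mid), (mid, 1.0)
    if b.eval(*left) >= b.eval(*right):             # two eval queries
        return {"b": left, "a": right}
    return {"b": right, "a": left}

print(cut_and_choose(UniformAgent(), UniformAgent()))
```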

    Nanofabrication of Surface-Enhanced Raman Scattering Device by an Integrated Block-Copolymer and Nanoimprint Lithography Method

    The integration of block-copolymers and nanoimprint lithography presents a novel and cost-effective approach to achieving nanoscale patterning capabilities. The authors demonstrate the fabrication of a surface-enhanced Raman scattering device using templates created by the integrated block-copolymer and nanoimprint lithography method.

    Data Mining and Machine Learning in Astronomy

    We review the current state of data mining and machine learning in astronomy. 'Data mining' can have a somewhat mixed connotation from the point of view of a researcher in this field. If used correctly, it can be a powerful approach, holding the potential to fully exploit the exponentially increasing amount of available data and promising great scientific advances. However, if misused, it can be little more than the black-box application of complex computing algorithms that may give little physical insight and provide questionable results. Here, we give an overview of the entire data mining process, from data collection through to the interpretation of results. We cover common machine learning algorithms, such as artificial neural networks and support vector machines; applications from a broad range of areas in astronomy, emphasizing those where data mining techniques directly resulted in improved science; and important current and future directions, including probability density functions, parallel algorithms, petascale computing, and the time domain. We conclude that, so long as one carefully selects an appropriate algorithm and is guided by the astronomical problem at hand, data mining can be very much a powerful tool, and not a questionable black box. Comment: Published in IJMPD. 61 pages, uses ws-ijmpd.cls. Several extra figures, some minor additions to the text
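    To make one of the algorithms named above concrete, here is a small scikit-learn sketch that trains a support vector machine on synthetic three-dimensional features standing in for photometric colours; the "star vs. galaxy" framing and all numbers are invented for illustration and are not taken from the review.

```python
# Toy SVM classification on synthetic data standing in for photometric
# features; scikit-learn is assumed to be available, and the class labels
# ("stars" vs. "galaxies") are purely hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
colors = np.vstack([rng.normal(0.0, 1.0, (n, 3)),     # class 0 ("stars")
                    rng.normal(1.5, 1.0, (n, 3))])    # class 1 ("galaxies")
labels = np.repeat([0, 1], n)

X_train, X_test, y_train, y_test = train_test_split(
    colors, labels, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```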

    From the discrete to the continuous - towards a cylindrically consistent dynamics

    Discrete models usually represent approximations to continuum physics. Cylindrical consistency provides a framework in which discretizations mirror exactly the continuum limit; it is a standard tool for the kinematics of loop quantum gravity. We propose a coarse graining procedure that aims at constructing a cylindrically consistent dynamics in the form of transition amplitudes and Hamilton's principal functions. The coarse graining procedure, which is motivated by tensor network renormalization methods, provides a systematic approximation scheme towards this end. A crucial role in this coarse graining scheme is played by embedding maps that allow the interpretation of discrete boundary data as continuum configurations. These embedding maps should be selected according to the dynamics of the system, as a choice of embedding maps will determine a truncation of the renormalization flow. Comment: 22 pages
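    As a rough flavour of the tensor-network-style coarse graining referred to above (a crude one-dimensional analogue, not the construction of the paper), the sketch below blocks a toy transfer matrix with itself and uses an SVD-derived embedding map to truncate back to a smaller boundary space.

```python
# Crude 1D analogue of tensor-network coarse graining: block two fine-scale
# transfer matrices into one, then truncate with an SVD-based embedding map
# whose columns map coarse boundary data into fine configurations.
# Purely illustrative; not the paper's construction.
import numpy as np

def coarse_grain_step(T, chi):
    """Block T with itself and truncate to dimension chi via SVD."""
    blocked = T @ T                       # compose two lattice steps
    U, s, Vh = np.linalg.svd(blocked)
    P = U[:, :chi]                        # embedding map: coarse -> fine states
    return P.T @ blocked @ P              # coarse-grained transfer matrix

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
T = A + A.T                               # symmetric toy transfer matrix
coarse = coarse_grain_step(T, chi=4)
print(coarse.shape)                       # (4, 4)
```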

    The Fairness Challenge in Computer Networks

    In this paper, the concept of fairness in computer networks is investigated. We motivate the need to examine fairness issues by providing example future application scenarios where fairness support is needed in order to experience sufficient service quality. Fairness definitions from political science and their application to computer networks are described, and a state-of-the-art overview of research activities in fairness is given, ranging from issues such as queue management and TCP-friendliness to fairness in layered multi-rate multicast scenarios. With this paper we contribute to the ongoing research activities by defining the fairness challenge, with the purpose of helping direct future investigations to white spots on the map of research in fairness.
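    One concrete fairness metric that frequently appears in the networking literature surveyed here is Jain's fairness index; the short sketch below computes it for a set of per-flow throughputs (the throughput values are made up for the example).

```python
# Jain's fairness index for per-flow throughputs: (sum x_i)^2 / (n * sum x_i^2).
# The value ranges from 1/n (one flow gets everything) to 1 (perfectly fair).

def jain_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_index([10.0, 10.0, 10.0, 10.0]))  # 1.0: perfectly fair
print(jain_index([40.0, 0.0, 0.0, 0.0]))     # 0.25: one flow takes everything
```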

    GA4GH Phenopackets: A Practical Introduction.

    The Global Alliance for Genomics and Health (GA4GH) is developing a suite of coordinated standards for genomics for healthcare. The Phenopacket is a new GA4GH standard for sharing disease and phenotype information that characterizes an individual person, linking that individual to detailed phenotypic descriptions, genetic information, diagnoses, and treatments. A detailed example is presented that illustrates how to use the schema to represent the clinical course of a patient with retinoblastoma, including demographic information, the clinical diagnosis, phenotypic features and clinical measurements, an examination of the extirpated tumor, therapies, and the results of genomic analysis. The Phenopacket Schema, together with other GA4GH data and technical standards, will enable data exchange and provide a foundation for the computational analysis of disease and phenotype information to improve our ability to diagnose and conduct research on all types of disorders, including cancer and rare diseases.
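    For a feel of what such a record looks like on the wire, the snippet below prints a heavily reduced, hand-written phenopacket as JSON. The field names follow my reading of the Phenopacket Schema v2 and the ontology identifiers are placeholders; the authoritative definition is the protobuf schema published by GA4GH, not this sketch.

```python
# Hand-written, heavily reduced phenopacket-like record; field names reflect
# my reading of Phenopacket Schema v2 and the ontology IDs are placeholders.
import json

phenopacket = {
    "id": "example-retinoblastoma-case",            # hypothetical identifier
    "subject": {"id": "patient-1", "sex": "FEMALE"},
    "phenotypicFeatures": [
        {"type": {"id": "HP:0000000", "label": "placeholder phenotype term"}}
    ],
    "diseases": [
        {"term": {"id": "MONDO:0000000", "label": "placeholder disease term"}}
    ],
    "metaData": {
        "created": "2023-01-01T00:00:00Z",
        "createdBy": "example",
        "phenopacketSchemaVersion": "2.0",
    },
}

print(json.dumps(phenopacket, indent=2))
```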

    Multi-criteria Resource Allocation in Modal Hard Real-Time Systems

    In this paper, a novel resource allocation approach dedicated to hard real-time systems with distinctive operational modes is proposed. The aim of this approach is to reduce the energy dissipation of the computing cores by either powering them off or switching them into energy-saving states while still guaranteeing that all timing constraints are met. Moreover, the amount of data to be migrated during a mode change is minimised. Since the number of processing cores and their energy dissipation are often negatively correlated with the amount of data to be migrated during the mode change, there is a trade-off between these quantities, which is also analysed in this paper. The approach is illustrated with two industrial applications: an engine control management system and an engine control unit.
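    The energy-versus-migration trade-off described above can be pictured with a small sketch that filters candidate allocations down to the Pareto-optimal ones on the two criteria; the candidate names and numbers are invented, and this is not the allocation method proposed in the paper.

```python
# Keep only Pareto-optimal allocations when both energy dissipation and the
# amount of data migrated at a mode change are to be minimised (illustrative
# only; not the paper's multi-criteria allocation algorithm).

def pareto_front(candidates):
    """candidates: list of (name, energy, migrated_bytes); keep non-dominated ones."""
    front = []
    for name, e, m in candidates:
        dominated = any(e2 <= e and m2 <= m and (e2 < e or m2 < m)
                        for _, e2, m2 in candidates)
        if not dominated:
            front.append((name, e, m))
    return front

# Hypothetical allocations for one mode change.
candidates = [
    ("A", 12.0, 400), ("B", 9.5, 900), ("C", 14.0, 300), ("D", 10.0, 1000),
]
print(pareto_front(candidates))  # D is dominated by B, so A, B, C remain
```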

    Laplacians on discrete and quantum geometries

    We extend discrete calculus for arbitrary (p-form) fields on embedded lattices to abstract discrete geometries based on combinatorial complexes. We then provide a general definition of the discrete Laplacian using both the primal cellular complex and its combinatorial dual. The precise implementation of geometric volume factors is not unique and, comparing the definition with a circumcentric and a barycentric dual, we argue that the latter is, in general, more appropriate because it induces a Laplacian with more desirable properties. We give the expression of the discrete Laplacian in several different sets of geometric variables, suitable for computations in different quantum gravity formalisms. Furthermore, we investigate the possibility of transforming from position to momentum space for scalar fields, thus setting the stage for the calculation of the heat kernel and spectral dimension in discrete quantum geometries. Comment: 43 pages, 2 multiple figures. v2: discussion improved, references added, minor typos corrected
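    The simplest object touched on above, a Laplacian acting on scalar (0-form) data attached to the vertices of a combinatorial complex, can be sketched in a few lines of NumPy; the paper's construction additionally includes dual-volume factors and higher-degree forms, which are omitted here.

```python
# Combinatorial graph Laplacian L = D - A acting on vertex (0-form) data;
# geometric volume weights from the paper's definition are left out.
import numpy as np

def graph_laplacian(n_vertices, edges):
    """Build L = D - A for an undirected graph given as a list of vertex pairs."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    D = np.diag(A.sum(axis=1))
    return D - A

# A square with one diagonal, used as a small example 1-skeleton.
L = graph_laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
phi = np.array([1.0, 0.0, 0.0, 0.0])     # scalar field peaked on vertex 0
print(L @ phi)                            # discrete Laplacian of the field
```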