
    Fault-Tolerant Hotelling Games

    The n-player Hotelling game calls for each player to choose a point on the line segment, so as to maximize the size of his Voronoi cell. This paper studies fault-tolerant versions of the Hotelling game. Two fault models are studied: line faults and player faults. The first model assumes that the environment is prone to failure: with some probability, a disconnection occurs at a random point on the line, splitting it into two separate segments and modifying each player's Voronoi cell accordingly. A complete characterization of the Nash equilibria of this variant is provided for every n. Additionally, a one-to-one correspondence is shown between equilibria of this variant and of the Hotelling game with no faults. The second fault model assumes the players are prone to failure: each player is removed from the game independently with some fixed probability, changing the payoffs of the remaining players accordingly. It is shown that for n ≥ 3 this variant of the game has no Nash equilibria.
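
    The paper itself gives no code; the following is a minimal Python sketch, under the assumption that the segment is [0, 1], of the fault-free payoff the abstract describes (each player earns the length of his Voronoi cell). The function name voronoi_payoffs is mine, and co-located players are handled by arbitrary sort order rather than exact tie-splitting.

        def voronoi_payoffs(positions):
            # positions: one point in [0, 1] per player.
            # A player's cell runs from the midpoint to his left neighbor
            # to the midpoint to his right neighbor (segment ends otherwise).
            n = len(positions)
            order = sorted(range(n), key=lambda i: positions[i])
            pts = [positions[i] for i in order]
            payoffs = [0.0] * n
            for k, i in enumerate(order):
                left = 0.0 if k == 0 else (pts[k] + pts[k - 1]) / 2
                right = 1.0 if k == n - 1 else (pts[k] + pts[k + 1]) / 2
                payoffs[i] = right - left
            return payoffs

        # Example: two players at the quartiles each control half the segment.
        print(voronoi_payoffs([0.25, 0.75]))  # -> [0.5, 0.5]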

    Proceedings of the 17th Cologne-Twente Workshop on Graphs and Combinatorial Optimization


    Fair yet Asymptotically Equal Collaborative Learning

    In collaborative learning with streaming data, nodes (e.g., organizations) jointly and continuously learn a machine learning (ML) model by sharing the latest model updates computed from their latest streaming data. For the more resourceful nodes to be willing to share their model updates, they need to be fairly incentivized. This paper explores an incentive design that guarantees fairness, so that nodes receive rewards commensurate with their contributions. Our approach leverages an explore-then-exploit formulation to estimate the nodes' contributions (i.e., exploration) for realizing our theoretically guaranteed fair incentives (i.e., exploitation). However, we observe a "rich get richer" phenomenon arising from the existing approaches to guaranteeing fairness, which discourages the participation of the less resourceful nodes. To remedy this, we additionally preserve asymptotic equality, i.e., the less resourceful nodes eventually achieve performance equal to that of the more resourceful/"rich" nodes. We empirically demonstrate, in two settings with real-world streaming data (federated online incremental learning and federated reinforcement learning), that our proposed approach outperforms existing baselines in fairness and learning performance while remaining competitive in preserving equality. (Accepted to the 40th International Conference on Machine Learning (ICML 2023); 37 pages.)
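
    The abstract only names the explore-then-exploit formulation, so the following is a schematic Python sketch of that two-phase idea rather than the paper's algorithm; explore_then_exploit and eval_gain are hypothetical names, and the proportional reward rule is an assumption.

        def explore_then_exploit(nodes, rounds, explore_rounds, eval_gain):
            # eval_gain(node, t): observed validation gain attributable to
            # the node's update at round t (a stand-in for whatever
            # contribution measure the scheme actually uses).
            contrib = {v: 0.0 for v in nodes}
            rewards = {v: 0.0 for v in nodes}
            for t in range(rounds):
                if t < explore_rounds:
                    # Exploration: accumulate per-node contribution estimates.
                    for v in nodes:
                        contrib[v] += eval_gain(v, t)
                else:
                    # Exploitation: split a unit reward budget each round
                    # in proportion to the estimated contributions.
                    total = sum(contrib.values()) or 1.0
                    for v in nodes:
                        rewards[v] += contrib[v] / total
            return rewards

    Under this scheme a node's cumulative reward tracks its estimated contribution; the paper's asymptotic-equality property, by which less resourceful nodes eventually match the performance of the rich ones, is a separate mechanism this sketch omits.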

    Copyright Policies of Scientific Publications in Institutional Repositories: The Case of INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, moving gradually towards an opening of the research cycle. In the long term, this opening can resolve a persistent adversity faced by researchers: the existence of geographical and financial barriers that limit the conditions of access. Although scientific production is dominated largely by big commercial publishers, and is therefore subject to the rules they impose, the Open Access movement, whose first public declaration, the Budapest Declaration (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. The movement has been gaining importance in Portugal since 2003, when the first institutional repository at the national level was established. Institutional repositories emerged as a tool for disseminating an institution's scientific production, with the aim of opening up research results both before publication and peer review (preprint) and after them (postprint), and consequently increasing the visibility of the work carried out by a researcher and the respective institution. The study presented here, based on an analysis of the copyright policies of INESC TEC's most relevant scientific publications, shows not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that a substantial awareness-raising effort remains to be made, aimed at researchers, the institution, and society as a whole. The production of a set of recommendations, including the implementation of an institutional policy that encourages the self-archiving in the repository of publications developed within the institution, serves as a starting point for a greater appreciation of INESC TEC's scientific production.

    Abstracting Multidimensional Concepts for Multilevel Decision Making in Multirobot Systems

    Multirobot control architectures often require robotic tasks to be well defined before allocation. In complex missions, it is often difficult to decompose an objective into a set of well-defined tasks; human operators generate a simplified representation based on experience and estimation. The result is a set of robot roles which are not best suited to accomplishing those objectives. This thesis presents an alternative approach to generating multirobot control algorithms using task abstraction. By carefully analysing data recorded from similar systems, a multidimensional and multilevel representation of the mission can be abstracted, which can subsequently be converted into a robotic controller. This work, which focuses on the control of a team of robots playing the complex game of football, is divided into three sections. In the first section, we investigate the use of spatial structures in team games. Experimental results show that cooperative teams beat groups of individuals when competing for space, and that controlling space is important in the game of robot football. In the second section, we generate a multilevel representation of robot football based on spatial structures measured in recorded matches. By differentiating between spatial configurations appearing in desirable and undesirable situations, we can abstract a strategy composed of the more desirable structures. In the third section, five partial strategies are generated, based on the abstracted structures, and a suitable controller is devised. A set of experiments shows the success of the method in reproducing those key structures in a multirobot system. Finally, we compile our methods into a formal architecture for task abstraction and control. The thesis concludes that generating multirobot control algorithms using task abstraction is appropriate for problems which are complex, weakly defined, multilevel, dynamic, competitive, and unpredictable, and which display emergent properties.
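
    As a rough illustration of the differentiation step described above (not the thesis's actual pipeline), one could score each spatial configuration by how much more often it appears in desirable situations; desirable_structures and the smoothed-odds score below are hypothetical.

        from collections import Counter

        def desirable_structures(snapshots, min_count=5):
            # snapshots: iterable of (configuration_label, desirable_flag)
            # pairs taken from recorded matches.
            good = Counter(c for c, d in snapshots if d)
            bad = Counter(c for c, d in snapshots if not d)
            scores = {}
            for config in set(good) | set(bad):
                g, b = good[config], bad[config]
                if g + b >= min_count:
                    # Laplace-smoothed odds of occurring in a desirable
                    # situation versus an undesirable one.
                    scores[config] = (g + 1) / (b + 1)
            # The highest-scoring configurations would form the abstracted
            # strategy, to be converted into partial strategies downstream.
            return sorted(scores, key=scores.get, reverse=True)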

    Statistical Methods for Semiconductor Manufacturing

    In this thesis, techniques for non-parametric modeling, machine learning, filtering and prediction, and run-to-run control for semiconductor manufacturing are described. In particular, algorithms have been developed for two major application areas:
    - Virtual Metrology (VM) systems;
    - Predictive Maintenance (PdM) systems.
    Both technologies have proliferated in recent years in the semiconductor industries, called fabs, in order to increase productivity and decrease costs. VM systems aim at predicting quantities on the wafer, the main and basic product of the semiconductor industry, that may or may not be physically measurable. These quantities are usually 'costly' to measure in economic or temporal terms: the prediction is based on process variables and/or logistic information on the production that, instead, are always available and can be used for modeling without further costs. PdM systems, on the other hand, aim at predicting when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical methods and on the availability of process/logistic data, is in contrast with other classical approaches:
    - Run-to-Failure (R2F), where no interventions are performed on the machine/process until a breakdown or specification violation happens in production;
    - Preventive Maintenance (PvM), where maintenance actions are scheduled in advance based on temporal intervals or on production iterations.
    Both aforementioned approaches are not optimal, because they do not ensure that breakdowns and wafer waste will not happen and, in the case of PvM, they may lead to unnecessary maintenance actions without fully exploiting the lifetime of the machine or of the process. The main goal of this thesis is to prove, through several applications and feasibility studies, that the use of statistical modeling algorithms and control systems can improve the efficiency, yield, and profits of a manufacturing environment like the semiconductor one, where lots of data are recorded and can be employed to build mathematical models. We present several original contributions, both in the form of applications and methods. The introduction of this thesis is an overview of the semiconductor fabrication process: the most common practices in Advanced Process Control (APC) systems and the major issues for engineers and statisticians working in this area are presented. Furthermore, we illustrate the methods and mathematical models used in the applications. We then discuss in detail the following applications:
    - A VM system for the estimation of the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, exploiting Fault Detection and Classification (FDC) data. In this tool, a new clustering algorithm based on Information Theory (IT) elements has been proposed. In addition, the Least Angle Regression (LARS) algorithm has been applied for the first time to VM problems.
    - A new VM module for a multi-step (CVD, Etching and Lithography) line, where Multi-Task Learning techniques have been employed.
    - A new Machine Learning algorithm based on Kernel Methods for the estimation of scalar outputs from time-series inputs.
    - Run-to-Run control algorithms that exploit both physical measures and statistical ones (coming from a VM system); this tool is based on IT elements.
    - A PdM module based on filtering and prediction techniques (Kalman Filter, Monte Carlo methods), developed for the prediction of maintenance interventions in the Epitaxy process.
    - A PdM system based on Elastic Nets for maintenance predictions in the Ion Implantation tool.
    Several of the aforementioned works have been developed in collaboration with major European semiconductor companies in the framework of the European project UE FP7 IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); such collaborations will be specified during the thesis, underlining the practical aspects of implementing the proposed technologies in a real industrial environment.
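
    Since the abstract names Least Angle Regression for virtual metrology, here is a minimal, generic sketch of such a VM model using scikit-learn's Lars estimator; the synthetic data standing in for FDC features and measured thickness, and all shapes and names, are invented for illustration.

        import numpy as np
        from sklearn.linear_model import Lars
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins: 500 wafers, 40 FDC process variables, of
        # which only a few actually drive the "measured" thickness.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 40))
        w = np.zeros(40)
        w[:5] = [2.0, -1.0, 0.5, 1.5, -2.0]
        y = X @ w + 0.1 * rng.normal(size=500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = Lars(n_nonzero_coefs=10).fit(X_tr, y_tr)  # sparse linear VM model
        print("held-out R^2:", model.score(X_te, y_te))
        print("selected FDC features:", np.flatnonzero(model.coef_))

    A real deployment would use actual FDC traces and metrology values rather than synthetic data, and would typically cross-validate the sparsity level (e.g., with scikit-learn's LarsCV).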

    New Directions in the Ethics and Politics of Speech

    This book features new perspectives on the ethics and politics of free speech. Contributors draw on insights from philosophy, psychology, political theory, journalism, literature, and history to respond to pressing problems involving free speech in liberal societies. Recent years have seen an explosion of academic interest in free speech. However, most recent work has focused on constitutional protections for free speech and on issues related to academic freedom and campus politics. The chapters in this volume set their sights more broadly on the non-state problems that we collectively face in attempting to realize a healthy environment for free discourse. The volume's contributors share the assumption that threats to free speech do not come exclusively from state sources or bad actors, but also from ordinary strategic situations in which all may be acting in good faith. Contributors take seriously the idea that our current cultural moment provides plenty of reason to be concerned about our intellectual climate, and they offer new insights into how to make things better. New Directions in the Ethics and Politics of Speech will be of interest to researchers and students working in ethics, political philosophy, social theory, and law.

    Merger Policy and the 2010 Merger Guidelines

    New Horizontal Merger Guidelines were issued jointly by the Antitrust Division and the Federal Trade Commission in August 2010, replacing Guidelines issued in 1992 that no longer reflected either the law or government enforcement policy. The new Guidelines are a striking improvement. They are less technocratic, accommodating a greater and more realistic variety of theories about why mergers of competitors can be anticompetitive and, accordingly, a greater variety of methodologies for assessing them. The unifying theme of the Horizontal Merger Guidelines is to prevent the enhancement of market power that might result from mergers. The 2010 Guidelines state that “[a] merger enhances market power if it is likely to encourage one or more firms to raise price, reduce output, diminish innovation, or otherwise harm customers as a result of diminished competitive constraints or incentives.” The focus on enhancement of market power is not limited to price. Both the 1992 and 2010 Guidelines expressed a concern about mergers that might restrain innovation, but the emphasis in the 2010 Guidelines is greatly expanded. The Agencies are increasingly concerned about excessive reliance on market definition mechanisms that are, at best, rough approximations of reality. This increases the relative weight that will be given to other factors. That is not to suggest that the Agencies will not employ technical methodologies. Indeed, the economic methodologies for delineating markets have become more technical and precise over time, thanks in large part to economists who have been working on merger analysis, whether independently or under the aegis of one of the Agencies. Rather, the Guidelines indicate a broader range of approaches. Under the 2010 Guidelines, market definition essentially plays two roles: “First, [it] helps specify the line of commerce and section of the country in which the competitive concern arises. . . . Second, [it] allows the Agencies to identify market participants and measure market shares and market concentration.” Market definition (and the measuring of market shares and concentration) is no longer an end in itself, though it is “useful to the extent it illuminates the merger’s competitive effects.” The 2010 Guidelines also place a heavier emphasis on direct evidence of competitive effects in defining markets: “[e]vidence of competitive effects can inform market definition, just as market definition can be informative regarding competitive effects.” This essay attempts to integrate the 2010 Guidelines into merger enforcement policy in the United States, highlighting the most important differences between the approach taken in the 2010 Guidelines and those taken by the previous Guidelines, the enforcement agencies, and the courts.