    Privacy Guarantees through Distributed Constraint Satisfaction

    Distributed constraint satisfaction algorithms are often used to let agents find a solution while revealing as little as possible about their variables and constraints. So far, however, most DisCSP algorithms do not guarantee the privacy of this information. This paper describes some simple techniques that can be used with DisCSP algorithms such as DPOP, and that provide sensible privacy guarantees based on the distributed solving process without sacrificing its efficiency.
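    To make the baseline concrete, here is a minimal sketch of DPOP-style utility propagation on a three-variable chain, with hypothetical cost functions and domains (none of this comes from the paper itself). It illustrates the property such techniques build on: each agent sends only a projected utility vector up the chain, never its raw constraints.

```python
# Minimal DPOP-style utility propagation on a chain x1 - x2 - x3.
# Costs and domains are hypothetical, purely for illustration.
DOMAIN = [0, 1, 2]

def cost_x3(x2, x3):            # private to agent 3
    return (x2 - x3) ** 2

def cost_x2(x1, x2):            # private to agent 2
    return abs(x1 - x2)

# Agent 3 projects its own variable out and sends only the resulting
# vector indexed by x2's values -- cost_x3 itself is never revealed.
msg_3_to_2 = {x2: min(cost_x3(x2, x3) for x3 in DOMAIN) for x2 in DOMAIN}

# Agent 2 adds its private constraint, projects x2 out, and forwards.
msg_2_to_1 = {x1: min(cost_x2(x1, x2) + msg_3_to_2[x2] for x2 in DOMAIN)
              for x1 in DOMAIN}

# The root chooses its value from aggregated utilities alone.
x1_opt = min(DOMAIN, key=lambda v: msg_2_to_1[v])
print("root's optimal value:", x1_opt)
```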

    A Neural Network-Embedded Optimization Approach for Selecting Multiple Entries for March Madness

    Sports gambling is expected to grow in popularity in the US, as it has been legalized by many states since 2018 [1]. The growing availability of sports data and the development of new metrics for evaluating the performance of athletes and teams have enabled statistical approaches to decision-making problems in sports. While most of the sports betting literature investigates probabilistic models for predicting the outcome of a game, this thesis addresses the development of an optimal strategy for winning a sports betting contest. Specifically, we focus on the ESPN Tournament Challenge, a betting contest on the season-ending championship tournament of American college basketball, known as March Madness. The Tournament Challenge asks participants to pick the winner of each of the 63 games in March Madness, so there are 2^63 different ways of filling out the bracket, which makes the challenge a complex task. Every year, millions of people enter, and the contest adopts a top-heavy payoff structure: a participant must beat millions of others to receive a positive payoff. Kaplan et al. (2001) first introduced an exact approach that selects a single entry maximizing the expected score. We propose the first strategy that considers multiple dependent entries in the Tournament Challenge: it maximizes the expected score of the maximum-scoring entry among k entries. Two main challenges arise, namely (1) how to evaluate the objective function and (2) how to optimize it. We present three approaches for evaluating the objective function: an exact tree-based algorithm and two approximate models, one based on simulation and one on machine learning. Building on these models, we develop two heuristics for optimizing the objective function: a genetic algorithm and a neural network embedded in an integer programming model. Finally, we compare the expected score and the actual score obtained by each method on every tournament played since 2002. Computational experiments show that the proposed models clearly outperform the single-entry exact approach on every instance.
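    As a rough illustration of the simulation-based evaluation described above, the sketch below estimates the expected score of the best of k entries by Monte Carlo. All probabilities and entries are synthetic, games are treated as independent, and scoring is one point per game; the thesis's actual models handle bracket dependencies and the ESPN round-weighted scoring.

```python
import numpy as np

rng = np.random.default_rng(0)
N_GAMES, K, N_SIM = 63, 3, 10_000

# Hypothetical per-game probabilities that the favourite wins; a real model
# would derive these from team ratings and respect who-meets-whom bracket
# dependencies, which this sketch ignores.
p = rng.uniform(0.5, 0.95, size=N_GAMES)

# K entries, each a 0/1 pick vector (1 = pick the favourite in that game).
entries = rng.integers(0, 2, size=(K, N_GAMES))

# Simulated outcomes: True where the favourite wins.
outcomes = rng.random((N_SIM, N_GAMES)) < p

# scores[k, s] = number of correct picks by entry k in simulation s.
scores = (entries[:, None, :] == outcomes[None, :, :]).sum(axis=2)

# Objective from the thesis: expected score of the best entry in the set.
print(f"estimated E[max score over {K} entries]: {scores.max(axis=0).mean():.2f}")
```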

    Optimal allocation of static and dynamic reactive power support for enhancing power system security

    Power systems have undergone a dramatic transformation over the past few years in terms of government and private investment in areas such as renewable generation, smart-grid technologies for better control and operation of the grid, large-scale energy storage, and fast-responding reactive power sources. The ongoing growth of the electric power industry is driven mainly by deregulation and by the regulatory requirements that every participant in the power system must satisfy during the planning and operational phases. Since the worldwide blackouts, especially the 2003 blackout in the northeastern USA that affected roughly 50 million people, more attention has been given to reactive power planning. At present there is steady load growth but not enough transmission capacity to carry power to load centers, and transmission expansion is limited by high investment costs, difficulty in obtaining environmental clearance, and an unattractive cost-recovery structure. Moreover, conventional generators close to load centers are aging or ceasing operation because they cannot comply with new Environmental Protection Agency (EPA) policies such as the Cross-State Air Pollution Rule (CSAPR) and MACT, and they are being replaced by distant renewable energy sources. The traditional sources of dynamic reactive power support close to load centers are thus being retired, and the transmission network is overloaded more frequently than before. These issues lead to poor power quality and power system instability, and the problem worsens during contingencies, especially at high load levels. There is a clear need for static and dynamic monitoring of the power system, which can help planners and operators identify severe contingencies that cause voltage acceptability problems and system instability; it also becomes imperative to determine which buses are impacted by a severe contingency and by how much. Sufficient static and dynamic reactive power resources are therefore needed to ensure reliable operation during stressed conditions and contingencies. In this dissertation, a generic framework is developed for filtering and ranking severe contingencies, and vulnerable buses are identified and ranked. The next task, after filtering out severe contingencies, is to ensure the static and dynamic security of the system against them: the optimal locations and amounts of VAR support that guarantee acceptable voltage performance under all severe contingencies must be found. Considering contingencies in the optimization process leads to a security-constrained VAR allocation problem. The static VAR allocation problem is formulated as a mixed-integer nonlinear program (MINLP); the optimal dynamic VAR installation problem is solved in a dynamic framework and formulated as a Mixed Integer Dynamic Optimization (MIDO). Solving the VAR allocation problem for a set of severe contingencies is very complex, so an approach is developed in this work that reduces the overall complexity of the problem while still ensuring an acceptable optimal solution. The VAR allocation problem has two parts, an integer part and a nonlinear part; the integer part is solved by the branch-and-bound (B&B) method.

    To enhance the efficiency of B&B, system-based knowledge is used to customize the search process, and to reduce its complexity further, only selected candidate locations are considered instead of all plausible locations in the network. Candidate locations are selected according to how effectively they improve system voltage, and only these are used during optimization. The optimization process is divided into two parts, static and dynamic, a separation that is realistic and corresponds to industry practice. Immediately after a contingency the system enters a transient (dynamic) phase, which can last from a few milliseconds to a minute and during which fast-acting controllers restore the system; once the transients die out, the system reaches a steady state that can last for hours, supported by slow static controllers. Static optimization ensures acceptable system voltage and security during the steady state. The allocation it produces is valuable because the steady state is by far the longer phase, and it determines the amount of constant reactive power support needed to maintain steady system voltages. The optimal locations determined during static optimization are given preference in the dynamic optimization phase, which determines the optimal locations and amounts of dynamic reactive power support needed to ensure acceptable transient performance and security. To capture the true dynamic behavior of the system, dynamic models of components such as generators, exciters, loads, and reactive power sources are used. The approach developed in this work optimally allocates dynamic VAR sources, and the results demonstrate the effectiveness of the resulting reactive power planning tool: the proposed methodology optimally allocates static and dynamic VAR sources that ensure acceptable post-contingency power quality and system security, while the reduced complexity of the optimization keeps the problem manageable. We envision that the developed method will give system planners a useful tool for the optimal planning of static and dynamic reactive power support that ensures acceptable voltage performance and security.
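    The sketch below, with entirely hypothetical voltages, sensitivities, and costs, illustrates the shape of the static part of this procedure: screen candidate buses by voltage sensitivity, then search over the screened set for the cheapest VAR placement that restores all post-contingency voltages. Plain exhaustive search stands in for the customized branch-and-bound, and a one-line linearized voltage check stands in for a power-flow solution.

```python
from itertools import combinations

# Hypothetical screening data: post-contingency voltages (pu), linearized
# sensitivities dV/dQ (pu per MVAr), and device costs ($M). All invented.
V_POST = {"bus5": 0.91, "bus7": 0.93, "bus9": 0.90, "bus12": 0.96}
SENS   = {"bus5": 0.004, "bus7": 0.003, "bus9": 0.005, "bus12": 0.002}
COST   = {"bus5": 1.2, "bus7": 1.0, "bus9": 1.5, "bus12": 0.8}
V_MIN, Q_BLOCK = 0.95, 10.0        # voltage limit (pu), MVAr per device

# Step 1 (screening): keep only the most voltage-effective locations,
# mirroring the reduction of the B&B search space described above.
candidates = sorted(SENS, key=SENS.get, reverse=True)[:3]

def feasible(installed):
    """Crude linear check: a device is assumed to support only its own bus."""
    return all(V_POST[b] + (SENS[b] * Q_BLOCK if b in installed else 0.0)
               >= V_MIN for b in V_POST)

# Step 2: exhaustive search over the screened candidates stands in for B&B.
best = None
for r in range(len(candidates) + 1):
    for subset in combinations(candidates, r):
        if feasible(subset):
            c = sum(COST[b] for b in subset)
            if best is None or c < best[0]:
                best = (c, subset)

print("cheapest feasible placement:", best)
```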

    A contribution to the evaluation and optimization of networks reliability

    Efficient computation of system reliability is required in many sensitive networks. Despite the increasing power of computers and the proliferation of algorithms, the problem of quickly finding good solutions for large systems remains open. Efficient computation techniques have recently been recognized as significant advances toward solving the problem in a reasonable amount of time; however, they apply only to special categories of networks, and more effort is still needed to reach a unified method giving an exact solution. Assessing the reliability of networks is a very complex combinatorial problem that requires powerful computing resources. Several methods have been proposed in the literature: some have been implemented, notably minimal-set enumeration and factoring methods, while others have remained purely theoretical. This thesis treats the evaluation and optimization of network reliability. Several issues are addressed, including the development of a methodology for modeling networks with a view to evaluating their reliability; this methodology was validated on an extended radio communication network recently deployed to cover the needs of the entire province of Quebec. Several algorithms were also developed to generate the minimal paths and cuts of a given network. The generation of paths and cuts is an important contribution to the reliability evaluation and optimization process, and these algorithms handled several test networks, as well as the provincial radio communication network, quickly and efficiently. The algorithms were subsequently used to assess reliability with a method based on binary decision diagrams. Several theoretical contributions also made it possible to establish an exact solution for the reliability of imperfect stochastic networks, in which both edges and nodes are subject to failure, within the framework of factoring methods. From this research, several tools were implemented to evaluate and optimize network reliability, and the results clearly show a significant gain in execution time and memory usage compared with many other implementations. Key words: reliability, networks, optimization, binary decision diagrams, minimal path and cut sets, algorithms, Birnbaum index, radio telecommunication systems, programs.
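    As a small, self-contained illustration of the factoring decomposition mentioned above (for perfect nodes only; the thesis's contribution extends factoring to networks with imperfect nodes), the sketch below computes two-terminal reliability by recursively contracting a working edge and deleting a failed one. The example network and its edge reliabilities are invented.

```python
def reliability(edges, s, t):
    """Two-terminal reliability by factoring. `edges` is a list of
    (u, v, p) with independent edge reliabilities p; nodes are perfect."""
    if s == t:                       # s and t merged: connected for sure
        return 1.0
    if not edges:                    # no edges left and s != t
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    # Contract: the edge works, so merge node v into node u everywhere.
    merged = [(u if a == v else a, u if b == v else b, q) for a, b, q in rest]
    merged = [(a, b, q) for a, b, q in merged if a != b]   # drop self-loops
    return (p * reliability(merged, u if s == v else s, u if t == v else t)
            + (1 - p) * reliability(rest, s, t))           # Delete: it fails

# Classic bridge network, every link 0.9 reliable; expected value ~0.97848.
E = [("s", "a", 0.9), ("s", "b", 0.9), ("a", "b", 0.9),
     ("a", "t", 0.9), ("b", "t", 0.9)]
print(f"R(s,t) = {reliability(E, 's', 't'):.5f}")
```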

    Distributed Constraint Optimization: Privacy Guarantees and Stochastic Uncertainty

    Distributed Constraint Satisfaction (DisCSP) and Distributed Constraint Optimization (DCOP) are formal frameworks that can be used to model a variety of problems in which multiple decision-makers cooperate towards a common goal: from computing an equilibrium of a game, to vehicle routing problems, to combinatorial auctions. In this thesis, we independently address two important issues in such multi-agent problems: 1) how to provide strong guarantees on the protection of the privacy of the participants, and 2) how to anticipate future, uncontrollable events. On the privacy front, our contributions depart from previous work in two ways. First, we consider not only constraint privacy (the agents' private costs) and decision privacy (keeping the complete solution secret), but also two other types of privacy that have been largely overlooked in the literature: agent privacy, which has to do with protecting the identities of the participants, and topology privacy, which covers information about the agents' co-dependencies. Second, while previous work focused mainly on quantitatively measuring and reducing privacy loss, our algorithms provide stronger, qualitative guarantees on what information will remain secret. Our experiments show that it is possible to provide such privacy guarantees while still scaling to much larger problems than the previous state of the art. When it comes to reasoning under uncertainty, we propose an extension to the DCOP framework, called DCOP under Stochastic Uncertainty (StochDCOP), which includes uncontrollable random variables with known probability distributions that model uncertain future events. The problem becomes one of making "optimal" offline decisions, before the true values of the random variables can be observed. We consider three possible concepts of optimality: minimizing the expected cost, minimizing the worst-case cost, or maximizing the probability of a-posteriori optimality. We propose a new family of StochDCOP algorithms, exploring the tradeoffs between solution quality, computational and message complexity, and privacy. In particular, we show how discovering and reasoning about co-dependencies on common random variables can yield higher-quality solutions.
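    The following toy sketch, with an invented cost table and distribution, contrasts the three optimality concepts mentioned above for a single controllable variable and a single random variable; it is only meant to show how the criteria can disagree, not to reproduce the StochDCOP algorithms themselves.

```python
# Hypothetical toy: one decision variable x, one random variable r with a
# known distribution, and a cost table cost[x][r] (all numbers invented).
X = [0, 1, 2]
R_DIST = {0: 0.5, 1: 0.3, 2: 0.2}                  # P(r = value)
cost = {0: {0: 1, 1: 4, 2: 9},
        1: {0: 3, 1: 2, 2: 3},
        2: {0: 6, 1: 5, 2: 1}}

# 1) Minimize expected cost.
x_exp = min(X, key=lambda x: sum(p * cost[x][r] for r, p in R_DIST.items()))

# 2) Minimize worst-case cost.
x_rob = min(X, key=lambda x: max(cost[x][r] for r in R_DIST))

# 3) Maximize the probability of being optimal once r is observed.
def p_posteriori_opt(x):
    return sum(p for r, p in R_DIST.items()
               if cost[x][r] == min(cost[y][r] for y in X))
x_post = max(X, key=p_posteriori_opt)

print(x_exp, x_rob, x_post)   # here: 1, 1, 0 -- the criteria can disagree
```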

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Advanced Information Systems and Technologies

    This book comprises the proceedings of the V International Scientific Conference "Advanced Information Systems and Technologies" (AIST-2017). The papers cover issues related to system analysis and modeling, project management, information systems engineering, intelligent data processing, and computer networking and telecommunications. They will be useful for students, graduate students, and researchers interested in computer science.

    An Artificial Immune System-Inspired Multiobjective Evolutionary Algorithm with Application to the Detection of Distributed Computer Network Intrusions

    Today's predominantly employed signature-based intrusion detection systems are reactive in nature and storage-limited. Their operation depends on catching an instance of an intrusion or virus after a potentially successful attack, performing post-mortem analysis on that instance, and encoding it into a signature stored in an anomaly database. The time required to perform these tasks leaves a window of vulnerability in DoD computer systems. Further, because of the current maximum size of an Internet Protocol-based message, the database would have to maintain 256^65535 possible signature combinations. To tighten this response cycle within storage constraints, this thesis presents an Artificial Immune System-inspired Multiobjective Evolutionary Algorithm intended to measure the vector of trade-off solutions among detectors with regard to two independent objectives: best classification fitness and optimal hypervolume size. Modeled in the spirit of the human biological immune system and intended to augment DoD network defense systems, our algorithm generates network traffic detectors that are dispersed throughout the network. These detectors promiscuously monitor network traffic for exact and variant abnormal system events, based only on each detector's own data structure and the intrusion detection domain truth set, and respond heuristically. The application domain employed for testing was the MIT-DARPA 1999 intrusion detection data set, composed of 7.2 million packets of notional Air Force base network traffic. Results show that our proof-of-concept algorithm correctly classifies at best 86.48% of normal and 99.9% of abnormal events, attributable to a detector affinity threshold typically between 39% and 44%. Further, four of the 16 intrusion sequences were classified with a 0% false positive rate.
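    As a loose illustration of the immune-system metaphor above, the sketch below generates detectors by negative selection against a synthetic "self" set and classifies events with an affinity threshold. The bit-string encoding, the Hamming affinity, and the 0.70 threshold are all invented for this toy; they do not reproduce the thesis's detector representation or its 39-44% threshold range.

```python
import random

random.seed(1)
N_BITS = 16             # toy feature-vector length; real traffic is richer

def affinity(a, b):
    """Fraction of agreeing bits (simple Hamming affinity)."""
    return sum(x == y for x, y in zip(a, b)) / N_BITS

def rand_vec():
    return tuple(random.randint(0, 1) for _ in range(N_BITS))

AFFINITY = 0.70         # hypothetical match threshold for this encoding

# "Self" set: feature vectors of known-normal traffic (synthetic here).
self_set = [rand_vec() for _ in range(50)]

# Negative selection: keep only candidates that do NOT match normal traffic.
detectors = []
while len(detectors) < 30:
    d = rand_vec()
    if all(affinity(d, s) < AFFINITY for s in self_set):
        detectors.append(d)

def classify(event):
    """Flag an event as abnormal if any detector matches it closely enough."""
    return "abnormal" if any(affinity(d, event) >= AFFINITY
                             for d in detectors) else "normal"

print(classify(rand_vec()))
```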