
    Overview on agent-based social modelling and the use of formal languages

    Transdisciplinary Models and Applications investigates a variety of programming languages used in validating and verifying models in order to assist in their eventual implementation. This book will explore different methods of evaluating and formalizing simulation models, enabling computer and industrial engineers, mathematicians, and students working with computer simulations to thoroughly understand the progression from simulation to product, improving the overall effectiveness of modeling systems. Postprint (author's final draft).

    A Rule-driven Approach for Defining the Behavior of Negotiating Software Agents

    One problem with existing agent-mediated negotiation systems is that they rely on ad hoc, static, non-adaptive, and hardcoded schemes to represent the behaviour of agents. This limitation is probably due to the complexity of the negotiation task itself. Indeed, while negotiating, software (and human) agents face tough decisions. These decisions are based not only on the information made available by the negotiation server, but also on the behaviour of the other participants in the negotiation process. The information and the behaviour in question are constantly changing and highly uncertain. In the first part of the paper, we propose a rule-driven approach to represent, manage and explore negotiation strategies and coordination information. To that end, we divide the behaviour of negotiating agents into protocols, strategies and coordination. Among the many advantages of the proposed solution, we can cite the high level of abstraction, the closeness to human understanding, the versatility, and the possibility to modify the agents' behaviour during the negotiation process. To validate our solution, we ran many agent tournaments, and used the rule-driven approach to implement bidding strategies that are common in the English and Dutch auctions. We also implemented simple coordination schemes across several auctions. The ongoing validation work is detailed and discussed in the second part of the paper. Keywords: e-negotiation, online auction, software agent, negotiation strategy, coordination, rule-based system, rule engine.
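    A minimal sketch of what such a rule-driven bidding strategy might look like, assuming a toy rule representation in Python; the Rule class, the negotiation-state fields, and the two auction rules below are illustrative and are not the paper's actual rule language or rule engine.

```python
# Illustrative sketch of a rule-driven bidding strategy, in the spirit of the
# protocol/strategy/coordination split described above. The Rule class, the
# state fields, and the thresholds are hypothetical, not the authors' rules.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]           # predicate over the negotiation state
    action: Callable[[dict], Optional[float]]   # returns a bid, or None to stay silent

def fire_first(rules, state):
    """Return the action of the first rule whose condition holds (simple conflict resolution)."""
    for rule in rules:
        if rule.condition(state):
            return rule.action(state)
    return None

# English auction: outbid by a fixed increment while the price stays under our valuation.
english_rules = [
    Rule("outbid_within_budget",
         lambda s: s["current_price"] + s["increment"] <= s["valuation"] and not s["we_are_leading"],
         lambda s: s["current_price"] + s["increment"]),
]

# Dutch auction: accept the descending price as soon as it drops to our target.
dutch_rules = [
    Rule("accept_at_target",
         lambda s: s["current_price"] <= s["valuation"],
         lambda s: s["current_price"]),
]

state = {"current_price": 95.0, "increment": 5.0, "valuation": 110.0, "we_are_leading": False}
print(fire_first(english_rules, state))  # -> 100.0
```

    Because the rules are plain data, they can be swapped or edited while a negotiation is running, which is the kind of flexibility the rule-driven approach is meant to provide.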

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as an accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name just a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions of future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).

    From Social Simulation to Integrative System Design

    As the recent financial crisis showed, there is today a strong need to gain an "ecological perspective" on all relevant interactions in socio-economic-techno-environmental systems. For this, we suggest setting up a network of Centers for Integrative Systems Design, which would be able to run all potentially relevant scenarios, identify causality chains, explore feedback and cascading effects for a number of model variants, and determine the reliability of their implications (given the validity of the underlying models). They would be able to detect possible negative side effects of policy decisions before they occur. The Centers belonging to this network of Integrative Systems Design Centers would each be focused on a particular field, but they would be part of an attempt to eventually cover all relevant areas of society and the economy and integrate them within a "Living Earth Simulator". The results of all research activities of such Centers would be turned into informative input for political Decision Arenas. For example, Crisis Observatories (for financial instabilities, shortages of resources, environmental change, conflict, the spreading of diseases, etc.) would be connected with such Decision Arenas for the purpose of visualization, in order to make complex interdependencies understandable to scientists, decision-makers, and the general public. Comment: 34 pages, Visioneer White Paper, see http://www.visioneer.ethz.c

    Synergy of Building Cybersecurity Systems

    The development of the modern world community is closely related to advances in computing resources and cyberspace. The formation and expansion of the range of services is based on the achievements of mankind in the field of high technologies. However, the rapid growth of computing resources and the emergence of a full-scale quantum computer tighten the requirements for security systems, not only for information and communication systems but also for cyber-physical systems and technologies. The first chapter discusses the methodological foundations of building security systems for critical infrastructure facilities, based on modeling the behavior of antagonistic agents in security systems. The second chapter proposes a concept of information security in social networks, based on mathematical models of data protection that take into account the influence of specific parameters of the social network and the effects on the network. The nonlinear relationships among the parameters of the defense system, attacks, and social networks, as well as the influence of individual user characteristics and the nature of the relationships between users, are taken into account. The third chapter considers practical aspects of the methodology for constructing post-quantum algorithms for the asymmetric McEliece and Niederreiter cryptosystems on algebraic codes (elliptic and modified elliptic codes), together with their mathematical models and practical algorithms. Hybrid crypto-code constructions of McEliece and Niederreiter on defective codes are proposed; they can significantly reduce the energy costs of implementation while ensuring the required level of cryptographic strength of the system as a whole. A concept of security for corporate information and educational systems, based on the construction of an adaptive information security system, is also proposed. ISBN 978-617-7319-31-2 (online); ISBN 978-617-7319-32-9 (print). How to cite: Yevseiev, S., Ponomarenko, V., Laptiev, O., Milov, O., Korol, O., Milevskyi, S. et al.; Yevseiev, S., Ponomarenko, V., Laptiev, O., Milov, O. (Eds.) (2021). Synergy of building cybersecurity systems. Kharkiv: PC TECHNOLOGY CENTER, 188. doi: http://doi.org/10.15587/978-617-7319-31-2
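    As background for the code-based cryptosystems named above, the following is a toy sketch of the plain McEliece construction over GF(2) with a tiny [7,4] Hamming code. It only illustrates the overall shape of the scheme (public key as a scrambled generator matrix, ciphertext as a codeword plus a low-weight error); it is not the book's hybrid crypto-code constructions on elliptic or defective codes, and the parameters are nowhere near secure.

```python
# Toy sketch of the McEliece structure over GF(2): public key G_pub = S * G * P,
# ciphertext c = m * G_pub + e with a weight-t error e. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Generator matrix of a [7,4] Hamming code (systematic form).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

# Private key: an invertible scrambler S and a permutation matrix P.
while True:
    S = rng.integers(0, 2, (4, 4))
    if gf2_rank(S) == 4:
        break
P = np.eye(7, dtype=int)[rng.permutation(7)]

G_pub = (S @ G @ P) % 2          # public key

def encrypt(message_bits, G_pub, t=1):
    e = np.zeros(G_pub.shape[1], dtype=int)
    e[rng.choice(G_pub.shape[1], size=t, replace=False)] = 1   # weight-t error
    return (message_bits @ G_pub + e) % 2

c = encrypt(np.array([1, 0, 1, 1]), G_pub)
print(c)  # the private key holder would undo P, decode the Hamming error, then undo S
```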

    A Taxonomy for and Analysis of Anonymous Communications Networks

    Any entity operating in cyberspace is susceptible to debilitating attacks. With cyber attacks intended to gather intelligence and disrupt communications rapidly replacing the threat of conventional and nuclear attacks, a new age of warfare is at hand. In 2003, the United States acknowledged that the speed and anonymity of cyber attacks make distinguishing among the actions of terrorists, criminals, and nation states difficult. Even President Obama’s Cybersecurity Chief-elect recognizes the challenge of increasingly sophisticated cyber attacks. Now through April 2009, the White House is reviewing federal cyber initiatives to protect US citizen privacy rights. Indeed, the rising quantity and ubiquity of new surveillance technologies in cyberspace enable instant, undetectable, and unsolicited information collection about entities. Hence, anonymity and privacy are becoming increasingly important issues. Anonymization enables entities to protect their data and systems from a diverse set of cyber attacks and preserves privacy. This research provides a systematic analysis of anonymity degradation, preservation and elimination in cyberspace to enhance the security of information assets. This includes discovery and obfuscation of the identities and actions of potential adversaries. First, novel taxonomies are developed for classifying and comparing well-established anonymous networking protocols. These expand the classical definition of anonymity and capture the peer-to-peer and mobile ad hoc anonymous protocol family relationships. Second, a unique synthesis of state-of-the-art anonymity metrics is provided. This significantly aids an entity’s ability to reliably measure changing anonymity levels, thereby increasing its ability to defend against cyber attacks. Finally, a novel epistemic-based mathematical model is created to characterize how an adversary reasons with knowledge to degrade anonymity. This offers multiple anonymity property representations and well-defined logical proofs to ensure the accuracy and correctness of current and future anonymous network protocol designs.
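    One widely used member of the family of anonymity metrics such a synthesis covers is the entropy-based degree of anonymity; a minimal sketch follows, assuming the attacker assigns a probability to each member of the anonymity set of being the sender. It is offered only as an illustration of this kind of metric, not as the dissertation's own taxonomy or epistemic model.

```python
# Entropy-based degree of anonymity: given the attacker's probability
# distribution over which member of the anonymity set sent a message,
# d = H(X) / log2(N). d = 1 means the attacker learned nothing; d = 0 means
# the sender is fully identified. Illustrative sketch only.
import math

def degree_of_anonymity(probs):
    n = len(probs)
    if n <= 1:
        return 0.0
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return entropy / math.log2(n)

print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))  # 1.0: uniform, maximal anonymity
print(degree_of_anonymity([0.7, 0.1, 0.1, 0.1]))      # ~0.68: attacker has partial knowledge
```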

    Enhancing Our Understanding of a Regional Economy: The Complementarity of CGE and EIO Models

    Economic impact models are powerful tools for the assessment of policy changes in regional economies. Computable General Equilibrium (CGE) models have grown in popularity, becoming the dominant choice of practitioners and academics in this field. This popularity has been at the expense of an older class of model, the Econometric Input-Output (EIO) model. The present paper demonstrates how both models, using the same input data, may yield different outcomes. However, the paper suggests that the EIO model has been underutilized, even though it provides a strong complementary tool that can accompany and enhance analyses based on a CGE approach. This paper urges regional economists to rediscover the EIO model, especially the two variants described in the paper, and bring them to the forefront of their research agenda.
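    For readers unfamiliar with the input-output side, the following minimal sketch shows the Leontief quantity calculation, x = (I - A)^(-1) d, that sits at the core of input-output impact analysis; the two-sector technical coefficients and the final-demand shock are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a Leontief input-output impact calculation: gross output
# x = (I - A)^(-1) d for a final-demand shock d. Coefficients are made up.
import numpy as np

A = np.array([[0.2, 0.3],     # inter-industry technical coefficients
              [0.1, 0.4]])
d = np.array([100.0, 50.0])   # final-demand shock by sector

leontief_inverse = np.linalg.inv(np.eye(2) - A)
x = leontief_inverse @ d
print(x)         # sector-level gross output required to meet the shock (~[166.7, 111.1])
print(x.sum())   # simple aggregate output impact, with no induced or price effects
```

    EIO and CGE models both build on this kind of inter-industry structure but close the model differently, which is why the same input data can yield different impact estimates.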

    Overview on Agent-Based Social Modelling and the Use of Formal Languages

    The use of agent-based modelling and simulation techniques in the social sciences has flourished in recent decades. The main reason is that the object of study in these disciplines, human society past or present, is difficult to analyse with classical analytical techniques: population dynamics and structures are inherently complex. Thus, other methodological techniques are needed to study this field adequately. In this context, agent-based modelling is encouraging the introduction of computer simulation to examine behavioural patterns in complex systems. Simulation provides a tool to artificially examine societies in which a large number of actors with decision capacity coexist and interact. However, compared to other fields of science, formal modelling has traditionally seen little use in these areas, in particular the use of formal languages during the modelling process. In this chapter, the authors aim to review the most relevant aspects of modelling in the social sciences and to discuss the use of formal languages by social scientists.
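    A minimal sketch of the kind of simulation the chapter has in mind, assuming a toy threshold-imitation rule on a ring network: agents adopt a behaviour once enough of their neighbours have adopted it. The rule, network, and parameters are invented for illustration and do not come from the chapter.

```python
# Toy agent-based social simulation: 100 agents on a ring, each adopting a
# behaviour once at least half of its 4 neighbours have adopted it.
N, K, THRESHOLD, STEPS = 100, 4, 0.5, 20

adopted = [False] * N
for seed_agent in range(5):          # a contiguous block of initial adopters
    adopted[seed_agent] = True

def neighbours(i):
    return [(i + offset) % N for offset in range(-K // 2, K // 2 + 1) if offset != 0]

for step in range(STEPS):
    snapshot = adopted[:]            # synchronous update
    for i in range(N):
        share = sum(snapshot[j] for j in neighbours(i)) / len(neighbours(i))
        if share >= THRESHOLD:
            adopted[i] = True

print(f"{sum(adopted)} of {N} agents adopted after {STEPS} steps")  # 45 with these settings
```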

    What to bid and when to stop

    Negotiation is an important activity in human society, and is studied by various disciplines, ranging from economics and game theory to electronic commerce, social psychology, and artificial intelligence. Traditionally, negotiation is a necessary, but also time-consuming and expensive activity. Therefore, in the last decades there has been a large interest in the automation of negotiation, for example in the setting of e-commerce. This interest is fueled by the promise of automated agents eventually being able to negotiate on behalf of human negotiators. Every year, automated negotiation agents are improving in various ways, and there is now a large body of negotiation strategies available, all with their unique strengths and weaknesses. For example, some agents are able to predict the opponent's preferences very well, while others focus more on having a sophisticated bidding strategy. The problem, however, is that there is little incremental improvement in agent design, as the agents are tested in varying negotiation settings, using a diverse set of performance measures. This makes it very difficult to meaningfully compare the agents, let alone their underlying techniques. As a result, we lack a reliable way to pinpoint the most effective components in a negotiating agent. There are two major advantages of distinguishing between the different components of a negotiating agent's strategy: first, it allows the study of the behavior and performance of the components in isolation. For example, it becomes possible to compare the preference learning component of all agents, and to identify the best among them. Second, we can proceed to mix and match different components to create new negotiation strategies, e.g. replacing the preference learning technique of an agent and then examining whether this makes a difference. Such a procedure enables us to combine the individual components to systematically explore the space of possible negotiation strategies. To develop a compositional approach to evaluate and combine the components, we identify structure in most agent designs by introducing the BOA architecture, in which we can develop and integrate the different components of a negotiating agent. We identify three main components of a general negotiation strategy: a bidding strategy (B), possibly an opponent model (O), and an acceptance strategy (A). The bidding strategy considers what concessions it deems appropriate given its own preferences, and takes the opponent into account by using an opponent model. The acceptance strategy decides whether offers proposed by the opponent should be accepted. The BOA architecture is integrated into a generic negotiation environment called Genius, which is a software environment for designing and evaluating negotiation strategies. To explore the negotiation strategy space of the negotiation research community, we extend the Genius repository with various existing agents and scenarios from the literature. Additionally, we organize a yearly international negotiation competition (ANAC) to harvest even more strategies and scenarios. ANAC also acts as an evaluation tool for negotiation strategies, and encourages the design of negotiation strategies and scenarios. We re-implement agents from the literature and from ANAC and decouple them to fit into the BOA architecture without introducing any changes in their behavior. For each of the three components, we manage to find and analyze the best ones for specific cases, as described below.
    We show that the BOA framework leads to significant improvements in agent design by winning ANAC 2013, which had 19 participating teams from 8 international institutions, with an agent that is designed using the BOA framework and is informed by a preliminary analysis of the different components. In every negotiation, one of the negotiating parties must accept an offer to reach an agreement. Therefore, it is important that a negotiator employs a proficient mechanism to decide under which conditions to accept. When contemplating whether to accept an offer, the agent is faced with the acceptance dilemma: accepting the offer may be suboptimal, as better offers may still be presented before time runs out. On the other hand, accepting too late may prevent an agreement from being reached, resulting in a break-off with no gain for either party. We classify and compare state-of-the-art generic acceptance conditions. We propose new acceptance strategies and we demonstrate that they outperform the other conditions. We also provide insight into why some conditions work better than others, and investigate correlations between the properties of the negotiation scenario and the efficacy of acceptance conditions. Later, we adopt a more principled approach by applying optimal stopping theory to calculate the optimal decision on the acceptance of an offer. We approach the decision of whether to accept as a sequential decision problem, by modeling the bids received as a stochastic process. We determine the optimal acceptance policies for particular opponent classes and we present an approach to estimate the expected range of offers when the type of opponent is unknown. We show that the proposed approach is able to find the optimal time to accept, and improves upon all existing acceptance strategies. Another principal component of a negotiating agent's strategy is its ability to take the opponent's preferences into account. The quality of an opponent model can be measured in two different ways. One is to use the agent's performance as a benchmark for the model's quality. We evaluate and compare the performance of a selection of state-of-the-art opponent modeling techniques in negotiation. We provide an overview of the factors influencing the quality of a model and we analyze how the performance of opponent models depends on the negotiation setting. We identify a class of simple and surprisingly effective opponent modeling techniques that did not receive much previous attention in the literature. The other way to measure the quality of an opponent model is to directly evaluate its accuracy by using similarity measures. We review all methods to measure the accuracy of an opponent model and then analyze how changes in accuracy translate into performance differences. Moreover, we pinpoint the best predictors of good performance. This leads to new insights concerning how to construct an opponent model, and what we need to measure when optimizing performance. Finally, we take two different approaches to gain more insight into effective bidding strategies. We present a new classification method for negotiation strategies, based on their pattern of concession making against different kinds of opponents. We apply this technique to classify some well-known negotiating strategies, and we formulate guidelines on how agents should bid in order to be successful, which gives insight into the bidding strategy space of negotiating agents.
    Furthermore, we apply optimal stopping theory again, this time to find the concessions that maximize utility for the bidder against particular opponents. We show that there is an interesting connection between optimal bidding and optimal acceptance strategies, in the sense that they are mirrored versions of each other. Lastly, after analyzing all components separately, we put the pieces back together again. We take all BOA components accumulated so far, including the best ones, and combine them to explore the space of negotiation strategies. We compute the contribution of each component to the overall negotiation result, and we study the interaction between components. We find that combining the best agent components indeed makes the strongest agents. This shows that the component-based view of the BOA architecture not only provides a useful basis for developing negotiating agents but also provides a useful analytical tool. By varying the BOA components we are able to demonstrate the contribution of each component to the negotiation result, and thus analyze the significance of each. The bidding strategy is by far the most important to consider, followed by the acceptance conditions and finally the opponent model. Our results validate the analytical approach of the BOA framework to first optimize the individual components, and then recombine them into a negotiating agent.
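    The following is a minimal sketch of the B/O/A decomposition just described: a time-dependent concession bidding strategy, a deliberately trivial opponent model, and an accept-if-better-than-my-next-bid acceptance condition composed into one agent. The interfaces, class names, and parameter values are illustrative assumptions and are not the Genius/BOA API.

```python
# Sketch of the BOA decomposition: bidding strategy (B), opponent model (O),
# and acceptance strategy (A) composed into a single negotiating agent.
from dataclasses import dataclass, field

@dataclass
class TimeDependentBidding:
    """B: concede from the best utility towards a reservation utility as the deadline nears."""
    e: float = 0.2                       # e < 1 gives slow (Boulware-style) concession
    reservation: float = 0.3

    def target_utility(self, t):         # t in [0, 1] is normalised negotiation time
        return self.reservation + (1.0 - self.reservation) * (1.0 - t ** (1.0 / self.e))

@dataclass
class BestSeenOpponentModel:
    """O: a trivial model that only tracks the best utility the opponent has offered us."""
    best_seen: float = 0.0

    def update(self, offered_utility):
        self.best_seen = max(self.best_seen, offered_utility)

@dataclass
class AcceptNext:
    """A: accept if the opponent's offer beats what we would bid ourselves next turn."""
    def accept(self, offered_utility, own_next_bid_utility):
        return offered_utility >= own_next_bid_utility

@dataclass
class BoaAgent:
    bidding: TimeDependentBidding = field(default_factory=TimeDependentBidding)
    opponent: BestSeenOpponentModel = field(default_factory=BestSeenOpponentModel)
    acceptance: AcceptNext = field(default_factory=AcceptNext)

    def respond(self, offered_utility, t):
        self.opponent.update(offered_utility)
        own_bid = self.bidding.target_utility(t)
        if self.acceptance.accept(offered_utility, own_bid):
            return ("accept", offered_utility)
        return ("counter-offer", own_bid)

agent = BoaAgent()
print(agent.respond(offered_utility=0.55, t=0.2))   # early: counter-offer near 1.0
print(agent.respond(offered_utility=0.75, t=0.95))  # near the deadline: accept
```

    Because each component sits behind its own small interface, swapping in a different bidding strategy, opponent model, or acceptance condition leaves the rest of the agent untouched, which is exactly what makes the mix-and-match analysis described above possible.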

    Modeling dynamic community acceptance of mining using agent-based modeling

    This research attempts to provide fundamental understanding of the relationship between the perceived sustainability of mineral projects and community acceptance. The main objective is to apply agent-based modeling (ABM) and discrete choice modeling to understand changes in community acceptance over time due to changes in community demographics and perceptions. This objective focuses on: 1) formulating agent utility functions for ABM based on discrete choice theory; 2) applying ABM to account for the effect of information diffusion on community acceptance; and 3) explaining the relationship between initial conditions, topology, and rate of interactions, on the one hand, and community acceptance on the other. To achieve this objective, the research relies on discrete choice theory, agent-based modeling, innovation and diffusion theory, and stochastic processes. Discrete choice models of individual preferences for mining projects were used to formulate utility functions for this research. To account for the effect of information diffusion on community acceptance, an agent-based model was developed to describe changes in community acceptance over time as a function of changing demographics and perceived sustainability impacts. The model was validated with discrete choice experimental data on acceptance of mining in Salt Lake City, Utah. The validated model was used in simulation experiments to explain the model's sensitivity to initial conditions, topology, and rate of interactions. The research shows that the model, with the base case social network, is more sensitive to homophily and the number of early adopters than to average degree (number of friends). Also, the dynamics of information diffusion are sensitive to differences in clustering in the social networks. Though the research examined the effect of three networks that differ in the type of homophily, it is their differences in clustering due to homophily that were correlated with information diffusion dynamics.
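    A minimal sketch of how a discrete choice (logit) utility might drive an agent's accept/reject decision in such a model, with a social-influence term standing in for the information-diffusion effect; the attributes, coefficients, and functional form below are invented for illustration and are not the estimates obtained from the Salt Lake City data.

```python
# Coupling a binary logit choice model to an agent's accept/reject decision.
# Coefficients and attributes are hypothetical, not estimated values.
import math
import random

random.seed(3)

# Hypothetical utility coefficients for perceived sustainability attributes.
BETA = {"jobs": 0.8, "water_risk": -1.2, "social_influence": 0.6}

def acceptance_probability(attributes, share_of_accepting_neighbours):
    """Binary logit: P(accept) = 1 / (1 + exp(-V)), where V is the systematic utility."""
    v = (BETA["jobs"] * attributes["jobs"]
         + BETA["water_risk"] * attributes["water_risk"]
         + BETA["social_influence"] * share_of_accepting_neighbours)
    return 1.0 / (1.0 + math.exp(-v))

def agent_decides(attributes, share_of_accepting_neighbours):
    return random.random() < acceptance_probability(attributes, share_of_accepting_neighbours)

project = {"jobs": 1.0, "water_risk": 0.5}        # normalised perceived impacts
print(acceptance_probability(project, 0.0))        # isolated agent
print(acceptance_probability(project, 0.8))        # most neighbours already accept
print(agent_decides(project, 0.8))
```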