49 research outputs found

    How can polycentric governance of spectrum work?

    Spectrum policy in the US (and throughout most of the world) generally consists of a set of nationally determined policies that apply uniformly to all localities. However, there is considerable variation in the features (e.g., traffic demand or population density), requirements, and constraints of spectrum use on a local basis. Global spectrum policies designed to resolve a situation in New York City could well be overly restrictive for communities in rural areas (such as central Wyoming). At the same time, it is necessary to ensure that the more permissive policies of central Wyoming would not create problems for NYC (by ensuring, for example, that relocated radios adapt to local policies). Notions of polycentric governance articulated by the late E. Ostrom [16] argue that a greater good can be achieved by allowing for local autonomy in resource allocation. Shared access to spectrum is generally mediated through one of several technologies; as shown in [21], approaches mediated by geolocation databases are the most cost effective with today's technology. In the database-oriented Spectrum Access System (SAS) proposed by the FCC, users are granted (renewable) usage rights based on their location for a limited period of time. Because this system grants usage rights on a case-by-case basis, it may also allow for greater local autonomy while still maintaining global coordination. For example, it would be technically feasible for the database to include parameters such as transmit power, protocol, and bandwidth. Thus, such databases may provide the platform by which polycentric governance comes to spectrum management. In this paper, we explore, through some case examples, what polycentric governance of spectrum might look like and how it could be implemented in a database-driven spectrum management system. In many ways this paper is a complement to [20], which evaluated emerging SAS architectures using Ostrom's socioeconomic theory; this paper explores how a SAS-based system could be constructed that is consistent with Ostrom's polycentric governance ideas. Our approach is to treat spectrum management as an emergent phenomenon rather than a top-down system. This paper describes the key details of this system and presents some initial modeling results in comparison with the traditional global model of spectrum regulation. It also discusses some of the concerns associated with this approach.
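
    As a minimal sketch of the idea, the following hypothetical snippet shows how a geolocation database could return locally determined, time-limited usage rights; the region names, parameter values, and grant fields are illustrative assumptions, not part of any actual SAS specification.

```python
from dataclasses import dataclass

# Hypothetical illustration of a database-driven SAS granting
# location-dependent, time-limited usage rights. Region names and
# parameter values are invented for illustration only.

@dataclass
class Grant:
    max_power_dbm: float   # locally permitted transmit power
    bandwidth_mhz: float   # locally permitted channel bandwidth
    protocol: str          # locally required access protocol
    lease_hours: int       # the grant is renewable, not permanent

# Each region publishes its own policy, reflecting local demand and
# constraints (polycentric governance), while the database keeps the
# global record needed for coordination.
LOCAL_POLICIES = {
    "dense_urban": Grant(max_power_dbm=20, bandwidth_mhz=10,
                         protocol="listen-before-talk", lease_hours=1),
    "rural":       Grant(max_power_dbm=36, bandwidth_mhz=40,
                         protocol="any", lease_hours=24),
}

def request_grant(region: str) -> Grant:
    """Return the usage rights that apply at the requester's location."""
    return LOCAL_POLICIES[region]

# A radio relocating from rural Wyoming to New York City simply receives
# the stricter urban grant on its next (renewable) request.
print(request_grant("dense_urban"))
```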

    A comparative study of game theoretic and evolutionary models for software agents

    Most existing work on the study of bargaining behaviour uses techniques from game theory. Game-theoretic models of bargaining assume that players are perfectly rational and that this rationality is common knowledge. However, the perfect-rationality assumption does not hold in real-life bargaining scenarios with humans as players, since results from experimental economics show that humans find their way to the best strategy through trial and error, not typically by means of rational deliberation. Such players are said to be boundedly rational. In playing a game against an opponent with bounded rationality, the most effective strategy for a player is not the equilibrium strategy but the one that is the best reply to the opponent's actual strategy. The evolutionary model provides a means for studying the bargaining behaviour of boundedly rational players. This paper provides a comprehensive comparison of the game-theoretic and evolutionary approaches to bargaining by examining their assumptions, goals, and limitations. We then study the implications of these differences from the perspective of the software agent developer.
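
    As a rough illustration of the evolutionary view, in which boundedly rational players adjust their offers by trial and error rather than by computing an equilibrium, the following sketch evolves a population of demands in a simple split-the-pie game; the payoff rule, population size, and mutation rate are assumptions chosen for illustration only.

```python
import random

# Toy evolutionary model of one-shot "split the pie" bargaining.
# A strategy is the share a player demands; a demand pays off only
# when the two demands together do not exceed the whole pie.

def payoff(my_demand: float, opp_demand: float) -> float:
    return my_demand if my_demand + opp_demand <= 1.0 else 0.0

def evolve(pop_size: int = 100, generations: int = 200) -> float:
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Strategies are scored against random opponents (trial and error,
        # not rational deliberation about the opponent's reasoning).
        scored = [(payoff(s, random.choice(population)), s) for s in population]
        scored.sort(reverse=True)
        survivors = [s for _, s in scored[: pop_size // 2]]
        # Offspring are noisy copies of the more successful strategies.
        children = [min(1.0, max(0.0, s + random.gauss(0, 0.05)))
                    for s in survivors]
        population = survivors + children
    return sum(population) / len(population)

# Demands tend to concentrate around an even split without any player
# ever computing an equilibrium.
print(evolve())
```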

    On the integration of trust with negotiation, argumentation and semantics

    Agreement Technologies are needed for autonomous agents to come to mutually acceptable agreements, typically on behalf of humans. These technologies include trust computing, negotiation, argumentation and semantic alignment. In this paper, we identify a number of open questions regarding the integration of computational models and tools for trust computing with negotiation, argumentation and semantic alignment. We consider these questions both in general and in the context of applications in open, distributed settings such as grid and cloud computing. © 2013 Cambridge University Press. This work was partially supported by the Agreement Technology COST action (IC0801). The authors would like to thank all participants in the panel on "Trust, Argumentation and Semantics" on 16 December 2009, Agia Napa, Cyprus, for helpful discussions and comments. Peer Reviewed

    An SLA-based resource virtualization approach for on-demand service provision

    Cloud computing is a recently emerged research infrastructure that builds on the latest achievements of diverse research areas, such as Grid computing, Service-oriented computing, business processes and virtualization. In this paper we present an architecture for SLA-based resource virtualization that provides an extensive solution for executing user applications in Clouds. This work represents the first attempt to combine SLA-based resource negotiation with virtualized resources for on-demand service provision, resulting in a holistic virtualization approach. The architecture description focuses on three topics: agreement negotiation, service brokering and deployment using virtualization. The contribution is also demonstrated with a real-world case study.
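
    The three architectural steps named above (agreement negotiation, service brokering, and deployment using virtualization) could be sketched roughly as follows; the class names, SLA terms, and broker logic are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the SLA-based, on-demand provisioning flow:
# negotiate an agreement, broker a matching virtualized resource, deploy.

@dataclass
class SLA:
    cpu_cores: int
    memory_gb: int
    duration_s: int

@dataclass
class VirtualMachine:
    host: str
    sla: SLA

def negotiate(requested: SLA, capacity: SLA) -> Optional[SLA]:
    """Accept the request only if the provider can honour the resource terms."""
    ok = (requested.cpu_cores <= capacity.cpu_cores
          and requested.memory_gb <= capacity.memory_gb)
    return requested if ok else None

def broker(agreement: SLA, hosts: list) -> VirtualMachine:
    """Bind the agreed SLA to a concrete virtualized resource."""
    return VirtualMachine(host=hosts[0], sla=agreement)

def deploy(vm: VirtualMachine) -> str:
    """Stand-in for starting the VM and running the user application."""
    return f"application running on {vm.host} under {vm.sla}"

agreement = negotiate(SLA(4, 8, 3600), SLA(16, 64, 3600))
if agreement is not None:
    print(deploy(broker(agreement, ["host-a", "host-b"])))
```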

    Feasible negotiation procedures for multiple interdependent negotiations

    Within an agent society, agents utilise their knowledge differently to achieve their individual or joint goals. Agent negotiation provides an effective way for agents to reach agreements on their future behaviour in the society so that their goals can be achieved successfully. Agents may need to conduct Multiple Interdependent Negotiations (MIN) with different opponents and for different purposes in order to achieve a goal. Given the complexity of negotiation environments, interdependencies, opponents and issues in the agent society, conducting MIN efficiently is a challenging research issue. To the best of the authors' knowledge, most state-of-the-art work focuses primarily on single-negotiation scenarios and tries to propose sophisticated negotiation protocols and strategies to help individual agents succeed in a single negotiation. However, very little work considers the interdependencies and trade-offs among multiple negotiations so as to help both individual agents and the agent society as a whole to increase their welfare. This paper promotes research on agent negotiation from the single-negotiation level to the multiple-negotiation level. To conduct MIN effectively in an agent society, this paper proposes three feasible negotiation procedures, which conduct MIN in a successive way, in a concurrent way, and in a clustered way, respectively, each suited to different negotiation situations. A simulated agent society is built to test the proposed negotiation procedures with random experimental settings. According to the experimental results, the successive negotiation procedure produces the highest time efficiency, the concurrent negotiation procedure promises the highest profits and success rates, whilst the clustered negotiation procedure provides a well-balanced solution between negotiation efficiency and effectiveness.
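
    A minimal sketch of the three procedures is given below; the negotiation tasks, the clustering rule, and the printed schedules are placeholders invented for illustration, not the protocols defined in the paper.

```python
import itertools

# Toy illustration of ordering Multiple Interdependent Negotiations (MIN).
# Each negotiation is reduced to a (name, dependency-group) pair; a real
# MIN would attach opponents, issues and strategies to each of them.

negotiations = [("buy_parts", "supply"), ("book_transport", "logistics"),
                ("hire_assembly", "supply"), ("rent_warehouse", "logistics")]

def successive(tasks):
    # One negotiation per round: slow, but earlier outcomes can constrain
    # later negotiations.
    return [f"round {i}: {name}" for i, (name, _) in enumerate(tasks, 1)]

def concurrent(tasks):
    # All negotiations in one round: fast, but interdependencies must be
    # handled while every outcome is still uncertain.
    return [f"round 1: {name}" for name, _ in tasks]

def clustered(tasks):
    # Interdependent negotiations are grouped and each cluster is handled
    # together: a compromise between the two extremes.
    grouped = itertools.groupby(sorted(tasks, key=lambda t: t[1]),
                                key=lambda t: t[1])
    return [f"cluster '{g}': {[name for name, _ in members]}"
            for g, members in grouped]

for procedure in (successive, concurrent, clustered):
    print(procedure.__name__, procedure(negotiations))
```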

    Addressing consumer demands: a manufacturing collaboration process using blockchain for knowledge representation

    Under Industry 4.0 (I4.0), the evolution of manufacturing processes is supported by an increase in the data available to and produced by organisations, the digitalisation of manufacturing pipelines, and a paradigm shift in production (from mass production to mass personalisation). Additionally, organisations need to put in place the conditions necessary to ensure quick adaptation to a changing environment and to replace reactiveness with proactivity. Collaboration can act as the foundation of an answer to the increased demand for customised products, providing an open and transparent environment where information is shared and actors can work together to solve a common problem. In this work we propose a model definition for an industrial collaboration network composed of a network of entities, with reasoning and interaction, that uses a blockchain for knowledge representation. Current definitions of MAS already include representations of equipment, transportation, products, and organisations; our contribution proposes the inclusion of the consumer, represented by an agent, directly in the manufacturing process. This agent represents the preferences and needs of the consumer in product customisation scenarios and, together with the other agents, negotiates criteria and cooperates with them. The network is composed of distinct types of agents, across multiple organisations, that share common objectives. We use Hyperledger Fabric to represent knowledge, ensuring that the data is stored and shared with all entities while keeping the information secure and guaranteeing that it cannot be tampered with. FCT - Fundação para a Ciência e a Tecnologia (UIDB/04728/2020)
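
    A rough sketch of the underlying idea, representing shared knowledge on an append-only, tamper-evident ledger that a consumer agent writes its preferences to, is shown below; the Ledger class is a simplified stand-in for the Hyperledger Fabric channel used in the paper, and the record fields are hypothetical.

```python
import hashlib
import json

# Simplified stand-in for the shared, tamper-evident knowledge base: each
# record is chained to the previous one by its hash, so later modification
# of any entry would be detectable by every organisation in the network.
class Ledger:
    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> str:
        prev = self.blocks[-1]["hash"] if self.blocks else "genesis"
        payload = json.dumps(record, sort_keys=True) + prev
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.blocks.append({"record": record, "prev": prev, "hash": digest})
        return digest

# The consumer is modelled as one more agent in the network, publishing its
# customisation preferences where the manufacturing agents can read them.
ledger = Ledger()
ledger.append({"agent": "consumer-42", "type": "preference",
               "product": "chair", "finish": "oak", "max_price_eur": 150})
ledger.append({"agent": "factory-A", "type": "counter-offer",
               "product": "chair", "price_eur": 160, "lead_time_days": 10})

for block in ledger.blocks:
    print(block["record"], block["hash"][:12])
```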

    Online Investment Banking Phase I: Distribution via the Internet and Its Impact on IPO Performance

    In the past few years, there has been a growth in Internet markets run by online investment bankers, where companies and investors can buy and sell initial public offerings (IPOs) of corporate stock. In this study, we confine our examination to the first of what we anticipate will be several phases in the evolution of Internet IPOs: the online distribution of shares. This marks the beginning of a general disintermediation in the IPO process, in which the traditional roles of investment banks are being circumvented via the Internet as participants search for greater market efficiency. This is an important research area because it potentially affects all public companies (and companies considering going public), the investment banking industry, and all stock investors. We address two research issues not considered by previous studies. What factors affect an organization's choice of online vs. traditional IPO distribution? What are the financial performance differences between IPOs distributed using online and traditional processes? These issues were addressed using company characteristics and financial performance data from 27 IPOs from the second half of 1998. We find that the Internet IPO firms are larger, have younger CEOs, choose more reputable investment banks and are more likely to be involved in a Web-based business, directly employing the Internet in their product or service, than firms that choose the traditional method of going public. In addition, market performance, both initially and over the first three months of trading, is significantly greater for Internet IPOs.

    Innovations in nature inspired optimization and learning methods

    The nine papers included in this special issue represent a selection of extended contributions presented at the Third World Congress on Nature and Biologically Inspired Computing (NaBIC2011), held in Salamanca, Spain, October 19–21, 2011. Papers were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is aimed at practitioners, researchers and postgraduate students who are engaged in developing and applying advanced nature and biologically inspired computing models to solve real-world problems. The papers are organized as follows. The first paper, by Apeh et al., presents a comparative investigation of four approaches for classifying dynamic customer profiles built using evolving transactional data over time. The changing class values of the customer profiles were analyzed together with the challenging problem of deciding whether to change the class label or adapt the classifier. The results from experiments conducted on highly sparse and skewed real-world transactional data show that adapting the classifiers leads to more stable classification of customer profiles in the shorter time windows, while relabelling the changed customer profile classes leads to more accurate and stable classification in the longer time windows. In the second paper, Frolov et al. suggest a new approach to Boolean factor analysis, which extends their previously proposed Boolean factor analysis method, a Hopfield-like attractor neural network with increasing activity. The authors increase its applicability and robustness by complementing the method with maximization of the learning-set likelihood function defined according to the Noisy-OR generative model. They demonstrate the efficiency of the new method using a data set generated according to the model. Successful application of the method to real data is shown by analyzing data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms. In the third paper, Triguero et al. analyze the integration of a wide variety of noise filters into the self-training process to distinguish the most relevant features of filters. They focus on the nearest-neighbour rule as the base classifier and ten different noise filters. They then provide an extensive analysis of the performance of these filters considering different ratios of labelled data. The results are contrasted with nonparametric statistical tests that allow the relevant filters, and their main characteristics, to be identified in the field of semi-supervised learning. In the fourth paper, Gutiérrez-Avilés et al. present the TriGen algorithm, a genetic algorithm that finds triclusters of gene expression that take into account the experimental conditions and the time points simultaneously. The authors have used TriGen to mine datasets related to synthetic data, the yeast (Saccharomyces cerevisiae) cell cycle, and human inflammation and host response to injury experiments. TriGen has proved capable of extracting groups of genes with similar patterns in subsets of conditions and times, and these groups have been shown to be related in terms of their functional annotations extracted from the Gene Ontology project. In the fifth paper, Varela et al. introduce and study the application of Constrained Sampling Evolutionary Algorithms in the framework of a UAV-based search and rescue scenario.
These algorithms have been developed as a way to harness the power of Evolutionary Algorithms (EA) when operating in complex, noisy, multimodal optimization problems and to transfer the advantages of their approach to real-time, real-world problems that can be transformed into search and optimization challenges. These types of problems are denoted Constrained Sampling problems and are characterized by the fact that the physical limitations of reality do not allow for an instantaneous determination of the fitness of the points in the population that must be evolved. A general approach to addressing these problems is presented, and a particular implementation using Differential Evolution as an example of a CS-EA is created and evaluated using teams of UAVs in search and rescue missions. The results are compared to those of a Swarm Intelligence based strategy on the same type of problem, as that approach has been widely used within the UAV path-planning field in different variants by many authors. In the sixth paper, Zhao et al. introduce human intelligence into computational intelligence algorithms, namely particle swarm optimization (PSO) and immune algorithms (IA). A novel human-computer cooperative PSO-based immune algorithm (HCPSO-IA) is proposed, in which the initial population consists of initial artificial individuals supplied by humans, while the initial algorithm individuals are generated by a chaotic strategy. Some new artificial individuals are introduced to replace the inferior individuals of the population. HCPSO-IA benefits from giving free rein to the talents of designers and computers, and contributes to solving complex layout design problems. The experimental results illustrate that the proposed algorithm is feasible and effective. In the seventh paper, Rebollo-Ruiz and Graña give an extensive empirical evaluation of the innovative nature-inspired Gravitational Swarm Intelligence (GSI) algorithm solving the Graph Coloring Problem (GCP). GSI follows the Swarm Intelligence problem-solving approach, where the spatial positions of agents are interpreted as problem solutions and agent motion is determined solely by local information, avoiding any central control system. To apply GSI to search for solutions of the GCP, the authors map agents to the graph's nodes. Agents move as particles in the gravitational field defined by goal objects corresponding to colors. When the agents fall into the gravitational well of a color goal, their corresponding nodes are colored by this color. The graph's connectivity is mapped into a repulsive force between agents corresponding to adjacent nodes. The authors discuss the convergence of the algorithm by testing it over an extensive suite of well-known benchmarking graphs. Comparison of this approach to state-of-the-art approaches in the literature shows improvements on many of the benchmark graphs (a simplified sketch of this agent-to-node mapping is given after this editorial). In the eighth paper, Macaš et al. demonstrate how novel algorithms can be derived from opinion formation models and empirically demonstrate their usability in the area of binary optimization. In particular, the paper introduces a general SITO algorithmic framework and describes four algorithms based on this general framework. Recent applications of these algorithms to pattern recognition in electronic noses, electronic tongues, newborn EEG and ICU patient mortality prediction are discussed. Finally, an open-source SITO library for MATLAB and Java is introduced. In the final paper, Madureira et al.
present a negotiation mechanism for dynamic scheduling based on social and collective intelligence. Under the proposed negotiation mechanism, agents must interact and collaborate in order to improve the global schedule. Swarm Intelligence is considered a general aggregation term for several computational techniques that use ideas and draw inspiration from the social behaviors of insects and other biological systems. This work is concerned with negotiation, where multiple self-interested agents can reach agreement over the exchange of operations on competitive resources. A computational study was performed to validate the influence of the negotiation mechanism and the SI technique on system performance. From the results obtained, it was possible to conclude that there is statistical evidence that the negotiation mechanism significantly influences overall system performance, and that the Artificial Bee Colony technique has an advantage in terms of makespan minimization and machine occupation maximization. We would like to thank our peer reviewers for their diligent work and efficient efforts. We are also grateful to the Editor-in-Chief of Neurocomputing, Prof. Tom Heskes, for his continued support of the NaBIC conference and for the opportunity to organize this special issue.
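
A simplified, hypothetical sketch of the GSI mapping for graph coloring summarized above follows; the capture radius, step size, and the rule for avoiding wells claimed by neighbours are assumptions made for illustration and do not reproduce the authors' algorithm.

```python
import random

# Drastically simplified variant of the Gravitational Swarm mapping for
# graph coloring: every node is an agent on a one-dimensional "colour axis"
# with one gravitational well per colour; an agent is pulled toward the
# nearest well that is not already captured by an adjacent agent.

def gsi_coloring(adjacency, k, steps=2000, radius=0.1, pull=0.2):
    n = len(adjacency)
    pos = [random.uniform(0, k - 1) for _ in range(n)]

    def well_of(p):
        # The well an agent has been captured by, if any.
        nearest = round(p)
        return nearest if abs(p - nearest) < radius else None

    for _ in range(steps):
        i = random.randrange(n)
        blocked = {well_of(pos[j]) for j in adjacency[i]} - {None}
        free_wells = [c for c in range(k) if c not in blocked]
        target = (min(free_wells, key=lambda c: abs(c - pos[i]))
                  if free_wells else random.randrange(k))
        pos[i] += pull * (target - pos[i])   # gravitational pull toward the well
    return [well_of(p) for p in pos]

# A 4-cycle is 2-colorable; agents of adjacent nodes settle in different wells
# (an entry of None means that agent was not captured within the step budget).
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(gsi_coloring(graph, k=2))
```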