
    Efficient Resource Matching in Heterogeneous Grid Using Resource Vector

    In this paper, a method for efficient scheduling to obtain optimum job throughput in a distributed campus grid environment is presented. Traditional job schedulers determine job scheduling using user and job resource attributes. User attributes are related to current usage, historical usage, user priority and project access. Job resource attributes mainly comprise soft requirements (compilers, libraries) and hard requirements such as memory, storage and interconnect. A job scheduler dispatches jobs to a resource if the job's hard and soft requirements are met by that resource. Currently, if a resource becomes unavailable during execution of a job, schedulers are presented with limited options, namely re-queuing the job or migrating it to a different resource. Both options are expensive in terms of data and compute time. These situations can be avoided if an often-ignored factor, the availability time of a resource in the grid environment, is considered. We propose a resource rank approach, in which jobs are dispatched to the resource with the highest rank among all resources that match the job's requirements. The results show that our approach can increase the throughput of many serial/monolithic jobs.
    Comment: 10 pages
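The match-then-rank dispatch rule described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual formulation: the attribute names, the rank function (remaining availability minus expected runtime), and all values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    memory_gb: int              # hard-requirement capacity
    libraries: set              # soft requirements offered
    availability_hours: float   # expected remaining availability

@dataclass
class Job:
    name: str
    memory_gb: int
    libraries: set
    runtime_hours: float

def rank(resource, job):
    # Favour resources likely to stay available for the whole job.
    return resource.availability_hours - job.runtime_hours

def dispatch(job, resources):
    # Traditional matching: hard and soft requirements must be met;
    # the rank then decides among all matching resources.
    matches = [r for r in resources
               if r.memory_gb >= job.memory_gb
               and job.libraries <= r.libraries]
    return max(matches, key=lambda r: rank(r, job), default=None)

resources = [Resource("nodeA", 16, {"gcc"}, 2.0),
             Resource("nodeB", 32, {"gcc", "mpi"}, 24.0)]
job = Job("sim", 8, {"gcc"}, 4.0)
print(dispatch(job, resources).name)  # nodeB
```

Here nodeB wins even though both nodes satisfy the requirements, because its longer expected availability makes mid-run migration or re-queuing less likely.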

    Decentralized Resource Scheduling in Grid/Cloud Computing

    In the Grid/Cloud environment, applications or services and resources belong to different organizations with different objectives. Entities in the Grid/Cloud are autonomous and self-interested; however, they are willing to share their resources and services to achieve their individual and collective goals. In such an open environment, the scheduling decision is a challenge given the decentralized nature of the environment: each entity has specific requirements and objectives that it needs to achieve. In this thesis, we review Grid/Cloud computing technologies, environment characteristics and structure, and indicate the challenges within resource scheduling. We capture the Grid/Cloud scheduling model based on the complete requirements of the environment. We further create a mapping between the Grid/Cloud scheduling problem and the combinatorial allocation problem, and propose an adequate economic-based optimization model based on the characteristics and structure of the Grid/Cloud. By adequacy, we mean that a comprehensive view of the required properties of the Grid/Cloud is captured. We utilize the captured properties and propose a bidding language that is expressive, in that entities can specify any set of preferences in the Grid/Cloud, and simple, in that entities can express structured preferences directly. We propose a winner determination model and mechanism that utilizes the proposed bidding language and finds a scheduling solution. Our approach integrates concepts and principles of mechanism design and classical scheduling theory. Furthermore, we argue that in such an open environment privacy concerns are by nature part of the requirements in the Grid/Cloud. Hence, any scheduling decision within the Grid/Cloud computing environment must incorporate the feasibility of privacy protection of an entity. Each entity has specific requirements in terms of scheduling and privacy preferences.
We analyze the privacy problem in the Grid/Cloud computing environment and propose an economic-based model and solution architecture that provides a scheduling solution given privacy concerns in the Grid/Cloud. Finally, as a demonstration of the applicability of the approach, we apply our solution by integrating it with the Globus toolkit (a widely adopted tool for enabling Grid/Cloud computing environments). We also present simulation results that capture the economic and time efficiency of the proposed solution.
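The mapping to the combinatorial allocation problem can be pictured with a brute-force winner determination over bundle bids. This is a minimal sketch under assumed inputs (bidder names, resource-slot bundles, prices), not the thesis's actual mechanism, which would also honour the bidding language and privacy constraints.

```python
from itertools import combinations

# Each bid: (bidder, bundle of resource slots requested, offered price).
bids = [
    ("a1", frozenset({"cpu1", "cpu2"}), 10),
    ("a2", frozenset({"cpu2"}), 7),
    ("a3", frozenset({"cpu1"}), 6),
]

def winner_determination(bids):
    """Exhaustively pick the value-maximizing set of non-conflicting bids."""
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            bundles = [b[1] for b in combo]
            # Feasible only if requested bundles are pairwise disjoint.
            if sum(len(b) for b in bundles) == len(frozenset().union(*bundles)):
                value = sum(b[2] for b in combo)
                if value > best_value:
                    best_value, best_set = value, list(combo)
    return best_value, best_set

value, winners = winner_determination(bids)
print(value, sorted(w[0] for w in winners))  # 13 ['a2', 'a3']
```

Granting a1 the whole bundle would yield only 10, while the disjoint pair a2 and a3 together yield 13, which is why winner determination must consider bid combinations rather than single bids.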

    Splee: A declarative information-based language for multiagent interaction protocols

    The Blindingly Simple Protocol Language (BSPL) is a novel information-based approach for specifying interaction protocols that can be enacted by agents in a fully decentralized manner via asynchronous messaging. We introduce Splee, an extension of BSPL. The extensions fall into two broad categories: multicast and roles. In Splee, a role binding is information that is dynamically generated during protocol enactment, potentially as the content (payload) of communication between two agents. Multicast communication is the idea that a message is sent to a set of agents. The two categories of extensions are interconnected via novel features such as set roles (the idea that a role binding can be a set of agents) and subroles (the idea that the agents playing a role must be a subset of the agents playing another role). We give the formal semantics of Splee and small-model characterizations of the safety and liveness of Splee protocols. We also introduce the pragmatic idea of query attachments for messages. Query attachments take advantage of Splee's information orientation and can help restrict the information (parameter bindings) communicated in a message.
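The set-role and subrole notions can be pictured with a small check over role bindings. The role names, agents, and the dictionary representation are illustrative assumptions, not Splee's actual syntax or semantics.

```python
def subrole_ok(bindings, sub, parent):
    # Subrole constraint: every agent playing `sub` also plays `parent`.
    return bindings[sub] <= bindings[parent]

role_bindings = {
    "Bidder": {"alice", "bob", "carol"},  # set role: bound to a set of agents
    "Winner": {"bob"},                    # intended subrole of Bidder
}
print(subrole_ok(role_bindings, "Winner", "Bidder"))  # True
```

A binding of Winner to an agent outside Bidder would fail this check, which is the kind of constraint the protocol semantics must enforce during enactment.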

    Analysis of peer-to-peer electricity trading models in a grid-connected microgrid

    This thesis investigates the implementation of peer-to-peer (P2P) energy transaction platforms in power systems as a possible energy management solution to deal with distributed generation (DG) and renewable energy source (RES) penetration. Firstly, a state of the art of current P2P trading technologies is provided, reviewing and analysing several projects carried out in this field in recent years and comparing the models, considering their commonalities, strengths and shortcomings, along with an overview of the main techniques utilized. In the second stage, the focus shifts to the structure of the system used in the project's case study. A multi-agent system (MAS) integrated with a microgrid management platform (μGIM) operates energy transactions among different agents (prosumers/consumers) in a grid-connected microgrid located in an office building equipped with solar panels (PVs). Each agent represents a tenant of a zone in the building, who owns a share of the total photovoltaic generation. Starting from the English auction model initially used in the trading platform, two new algorithms have been implemented in the system in an attempt to improve the efficiency of the trading process. The algorithms' formulation is based on the analysis of the initial model's behaviour and results, and is supported by the state of the art provided in the first chapter. A dedicated simulation platform was used to run the model on consumption data recorded from a previous week of monitoring, in order to compare different trading algorithms on the same consumption/generation profile. The results obtained from this study prove the capability of P2P energy trading to benefit end users, allowing them to manage their own energy and pursue their personal goals.
They also show that such models still have a good margin for improvement and, with further study, can become a key element of future smart grids and decentralized systems.
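The English auction baseline mentioned above can be sketched as an ascending-bid loop among consumer agents. The agents, the price unit (integer cents per kWh, to avoid floating-point issues), and all valuations are hypothetical; the actual μGIM trading logic is more involved.

```python
def english_auction(max_prices, start_price, increment):
    """Ascending auction: agents outbid the current leader until no one can."""
    price, leader = start_price, None
    raised = True
    while raised:
        raised = False
        for agent, limit in max_prices.items():
            if agent != leader and limit >= price + increment:
                price += increment
                leader = agent
                raised = True
    return leader, price

# Maximum willingness to pay, in cents per kWh (hypothetical values).
offices = {"office1": 12, "office2": 18, "office3": 15}
print(english_auction(offices, 10, 1))  # ('office2', 16)
```

As expected of an English auction, the highest-valuation agent wins at a price one increment above the point where the runner-up drops out.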

    Sunk cost accounting and entrapment in corporate acquisitions and financial markets : an experimental analysis

    Sunk cost accounting refers to the empirical finding that individuals tend to let their decisions be influenced by costs incurred at an earlier time, in such a way that they are more risk seeking than they would have been had they not incurred these costs. Such behaviour violates the axioms of economic theory, which state that individuals should consider only incremental costs and benefits when executing investments. This dissertation is concerned with whether the pervasive sunk cost phenomenon extends to corporate acquisitions and financial markets. 122 students from the University of St Andrews participated in three experiments exploring the use of sunk costs in interactive negotiation contexts and financial markets. Experiment I elucidates that subjects value the sunk cost issue more highly than other issues in a multi-issue negotiation. Experiment II illustrates that bidders are influenced by the sunk costs of competing bidders in a first-price, sealed-bid, common-value auction. In financial markets there exists an analogous concept to sunk cost accounting known as the disposition effect: the tendency of investors to sell “winning” stocks and hold “losing” stocks. Experiment III demonstrates that trading strategies in an experimental equity market are influenced by a pre-trading brokerage cost. Not only are subjects influenced in the direction that reduces the disposition effect, but trading is also diminished. Without the brokerage cost there was a significant disposition effect. JEL classifications: C70, C90, D44, D80, D81, G1.

    Scalable Internet auctions

    Current Internet-based auction services rely, in general, on a centralised auction server; applications with large and geographically dispersed bidder client bases are thus supported in a centralised manner. Such an approach is fundamentally restrictive, as too many users can overload the server, making the whole auction process unresponsive. Further, such an architecture can be vulnerable to server failures if not equipped with sufficient redundancy. In addition, bidders who are closer to the server are likely to have relatively faster access to it than remote bidders, thereby gaining an unfair advantage. To overcome these shortcomings, this thesis investigates ways of enabling a widely distributed, arbitrarily large number of auction servers to cooperate in conducting an auction. Allowing a bidder to register with any one of the auction servers and place bids there, coupled with periodic exchange of auction information between servers, forms the basis of the solution investigated to achieve scalability, responsiveness and fairness. Scalability and responsiveness are achieved since the total load is shared amongst many bidder servers; fairness is achieved since bidders are able to register with their local servers. The thesis presents the design and implementation of a hierarchically structured distributed Internet auction system. Protocols for inter-server cooperation are presented. Each server may be replicated locally to mask node failures. Performance evaluations of centralised and distributed configurations are performed to show the advantages of the distributed configuration over the centralised one.
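The periodic inter-server exchange described above can be sketched as each server tracking its local highest bid and adopting a peer's higher bid when they synchronise, so all servers converge on the global maximum. The class and method names are illustrative, not the thesis's actual protocol.

```python
class AuctionServer:
    """One of many cooperating servers; bidders register locally."""
    def __init__(self, name):
        self.name = name
        self.best = (0, None)  # (amount, bidder)

    def place_bid(self, bidder, amount):
        # Accept only bids above the locally known best.
        if amount > self.best[0]:
            self.best = (amount, bidder)
            return True
        return False

    def exchange(self, peer):
        # Periodic inter-server cooperation: adopt the higher known bid.
        if peer.best[0] > self.best[0]:
            self.best = peer.best

uk, ir = AuctionServer("uk"), AuctionServer("ir")
uk.place_bid("alice", 100)
ir.place_bid("bob", 120)
uk.exchange(ir)
print(uk.best)  # (120, 'bob')
```

After the exchange, a local bid below the global best is rejected at either server, which is how local registration can coexist with a single global auction outcome.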

    Regulating the Market for Corporate Control: A Critical Assessment of the Tender Offer's Role in Corporate Governance

    Better answers often await better questions. In the wake of a recent series of provocative articles dealing with contested tender offers, several questions have been vigorously debated: (1) Should management of the target company be allowed to resist a hostile tender offer in order to remain an independent company? Which, if any, of the various shark repellent measures by which a potential target can make itself unattractive to a bidder are justified? (2) If defensive tactics were generally forbidden, should the target company's management still be permitted to encourage competing bids, thereby creating an auction? And (3) Do hostile takeovers in the aggregate promote economic efficiency or only a preoccupation with short-run profit maximization at the expense of strategic planning, research, and innovation? Significant as these issues sound, it is nonetheless the thesis of this Article that these are the wrong questions from which to undertake a public policy analysis of the hostile takeover. Put bluntly, these are questions of secondary (albeit substantial) significance, because they either presuppose the answers to more fundamental questions or assume that the hostile takeover is a monolithic phenomenon, which, depending on the commentator, is either efficiency enhancing or inhibiting and which therefore should either be encouraged or discouraged in the aggregate. The fallacy in this over-aggregated perspective is that it ignores the possibility that takeovers may have varied and even offsetting effects. Some takeovers may promote economic efficiency, some may result in a misallocation of economic resources, and some may be neutral in terms of economic efficiency but involve substantial wealth transfers between the participating classes that arguably are involuntary.

    Resource Management in Distributed Camera Systems

    The aim of this work is to investigate different methods to solve the problem of allocating the correct amount of resources (network bandwidth and storage space) to video camera systems. Here we explore the intersection between two research areas: automatic control and game theory. Camera systems are a good example of the emergence of the Internet of Things (IoT) and its impact on our daily lives and the environment. We aim to improve today's systems by shifting from over-provisioning resources to dynamically allocating them where they are needed most. We optimize the storage and bandwidth allocation of camera systems to limit the impact on the environment as well as provide the best visual quality attainable within the resource limitations. This thesis is written as a collection of papers. It begins by introducing the problems with today's camera systems, and continues with background information about resource allocation, automatic control and game theory. The third chapter describes the models of the considered systems, their limitations and challenges. It then provides more background on the automatic control and game theory techniques used in the proposed solutions. Finally, the proposed solutions are presented in five papers.
Paper I proposes an approach to estimate the amount of data needed by surveillance cameras given camera and scenario parameters. This model is used for calculating the quasi Worst-Case Transmission Times of videos over a network. Papers II and III apply control concepts to camera network storage and bandwidth assignment. They provide simple, yet elegant solutions to the allocation of these resources in distributed camera systems. Paper IV combines pricing theory with control techniques to force the video quality of camera systems to converge to a common value based solely on the compression parameter of the provided videos.
Paper V uses the VCG auction mechanism to solve the storage space allocation problem in competitive camera systems. It allows for better system-wide visual quality than a simple split allocation, given the limited system knowledge, trust and resource constraints.
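In the unit-demand special case, a VCG mechanism for identical storage slots reduces to a multi-unit Vickrey auction: the k highest bidders win and each pays the (k+1)-th highest bid, the externality its presence imposes on the others. This sketch, with hypothetical camera names and valuations, shows only that special case, not the paper's full allocation model.

```python
def vcg_unit_demand(bids, k):
    """Allocate k identical slots; winners pay the (k+1)-th highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [name for name, _ in ranked[:k]]
    price = ranked[k][1] if len(ranked) > k else 0
    return winners, price

bids = {"cam1": 9, "cam2": 7, "cam3": 4, "cam4": 2}  # hypothetical values
print(vcg_unit_demand(bids, 2))  # (['cam1', 'cam2'], 4)
```

Compared with a simple equal split among all four cameras, this allocation concentrates storage on the cameras that value it most, and the Vickrey price makes truthful reporting a dominant strategy.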