Towards incentive-compatible pricing for bandwidth reservation in community network clouds
Community network clouds provide cloud infrastructures, provisioned through collaborative effort, for applications of local interest deployed within community networks. They complement traditional large-scale public cloud providers, in a manner similar to decentralised edge clouds, by bringing both content and computation closer to users at the edges of the network. Services and applications within community network clouds require connectivity to the Internet and to resources external to the community network, and here the current best-effort model of volunteers contributing gateway access falls short. We model the problem of reserving bandwidth at such gateways to guarantee quality of service for cloud applications, and evaluate different pricing mechanisms for their suitability in ensuring maximal social welfare and eliciting truthful requests from users. We find second-price auction based mechanisms, including Vickrey and generalised second-price auctions, suitable for the bandwidth allocation problem at the gateways in community networks.
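As a minimal sketch of the second-price idea the abstract settles on, a single-unit sealed-bid Vickrey auction can be implemented as follows; the bidders, bids, and tie-breaking rule here are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: a sealed-bid second-price (Vickrey) auction for a
# single unit of gateway bandwidth. Ties are broken by sort order.

def vickrey_winner(bids):
    """bids: dict mapping user -> bid. Returns (winner, price_paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]   # the highest bidder wins ...
    price = ranked[1][1]    # ... but pays only the second-highest bid
    return winner, price

# Truthful bidding is a dominant strategy here: changing your bid never
# changes the price you pay, only whether you win.
print(vickrey_winner({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```

This payment rule is what makes the mechanism elicit truthful requests, the property the abstract highlights for bandwidth reservation.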
Operational research and simulation methods for autonomous ride-sourcing
Ride-sourcing platforms provide on-demand shared transport services by solving decision problems related to ride-matching and pricing. The anticipated commercialisation of autonomous vehicles could transform these platforms into fleet operators and broaden their decision-making by introducing problems such as fleet sizing and empty vehicle redistribution. These problems have frequently been represented in research using aggregated mathematical programs, as well as alternative practices such as agent-based models. In this context, this study is set at the intersection of operational research and simulation methods to solve the multitude of autonomous ride-sourcing problems.
The study begins by providing a framework for building bespoke agent-based models for ride-sourcing fleets, derived from the principles of agent-based modelling theory, which is used to tackle the non-linear problem of minimum fleet size. The minimum fleet size problem is addressed by investigating the relationships among system parameters based on queuing theory principles and by deriving and validating a novel model for pickup wait times. Simulating the fleet function in different urban areas shows that ride-sourcing fleets operate queues with zero assignment times above the critical fleet size. The results also highlight that pickup wait times play a pivotal role in estimating the minimum fleet size in ride-sourcing operations, with agent-based modelling being a more reliable estimation method.
The focus is then shifted to empty vehicle redistribution, where the omission of market structure and underlying customer acumen compromises the effectiveness of existing models. As a solution, the vehicle redistribution problem is formulated as a non-linear convex minimum cost flow problem that accounts for the relationship between the supply and demand of rides by assuming a customer discrete choice model and a market structure. An edge-splitting algorithm is then introduced to solve a transformed convex minimum cost flow problem for vehicle redistribution. Results of simulated tests show that the redistribution algorithm can significantly decrease wait times and increase profits with a moderate increase in vehicle mileage.
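The edge-splitting transformation mentioned above follows a classic idea in network flows; a minimal sketch of that idea, using a made-up quadratic edge cost rather than the study's actual cost function, is:

```python
# Hedged illustration of edge splitting for convex min-cost flow: a convex
# cost C(f) on an integer-capacity edge can be replaced by parallel
# unit-capacity edges whose linear costs are the marginal costs C(k) - C(k-1).
# Because C is convex the marginals are nondecreasing, so a min-cost flow
# solver fills the cheapest copies first and recovers the convex cost exactly.

def split_convex_edge(cost, capacity):
    """Return the per-unit marginal costs of the unit-capacity parallel edges."""
    return [cost(k) - cost(k - 1) for k in range(1, capacity + 1)]

# Example: quadratic cost C(f) = f^2 on an edge of capacity 4.
marginals = split_convex_edge(lambda f: f * f, 4)
print(marginals)                 # [1, 3, 5, 7] -- nondecreasing, by convexity
assert sum(marginals[:3]) == 9   # routing 3 units costs C(3) = 9
```

Routing k units across the k cheapest copies then incurs exactly C(k), which is why the transformed problem can be handed to a standard linear min-cost flow algorithm.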
The study is concluded by considering the operational time-horizon decision problems of ride-matching and pricing at periods of peak travel demand. Combinatorial double auctions have been identified in research as a suitable alternative to surge pricing, as they maximise social welfare by relying on stated customer and driver valuations. However, a shortcoming of current models is the exclusion of trip detour effects in pricing estimates. The study formulates a shared-ride assignment and pricing algorithm using combinatorial double auctions to resolve the above problem. The model is reduced to the maximum weighted independent set problem, which is APX-hard. Therefore, a fast local search heuristic is proposed, producing solutions within 10% of the exact approach for practical implementations.
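As an illustration of the kind of heuristic such a reduction admits (a generic sketch, not the thesis's actual algorithm, with made-up vertex weights and conflicts), a greedy construction for weighted independent sets might look like:

```python
# Hedged sketch: a greedy heuristic for maximum weighted independent set
# (MWIS), the problem the abstract reduces ride assignment to. Vertices are
# candidate shared rides; an edge means two rides conflict.

def greedy_mwis(weights, adj):
    """weights: dict node -> weight; adj: dict node -> set of neighbours."""
    chosen = set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if adj[v].isdisjoint(chosen):   # v conflicts with nothing chosen so far
            chosen.add(v)
    return chosen

def add_free_vertices(chosen, weights, adj):
    """Trivial local-search move: add any vertex that conflicts with nothing chosen."""
    for v in weights:
        if v not in chosen and adj[v].isdisjoint(chosen):
            chosen.add(v)
    return chosen

weights = {"a": 5, "b": 4, "c": 3, "d": 2}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}, "d": set()}
sol = add_free_vertices(greedy_mwis(weights, adj), weights, adj)
print(sorted(sol))  # ['a', 'c', 'd'], total weight 10
```

A full local search would also try swap moves (remove one chosen vertex, add several free ones); the greedy-plus-augmentation shown here is only the simplest starting point.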
Designing Frameworks to Deliver Unknown Information to Support MBIs
This paper reports on a Catchment Modelling Framework (CMF) designed to support an Australian pilot of an auction for multiple environmental outcomes, EcoTender. The CMF is used to estimate multiple environmental outcomes including carbon, terrestrial biodiversity, aquatic function (water quality and quantity) and saline land area. This information was previously unavailable for application to environmental markets. This is the first time a market-based policy has been fully integrated from desk to field with a Catchment Modelling Framework for the purchase of multiple outcomes. The framework solves the unknown-information problem of linking paddock-scale land use and management to catchment-scale environmental outcomes. It provides the Victorian government with a replicable, transparent, evidence-based approach to the procurement of environmental outcomes.
A feature-based comparison of centralised versus market-based decision making under the lens of environmental uncertainty: the case of the mobile task allocation problem
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Decision-making problems are amongst the most common challenges facing managers at the different levels of the management hierarchy: strategic, tactical, and operational. However, prior to reaching decisions at the operational level, operations management departments frequently have to carry out an optimisation process to evaluate the available decision alternatives; industries with complex supply chain structures and service organisations that must optimise the utilisation of their resources are examples. Conventionally, operational decisions used to be taken centrally by a decision-making authority located at the top of a hierarchically structured organisation. To take decisions, information related to the managed system and the affecting externalities (e.g. demand) must be globally available to the decision maker. The obtained information is then processed to reach the optimal decision. This approach usually makes extensive use of information systems (IS) containing a myriad of optimisation algorithms and meta-heuristics to process the large volume and complex nature of the data. The decisions reached are then broadcast to the passive actuators of the system for execution. On the other hand, recent advancements in information and communication technologies (ICT) have made it possible to distribute decision-making rights, and this has proved applicable in several sectors. The market-based approach is one such distributed decision-making mechanism, in which passive actuators are delegated the right to take individual decisions matching their self-interests. Communication among the market agents takes place through market transactions regulated by auctions. The system's global optimisation therefore arises from the aggregated behaviour of the self-interested market agents.
As opposed to the centralised approach, the main characteristics of the market-based approach are the market mechanism and local knowledge of the agents.
The existence of both approaches has attracted several studies comparing them in different contexts. Recent studies have compared the centralised and market-based approaches in the context of transportation applications from an algorithmic perspective. Transportation applications and routing problems are considered good candidates for this comparison given the distributed nature of the system and the presence of several sources of uncertainty. Uncertainty makes decisions highly vulnerable and necessitates frequent corrective interventions to maintain an efficient level of service. Motivated by the previous comparison studies, this research aims to further investigate the features of both approaches and to contrast them in the context of a distributed task allocation problem in light of environmental uncertainty. Similar applications are often faced by service industries with a mobile workforce. Contrary to the previous comparison studies, which sought to compare these approaches at the mechanism level, this research attempts to identify the effect of the most significant characteristics of each approach in facing environmental uncertainty, which is reflected in this research by the arrival of dynamic tasks and the occurrence of stochastic delays. To achieve the aim of this research, a target optimisation problem from the VRP family is proposed and solved with both approaches. Given that this research does not aim to propose new algorithms, two basic solution mechanisms are adopted to compare the centralised and the market-based approach. The produced solutions are executed on a dedicated multi-agent simulation system, during which dynamism and stochasticity are introduced.
The research findings suggest that a market-based approach is attractive to implement in highly uncertain environments when the degree of local knowledge and workers' experience is high and when the system tends to be complex with large dimensions. It is also suggested that a centralised approach fits better in situations where uncertainty is lower and the decision maker is able to make timely decision updates, which is in turn regulated by the size of the system at hand.
Operational Research: Methods and Applications
Throughout its history, Operational Research has evolved to include a variety of methods, models and algorithms that have been applied to a diverse and wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion. It should be used as a point of reference or first port of call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the 2023 Turkey/Syria earthquake victims. We sincerely hope that advances in OR will play a role towards minimising the pain and suffering caused by this and future catastrophes.
Algorithms for Game-Theoretic Environments
Game Theory constitutes an appropriate way of approaching the Internet and modelling situations where participants interact with each other, such as networking, online auctions and search engines' page ranking. Mechanism Design deals with the design of private-information games and attempts to implement desired social choices in a strategic setting. This thesis studies how the efficiency of a system degrades due to the selfish behaviour of its agents, expressed in terms of the Price of Anarchy (PoA). Our objective is to design mechanisms with improved PoA, or to determine the exact value of the PoA of existing mechanisms, for two well-known problems: Auctions and Network Cost-Sharing Design. We study three different auction settings: combinatorial auctions, multi-unit auctions and bandwidth allocation. The combinatorial auction constitutes a fundamental resource allocation problem that involves the interaction of selfish agents in competition for indivisible goods. Although it is well known that under the VCG mechanism the selfishness of the agents does not affect the efficiency of the system, i.e. the social welfare is maximised, this mechanism cannot generally be run in computationally tractable time. In practice, several simple auctions (lacking some of the nice properties of VCG) are used, such as the generalised second-price auction on AdWords, the simultaneous ascending price auction for spectrum allocation, and the independent second-price auction on eBay. The latter auction is of particular interest in this thesis. Specifically, we give tight bounds on the PoA when the goods are sold in independent and simultaneous first-price auctions, where the highest bidder gets the item and pays her own bid.
Then, we generalise our results to a class of auctions that we call bid-dependent auctions, where the goods are also sold in independent and simultaneous auctions and, further, the payment of each bidder is a function of her bid, even if she doesn't get the item. Overall, we show that the first-price auction is optimal among all bid-dependent auctions. The multi-unit auction is a special case of the combinatorial auction where all items are identical. There are many variations: the discriminatory auction, the uniform price auction and the Vickrey multi-unit auction. In all of these, the goods are allocated to the highest marginal bids, and the difference lies in the pricing scheme. Our focus is on the discriminatory auction, which can be seen as the variant of the first-price auction adjusted to multi-unit auctions. Bandwidth allocation is equivalent to auctioning divisible resources. Allocating network resources, like bandwidth, among agents is a canonical problem in the network optimisation literature. A traditional model for this problem was proposed by Kelly [1997], where each agent receives a fraction of the resource proportional to her bid and pays her own bid. We complement the PoA bounds known in the literature and give tight bounds for a more general case. We further show that this mechanism is optimal among a wider class of mechanisms. We further study design issues for network games: given a rooted undirected graph with nonnegative edge costs, a set of players with terminal vertices need to establish connectivity with the root. Each player selects a path, and the global objective is to minimise the cost of the used edges. The cost of an edge may represent infrastructure cost for establishing connectivity or renting expense, and needs to be covered by the users. There are several ways to split the edge cost among its users, and this is dictated by a cost-sharing protocol.
Naturally, it is in the players' best interest to choose paths that charge them with small cost. The seminal work of Chen et al. [2010] was the first to address design questions for this game. They thoroughly studied the PoA under the following informational assumptions: (i) the designer has full knowledge of the instance, that is, she knows both the network topology and the players' terminals; (ii) the designer has no knowledge of the underlying graph. Arguably, there are situations where the former assumption is too optimistic while the latter is too pessimistic. We propose a model that lies in the middle ground: the designer has prior knowledge of the underlying metric, but knows nothing about the positions of the terminals. Her goal is to process the graph and choose a universal cost-sharing protocol that has low PoA against all possible requested subsets. The main question is to what extent prior knowledge of the underlying metric can help in the design. We first demonstrate that there exist graph metrics where knowledge of the underlying metric can dramatically improve the performance of good network cost-sharing design. However, in our main technical result, we show that there exist graph metrics for which knowing the underlying metric does not help, and any universal protocol matches the bound of Chen et al. [2010], which ignores the graph metric. We further study the stochastic and Bayesian games where the players choose their terminals according to a probability distribution. We show that in the stochastic setting there exists a priority protocol that achieves constant PoA, whereas the PoA under the Bayesian setting can be very high for any cost-sharing protocol satisfying some natural properties.
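A canonical example of such a cost-sharing protocol is the equal-split (Shapley) rule, in which each edge's cost is divided equally among the players whose paths use it. A minimal sketch, with a hypothetical two-player network rather than any instance from the thesis, is:

```python
# Illustrative sketch of the equal-split (Shapley) cost-sharing protocol:
# each edge's cost is divided equally among the players whose chosen paths
# use it. The tiny network and paths below are hypothetical.

def equal_split_charges(edge_costs, paths):
    """edge_costs: dict edge -> cost; paths: dict player -> set of edges used."""
    charges = {p: 0.0 for p in paths}
    for e, cost in edge_costs.items():
        users = [p for p, path in paths.items() if e in path]
        for p in users:
            charges[p] += cost / len(users)
    return charges

# Two players share the edge from the root to an intermediate node "a",
# then branch off to their own terminals t1 and t2.
edge_costs = {"root-a": 6.0, "a-t1": 2.0, "a-t2": 2.0}
paths = {"p1": {"root-a", "a-t1"}, "p2": {"root-a", "a-t2"}}
print(equal_split_charges(edge_costs, paths))  # {'p1': 5.0, 'p2': 5.0}
```

Sharing the expensive common edge gives each player an incentive to route through it jointly, which is the kind of behaviour the PoA analysis above quantifies.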
Strategic development in the petrochemical industry
First IJCAI International Workshop on Graph Structures for Knowledge Representation and Reasoning (GKR@IJCAI'09)
The development of effective techniques for knowledge representation and reasoning (KRR) is a crucial aspect of successful intelligent systems. Different representation paradigms, as well as their use in dedicated reasoning systems, have been extensively studied in the past. Nevertheless, new challenges, problems, and issues have emerged in the context of knowledge representation in Artificial Intelligence (AI), involving the logical manipulation of increasingly large information sets (see for example the Semantic Web, BioInformatics and so on). Improvements in the storage capacity and performance of computing infrastructure have also affected the nature of KRR systems, shifting their focus towards representational power and execution performance. Therefore, KRR research is faced with the challenge of developing knowledge representation structures optimised for large-scale reasoning. This new generation of KRR systems includes graph-based knowledge representation formalisms such as Bayesian Networks (BNs), Semantic Networks (SNs), Conceptual Graphs (CGs), Formal Concept Analysis (FCA), CP-nets, and GAI-nets, all of which have been successfully used in a number of applications. The goal of this workshop is to bring together the researchers involved in the development and application of graph-based knowledge representation formalisms and reasoning techniques.