72 research outputs found

    Evolutionary Solutions and Internet Applications for Algorithmic Game Theory

    The growing pervasiveness of the internet has created a new class of algorithmic problems: those in which the strategic interaction of autonomous, self-interested entities must be accounted for. So motivated, we seek to (1) use game theoretic models and techniques to study practical problems in load balancing, data streams and internet traffic congestion, and (2) demonstrate the usefulness of evolutionary game theory's adaptive learning model as an analytical and evaluative tool. First we consider the evolutionary game theory concept of stochastic stability, and propose the price of stochastic anarchy as an alternative to the price of anarchy for quantifying the cost of having no central authority. Unlike Nash equilibria, stochastically stable states are the result of natural dynamics of large populations of computationally bounded agents, and are resilient to small perturbations from ideal play. To illustrate the utility of stochastic stability, we study the load balancing game on related machines, which has an unbounded price of anarchy, even in the case of two jobs and two machines. We show that in contrast, even in the general case, the price of stochastic anarchy is bounded. Next, we propose auction-based mechanisms for admission control of continuous queries to a Data Stream Management System. When submitting a query, each user also submits a bid: how much she is willing to pay for her query to run. Our mechanisms must admit queries and set payments in a way that maximizes system revenue while incentivizing customers to use the system honestly. We propose several manipulation-resistant payment mechanisms and prove that one guarantees a profit close to a standard profit benchmark, and the others perform well experimentally. Finally, we study the long-standing problem of congestion control at bottleneck routers on the internet. We examine the effectiveness of commonly-used queuing policies when each network endpoint is self-interested and has no information about the other endpoints' actions or preferences. By employing evolutionary game theory, we find that while bottleneck routers face heavy congestion at stochastically stable states under policies being currently deployed, a practical policy that was recently proposed yields fair and efficient conditions with no congestion.
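
    As a rough illustration of the dynamics underlying stochastic stability (not taken from the thesis), the sketch below simulates noisy best-response (logit) play in a small related-machines load balancing game; the machine speeds, job sizes and noise level are hypothetical, and each job's cost is taken to be the finish time of its chosen machine.

```python
import math, random

# Hypothetical instance: 2 related machines with speeds, 2 jobs with sizes.
speeds = [1.0, 2.0]          # machine speeds (assumption, not from the thesis)
sizes  = [1.0, 2.0]          # job sizes
assign = [0, 0]              # current machine of each job

def cost(job, machine, assign):
    """Finish time of `machine` if `job` runs there: total assigned size / speed."""
    load = sizes[job] + sum(sizes[j] for j in range(len(sizes))
                            if j != job and assign[j] == machine)
    return load / speeds[machine]

def logit_step(assign, noise=0.1):
    """Noisy best response: a random job re-samples its machine with logit probabilities."""
    job = random.randrange(len(sizes))
    weights = [math.exp(-cost(job, m, assign) / noise) for m in range(len(speeds))]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for m, w in enumerate(weights):
        acc += w
        if r <= acc:
            assign[job] = m
            break

random.seed(0)
for _ in range(10_000):
    logit_step(assign)
print("final assignment:", assign)   # states visited most often as noise -> 0 are the stochastically stable ones
```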

    Game theory for dynamic spectrum sharing cognitive radio

    ‘Game Theory’ is the formal study of conflict and cooperation. The theory provides a set of tools developed to assist with the modelling and analysis of individual, independent decision makers whose actions potentially affect the decisions made by other competitors. It is therefore well suited to addressing the various issues linked to wireless communications. This work presents a Green Game-Based Hybrid Vertical Handover Model. The model is used for heterogeneous wireless networks and combines both dynamic (Received Signal Strength and Node Mobility) and static (Cost, Power Consumption and Bandwidth) factors. These factors control the handover decision process, whereby the mechanism eliminates unnecessary handovers, reduces delay, cuts the overall number of handovers by 50%, yields 70% fewer dropped packets, and saves 50% more energy in comparison to other mechanisms. A novel Game-Based Multi-Interface Fast-Handover MIPv6 protocol is introduced in this thesis as an extension to the Multi-Interface Fast-handover MIPv6 protocol. The protocol applies when the mobile node has more than one wireless interface. It controls the handover decision process by deciding whether a handover is necessary and helps the node choose the right access point at the right time. In addition, the protocol switches the mobile node's interfaces ‘ON’ and ‘OFF’ when needed to control the mobile node’s energy consumption and to eliminate the power loss of adding another interface. The protocol reduces the number of handovers by 70%, with 90% fewer dropped packets, 40% more received packets and acknowledgements, and 85% less end-to-end delay in comparison to other protocols. Furthermore, the thesis adapts a novel combination of both game and auction theory for dynamic resource allocation and price-power-based routing in wireless Ad-Hoc networks. Under the auction schemes, destination nodes bid for access to the data stored in the server node. The server allocates the data to the winner who values it most. Once the data has been allocated to the winner, another mechanism for dynamic routing is adopted. The routing mechanism is based on source-destination cooperation, power consumption and source compensation to the intermediate nodes. The mechanism increases the seller’s revenue by up to 50% compared to a random allocation scheme, and the reliability of the predefined route is briefly evaluated with respect to data prices and source and destination cooperation for different network settings. Last but not least, this thesis presents an adaptive competitive second-price pay-to-bid sealed auction game and a reputation-based game, which solve the fairness problems associated with spectrum sharing amongst one primary user and a large number of secondary users in a cognitive radio environment. The proposed games create competition between the bidders and offer better revenue to the players, improving fairness by more than 60% in certain scenarios. The proposed game could reach the maximum total profit for both primary and secondary users with better fairness; this is illustrated through numerical results.
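
    As a minimal, hypothetical sketch (not the thesis's specific game), the following shows the second-price sealed-bid rule at the heart of such auctions: the highest bidder wins the spectrum slot but pays the second-highest bid, which makes truthful bidding a dominant strategy. The bidder names and values are made up.

```python
def second_price_auction(bids):
    """Sealed-bid second-price auction: bids is a dict {bidder: bid}.
    Returns (winner, price) where price is the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]            # winner pays the runner-up's bid
    return winner, price

# Hypothetical secondary users bidding for one spectrum slot
bids = {"SU1": 4.0, "SU2": 7.5, "SU3": 6.0}
print(second_price_auction(bids))   # ('SU2', 6.0)
```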

    Call Center Experience Optimization: A Case for a Virtual Predictive Queue

    The evolution of call centers into contact centers and the growth of their use in providing customer-facing service by many companies has brought considerable capabilities in maintaining customer relationships, but it has also brought challenges in providing quality service when call volumes are high. Limited in their ability to provide service at all times to all customers, companies are forced to balance the cost of hiring more customer service representatives against the quality of service provided by a smaller number. A primary challenge when there are not enough customer service representatives to engage the volume of callers in a timely manner is the significant wait time that many customers can experience. Normally, callers are handled in accordance with a first-come, first-served policy, the exception being skill-based routing to those customer service representatives with specialized skills. A proposed call center infrastructure framework called a Virtual Predictive Queue (VPQ) can allow some customers to benefit from a shorter call queue wait time. This proposed system can be implemented within a call center’s Automatic Call Distribution (ACD) device associated with computer telephony integration (CTI) and, theoretically, will not violate a first-come, first-served policy.
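
    The sketch below is a toy model (all names are hypothetical, and the VPQ mechanism itself is not reproduced) of the baseline behaviour described above: an ACD queue served first-come, first-served, with skill-based routing as the only exception.

```python
from collections import deque

class SimpleACD:
    """First-come, first-served call distribution with a skill-based exception."""
    def __init__(self):
        self.queue = deque()            # callers in arrival order

    def enqueue(self, caller, skill_needed=None):
        self.queue.append((caller, skill_needed))

    def route(self, agent_skills):
        """Give the agent the earliest caller they are qualified to handle."""
        for i, (caller, skill) in enumerate(self.queue):
            if skill is None or skill in agent_skills:
                del self.queue[i]
                return caller
        return None                     # no matching caller waiting

acd = SimpleACD()
acd.enqueue("Alice")                    # general inquiry
acd.enqueue("Bob", skill_needed="billing")
print(acd.route({"billing"}))           # Alice: earliest caller this agent can take
```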

    Decentralized Resource Scheduling in Grid/Cloud Computing

    In the Grid/Cloud environment, applications or services and resources belong to different organizations with different objectives. Entities in the Grid/Cloud are autonomous and self-interested; however, they are willing to share their resources and services to achieve their individual and collective goals. In such an open environment, the scheduling decision is a challenge given the decentralized nature of the environment, and each entity has specific requirements and objectives that it needs to achieve. In this thesis, we review Grid/Cloud computing technologies, environment characteristics and structure, and identify the challenges in resource scheduling. We capture the Grid/Cloud scheduling model based on the complete requirements of the environment. We further create a mapping between the Grid/Cloud scheduling problem and the combinatorial allocation problem and propose an adequate economic-based optimization model based on the characteristics and structure of the Grid/Cloud. By adequacy, we mean that a comprehensive view of the required properties of the Grid/Cloud is captured. We utilize the captured properties and propose a bidding language that is expressive, in that entities can specify any set of preferences in the Grid/Cloud, and simple, in that entities can express structured preferences directly. We propose a winner determination model and mechanism that utilizes the proposed bidding language and finds a scheduling solution. Our proposed approach integrates concepts and principles of mechanism design and classical scheduling theory. Furthermore, we argue that in such an open environment privacy concerns are by nature part of the requirements in the Grid/Cloud; hence, any scheduling decision within the Grid/Cloud computing environment must incorporate the feasibility of privacy protection of an entity. Each entity has specific requirements in terms of scheduling and privacy preferences. We analyze the privacy problem in the Grid/Cloud computing environment and propose an economic-based model and solution architecture that provides a scheduling solution given privacy concerns in the Grid/Cloud. Finally, as a demonstration of the applicability of the approach, we apply our solution by integrating it with the Globus toolkit (a widely adopted tool to enable Grid/Cloud computing environments). We also present simulation results that capture the economic and time efficiency of the proposed solution.
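
    The following brute-force sketch (with made-up bids; it does not reproduce the thesis's bidding language or mechanism) illustrates the winner determination problem behind the combinatorial allocation view of Grid/Cloud scheduling: pick the revenue-maximising set of bids whose requested resource bundles do not overlap.

```python
from itertools import combinations

# Hypothetical bids: (bidder, requested resource bundle, price offered)
bids = [
    ("job_A", {"cpu1", "cpu2"}, 8.0),
    ("job_B", {"cpu2", "gpu1"}, 6.0),
    ("job_C", {"gpu1"},         5.0),
]

def winner_determination(bids):
    """Exhaustively pick the compatible (non-overlapping) subset of bids with maximum revenue."""
    best_value, best_set = 0.0, ()
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            bundles = [b[1] for b in subset]
            if sum(len(b) for b in bundles) == len(set().union(*bundles)):  # bundles are disjoint
                value = sum(b[2] for b in subset)
                if value > best_value:
                    best_value, best_set = value, subset
    return best_value, [b[0] for b in best_set]

print(winner_determination(bids))   # (13.0, ['job_A', 'job_C'])
```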

    Optimized traffic scheduling and routing in smart home networks

    Home networks are evolving rapidly to include heterogeneous physical access and a large number of smart devices that generate different types of traffic with different distributions and different Quality of Service (QoS) requirements. Due to their particular architectures, which are very dense and very dynamic, the traditional one-pair-node shortest path solution is no longer efficient for handling inter-smart-home-network (inter-SHN) routing constraints such as delay, packet loss, and bandwidth over all-pair-node heterogeneous links. In addition, current QoS-aware scheduling methods consider only conventional priority metrics based on the IP Type of Service (ToS) field to make bandwidth allocation decisions. Such priority-based scheduling methods are not optimal for providing both QoS and Quality of Experience (QoE), especially for smart home applications, since higher-priority traffic does not necessarily require more stringent delay than lower-priority traffic. Moreover, current QoS-aware scheduling methods in the intra-smart home network (intra-SHN) do not consider concurrent traffic caused by the fluctuation of intra-SH network traffic distributions. Thus, the goal of this dissertation is to build an efficient heterogeneous multi-constrained routing mechanism and an optimized traffic scheduling tool in order to maintain cost-effective communication between all wired and wireless connected devices in inter-SHNs and to effectively process concurrent and non-concurrent traffic in intra-SHN. This will help Internet service providers (ISPs) and home users to enhance the overall QoS and QoE of their applications while maintaining relevant communication in both inter-SHNs and intra-SHN. To meet this goal, three key issues must be addressed in our framework: i) how to build a cost-effective routing mechanism in heterogeneous inter-SHNs; ii) how to efficiently schedule multi-sourced intra-SHN traffic based on both QoS and QoE; and iii) how to design an optimized queuing model for concurrent intra-SHN traffic while considering its QoS requirements. As part of our contributions to solve the first problem, we present an analytical framework for dynamically optimizing data flows in inter-SHNs using Software-Defined Networking (SDN). We formulate a QoS-based routing optimization problem as a constrained shortest path problem and then propose an optimized solution (QASDN) that determines the minimal cost between all pairs of nodes in the network, taking into account the different types of physical access and the network utilization patterns. To address the second issue and to bridge the gap between QoS and QoE, we propose a new queuing model for QoS-level Pair traffic with mixed arrival distributions in the Smart Home network (QP-SH) that makes dynamic QoS-aware scheduling decisions meeting the delay requirements of all traffic while preserving its degrees of criticality. A new metric is defined, combining the ToS field and the maximum number of packets that can be processed by the system's service within the maximum required delay. Finally, as part of our contribution to address the third issue, we present an analytic model for QoS-aware scheduling optimization of concurrent intra-SHN traffic with mixed arrival distributions, using probabilistic queuing disciplines.
We formulate a hybrid QoS-aware scheduling problem for concurrent traffic in intra-SHN, propose an innovative queuing model (QC-SH) based on the auction economic model of game theory to provide fair multiple access over different communication channels/ports, and design an applicable model that implements the auction game on both sides, traffic sources and the home gateway, without changing the structure of the IEEE 802.11 standard. The results of our work offer SHNs more effective data transfer between all heterogeneous connected devices with optimal resource utilization, dynamic QoS/QoE-aware traffic processing in the SHN, and an innovative model for optimizing concurrent SHN traffic scheduling with an enhanced fairness strategy. Numerical results show improvements of up to 90% in network resource utilization, 77% in bandwidth, 40% in scheduling with QoS and QoE, and 57% in concurrent traffic scheduling delay using our proposed solutions compared with traditional methods.
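
    As a rough sketch of the constrained-shortest-path idea behind the routing contribution (the graph, costs and delay bound are hypothetical, and this is not the QASDN formulation), the code below finds the cheapest path whose cumulative delay stays within a bound.

```python
import heapq

def constrained_shortest_path(graph, src, dst, max_delay):
    """graph[u] = list of (v, cost, delay). Returns (cost, path) of the cheapest
    src->dst path whose cumulative delay does not exceed max_delay, or None."""
    heap = [(0.0, 0.0, src, [src])]             # states ordered by cost
    expanded = {}                               # (node, delay) -> best cost seen
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, path
        if expanded.get((u, round(delay, 6)), float("inf")) <= cost:
            continue
        expanded[(u, round(delay, 6))] = cost
        for v, c, d in graph.get(u, []):
            if delay + d <= max_delay:          # prune delay-infeasible extensions
                heapq.heappush(heap, (cost + c, delay + d, v, path + [v]))
    return None

# Hypothetical heterogeneous home links: (neighbour, cost, delay)
graph = {
    "gw":   [("plc", 1.0, 5.0), ("wifi", 2.0, 1.0)],
    "plc":  [("tv", 1.0, 5.0)],
    "wifi": [("tv", 2.0, 1.0)],
}
print(constrained_shortest_path(graph, "gw", "tv", max_delay=4.0))  # (4.0, ['gw', 'wifi', 'tv'])
```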

    Analysis and implementation of load balancers in real-time bidding

    This report examines the best way to implement the balancer module, a software component meant to spread the traffic a web platform receives over multiple back-end servers. In particular, the discussion will be centered on which load-balancing algorithm and tool is best from the point of view of a high-throughput system that needs to avoid overloading some of its compute nodes, considering that many open-source load balancers can be found in a great variety of forms, implementations and features; the focus will be on the needs of a Demand Side Platform, where performance comes first and the internals of the platform itself change constantly (such as the number of servers and the addresses of the servers themselves). This research will be conducted following best practices in software engineering and research, with the purpose of aggregating the various learning contributions gathered during my Double Degree experience between Barcelona and Torino. First, background on the topic is provided, with a glance at the Real-Time Bidding (RTB) world and the main concepts this kind of system deploys, and an insight into the internals of the balancer component by means of proxy models and load-balancing strategies. Second, a preliminary survey of the main software solutions will be conducted, with the aim of filtering out those that do not match the requirements provided by a professional tech company; the documentation supplied by each balancer will be analyzed in order to fill a software evaluation matrix, which highlights the features supplied by each balancer and is used to discard faulty solutions. Then, a testing environment will be built for every solution still under evaluation in order to check that each component effectively respects its declared features. Moreover, the testing environment is exploited to discover which software product performs best overall, a requirement considered crucial for a low-latency, high-throughput platform; the goal of this step is to select a winner of the software selection process by stressing the limits of the candidates under evaluation in terms of both total incoming connections and requests per second. Finally, the winning candidate will be implemented inside the platform environment: it will be installed and configured on the Infrastructure as a Service that hosts the Demand Side Platform, mapping the agents described later in the discussion onto the final component's configuration file. In conclusion, the final goal is to observe the effects of this analysis and the consequent implementation on the production environment metrics, with the objective of improving the quality of service of the back-end by reducing average server-side response times and showing a possible decrease in infrastructure costs.
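
    For illustration only (the back-end addresses are made up, and this is not tied to any of the evaluated products), the sketch below models two common balancing strategies relevant to the discussion: round-robin and least-connections.

```python
import itertools

class Balancer:
    """Tiny in-memory model of two common load-balancing strategies."""
    def __init__(self, backends):
        self.backends = list(backends)
        self.rr = itertools.cycle(self.backends)      # round-robin iterator
        self.active = {b: 0 for b in self.backends}   # open connections per back-end

    def round_robin(self):
        return next(self.rr)

    def least_connections(self):
        return min(self.backends, key=lambda b: self.active[b])

    def open_conn(self, backend):
        self.active[backend] += 1

    def close_conn(self, backend):
        self.active[backend] -= 1

lb = Balancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])   # hypothetical DSP back-ends
chosen = lb.least_connections()
lb.open_conn(chosen)
print(lb.round_robin(), chosen)
```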

    An approach for joint scheduling of production and predictive maintenance activities

    The Industry 4.0 paradigm, thanks to the deployment of cutting-edge technologies enabling new services, helps improve the agility of productive organizations. Among these services, Prognostics and Health Management (PHM) contributes to assessing the health of manufacturing resources and prognosing their future condition, providing decision support for production and predictive maintenance management. However, the future condition of technical production resources depends on the productive tasks they will have to carry out. If their future condition will not satisfy production criteria, maintenance tasks will have to be planned and productive tasks will be delayed or assigned to other resources, whose future condition considering these new tasks must in turn be assessed. In this context, a multi-agent system, SCEMP (Supervisor, Customers, Environment, Maintainers and Producers), is proposed here, in which production scheduling and predictive maintenance planning collaborate and exploit the decision support provided by PHM modules. The proposed multi-agent system provides a framework in which production and predictive maintenance activities can be scheduled simultaneously by compromising on their objectives. During the scheduling process, SCEMP identifies the needed predictive maintenance from the assignment of production tasks to machines, the machine component prognoses and the machine models. It schedules production tasks and predictive maintenance activities according to the number, competencies and availabilities of production and maintenance resources. The SCEMP framework is described and evaluated in the challenging job shop context, for which case studies have been generated and scheduled within acceptable computation times. To illustrate how SCEMP works, some simplified case studies are detailed together with the obtained performance. The framework is flexible and can be adapted to various manufacturing situations. It can also be used to assess the value of implementing prognostic functions for machine components.
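
    As a simplified, hypothetical sketch (the thresholds, task data and maintenance duration are invented, and this is not the SCEMP negotiation logic), the code below shows the core idea of joint scheduling: production tasks degrade a machine's prognosed health, and a predictive maintenance slot is inserted whenever the prognosis would fall below the acceptance criterion.

```python
# Hypothetical production tasks: (name, duration in hours, wear inflicted on the machine)
tasks = [("T1", 4, 0.2), ("T2", 3, 0.3), ("T3", 5, 0.4), ("T4", 2, 0.3)]

HEALTH_THRESHOLD = 0.3    # prognosed health below which production criteria are not met
MAINT_DURATION = 6        # hours needed for a predictive maintenance activity

def schedule(tasks, health=1.0):
    """Greedy joint schedule: insert maintenance whenever the prognosis would drop too low."""
    t, plan = 0, []
    for name, dur, wear in tasks:
        if health - wear < HEALTH_THRESHOLD:          # prognosis says: maintain first
            plan.append(("MAINTENANCE", t, t + MAINT_DURATION))
            t += MAINT_DURATION
            health = 1.0                              # maintenance restores full health
        plan.append((name, t, t + dur))
        t += dur
        health -= wear
    return plan

for entry in schedule(tasks):
    print(entry)                                      # task or maintenance with start/end times
```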

    Dynamic server selection in a multithreaded network computing environment

    Research has been conducted at the Iowa State University Center for Nondestructive Evaluation (CNDE) to create a structure in which existing numerical modeling programs can be converted to execute in a network computing environment. This research task includes the development of an extensible architecture that accommodates the timely integration of new processing capabilities and requirements. The research was motivated by many needs within the CNDE to reduce the predicted run times associated with current and future modeling programs.

    Operational research and simulation methods for autonomous ride-sourcing

    Ride-sourcing platforms provide on-demand shared transport services by solving decision problems related to ride-matching and pricing. The anticipated commercialisation of autonomous vehicles could transform these platforms into fleet operators and broaden their decision-making by introducing problems such as fleet sizing and empty vehicle redistribution. These problems have frequently been represented in research using aggregated mathematical programs and alternative practices such as agent-based models. In this context, this study is set at the intersection of operational research and simulation methods to solve the multitude of autonomous ride-sourcing problems. The study begins by providing a framework for building bespoke agent-based models for ride-sourcing fleets, derived from the principles of agent-based modelling theory, which is used to tackle the non-linear problem of minimum fleet size. The minimum fleet size problem is tackled by investigating the relationship of system parameters based on queuing theory principles and by deriving and validating a novel model for pickup wait times. Simulating the fleet function in different urban areas shows that ride-sourcing fleets operate queues with zero assignment times above the critical fleet size. The results also highlight that pickup wait times have a pivotal role in estimating the minimum fleet size in ride-sourcing operations, with agent-based modelling being a more reliable estimation method. The focus then shifts to empty vehicle redistribution, where the omission of market structure and underlying customer acumen compromises the effectiveness of existing models. As a solution, the vehicle redistribution problem is formulated as a non-linear convex minimum cost flow problem that accounts for the relationship between supply and demand of rides by assuming a customer discrete choice model and a market structure. An edge-splitting algorithm is then introduced to solve a transformed convex minimum cost flow problem for vehicle redistribution. Results of simulated tests show that the redistribution algorithm can significantly decrease wait times and increase profits with a moderate increase in vehicle mileage. The study concludes by considering the operational time-horizon decision problems of ride-matching and pricing at periods of peak travel demand. Combinatorial double auctions have been identified in research as a suitable alternative to surge pricing, as they maximise social welfare by relying on stated customer and driver valuations. However, a shortcoming of current models is the exclusion of trip detour effects from pricing estimates. The study formulates a shared-ride assignment and pricing algorithm using combinatorial double auctions to resolve the above problem. The model reduces to the maximum weighted independent set problem, which is APX-hard; therefore, a fast local search heuristic is proposed, producing solutions within 10% of the exact approach for practical implementations.
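
    Since the assignment and pricing model reduces to maximum weighted independent set, a simple greedy sketch (with a made-up conflict graph; this is not the thesis's heuristic) gives the flavour of a fast heuristic for it: repeatedly add the heaviest bundle that does not conflict with those already chosen.

```python
def greedy_mwis(weights, conflicts):
    """weights: {node: weight}; conflicts: set of frozenset({u, v}) edges.
    Greedily add nodes by weight, skipping any that conflict with the current set."""
    chosen = set()
    for node in sorted(weights, key=weights.get, reverse=True):
        if all(frozenset({node, c}) not in conflicts for c in chosen):
            chosen.add(node)
    return chosen

# Hypothetical shared-ride bundles: weight = stated valuation, edge = incompatible bundles
weights = {"bundle1": 9.0, "bundle2": 7.0, "bundle3": 6.0, "bundle4": 4.0}
conflicts = {frozenset({"bundle1", "bundle2"}), frozenset({"bundle2", "bundle3"})}

chosen = greedy_mwis(weights, conflicts)
print(chosen, sum(weights[n] for n in chosen))   # bundles 1, 3 and 4 with total value 19.0
```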