
    Maximizing Revenues for Online-Dial-a-Ride

    In the classic Dial-a-Ride Problem, a server travels in some metric space to serve requests for rides. Each request has a source, destination, and release time. We study a variation of this problem where each request also has a revenue that is earned if the request is satisfied. The goal is to serve requests within a time limit such that the total revenue is maximized. We first prove that the version of this problem where edges in the input graph have varying weights is NP-complete. We also prove that no algorithm can be competitive for this problem. We therefore consider the version where edges in the graph have unit weight and develop a 2-competitive algorithm for this problem.
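
    The setup can be made concrete with a small sketch. The Python fragment below is illustrative only: it models requests with source, destination, release time, and revenue on a unit-weight graph and applies a naive greedy rule (serve the already-released, still-completable request of highest revenue). This is not the paper's 2-competitive algorithm; the data model and the toy instance are assumptions made for illustration.

```python
from collections import deque, namedtuple

# Hypothetical data model: each ride request has a source, a destination,
# a release time, and a revenue earned only if the request is served.
Request = namedtuple("Request", "source dest release revenue")

def hop_distances(graph, start):
    """Shortest hop distances from `start` in a unit-weight graph (BFS)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def greedy_revenue_schedule(graph, requests, time_limit, start):
    """Naive greedy: repeatedly serve the released request of highest revenue
    that can still be completed before the time limit; wait if nothing is
    released yet.  (Illustration only, not the paper's algorithm.)"""
    t, pos, earned = 0, start, 0
    pending = list(requests)
    while pending:
        from_pos = hop_distances(graph, pos)
        feasible = []
        for r in pending:
            if r.release > t or r.source not in from_pos:
                continue
            finish = t + from_pos[r.source] + hop_distances(graph, r.source).get(r.dest, float("inf"))
            if finish <= time_limit:
                feasible.append((r, finish))
        if not feasible:
            future = [r.release for r in pending if r.release > t]
            if not future or min(future) > time_limit:
                break
            t = min(future)            # wait at the current vertex for the next release
            continue
        r, finish = max(feasible, key=lambda rf: rf[0].revenue)
        pending.remove(r)
        earned += r.revenue
        t, pos = finish, r.dest
    return earned

# Toy usage on a 4-cycle: vertices 0-1-2-3-0, unit-weight edges.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
reqs = [Request(1, 3, release=0, revenue=5), Request(2, 0, release=1, revenue=2)]
print(greedy_revenue_schedule(graph, reqs, time_limit=6, start=0))  # -> 7
```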

    Modeling Carrier Behavior in Sequential Auction Transportation Markets

    Online markets for transportation services, in the form of Internet sites that dynamically match shipments (shippers' demand) and transportation capacity (carriers' offer) through auction mechanisms, are changing the traditional structure of transportation markets. A general framework for the study of carriers' behavior in a sequential auction transportation marketplace is provided. The unique characteristics of these marketplaces and the sources of difficulty in analyzing their behavior are discussed. Learning and behavior in a sequential Vickrey auction marketplace are analyzed and simulated. Some results and the overall behavioral framework are also discussed.
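
    As a concrete illustration of the marketplace mechanics, here is a minimal Python sketch of one sealed-bid second-price (Vickrey) round for a single load in a procurement setting. The Carrier class and its cost model are hypothetical placeholders, not the behavioral and learning model developed in the paper.

```python
import random

class Carrier:
    """Hypothetical carrier: bids its estimated marginal cost for a load."""
    def __init__(self, name, base_cost):
        self.name, self.base_cost, self.loads = name, base_cost, []

    def marginal_cost(self, load):
        # Placeholder cost model: base cost plus mild congestion and noise.
        return self.base_cost + 0.1 * len(self.loads) + random.random()

    def assign(self, load, payment):
        self.loads.append((load, payment))

def vickrey_round(load, carriers):
    """One sealed-bid second-price (Vickrey) procurement auction: the lowest
    bid wins and the winner is paid the second-lowest bid, which makes
    truthful cost bidding a dominant strategy in a single isolated round."""
    bids = {c: c.marginal_cost(load) for c in carriers}
    ranked = sorted(carriers, key=lambda c: bids[c])
    winner, payment = ranked[0], bids[ranked[1]]
    winner.assign(load, payment)
    return winner, payment

# Toy sequential marketplace: loads arrive one at a time and are auctioned.
carriers = [Carrier("A", 5.0), Carrier("B", 6.0)]
for load in ["load-1", "load-2", "load-3"]:
    winner, payment = vickrey_round(load, carriers)
    print(load, "->", winner.name, "paid", round(payment, 2))
```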

    Auction Settings Impacts on the Performance of Truckload Transportation Marketplaces

    This paper compares the performance of different sequential auction settings for the procurement of truckload services. In this environment, demands arrive randomly over time and are described by pickup and delivery locations and hard time windows. Upon demand arrival, carriers compete for the loads. Different auction and information disclosure settings are studied. Learning methodologies are discussed and analyzed. Simulation results are presented.

    Quantifying Opportunity Costs in Sequential Transportation Auctions for Truckload Acquisition

    The principal focus of this research is to quantify opportunity costs in sequential transportation auctions. This paper studies a transportation marketplace with time-sensitive truckload pickup-and-delivery requests in which two carriers compete for service requests; each arriving request triggers an auction in which the carriers compete to win the right to serve the load. An expression for evaluating opportunity costs is derived. The paper shows that the impact of evaluating opportunity costs depends on the competitive market setting. A simulation framework is used to evaluate different strategies. Some results and the overall simulation framework are also discussed.
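
    One way to read "opportunity cost" here is the expected future profit a carrier gives up by committing capacity to the current load. The sketch below estimates that quantity by Monte-Carlo rollouts under assumed interfaces (`state.after_winning`, `simulate_profit`); the paper derives a closed-form expression rather than sampling, so this is only an illustration of the concept.

```python
def opportunity_cost(state, load, simulate_profit, n_rollouts=200):
    """Illustrative Monte-Carlo estimate of the opportunity cost of winning
    `load`: expected future profit without the commitment minus expected
    future profit with it.  `state.after_winning` and `simulate_profit`
    are assumed interfaces, not the paper's model."""
    with_load = state.after_winning(load)
    profit_if_win = sum(simulate_profit(with_load) for _ in range(n_rollouts)) / n_rollouts
    profit_if_lose = sum(simulate_profit(state) for _ in range(n_rollouts)) / n_rollouts
    return profit_if_lose - profit_if_win

def cost_based_bid(direct_cost, opp_cost):
    """A bid that internalizes the opportunity cost of serving the load;
    a negative opportunity cost (the load helps future positioning) lowers the bid."""
    return direct_cost + opp_cost
```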

    Decentralized Data Fusion and Active Sensing with Mobile Sensors for Modeling and Predicting Spatiotemporal Traffic Phenomena

    The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications such as detecting and forecasting congestion hotspots. This paper presents a decentralized data fusion and active sensing (D2FAS) algorithm for mobile sensors to actively explore the road network to gather and assimilate the most informative data for predicting the traffic phenomenon. We analyze the time and communication complexity of D2FAS and demonstrate that it scales well with a large number of observations and sensors. We provide a theoretical guarantee that its predictive performance is equivalent to that of a sophisticated centralized sparse approximation of the Gaussian process (GP) model: the computation of such a sparse approximate GP model can thus be parallelized and distributed among the mobile sensors (in a Google-like MapReduce paradigm), thereby achieving efficient and scalable prediction. We also give a theoretical guarantee on its active sensing performance, which improves under various practical environmental conditions. Empirical evaluation on real-world urban road network data shows that our D2FAS algorithm is significantly more time-efficient and scalable than state-of-the-art centralized algorithms while achieving comparable predictive performance. (28th Conference on Uncertainty in Artificial Intelligence, UAI 2012; extended version with proofs, 13 pages.)
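
    For readers unfamiliar with the underlying model, the following is a minimal sketch of plain (centralized, dense) Gaussian process regression with a squared-exponential kernel, the baseline whose sparse approximation D2FAS distributes among the sensors. It is not the D2FAS algorithm itself, and the kernel, hyperparameters, and toy data are arbitrary assumptions.

```python
import numpy as np

def sq_exp_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance between two sets of input locations."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_obs, y_obs, X_new, noise_var=0.1):
    """Standard dense GP posterior mean and variance.  D2FAS instead
    maintains a distributed sparse approximation of this posterior across
    the mobile sensors; this is only the baseline model."""
    K = sq_exp_kernel(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    K_star = sq_exp_kernel(X_new, X_obs)
    alpha = np.linalg.solve(K, y_obs)
    mean = K_star @ alpha
    var = sq_exp_kernel(X_new, X_new).diagonal() - np.einsum(
        "ij,ji->i", K_star, np.linalg.solve(K, K_star.T))
    return mean, var

# Toy usage: noisy observations of a 1-D "speed" field at three locations.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([30.0, 45.0, 40.0])
print(gp_predict(X, y, np.array([[1.5]])))
```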

    The Stochastic Container Relocation Problem

    The Container Relocation Problem (CRP) is concerned with finding a sequence of moves of containers that minimizes the number of relocations needed to retrieve all containers, while respecting a given order of retrieval. However, the assumption of knowing the full retrieval order of containers is particularly unrealistic in real operations. This paper studies the stochastic CRP (SCRP), which relaxes this assumption. A new multi-stage stochastic model, called the batch model, is introduced, motivated, and compared with an existing model (the online model). The two main contributions are an optimal algorithm called Pruning-Best-First-Search (PBFS) and a randomized approximate algorithm called PBFS-Approximate with a bounded average error. Both algorithms, applicable in the batch and online models, are based on a new family of lower bounds for which we show some theoretical properties. Moreover, we introduce two new heuristics outperforming the best existing heuristics. Algorithms, bounds and heuristics are tested in an extensive computational section. Finally, based on strong computational evidence, we conjecture the optimality of the “Leveling” heuristic in a special “no information” case, where at any retrieval stage, any of the remaining containers is equally likely to be retrieved next.
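
    To make the mechanics concrete, the sketch below plays out retrievals on a small bay and counts relocations under a simple leveling-style rule (move each blocking container to the currently shortest other stack). It is only an illustration of the CRP setting under assumed stack and height conventions, not the PBFS algorithm or the paper's exact "Leveling" heuristic.

```python
def retrieve_all(bay, retrieval_order, max_height=4):
    """Count relocations needed to retrieve containers in the given order.
    Blocking containers are moved to the currently shortest other stack
    with spare height (a leveling-style rule)."""
    bay = [list(stack) for stack in bay]          # stacks listed bottom-to-top
    relocations = 0
    for target in retrieval_order:
        s = next(i for i, st in enumerate(bay) if target in st)
        while bay[s][-1] != target:               # relocate blockers on top
            blocker = bay[s].pop()
            dest = min((i for i in range(len(bay))
                        if i != s and len(bay[i]) < max_height),
                       key=lambda i: len(bay[i]))
            bay[dest].append(blocker)
            relocations += 1
        bay[s].pop()                               # retrieve the target
    return relocations

# Example: three stacks, containers labelled by retrieval priority.
print(retrieve_all([[1, 3], [2], []], retrieval_order=[1, 2, 3]))  # -> 1
```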

    Scheduling over Scenarios on Two Machines

    We consider scheduling problems over scenarios where the goal is to find a single assignment of the jobs to the machines that performs well over all possible scenarios. Each scenario is a subset of jobs that must be executed in that scenario, and all scenarios are given explicitly. The two objectives that we consider are minimizing the maximum makespan over all scenarios and minimizing the sum of the makespans of all scenarios. For both versions, we give several approximation algorithms and lower bounds on their approximability. With this research into optimization problems over scenarios, we have opened a new and rich field of interesting problems. (To appear in COCOON 2014; the final publication is available at link.springer.com.)
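
    The two objectives can be stated in a few lines of code. The sketch below evaluates a fixed job-to-machine assignment on two machines against an explicit list of scenarios, returning the worst-case makespan and the sum of makespans; the toy instance is invented for illustration and is not from the paper.

```python
def makespan(assignment, proc_time, scenario):
    """Makespan of one scenario under a fixed job-to-machine assignment."""
    loads = [0.0, 0.0]                       # two machines
    for job in scenario:
        loads[assignment[job]] += proc_time[job]
    return max(loads)

def evaluate(assignment, proc_time, scenarios):
    """The two objectives from the paper: the maximum makespan over all
    scenarios and the sum of the makespans of all scenarios."""
    spans = [makespan(assignment, proc_time, s) for s in scenarios]
    return max(spans), sum(spans)

# Toy instance: jobs a..d, two scenarios, one fixed assignment used for both.
proc = {"a": 3, "b": 2, "c": 2, "d": 1}
scenarios = [{"a", "b", "c"}, {"b", "c", "d"}]
assignment = {"a": 0, "b": 1, "c": 1, "d": 0}
print(evaluate(assignment, proc, scenarios))   # -> (4, 8)
```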

    On Conceptually Simple Algorithms for Variants of Online Bipartite Matching

    We present a series of results regarding conceptually simple algorithms for bipartite matching in various online and related models. We first consider a deterministic adversarial model. The best approximation ratio possible for a one-pass deterministic online algorithm is 1/2, which is achieved by any greedy algorithm. Dürr et al. recently presented a 2-pass algorithm called Category-Advice that achieves approximation ratio 3/5. We extend their algorithm to multiple passes. We prove the exact approximation ratio for the k-pass Category-Advice algorithm for all k ≥ 1, and show that the approximation ratio converges to the inverse of the golden ratio 2/(1+√5) ≈ 0.618 as k goes to infinity. The convergence is extremely fast: the 5-pass Category-Advice algorithm is already within 0.01% of the inverse of the golden ratio. We then consider a natural greedy algorithm, MinDegree, in the online stochastic IID model. This algorithm is an online version of the well-known and extensively studied offline algorithm MinGreedy. We show that MinDegree cannot achieve an approximation ratio better than 1 − 1/e, which is guaranteed by any consistent greedy algorithm in the known IID model. Finally, following the work of Besser and Poloczek, we depart from an adversarial or stochastic ordering and investigate a natural randomized algorithm (MinRanking) in the priority model. Although the priority model allows the algorithm to choose the input ordering in a general but well-defined way, this natural algorithm cannot obtain the approximation ratio of the Ranking algorithm in the ROM model.
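
    As a point of reference for the 1/2 bound mentioned above, here is a minimal one-pass greedy matcher: each arriving online vertex is matched to an arbitrary unmatched offline neighbour. The multi-pass Category-Advice refinement is not reproduced here, and the toy instance is invented for illustration.

```python
def greedy_online_matching(offline_nodes, online_arrivals):
    """One-pass greedy: match each arriving online vertex to an arbitrary
    unmatched offline neighbour.  Any such greedy rule achieves the 1/2
    approximation ratio mentioned above."""
    matched = {}                       # offline vertex -> online vertex
    for v, neighbours in online_arrivals:
        for u in neighbours:
            if u in offline_nodes and u not in matched:
                matched[u] = v
                break
    return matched

# Toy instance: offline vertices u1..u3; online vertices arrive with their edges.
arrivals = [("v1", ["u1", "u2"]), ("v2", ["u1"]), ("v3", ["u2", "u3"])]
print(greedy_online_matching({"u1", "u2", "u3"}, arrivals))
# -> {'u1': 'v1', 'u2': 'v3'}  (size 2; the optimum here is 3)
```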

    Shaping Biological Knowledge: Applications in Proteomics

    The central dogma of molecular biology has provided a meaningful principle for data integration in the field of genomics. In this context, integration reflects the known transitions from a chromosome to a protein sequence: transcription, intron splicing, exon assembly and translation. There is no such clear principle for integrating proteomics data, since the laws governing protein folding and interactivity are not yet fully understood. In our effort to bring together independent pieces of information relative to proteins in a biologically meaningful way, we assess the bias of bioinformatics resources and the consequent approximations in the framework of small-scale studies. We analyse proteomics data following both a data-driven approach (focusing on proteins smaller than 10 kDa) and a hypothesis-driven approach (focusing on whole bacterial proteomes). These applications are potentially the source of specialized complements to classical biological ontologies.

    Pricing in Dynamic Vehicle Routing Problems
