25 research outputs found

    Incentivizing the use of bike trailers for dynamic repositioning in bike sharing systems

    A Bike Sharing System (BSS) is a green mode of transportation employed extensively for short-distance travel in major cities around the world. Unfortunately, user behaviour driven by personal needs can often leave base stations empty or full, resulting in lost customer demand. To counter this loss, BSS operators typically deploy a fleet of carrier vehicles to reposition bikes between stations. However, this fuel-burning mode of repositioning incurs significant routing and labor costs and further increases carbon emissions. We therefore propose a potentially self-sustaining and environment-friendly system of dynamic repositioning that moves idle bikes during the day with the help of bike trailers; a bike trailer is an add-on to a bike that can carry 3-5 bikes at once. Specifically, we make the following key contributions: (i) we provide an optimization formulation that generates “repositioning” tasks so as to minimize the expected lost demand over past demand scenarios; (ii) within the budget constraints of the operator, we design a mechanism to crowdsource these tasks among users willing to execute them; (iii) finally, we provide extensive results on a wide range of demand scenarios from a real-world data set to demonstrate that our approach is highly competitive with the existing fuel-burning mode of repositioning while remaining green.
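    As a rough illustration of the task-generation step, the sketch below scores candidate trailer tasks by the expected lost demand they avert across past demand scenarios and selects tasks greedily within the operator's incentive budget. The names, data layout, and greedy selection rule are assumptions for illustration, not the paper's actual optimization formulation.

```python
# Illustrative sketch: score candidate trailer tasks (move a few bikes between
# stations) by the expected lost demand they avert over historical demand
# scenarios, then pick tasks greedily within the operator's incentive budget.
# All names and the greedy rule are assumptions, not the paper's formulation.
from dataclasses import dataclass

@dataclass
class Task:
    src: int          # station to take idle bikes from
    dst: int          # station to deliver them to
    bikes: int        # a trailer carries roughly 3-5 bikes per trip
    incentive: float  # payment offered to the user executing the task

def lost_demand(stock, demand):
    """Lost demand at one station in one scenario: rentals that find no bike."""
    return max(demand - stock, 0)

def expected_savings(task, stocks, scenarios):
    """Average lost demand averted by a task over past demand scenarios."""
    saved = 0.0
    for demand in scenarios:  # demand: dict station -> expected rentals
        before = (lost_demand(stocks[task.src], demand[task.src])
                  + lost_demand(stocks[task.dst], demand[task.dst]))
        after = (lost_demand(stocks[task.src] - task.bikes, demand[task.src])
                 + lost_demand(stocks[task.dst] + task.bikes, demand[task.dst]))
        saved += before - after
    return saved / len(scenarios)

def select_tasks(candidates, stocks, scenarios, budget):
    """Greedy stand-in for the budgeted task-generation step."""
    chosen = []
    ranked = sorted(candidates,
                    key=lambda t: expected_savings(t, stocks, scenarios),
                    reverse=True)
    for task in ranked:
        if task.incentive <= budget and expected_savings(task, stocks, scenarios) > 0:
            chosen.append(task)
            budget -= task.incentive
    return chosen
```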

    A Deep Reinforcement Learning Framework for Rebalancing Dockless Bike Sharing Systems

    Bike sharing provides an environment-friendly way of traveling and is booming all over the world. Yet, due to the high similarity of user travel patterns, the bike imbalance problem constantly occurs, especially in dockless bike sharing systems, causing a significant impact on service quality and company revenue. It has therefore become a critical task for bike sharing systems to resolve such imbalance efficiently. In this paper, we propose a novel deep reinforcement learning framework for incentivizing users to rebalance such systems. We model the problem as a Markov decision process and take both spatial and temporal features into consideration. We develop a novel deep reinforcement learning algorithm called Hierarchical Reinforcement Pricing (HRP), which builds upon the Deep Deterministic Policy Gradient algorithm. Unlike existing methods that often ignore spatial information and rely heavily on accurate prediction, HRP captures both spatial and temporal dependencies using a divide-and-conquer structure with an embedded localized module. We conduct extensive experiments to evaluate HRP on a dataset from Mobike, a major Chinese dockless bike sharing company. Results show that HRP performs close to the 24-timeslot look-ahead optimization and outperforms state-of-the-art methods in both service level and bike distribution. It also transfers well when applied to unseen areas.
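    For readers unfamiliar with the underlying machinery, the sketch below shows a bare-bones actor-critic pair for region-level incentive pricing in the spirit of DDPG. The state layout, network sizes, and price bound are assumptions, and HRP's hierarchical divide-and-conquer structure and localized module are not reproduced here.

```python
# Minimal actor-critic skeleton for incentive pricing, DDPG-style.
# Network sizes, the price cap, and the state encoding are illustrative
# assumptions; this is not the HRP architecture.
import torch
import torch.nn as nn

class PricingActor(nn.Module):
    """Maps a spatio-temporal state (per-region supply/demand features plus a
    time-of-day encoding) to an incentive price per region in [0, max_price]."""
    def __init__(self, state_dim, n_regions, max_price=5.0):
        super().__init__()
        self.max_price = max_price
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_regions), nn.Sigmoid(),
        )

    def forward(self, state):
        return self.max_price * self.net(state)

class Critic(nn.Module):
    """Q(state, prices): estimated long-run return (e.g. service level)
    of applying a pricing action in a given state."""
    def __init__(self, state_dim, n_regions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_regions, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state, prices):
        return self.net(torch.cat([state, prices], dim=-1))

# The replay buffer, target networks, and exploration noise of a full
# DDPG training loop are omitted for brevity.
```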

    Mechanism Design with Predicted Task Revenue for Bike Sharing Systems

    Bike sharing systems have been widely deployed around the world in recent years. A core problem in such systems is to reposition the bikes so that the distribution of bike supply is reshaped to better match the dynamic bike demand. When the bike-sharing company or platform is able to predict the revenue of each reposition task from historic data, an additional constraint is to cap the payment for each task below its predicted revenue. In this paper, we propose an incentive mechanism called TruPreTar to incentivize users to park bicycles at locations desired by the platform, toward rebalancing supply and demand. TruPreTar possesses four important economic and computational properties, including truthfulness and budget feasibility. Furthermore, we prove that even when the payment budget is tight, the total revenue still exceeds or equals the budget; otherwise, TruPreTar achieves a 2-approximation of the optimal (revenue-maximizing) solution, which is close to the lower bound of at least 2√2 that we also prove. Using an industrial dataset obtained from a large bike-sharing company, our experiments show that TruPreTar is effective in rebalancing bike supply and demand and, as a result, generates high revenue that outperforms several benchmark mechanisms.
    Comment: Accepted by AAAI 2020; this is the full version that contains all the proofs.
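    The toy allocation rule below illustrates the setting only: users report a cost for a reposition task, each task's payment is capped by its predicted revenue, and the total payout must respect a budget. The greedy rule and variable names are illustrative assumptions, and the sketch omits the payment scheme that gives TruPreTar its truthfulness guarantee.

```python
# Toy allocation rule for the budgeted, revenue-capped setting described above.
# This is not TruPreTar: it keeps only the constraints (payment <= predicted
# revenue, total payout <= budget) and omits the truthful payment rule.

def allocate(bids, predicted_revenue, budget):
    """bids: dict task_id -> user-reported cost to perform the task.
    predicted_revenue: dict task_id -> revenue the platform expects from it.
    Returns the accepted tasks and the remaining budget."""
    accepted = []
    # Consider the most profitable tasks (revenue net of reported cost) first.
    order = sorted(bids, key=lambda t: predicted_revenue[t] - bids[t], reverse=True)
    for task in order:
        cost = bids[task]
        # Payment is capped by the task's predicted revenue and by the budget.
        if cost <= predicted_revenue[task] and cost <= budget:
            accepted.append(task)
            budget -= cost
    return accepted, budget

# Example with made-up numbers: three tasks, a budget of 4.
costs = {"t1": 1.5, "t2": 2.0, "t3": 3.5}      # reported costs
revenue = {"t1": 3.0, "t2": 2.5, "t3": 3.0}    # predicted revenues
print(allocate(costs, revenue, budget=4.0))    # -> (['t1', 't2'], 0.5)
```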

    Rebalancing shared mobility systems by user incentive scheme via reinforcement learning

    Shared mobility systems regularly suffer from an imbalance of vehicle supply within the system; if such imbalances are not mitigated, some users cannot be serviced. There is increasing interest in using reinforcement learning (RL) techniques to improve the resource supply balance and service level of such systems. The goal of these techniques is to produce an effective user incentivization policy that encourages users of a shared mobility system to slightly alter their travel behavior in exchange for a small monetary incentive. Over time, these slight changes in user behavior are intended to increase the service level of the shared mobility system and improve user experience. In this thesis, two important questions are explored: (1) what state-action representation should be used to produce an effective user incentive scheme for a shared mobility system? (2) how effective are reinforcement learning-based solutions to the rebalancing problem under varying levels of resource supply, user demand, and budget? Our extensive empirical results, based on data-driven simulation, show that: (1) a state space with predicted user behavior, coupled with a simple action mechanism, produces an effective incentive scheme under varying environment scenarios; and (2) the reinforcement learning-based incentive mechanisms perform at varying degrees of effectiveness, in terms of service level, under different environmental scenarios.
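    A minimal tabular Q-learning sketch of the first question (the state-action representation) follows. The state discretization, incentive levels, and update rule are assumptions for illustration rather than the thesis's exact design.

```python
# Minimal tabular Q-learning sketch: the state couples current supply with a
# prediction of the user's behaviour, and the action is simply which incentive
# level (if any) to offer. Discretization, incentive levels, and reward are
# illustrative assumptions.
import random
from collections import defaultdict

INCENTIVES = [0.0, 0.5, 1.0]   # monetary offers the system can make

def make_state(fill_level, predicted_dropoff_zone, hour, budget_left):
    """Discretized state: (bucketed station fill, predicted drop-off zone,
    coarse time of day, whether any incentive budget remains)."""
    return (round(fill_level, 1), predicted_dropoff_zone, hour // 4, budget_left > 0)

class IncentiveAgent:
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        """Epsilon-greedy choice of which incentive level to offer."""
        if random.random() < self.epsilon:
            return random.choice(INCENTIVES)
        return max(INCENTIVES, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update; reward could be the change in
        service level minus the incentive paid out."""
        best_next = max(self.q[(next_state, a)] for a in INCENTIVES)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```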

    Resource constrained deep reinforcement learning

    In urban environments, supply resources have to be constantly matched to the "right" locations (where customer demand is present) so as to improve quality of life. For instance, ambulances have to be matched to base stations regularly to reduce response times to emergency incidents in EMS (Emergency Management Systems), and vehicles (cars, bikes, scooters, etc.) have to be matched to docking stations to reduce lost demand in shared mobility systems. Such problem domains are challenging owing to demand uncertainty, combinatorial action spaces (due to allocation), and constraints on the allocation of resources (e.g., total resources, and minimum and maximum numbers of resources at locations and regions). Existing systems typically employ myopic, greedy optimization approaches to allocate supply resources to locations; such approaches are typically unable to handle surges or variance in demand patterns well. Recent research has demonstrated the ability of Deep RL methods to adapt well to highly uncertain environments. However, existing Deep RL methods are unable to handle combinatorial action spaces and constraints on the allocation of resources. To that end, we have developed three approaches on top of the well-known actor-critic approach DDPG (Deep Deterministic Policy Gradient) that are able to handle constraints on resource allocation. More importantly, we demonstrate that they outperform leading approaches on simulators validated on semi-real and real data sets.
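    One common way to reconcile a continuous-control policy such as DDPG with allocation constraints is to repair its raw output so that it satisfies the total-resource and per-location bounds. The sketch below is an illustrative heuristic in that spirit, with made-up bounds and scores; it is not the exact method developed in this work.

```python
# Illustrative repair step: the actor emits unconstrained per-location scores,
# and this heuristic turns them into an integer allocation satisfying the
# total-resource and per-location min/max constraints. Not the papers' method.
import numpy as np

def repair_allocation(scores, total, lo, hi):
    """scores: raw per-location actor outputs (any real numbers).
    total: number of resources to allocate; lo/hi: per-location bounds (arrays).
    Returns an integer allocation with lo <= alloc <= hi and alloc.sum() == total."""
    assert lo.sum() <= total <= hi.sum(), "constraints must be feasible"
    alloc = lo.copy()                      # start from the minimum requirements
    remaining = total - alloc.sum()
    order = np.argsort(-scores)            # locations by descending score
    # Hand out remaining units one at a time to the highest-scoring location
    # that still has headroom below its upper bound.
    while remaining > 0:
        for i in order:
            if alloc[i] < hi[i]:
                alloc[i] += 1
                remaining -= 1
                break
    return alloc

# Example with made-up numbers: 10 resources over 4 locations.
scores = np.array([0.9, 0.1, 0.4, 0.7])
print(repair_allocation(scores, total=10,
                        lo=np.array([1, 1, 1, 1]),
                        hi=np.array([4, 3, 3, 4])))   # -> [4 1 1 4]
```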

    Dynamic repositioning to reduce lost demand in Bike Sharing Systems

    National Research Foundation (NRF) Singapore under the SMART Center for Future Mobility