
    Methods for Utilizing Connected Vehicle Data in Support of Traffic Bottleneck Management

    The decision to select the best Intelligent Transportation System (ITS) technologies from available options has always been a challenging task. The availability of connected vehicle/automated vehicle (CV/AV) technologies in the near future is expected to add to the complexity of the ITS investment decision-making process. The goal of this research is to develop a multi-criteria decision-making analysis (MCDA) framework to support traffic agencies’ decision-making process with consideration of CV/AV technologies. The decision to select between technology alternatives is based on identified performance measures and criteria, and on the constraints associated with each technology. Methods inspired by the literature were developed for incident/bottleneck detection and back-of-queue (BOQ) estimation and warning based on connected vehicle (CV) technologies. The mobility benefits of incident/bottleneck detection with different technologies were assessed using microscopic simulation. The performance of technology alternatives was assessed using simulated CV and traffic detector data in a microscopic simulation environment, and these results feed the proposed MCDA method for the purpose of alternative selection. In addition to performance measures, there are a number of constraints and risks that need to be assessed in the alternative selection process. Traditional alternative analyses based on deterministic return-on-investment analysis are unable to capture the risks and uncertainties associated with the investment problem. This research utilizes a combination of a stochastic return-on-investment analysis and a multi-criteria decision analysis method, the Analytical Hierarchy Process (AHP), to select between ITS deployment alternatives considering emerging technologies. The approach is applied to an ITS investment case study to support freeway bottleneck management. The results of this dissertation indicate that utilizing CV data for freeway segments is significantly more cost-effective than using point detectors for detecting incidents and providing travel time estimates one year after CV technology becomes mandatory for all new vehicles, for corridors with moderate to heavy traffic. However, for corridors with light traffic, there is a probability of CV deployment not being effective in the first few years due to the low measurement reliability of travel times and the high latency of incident detection associated with smaller samples of collected data.
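
    As a hedged illustration of the AHP component mentioned above (not the dissertation's actual implementation), the following Python sketch derives criterion weights and a consistency ratio from a pairwise comparison matrix; the criteria and comparison values are hypothetical.

```python
# Hedged sketch: criteria weights via the Analytical Hierarchy Process (AHP).
# All pairwise comparison values below are illustrative, not from the dissertation.
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Return priority weights (principal eigenvector) and Saaty's consistency ratio."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                      # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index (partial table)
    cr = ci / ri if ri > 0 else 0.0
    return w, cr

# Hypothetical criteria: mobility benefit, deployment cost, risk
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights, cr)   # CR < 0.1 is usually considered acceptably consistent
```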

    An integrated variable speed limit and ALINEA ramp metering model in the presence of High Bus Volume

    Under many circumstances, when providing full bus priority, urban transport officials have to operate buses in mixed traffic because of road network limitations. In the case of Istanbul’s Metrobus lane, for instance, when the route comes to the pre-designed Bosphorus Bridge, it has no choice but to merge with highway mixed traffic until it gets to the other side. Much has been written on the relative success of implementing Ramp Metering (RM), for example ALINEA (‘Asservissement LINéaire d’Entrée Autoroutière’), and Variable Speed Limits (VSL), two of the most widely used “merging congestion” management strategies, both separately and in combination. However, there has been no detailed study of the combination of these systems in the face of high bus volume. This being the case, the ultimate goal of this study is to bridge this gap by developing and proposing a combination of VSL and RM strategies in the presence of high bus volume (VSL+ALINEA/B). The proposed model has been coded using the microscopic simulation software VISSIM and its vehicle actuated programming (VAP) feature, referred to as VisVAP. For current traffic conditions, the proposed model improves total travel time by 9.0%, lowers the average delays of mixed traffic and buses by 29.1% and 81.5% respectively, increases average speed by 12.7%, boosts bottleneck throughput by 2.8%, and lowers fuel consumption and Carbon Monoxide (CO), Nitrogen Oxides (NOx), and Volatile Organic Compounds (VOC) emissions by 17.3% compared to the existing “VSL+ALINEA” model. The results of the scenario analysis confirm that the proposed model not only decreases delay times on the Metrobus system but also mitigates the adverse effects of high bus volume on adjacent mixed traffic flow along highway sections.
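
    The study builds on the standard ALINEA local ramp-metering law. As a hedged illustration (not the paper's VSL+ALINEA/B implementation), the sketch below shows the classical ALINEA update, where the metering rate is corrected in proportion to the gap between a target downstream occupancy and the measured one; the gain, set-point, and bounds are hypothetical tuning values.

```python
# Hedged sketch of the classical ALINEA ramp-metering update law.
# Occupancies are in percent; K_R, o_hat, and the rate bounds are illustrative values.
def alinea_rate(r_prev: float, occ_downstream: float,
                o_hat: float = 18.0, K_R: float = 70.0,
                r_min: float = 200.0, r_max: float = 1800.0) -> float:
    """Return the metering rate for the next interval (veh/h)."""
    r = r_prev + K_R * (o_hat - occ_downstream)   # integral-style correction toward the set-point
    return max(r_min, min(r_max, r))

# Example: downstream occupancy above the set-point, so the metering rate is reduced
print(alinea_rate(r_prev=1200.0, occ_downstream=25.0))
```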

    Exploring performance and power properties of modern multicore chips via simple machine models

    Modern multicore chips show complex behavior with respect to performance and power. Starting with the Intel Sandy Bridge processor, it has become possible to directly measure the power dissipation of a CPU chip and correlate this data with the performance properties of the running code. Going beyond a simple bottleneck analysis, we employ the recently published Execution-Cache-Memory (ECM) model to describe the single- and multi-core performance of streaming kernels. The model refines the well-known roofline model, since it can predict the scaling and the saturation behavior of bandwidth-limited loop kernels on a multicore chip. The saturation point is especially relevant for considerations of energy consumption. From power dissipation measurements of benchmark programs with vastly different requirements on the hardware, we derive a simple, phenomenological power model for the Sandy Bridge processor. Together with the ECM model, we are able to explain many peculiarities in the performance and power behavior of multicore processors, and derive guidelines for energy-efficient execution of parallel programs. Finally, we show that the ECM and power models can be successfully used to describe the scaling and power behavior of a lattice-Boltzmann flow solver code. Comment: 23 pages, 10 figures. Typos corrected, DOI added
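
    For context on the roofline model that the ECM model refines, the following minimal sketch computes the roofline performance bound from a kernel's arithmetic intensity; the peak-performance and bandwidth numbers are illustrative placeholders, not measured Sandy Bridge figures.

```python
# Hedged sketch of the roofline performance bound (the model that ECM refines).
# Peak performance and memory bandwidth are illustrative, not chip-specific measurements.
def roofline_gflops(arith_intensity: float, peak_gflops: float = 200.0,
                    mem_bw_gbs: float = 40.0) -> float:
    """Upper performance bound (GFlop/s) for a kernel with the given
    arithmetic intensity (flops per byte of memory traffic)."""
    return min(peak_gflops, arith_intensity * mem_bw_gbs)

# Example: a streaming, triad-like kernel with low arithmetic intensity is bandwidth-bound
print(roofline_gflops(arith_intensity=0.125))   # 0.125 flop/byte -> 5 GFlop/s with these numbers
```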

    Optimal Orchestration of Virtual Network Functions

    The emergence of Network Functions Virtualization (NFV) is bringing a set of novel algorithmic challenges to the operation of communication networks. NFV introduces volatility in the management of network functions, which can be dynamically orchestrated, i.e., placed, resized, etc. Virtual Network Functions (VNFs) can belong to VNF chains, where nodes in a chain can serve multiple demands coming from the network edges. In this paper, we formally define the VNF placement and routing (VNF-PR) problem, proposing a versatile linear programming formulation that is able to accommodate specific features and constraints of NFV infrastructures, and that is substantially different from existing virtual network embedding formulations in the state of the art. We also design a math-heuristic able to scale with multiple objectives and large instances. Through extensive simulations, we draw conclusions on the trade-off achievable between classical traffic engineering (TE) and NFV infrastructure efficiency goals, evaluating both Internet access and Virtual Private Network (VPN) demands. We also quantitatively compare the performance of our VNF-PR heuristic with the classical Virtual Network Embedding (VNE) approach proposed for NFV orchestration, showing the computational differences and how our approach can provide a more stable and closer-to-optimum solution.
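
    To make the flavour of such a placement optimization concrete, here is a hedged, toy placement ILP written with PuLP; it only captures instance opening, demand assignment, and node capacity, whereas the paper's VNF-PR formulation additionally handles chaining, routing, and TE objectives. All node names, capacities, and demand values are hypothetical.

```python
# Hedged sketch: a toy VNF placement ILP (open instances, assign demands, respect capacity).
# This is not the paper's VNF-PR model; names and numbers are hypothetical.
import pulp

nodes = ["n1", "n2", "n3"]
demands = {"d1": 40, "d2": 25}            # demand -> required processing capacity
node_cap = {"n1": 50, "n2": 50, "n3": 50}

prob = pulp.LpProblem("vnf_placement", pulp.LpMinimize)

# y[n] = 1 if a VNF instance is opened at node n; x[d][n] = 1 if demand d is served at n
y = pulp.LpVariable.dicts("open", nodes, cat="Binary")
x = pulp.LpVariable.dicts("assign", (list(demands), nodes), cat="Binary")

prob += pulp.lpSum(y[n] for n in nodes)                          # minimize opened instances

for d in demands:
    prob += pulp.lpSum(x[d][n] for n in nodes) == 1              # each demand served exactly once
for n in nodes:
    prob += pulp.lpSum(demands[d] * x[d][n] for d in demands) <= node_cap[n] * y[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({n: int(y[n].value()) for n in nodes})
```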

    Revisiting the empirical fundamental relationship of traffic flow for highways using a causal econometric approach

    The fundamental relationship of traffic flow is empirically estimated by fitting a regression curve to a cloud of observations of traffic variables. Such estimates, however, may suffer from confounding/endogeneity bias due to omitted variables such as driving behaviour and weather. To this end, this paper adopts a causal approach to obtain an unbiased estimate of the fundamental flow-density relationship using traffic detector data. In particular, we apply a Bayesian non-parametric spline-based regression approach with instrumental variables to adjust for the aforementioned confounding bias. The proposed approach is benchmarked against standard curve-fitting methods in estimating the flow-density relationship for three highway bottlenecks in the United States. Our empirical results suggest that the saturated (or hypercongested) regime of the flow-density relationship estimated with correlational curve-fitting methods may be severely biased, which in turn leads to biased estimates of important traffic control inputs such as capacity and capacity drop. We emphasise that our causal approach is based on the physical laws of vehicle movement in a traffic stream, as opposed to the demand-supply framework adopted in the economics literature. By doing so, we also aim to conciliate the engineering and economics approaches to this empirical problem. Our results thus have important implications for both traffic engineers and transport economists.
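
    As background only (the paper's estimator is a non-parametric Bayesian spline with instrumental variables and is not reproduced here), the fundamental identity of traffic flow and a classical parametric flow-density curve (Greenshields), often used as a curve-fitting baseline, read as follows, with q the flow, k the density, v the space-mean speed, v_f the free-flow speed, and k_j the jam density.

```latex
% Fundamental identity and the Greenshields flow-density curve (background, not the paper's model)
q = k\,v, \qquad
v(k) = v_f\left(1 - \frac{k}{k_j}\right)
\;\Rightarrow\;
q(k) = v_f\,k\left(1 - \frac{k}{k_j}\right),
\qquad
q_{\max} = \frac{v_f\,k_j}{4} \text{ at } k = \frac{k_j}{2}.
```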

    Training of Convolutional Networks on Multiple Heterogeneous Datasets for Street Scene Semantic Segmentation

    We propose a convolutional network with hierarchical classifiers for per-pixel semantic segmentation, which can be trained on multiple, heterogeneous datasets and exploit their semantic hierarchy. Our network is the first to be simultaneously trained on three different datasets from the intelligent vehicles domain, i.e. Cityscapes, GTSDB and Mapillary Vistas, and is able to handle different semantic levels of detail, class imbalances, and different annotation types, i.e. dense per-pixel and sparse bounding-box labels. We assess our hierarchical approach by comparing against flat, non-hierarchical classifiers, and we show improvements in mean pixel accuracy of 13.0% for Cityscapes classes, 2.4% for Vistas classes, and 32.3% for GTSDB classes. Our implementation achieves inference rates of 17 fps at a resolution of 520x706 for 108 classes running on a GPU. Comment: IEEE Intelligent Vehicles 2018
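
    As a hedged sketch of the hierarchical-classifier idea (not the authors' network), the snippet below shows how per-pixel leaf-class probabilities can be composed by chaining a softmax over super-classes with per-branch softmaxes over their sub-classes; the two-level hierarchy, class names, and logits are hypothetical.

```python
# Hedged sketch: composing leaf-class probabilities from a hierarchy of classifiers.
# The hierarchy and logit values are hypothetical, for illustration only.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical logits for one pixel: 2 super-classes, each with its own sub-class head
root_logits = np.array([1.2, -0.3])            # e.g. "road user" vs "infrastructure"
sub_logits = [np.array([0.5, 2.0]),            # e.g. "car", "bus"
              np.array([1.0, 0.1, -0.5])]      # e.g. "sign", "pole", "building"

p_root = softmax(root_logits)
p_leaf = np.concatenate([p_root[i] * softmax(sub_logits[i]) for i in range(len(sub_logits))])
print(p_leaf, p_leaf.sum())                    # leaf probabilities sum to 1
```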

    Methods of Congestion Control for Adaptive Continuous Media

    Since the first exchange of data between machines in different locations in the early 1960s, computer networks have grown exponentially, with millions of people now using the Internet. With this, there has also been a rapid increase in the kinds of services offered over the World Wide Web, from simple e-mail to streaming video. It is generally accepted that the commonly used TCP/IP protocol suite alone is not adequate for a number of modern applications with high bandwidth and minimal delay requirements. Many technologies are emerging, such as IPv6, DiffServ and IntServ, which aim to replace the one-size-fits-all approach of the current IPv4. There is a consensus that networks will have to be capable of multi-service operation and will have to isolate different classes of traffic through bandwidth partitioning, so that, for example, low-priority best-effort traffic does not cause delay for high-priority video traffic. However, this research identifies that even within a class there may be delays or losses due to congestion, and the problem will require different solutions in different classes. The focus of this research is on the requirements of the adaptive continuous media class. These are traffic flows that require a good Quality of Service but are also able to adapt to network conditions by accepting some degradation in quality. It is potentially the most flexible traffic class and therefore one of the most useful for an increasing number of applications. This thesis discusses the QoS requirements of adaptive continuous media and identifies an ideal feedback-based control system that would be suitable for this class. A number of current methods of congestion control have been investigated, and two methods that have been shown to be successful with data traffic have been evaluated to ascertain whether they could be adapted for adaptive continuous media. A novel method of control based on percentile monitoring of the queue occupancy is then proposed and developed. Simulation results demonstrate that the percentile-monitoring method is more appropriate to this type of flow. The problem of congestion control at aggregating nodes of the network hierarchy, where thousands of adaptive flows may be aggregated into a single flow, is then considered. A unique method of pricing based on mean and variance is developed such that each individual flow is charged fairly for its contribution to the congestion.
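
    To illustrate the kind of mechanism the thesis proposes, here is a hedged sketch of percentile-based monitoring of queue occupancy that emits a feedback signal for adaptive flows; the window size, percentile, thresholds, and signal names are hypothetical tuning choices, not the thesis's actual design.

```python
# Hedged sketch: percentile monitoring of queue occupancy driving feedback to adaptive flows.
# Window, percentile, and thresholds are hypothetical tuning parameters.
from collections import deque
import numpy as np

class PercentileQueueMonitor:
    def __init__(self, window: int = 200, percentile: float = 90.0,
                 high: float = 0.8, low: float = 0.3):
        self.samples = deque(maxlen=window)   # recent queue-occupancy samples in [0, 1]
        self.percentile, self.high, self.low = percentile, high, low

    def update(self, occupancy: float) -> str:
        """Record a sample and return a feedback signal for the adaptive sender."""
        self.samples.append(occupancy)
        p = np.percentile(self.samples, self.percentile)
        if p > self.high:
            return "decrease"     # sustained congestion: ask flows to degrade quality
        if p < self.low:
            return "increase"     # headroom available: flows may improve quality
        return "hold"

monitor = PercentileQueueMonitor()
for occ in [0.2, 0.4, 0.9, 0.95, 0.9]:
    print(monitor.update(occ))
```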
