
    The State-of-the-art of Coordinated Ramp Control with Mixed Traffic Conditions

    Ramp metering, a traditional traffic control strategy for conventional vehicles, has been widely deployed around the world since the 1960s. Meanwhile, the last decade has witnessed significant advances in connected and automated vehicle (CAV) technology and its great potential for improving safety, mobility, and environmental sustainability. A large amount of research has therefore been conducted on cooperative ramp merging for CAVs alone. However, the phase of mixed traffic, namely the coexistence of human-driven vehicles and CAVs, is expected to last for a long time. Since there is little research on system-wide ramp control under mixed traffic conditions, this paper aims to close that gap by proposing an innovative system architecture and reviewing state-of-the-art studies on the key components of the proposed system: traffic state estimation, ramp metering, driving behavior modeling, and coordination of CAVs. Together, the reviewed literature maps out an extensive landscape for the proposed system-wide coordinated ramp control under mixed traffic conditions.
    Comment: 8 pages, 1 figure, IEEE Intelligent Transportation Systems Conference - ITSC 201
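    The abstract names ramp metering as one of the system's key components without committing to a specific control law; as a minimal illustration, the sketch below shows a classic feedback metering rule (ALINEA-style). The gain, target occupancy, and rate bounds are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of an ALINEA-style feedback ramp-metering law, shown only to
# illustrate the "ramp metering" component reviewed in the paper. The gain,
# target occupancy, and rate bounds are illustrative placeholders.

def alinea_rate(prev_rate_veh_h, occupancy_pct, target_occupancy_pct=18.0,
                gain_veh_h=70.0, min_rate=200.0, max_rate=1800.0):
    """Return the next metering rate (veh/h) from the downstream occupancy error."""
    rate = prev_rate_veh_h + gain_veh_h * (target_occupancy_pct - occupancy_pct)
    return max(min_rate, min(max_rate, rate))

if __name__ == "__main__":
    rate = 900.0
    for occ in [15.0, 21.0, 26.0, 19.0]:   # downstream occupancy (%) per control interval
        rate = alinea_rate(rate, occ)
        print(f"occupancy={occ:.1f}% -> metering rate={rate:.0f} veh/h")
```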

    Vision-Based Lane-Changing Behavior Detection Using Deep Residual Neural Network

    Accurate lane localization and lane-change detection are crucial in advanced driver assistance systems and autonomous driving systems for safer and more efficient trajectory planning. Conventional localization devices such as the Global Positioning System provide only road-level resolution for car navigation, which is insufficient for lane-level decision making. The state-of-the-art technique for lane localization uses Light Detection and Ranging (LiDAR) sensors to correct the global localization error and achieve centimeter-level accuracy, but real-time implementation and widespread adoption of LiDAR are still limited by its computational burden and current cost. As a cost-effective alternative, vision-based lane change detection has been highly regarded for affordable autonomous vehicles to support lane-level localization. A deep learning-based computer vision system is developed to detect lane change behavior using images captured by a front-view camera mounted on the vehicle and data from the inertial measurement unit for highway driving. Testing results on real-world driving data show that the proposed method is robust, runs in real time, and achieves around 87% lane change detection accuracy. Compared to the average human reaction to visual stimuli, the proposed computer vision system works 9 times faster, making it capable of helping make life-saving decisions in time.
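    A minimal sketch of how a deep residual backbone might be fused with IMU readings for lane-change classification is given below, assuming PyTorch and a recent torchvision. The class labels, input sizes, and fusion head are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch of a ResNet-based lane-change classifier fusing camera frames with IMU
# data; assumes PyTorch and torchvision >= 0.13. Labels, sizes, and the fusion
# head are illustrative assumptions, not the paper's exact model.

import torch
import torch.nn as nn
from torchvision import models

class LaneChangeClassifier(nn.Module):
    """Classify a front-view frame into {keep lane, change left, change right}."""
    def __init__(self, num_classes=3, imu_features=6):
        super().__init__()
        self.backbone = models.resnet18(weights=None)       # deep residual network
        vision_dim = self.backbone.fc.in_features           # 512 for resnet18
        self.backbone.fc = nn.Identity()                     # use the backbone as a feature extractor
        self.head = nn.Sequential(                           # fuse image features with IMU readings
            nn.Linear(vision_dim + imu_features, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, imu):
        feats = self.backbone(image)                          # (B, 512)
        return self.head(torch.cat([feats, imu], dim=1))      # (B, num_classes)

if __name__ == "__main__":
    model = LaneChangeClassifier()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 6))
    print(logits.shape)  # torch.Size([2, 3])
```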

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification among different approaches and, finally, the influential papers of the field.
    Comment: version 5.0 (updated September 2018); preprint version of our accepted journal article at ACM CSUR 2018 (42 pages). This survey will be updated quarterly here (send me your newly published papers to be added in the subsequent version). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018
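    As a toy illustration of the optimization-selection problem, the sketch below exhaustively tries a small set of gcc flag combinations and times the resulting binary. The source file, flag set, and benchmark invocation are placeholders; the surveyed approaches replace this brute-force loop with learned models that predict good configurations.

```python
# Brute-force baseline for compiler optimization selection: compile a benchmark
# with each flag combination and keep the fastest. SOURCE and the flag list are
# placeholders; ML-based autotuning predicts good flag sets instead of trying all.

import itertools
import subprocess
import time

SOURCE = "kernel.c"                       # placeholder benchmark source file
CANDIDATE_FLAGS = ["-O2", "-funroll-loops", "-ffast-math", "-ftree-vectorize"]

def build_and_time(flags):
    subprocess.run(["gcc", SOURCE, "-o", "kernel", *flags], check=True)
    start = time.perf_counter()
    subprocess.run(["./kernel"], check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    best = None
    for r in range(1, len(CANDIDATE_FLAGS) + 1):
        for combo in itertools.combinations(CANDIDATE_FLAGS, r):
            elapsed = build_and_time(list(combo))
            if best is None or elapsed < best[0]:
                best = (elapsed, combo)
    print(f"fastest configuration: {best[1]} ({best[0]:.3f}s)")
```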

    Cache-aware Performance Modeling and Prediction for Dense Linear Algebra

    Countless applications cast their computational core in terms of dense linear algebra operations. These operations can usually be implemented by combining the routines offered by standard linear algebra libraries such as BLAS and LAPACK, and typically each operation can be obtained in many alternative ways. Interestingly, identifying the fastest implementation -- without executing it -- is a challenging task even for experts. An equally challenging task is that of tuning each routine to a performance-optimal configuration. Indeed, the problem is so difficult that even the default values provided by the libraries are often considerably suboptimal; as a solution, one normally has to resort to executing and timing the routines, driven by some form of parameter search. In this paper, we discuss a methodology to solve both problems: identifying the best performing algorithm within a family of alternatives, and tuning algorithmic parameters for maximum performance; in both cases, we do not execute the algorithms themselves. Instead, our methodology relies on timing and modeling the computational kernels underlying the algorithms, and on a technique for tracking the contents of the CPU cache. In general, our performance predictions allow us to tune dense linear algebra algorithms to within a few percent of the best attainable results, thus allowing computational scientists and code developers alike to efficiently optimize their linear algebra routines and codes.
    Comment: Submitted to PMBS1
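    A toy sketch of ranking algorithmic alternatives without executing them is shown below: time the underlying kernels once, then compare modeled costs of computing A B x as (A B) x versus A (B x). The matrix sizes are illustrative, and the paper's cache-tracking technique is omitted entirely.

```python
# Toy version of prediction-based algorithm selection: time the gemm and gemv
# kernels once, then pick the cheaper parenthesization of A @ B @ x from the
# modeled costs, without running either full algorithm. Sizes are illustrative,
# and the cache-tracking part of the paper's methodology is not modeled here.

import time
import numpy as np

def time_kernel(fn, *args, reps=3):
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    n = 1500
    A, B, x = np.random.rand(n, n), np.random.rand(n, n), np.random.rand(n)

    gemm = time_kernel(np.dot, A, B)       # matrix-matrix kernel timing
    gemv = time_kernel(np.dot, A, x)       # matrix-vector kernel timing

    cost_left = gemm + gemv                # (A @ B) @ x: one gemm + one gemv
    cost_right = 2 * gemv                  # A @ (B @ x): two gemv calls
    print("predicted winner:",
          "(A @ B) @ x" if cost_left < cost_right else "A @ (B @ x)")
```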

    Efficient methods of automatic calibration for rainfall-runoff modelling in the Floreon+ system

    Calibration of rainfall-runoff model parameters is an inseparable part of hydrological simulations. To achieve more accurate simulation results, it is necessary to implement an efficient calibration method that provides sufficient refinement of the model parameters in a reasonable time frame. In order to perform the calibration repeatedly for large amounts of data and provide results of calibrated model simulations for the flood warning process in a short time, the method also has to be automated. In this paper, several local and global optimization methods are tested for their efficiency. The main goal is to identify the most accurate method for the calibration process that provides accurate results in an operational time frame (typically less than 1 hour) to be used in the Floreon+ flood prediction system. All calibrations were performed on data measured during the 2010 rainfall events in the Moravian-Silesian region (Czech Republic) using our in-house rainfall-runoff model.
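    The sketch below frames calibration as a global optimization problem, assuming SciPy's differential evolution as a representative global method. The toy linear-reservoir model, data series, and parameter bounds stand in for the in-house rainfall-runoff model and are purely illustrative.

```python
# Sketch of automatic model calibration: minimize the RMSE between observed and
# simulated discharge with a global optimizer. The toy model, data, and bounds
# are illustrative stand-ins for the in-house rainfall-runoff model.

import numpy as np
from scipy.optimize import differential_evolution

rainfall = np.array([0.0, 5.0, 12.0, 8.0, 3.0, 0.0])    # rainfall per time step (toy series)
observed = np.array([0.1, 1.2, 4.0, 3.5, 1.8, 0.6])     # observed discharge (toy series)

def simulate_runoff(params, rain):
    """Toy linear-reservoir model with two parameters: runoff coefficient and recession."""
    coeff, recession = params
    q, out = 0.0, []
    for r in rain:
        q = recession * q + coeff * r
        out.append(q)
    return np.array(out)

def rmse(params):
    return np.sqrt(np.mean((simulate_runoff(params, rainfall) - observed) ** 2))

if __name__ == "__main__":
    result = differential_evolution(rmse, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
    print("calibrated parameters:", result.x, "RMSE:", result.fun)
```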

    An efficient genetic algorithm for large-scale transmit power control of dense and robust wireless networks in harsh industrial environments

    The industrial wireless local area network (IWLAN) is increasingly dense, due not only to the penetration of wireless applications into shop floors and warehouses, but also to the rising need for redundancy for robust wireless coverage. Instead of simply powering on all access points (APs), there is an unavoidable need to dynamically control the transmit power of APs on a large scale, in order to minimize interference and adapt the coverage to the latest shadowing effects of dominant obstacles in an industrial indoor environment. To fulfill this need, this paper formulates a transmit power control (TPC) model that enables both powering APs on/off and calibrating the transmit power of each AP that is powered on. This TPC model uses an empirical one-slope path loss model that accounts for three-dimensional obstacle shadowing effects, enabling accurate yet simple coverage prediction. An efficient genetic algorithm (GA), named GATPC, is designed to solve this TPC model even on a large scale. To this end, it leverages repair mechanism-based population initialization, crossover, and mutation, as well as parallelism and dedicated speedup measures. GATPC was experimentally validated in a small-scale IWLAN deployed in a real industrial indoor environment. It was further numerically demonstrated and benchmarked at both small and large scales regarding the effectiveness and scalability of TPC. Moreover, a sensitivity analysis was performed to reveal the produced interference and the qualification rate of GATPC as a function of the target coverage percentage as well as the number and placement direction of dominant obstacles.
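    The sketch below pairs a one-slope path loss model with a tiny genetic algorithm that evolves per-AP transmit power levels, where an off state stands in for powering an AP down. AP and receiver positions, the path loss exponent, thresholds, and GA settings are illustrative assumptions rather than the paper's GATPC configuration.

```python
# Tiny GA over per-AP transmit power levels (None = AP powered off) with coverage
# predicted by a one-slope path loss model. All positions, thresholds, and GA
# settings are illustrative assumptions, not the GATPC setup from the paper.

import math
import random

APS = [(0.0, 0.0), (30.0, 0.0), (60.0, 0.0)]             # AP coordinates (m)
POINTS = [(x, 5.0) for x in range(0, 61, 5)]             # receiver grid (m)
POWER_LEVELS = [None, 10.0, 15.0, 20.0]                   # None = off, otherwise dBm
PL0, EXPONENT, SENSITIVITY = 40.0, 3.0, -70.0             # one-slope model + RX threshold (dBm)

def received_dbm(tx_dbm, ap, point):
    d = max(1.0, math.dist(ap, point))
    return tx_dbm - (PL0 + 10.0 * EXPONENT * math.log10(d))   # one-slope path loss

def fitness(genome):
    covered = 0
    total_power = sum(p for p in genome if p is not None)
    for pt in POINTS:
        signals = [received_dbm(p, ap, pt) for p, ap in zip(genome, APS) if p is not None]
        if signals and max(signals) >= SENSITIVITY:
            covered += 1
    return covered / len(POINTS) - 0.001 * total_power        # coverage minus a power/interference penalty

def evolve(generations=50, pop_size=20):
    pop = [[random.choice(POWER_LEVELS) for _ in APS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            if random.random() < 0.2:                              # mutation
                child[random.randrange(len(child))] = random.choice(POWER_LEVELS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best per-AP power (dBm, None = off):", best, "fitness:", round(fitness(best), 3))
```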