
    Incorporating the 10th Edition Institute of Transportation Engineers (ITE) Trip Generation Rates Into Virginia Department of Transportation Guidelines

    The Institute of Transportation Engineers (ITE) released the 10th edition of Trip Generation (TG) in 2017. The new edition significantly updated the underlying database, and some of its trip generation rates are substantially lower than those of earlier editions. This study investigates the applicability of the TG 10th edition in various Virginia contexts and recommends how to incorporate it into state guidelines. The research team surveyed 31 state transportation agencies to obtain a clear understanding of current practices in the adoption of trip rates and trip estimation approaches. We systematically compared the trip rates of the TG 9th and 10th editions using hypothesis tests and identified land uses with significant rate reductions. Trip generation data were collected at 37 sites in Virginia during weekday PM peak periods, covering mixed-use sites and single-use sites whose 10th edition rates were significantly reduced (multi-family low-rise and general office). To investigate the use of trip rates in different settings, general offices in both general urban/suburban and dense multi-use urban contexts were considered. For mixed-use developments, we explored combinations of four internal trip capture models with the 9th and 10th edition TG rates to identify the best trip estimation approach. Because all trip data were collected after the outbreak of the COVID-19 pandemic, Streetlight data were used to adjust the trip counts for the impacts of COVID-19. This study recommends that VDOT's Office of Land Use provide guidance to VDOT districts to accept traffic impact analysis reports prepared using ITE's 10th edition Trip Generation and the 3rd edition of the Trip Generation Handbook, and further that the districts accept reports that estimate internal capture for mixed-use developments using the methodology presented in the 3rd edition of the Trip Generation Handbook.
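
    As a rough illustration of the kind of comparison described above, the Python sketch below estimates weekday PM-peak trips from hypothetical 9th- and 10th-edition-style rates and runs paired t-tests against observed site counts. The rates, site data, and land use are illustrative assumptions, not values or methods from the study.

    # Hypothetical sketch: compare observed PM-peak trip counts at surveyed sites
    # against estimates derived from 9th- and 10th-edition-style ITE trip rates.
    # All numbers below are made up for illustration.
    from scipy import stats

    # trips = rate * independent variable (e.g., dwelling units)
    RATE_9TH = 0.62   # assumed PM-peak trips per dwelling unit (illustrative)
    RATE_10TH = 0.56  # assumed PM-peak trips per dwelling unit (illustrative)

    sites = [  # (dwelling_units, observed_pm_peak_trips) -- made-up survey data
        (120, 70), (200, 118), (85, 47), (150, 86), (310, 172),
    ]

    est_9 = [RATE_9TH * du for du, _ in sites]
    est_10 = [RATE_10TH * du for du, _ in sites]
    observed = [obs for _, obs in sites]

    # Paired t-tests: does either edition's estimate differ systematically
    # from the observed counts at these sites?
    for label, est in (("9th", est_9), ("10th", est_10)):
        t, p = stats.ttest_rel(est, observed)
        print(f"{label} edition vs observed: t = {t:.2f}, p = {p:.3f}")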

    Positioning Improvement for Spaceborne Laser Footprint Based on Precise Terrain Data

    Spaceborne laser altimetry is a novel active remote sensing technology for earth observation which, together with imaging spectroscopy and synthetic aperture radar, serves as a core data acquisition technology in earth observation systems. However, the horizontal positioning accuracy of laser footprints from spaceborne laser altimeters declines due to factors such as changes in the orbital environment and deterioration of instrument performance. Moreover, the limited frequency of in-orbit calibration of spaceborne laser altimeters and the non-disclosure of calibration parameters mean that users are heavily reliant on the positioning accuracy of the altimetry data as provided. To address this issue, this study proposes a new algorithm for improving the horizontal positioning accuracy of laser footprints in the absence of the altimeter's pointing and ranging parameters. In this algorithm, a high-resolution digital surface model (DSM) is used as the reference terrain, taking advantage of the fact that the elevation of laser footprints is measured more precisely than their horizontal position. By adjusting the horizontal position of the laser footprints within a small search area, the algorithm finds the shift that best aligns the laser elevations with the reference terrain; the resulting shift is then used to correct the horizontal position of footprints acquired during the same period. Using high-accuracy DSM data collected over the Xinjiang autonomous region in China and data from the GF-7 satellite, simulation experiments are performed to analyze and validate the proposed algorithm. According to the experimental results, the horizontal accuracy of the laser footprints improves significantly from 12.56 m to 3.11 m after optimization, eliminating 9.45 m of horizontal error and improving accuracy by 75.23%. The method is effective for further optimizing the horizontal position of laser altimetry data products in the absence of altimeter parameters and raw data, which promotes the application of spaceborne laser data.
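
    The alignment step described above can be pictured as a small grid search over horizontal offsets. The sketch below, which assumes simple array layouts and a nearest-neighbour DSM lookup, keeps the shift that minimizes the elevation RMSE between the footprints and the reference DSM; it is a simplified reading of the idea, not the authors' implementation.

    # Minimal sketch: search a small window of horizontal offsets and keep the
    # shift that best aligns footprint elevations with a reference DSM.
    # Array conventions and the lookup are assumptions for illustration.
    import numpy as np

    def sample_dsm(dsm, x, y, origin, pixel_size):
        """Nearest-neighbour DSM elevation lookup (a real system would interpolate)."""
        col = np.round((x - origin[0]) / pixel_size).astype(int)
        row = np.round((origin[1] - y) / pixel_size).astype(int)
        return dsm[row, col]

    def best_horizontal_shift(dsm, origin, pixel_size, foot_x, foot_y, foot_z,
                              search_radius=15.0, step=0.5):
        """Grid-search (dx, dy) within +/- search_radius metres, minimizing elevation RMSE."""
        offsets = np.arange(-search_radius, search_radius + step, step)
        best = (0.0, 0.0, np.inf)
        for dx in offsets:
            for dy in offsets:
                dsm_z = sample_dsm(dsm, foot_x + dx, foot_y + dy, origin, pixel_size)
                rmse = np.sqrt(np.mean((dsm_z - foot_z) ** 2))
                if rmse < best[2]:
                    best = (dx, dy, rmse)
        return best  # apply (dx, dy) to correct footprints from the same period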

    Automating Intersection Marking Data Collection and Condition Assessment at Scale With An Artificial Intelligence-Powered System

    Intersection markings play a vital role in providing road users with guidance and information. They gradually degrade due to vehicular traffic, rain, and snowplowing, and degraded markings can confuse drivers and increase the risk of traffic crashes. Obtaining high-quality information about intersection markings in a timely manner lays the foundation for informed decisions in safety management and maintenance prioritization. However, current labor-intensive, high-cost data collection practices make it very challenging to gather intersection data at a large scale. This paper develops an automated system that detects intersection markings and assesses their degradation condition using existing roadway geographic information system (GIS) data and aerial images. The system harnesses emerging artificial intelligence (AI) techniques such as deep learning and multi-task learning to improve robustness, accuracy, and computational efficiency. AI models were developed to detect lane-use arrows (85% mean average precision) and crosswalks (89% mean average precision) and to assess marking degradation (91% overall accuracy for lane-use arrows and 83% for crosswalks). The data acquisition and computer vision modules were integrated, and a graphical user interface (GUI) was built for the system. The proposed system can fully automate marking data collection and condition assessment at a large scale with almost zero cost and short processing time. It has great potential to propel urban science forward by providing fundamental urban infrastructure data for analysis and decision-making in critical areas such as data-driven safety management and infrastructure maintenance prioritization.
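
    A rough pipeline sketch of such a system is shown below: crop an aerial image around each GIS intersection point, detect markings, then rate each detection's degradation. The aerial_source, detector, and condition_model objects and the crop size are placeholders for illustration, not the paper's actual models or settings.

    # Pipeline skeleton (placeholders throughout): GIS points -> aerial crops ->
    # marking detection -> degradation rating per detected marking.
    from dataclasses import dataclass

    @dataclass
    class Marking:
        kind: str                    # "lane_use_arrow" or "crosswalk"
        bbox: tuple                  # (x_min, y_min, x_max, y_max) in image pixels
        condition: str = "unknown"   # e.g. "good" / "fair" / "poor"

    def assess_intersections(intersections, aerial_source, detector, condition_model,
                             crop_size_m=60):
        """intersections: iterable of (intersection_id, lon, lat) from roadway GIS."""
        results = {}
        for iid, lon, lat in intersections:
            image = aerial_source.crop(lon, lat, crop_size_m)   # assumed imagery helper
            markings = [Marking(kind=d.label, bbox=d.bbox) for d in detector(image)]
            for m in markings:
                m.condition = condition_model(image, m.bbox)    # degradation rating
            results[iid] = markings
        return results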

    Nocturne: a scalable driving benchmark for bringing multi-agent learning one step closer to the real world

    We introduce Nocturne, a new 2D driving simulator for investigating multi-agent coordination under partial observability. The focus of Nocturne is to enable research into inference and theory of mind in real-world multi-agent settings without the computational overhead of computer vision and feature extraction from images. Agents in this simulator observe only an obstructed view of the scene, mimicking human visual sensing constraints. Unlike existing benchmarks that are bottlenecked by rendering human-like observations directly from camera input, Nocturne uses efficient intersection methods to compute a vectorized set of visible features in a C++ back-end, allowing the simulator to run at 2000+ steps per second. Using open-source trajectory and map data, we construct a simulator that loads and replays arbitrary trajectories and scenes from real-world driving data. Using this environment, we benchmark reinforcement-learning and imitation-learning agents and demonstrate that these agents are far from human-level coordination ability and deviate significantly from the expert trajectories.
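
    To make the visibility computation concrete, the toy sketch below treats an object as visible when the sight line from the ego agent to it crosses no occluding segment. Nocturne's C++ back-end is far more elaborate and vectorized; this only illustrates the underlying geometric primitive, with all names chosen for the example.

    # Toy visibility check: an object is visible if the ego-to-object segment
    # does not intersect any occluding segment (e.g. another vehicle's edge).
    def segments_intersect(p1, p2, q1, q2):
        """True if segment p1-p2 properly intersects segment q1-q2 (2D orientation test)."""
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
        d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def visible_objects(ego_pos, object_positions, occluders):
        """occluders: list of (a, b) segment endpoints blocking the line of sight."""
        out = []
        for obj in object_positions:
            blocked = any(segments_intersect(ego_pos, obj, a, b) for a, b in occluders)
            if not blocked:
                out.append(obj)
        return out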

    GL-Fusion: Global-Local Fusion Network for Multi-view Echocardiogram Video Segmentation

    Cardiac structure segmentation from echocardiogram videos plays a crucial role in diagnosing heart disease. Combining multi-view echocardiogram data is essential to enhance the accuracy and robustness of automated methods. However, due to the visual disparity of the data, deriving cross-view context information remains a challenging task, and unsophisticated fusion strategies can even lower performance. In this study, we propose a novel Global-Local Fusion (GL-Fusion) network that jointly utilizes multi-view information globally and locally to improve the accuracy of echocardiogram analysis. Specifically, a Multi-view Global-based Fusion Module (MGFM) is proposed to extract global context information and to explore the cyclic relationship of different heartbeat cycles in an echocardiogram video. Additionally, a Multi-view Local-based Fusion Module (MLFM) is designed to extract correlations of cardiac structures across different views. Furthermore, we collect a multi-view echocardiogram video dataset (MvEVD) to evaluate our method. Our method achieves an 82.29% average Dice score, a 7.83% improvement over the baseline method, and outperforms other existing state-of-the-art methods. To our knowledge, this is the first exploration of a multi-view method for echocardiogram video segmentation. Code available at: https://github.com/xmed-lab/GL-Fusion. Comment: Accepted by MICCAI 202
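
    As a loose, much-reduced picture of global-plus-local fusion, the sketch below applies attention over tokens from two views (global context) and a 1x1 convolution over concatenated view features (local mixing). The module layout, dimensions, and names are assumptions for illustration, not the paper's MGFM/MLFM architecture.

    # Reduced global/local fusion of two per-view feature maps (PyTorch).
    import torch
    import torch.nn as nn

    class ToyGlobalLocalFusion(nn.Module):
        def __init__(self, channels=64, num_heads=4):
            super().__init__()
            self.global_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.local_mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

        def forward(self, feats_a, feats_b):
            # feats_*: (batch, channels, H, W) features from two echocardiogram views
            b, c, h, w = feats_a.shape
            tokens = torch.cat([feats_a, feats_b], dim=-1).flatten(2).transpose(1, 2)
            fused, _ = self.global_attn(tokens, tokens, tokens)        # global context
            ga, gb = fused.transpose(1, 2).reshape(b, c, h, 2 * w).chunk(2, dim=-1)
            return self.local_mix(torch.cat([ga, gb], dim=1))          # local cross-view mixing

    # out = ToyGlobalLocalFusion()(torch.randn(2, 64, 28, 28), torch.randn(2, 64, 28, 28))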

    GraphEcho: Graph-Driven Unsupervised Domain Adaptation for Echocardiogram Video Segmentation

    Echocardiogram video segmentation plays an important role in cardiac disease diagnosis. This paper studies unsupervised domain adaptation (UDA) for echocardiogram video segmentation, where the goal is to generalize a model trained on a source domain to other unlabelled target domains. Existing UDA segmentation methods are not suitable for this task because they do not model local information or the cyclical consistency of the heartbeat. In this paper, we introduce a newly collected CardiacUDA dataset and a novel GraphEcho method for cardiac structure segmentation. GraphEcho comprises two innovative modules, the Spatial-wise Cross-domain Graph Matching (SCGM) module and the Temporal Cycle Consistency (TCC) module, which exploit prior knowledge about echocardiogram videos, namely the consistency of cardiac structures across patients and centers and the cyclical consistency of the heartbeat, respectively. These two modules better align global and local features from the source and target domains, improving UDA segmentation results. Experimental results show that GraphEcho outperforms existing state-of-the-art UDA segmentation methods. This work lays a new and solid foundation for cardiac structure segmentation from echocardiogram videos. Code and dataset are available at: https://github.com/xmed-lab/GraphEcho. Comment: Accepted by ICCV 202
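
    The temporal cycle-consistency idea can be illustrated with a simple loss that encourages frame embeddings one estimated heartbeat apart to agree. The sketch below is a paraphrase of that general idea under assumed inputs, not the paper's TCC module.

    # Toy cycle-consistency signal: frames one cardiac cycle apart should embed similarly.
    import torch
    import torch.nn.functional as F

    def cycle_consistency_loss(frame_feats, period):
        """frame_feats: (T, D) per-frame embeddings; period: estimated frames per heartbeat."""
        if frame_feats.shape[0] <= period:
            return frame_feats.new_zeros(())
        a = frame_feats[:-period]
        b = frame_feats[period:]
        # 1 - cosine similarity, averaged over all frame pairs one cycle apart
        return (1.0 - F.cosine_similarity(a, b, dim=-1)).mean()

    # loss = cycle_consistency_loss(torch.randn(64, 128), period=25)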

    Learning Personalized Story Evaluation

    While large language models (LLMs) have shown impressive results on more objective tasks such as QA and retrieval, it remains nontrivial to evaluate their performance on open-ended text generation for reasons including (1) data contamination; (2) multi-dimensional evaluation criteria; and (3) subjectivity stemming from reviewers' personal preferences. To address these issues, we propose to model personalization in an uncontaminated open-ended generation assessment. We create two new datasets, Per-MPST and Per-DOC, for personalized story evaluation by re-purposing existing datasets with proper anonymization and new personalized labels. We further develop PERSE, a personalized story evaluation model that infers reviewer preferences and provides a personalized evaluation. Specifically, given a few exemplary reviews from a particular reviewer, PERSE predicts either a detailed review or a fine-grained comparison along several aspects (such as interestingness and surprise) for that reviewer on a new text input. Experimental results show that PERSE outperforms GPT-4 by 15.8% on Kendall correlation of story ratings and by 13.7% on pairwise preference prediction accuracy. Both datasets and code will be released.
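
    The two evaluation views mentioned above can be reproduced in a few lines: Kendall correlation between predicted and reviewer story ratings, and accuracy on pairwise preference prediction. The numbers below are toy data, not results from the paper.

    # Toy data: one reviewer's ratings of six stories vs a model's personalized predictions.
    from scipy.stats import kendalltau

    reviewer_ratings = [5, 3, 4, 2, 4, 1]
    predicted_ratings = [4, 3, 5, 2, 3, 1]

    tau, p_value = kendalltau(reviewer_ratings, predicted_ratings)
    print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")

    # Pairwise preference accuracy: does the model order each story pair the way
    # the reviewer does? Each pair is ((story_i, story_j), reviewer's preferred story).
    pairs = [((0, 1), 0), ((2, 3), 2), ((4, 5), 4)]
    correct = sum(
        1 for (i, j), pick in pairs
        if (predicted_ratings[i] > predicted_ratings[j]) == (pick == i)
    )
    print(f"pairwise accuracy = {correct / len(pairs):.2f}")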