
    Artificial Neural Network Based Prediction Mechanism for Wireless Network on Chips Medium Access Control

    As per Moore’s law, continuous improvement in silicon process technologies has made the integration of hundreds of cores onto a single chip possible. This has resulted in a paradigm shift towards multicore and many-core chips, where hundreds of cores can be integrated on the same die and interconnected using an on-chip packet-switched network called a Network-on-Chip (NoC). The various tasks running on different cores generate different rates of communication between pairs of cores. This leads to increased spatial and temporal variation in the workloads, which impacts long-distance data communication over multi-hop wireline paths in conventional NoCs. Among the alternatives, low-latency wireless interconnects operating in the millimeter-wave (mm-wave) band are a near-term solution to this multi-hop communication problem in traditional NoCs, owing to their CMOS compatibility and energy efficiency. This has led to the recent exploration of mm-wave wireless technologies in wireless NoC architectures (WiNoCs). In a WiNoC, the mm-wave wireless interconnect is realized by equipping some NoC switches with a wireless interface (WI) that contains an antenna and a transceiver circuit tuned to operate at mm-wave frequencies. To enable collision-free and energy-efficient communication among the WIs, each WI is also equipped with a medium access control (MAC) unit. Due to its simplicity and low-overhead implementation, a token-passing MAC mechanism enabling Time Division Multiple Access (TDMA) has been adopted in many WiNoC architectures. However, such a simple MAC mechanism is agnostic of the demands of the WIs. Depending on the tasks mapped onto a multicore system, the demand through the WIs can vary both spatially and temporally. Hence, if the MAC is agnostic of this demand variation, energy is wasted when no flit is transferred over the wireless channel. To utilize the wireless channel efficiently, MAC mechanisms that can dynamically allocate the token possession period of the WIs have recently been explored for WiNoCs. In these dynamic MAC mechanisms, a history-based predictor is used to estimate the bandwidth demand of the WIs and adjust the token possession period with respect to the traffic variation. However, such simple history-based predictors are not accurate and limit the performance gains of dynamic MACs in a WiNoC. In this work, we investigate the design of an artificial neural network (ANN) based prediction methodology to accurately predict the bandwidth demand of each WI. Through system-level simulation, we show that dynamic MAC mechanisms enabled with the ANN-based prediction can significantly improve the performance of a WiNoC in terms of peak bandwidth, packet energy, and latency compared to state-of-the-art dynamic MAC mechanisms.
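    As a rough illustration of the idea (the abstract does not specify the predictor's inputs, architecture, or how predictions map to token periods, so all names and values below are assumptions), a small feed-forward ANN can be trained to map a WI's recent per-window flit counts to its demand in the next token window, and a dynamic MAC could then allocate token possession time in proportion to the predicted demands:

        # Illustrative sketch only: an ANN predicting each wireless interface's (WI)
        # next-window bandwidth demand from its recent traffic history. Window length,
        # features, and network size are assumptions, not taken from the paper.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        HISTORY = 8  # past token windows used as input features (assumed)

        def make_training_set(trace):
            # Build (history window -> next-window demand) training pairs from a trace.
            X = np.array([trace[t - HISTORY:t] for t in range(HISTORY, len(trace))])
            y = np.array([trace[t] for t in range(HISTORY, len(trace))])
            return X, y

        rng = np.random.default_rng(0)
        # Synthetic per-window flit counts for four WIs, standing in for real traces.
        traces = [np.clip(rng.normal(mu, 10, size=2000), 0, None) for mu in (20, 40, 60, 30)]

        pairs = [make_training_set(tr) for tr in traces]
        X = np.vstack([p[0] for p in pairs])
        y = np.concatenate([p[1] for p in pairs])
        predictor = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
        predictor.fit(X, y)

        # Predict each WI's next-window demand and convert to token-slot shares.
        demands = np.array([predictor.predict(tr[-HISTORY:].reshape(1, -1))[0] for tr in traces])
        shares = demands / demands.sum()
        print("predicted demands:", np.round(demands, 1), "token shares:", np.round(shares, 2))

    An on-chip implementation would of course need a far more lightweight predictor, but the proportional allocation of token possession time is the step the dynamic MAC described in the abstract would adjust.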

    Integrated Traffic and Communication Performance Evaluation of an Intelligent Vehicle Infrastructure Integration (VII) System for Online Travel Time Prediction

    This paper presents a framework for online highway travel time prediction using traffic measurements that are likely to be available from Vehicle Infrastructure Integration (VII) systems, in which vehicle and infrastructure devices communicate to improve mobility and safety. In the proposed intelligent VII system, two artificial intelligence (AI) paradigms, namely Artificial Neural Networks (ANN) and Support Vector Regression (SVR), are used to determine future travel time based on information such as the current travel time and the flow and density of VII-enabled vehicles. The development and performance evaluation of the VII-ANN and VII-SVR frameworks, in both the traffic and communications domains, were conducted using an integrated simulation platform for a highway network in Greenville, South Carolina. Specifically, the simulation platform allows traffic surveillance and management methods to be implemented in the traffic simulator PARAMICS, and different communication protocols and network parameters to be evaluated in the communication network simulator ns-2. The study’s findings reveal that the designed communications system was capable of supporting the travel time prediction functionality. They also demonstrate that the travel time prediction accuracy of the VII-AI framework was superior to that of a baseline instantaneous travel time prediction algorithm, with the VII-SVR model slightly outperforming the VII-ANN model. Moreover, the VII-AI framework was shown to perform reasonably well during non-recurrent congestion scenarios, which have traditionally challenged traffic sensor-based highway travel time prediction methods.
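    As a rough sketch of the VII-SVR idea (the exact features, prediction horizon, kernel, and hyperparameters are not given in the abstract, so everything below is assumed), a support vector regressor can be trained to map a link's current travel time and the flow and density of VII-enabled vehicles to its travel time some horizon ahead:

        # Illustrative sketch only: SVR mapping current link measurements to future
        # travel time, in the spirit of the VII-SVR framework. Features, horizon, and
        # hyperparameters are assumptions, not values from the study.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        n = 1000
        current_tt = rng.uniform(60, 300, n)   # current link travel time [s]
        flow = rng.uniform(200, 1800, n)       # VII-enabled vehicle flow [veh/h]
        density = rng.uniform(5, 80, n)        # density [veh/km]
        # Synthetic "future travel time" with a congestion effect, standing in for
        # ground truth that would come from the PARAMICS simulation.
        future_tt = current_tt * (1 + 0.004 * density) + 0.02 * flow + rng.normal(0, 10, n)

        X = np.column_stack([current_tt, flow, density])
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=2.0))
        model.fit(X[:800], future_tt[:800])

        pred = model.predict(X[800:])
        mae = np.mean(np.abs(pred - future_tt[800:]))
        print(f"held-out mean absolute error: {mae:.1f} s")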

    Resource management for multimedia traffic over ATM broadband satellite networks

    PhD thesis. Abstract not available.

    Modeling driver distraction mechanism and its safety impact in automated vehicle environment.

    Automated Vehicle (AV) technology is expected to enhance driving safety by eliminating human errors. However, driver distraction still exists under automated driving. The Society of Automotive Engineers (SAE) has defined six levels of driving automation, from Level 0 to Level 5. Until Level 5 is achieved, human drivers are still needed. Therefore, Human-Vehicle Interaction (HVI) necessarily diverts a driver’s attention away from driving. Existing research has mainly focused on quantifying distraction in human-operated vehicles rather than in the AV environment, leaving a gap in knowledge of how AV distraction can be detected, quantified, and understood. Moreover, existing research on AV distraction has mainly pre-defined distraction as a binary outcome and investigated the patterns that contribute to distraction from multiple perspectives; the magnitude of AV distraction has not been accurately quantified. In addition, past studies quantifying distraction have mainly relied on data from wearable sensors, and in reality it is not realistic for drivers to wear these sensors whenever they drive. Hence, one motivation of this research is to develop a surrogate model that can replace wearable-device-based data in predicting AV distraction. From the safety perspective, a comprehensive understanding of how AV distraction impacts safety is lacking, and a solution is needed for safely offsetting the impact of distracted driving. In this context, this research aims to (1) improve existing methods of quantifying Human-Vehicle Interaction-induced (HVI-induced) driver distraction under automated driving; (2) develop a surrogate driver distraction prediction model that does not use wearable sensor data; (3) quantitatively reveal the dynamic nature of the safety benefits and collision hazards of HVI-induced visual and cognitive distraction under automated driving by mathematically formulating the interrelationships among contributing factors; and (4) propose a conceptual prototype of an AI-driven, Ultra-advanced Collision Avoidance System (AUCAS-L3) targeting HVI-induced driver distraction under automated driving without eye-tracking or video recording. Fixation and pupil dilation data from an eye-tracking device are used to model visual and cognitive distraction, respectively. To validate the proposed methods for measuring and modeling driver distraction, data were collected by inviting drivers to experience automated driving under Level 3 automation on a simulator. Each driver went through a jaywalker scenario twice, receiving a takeover request under two types of HVI, namely “visual only” and “visual and audible”. Each driver wore an eye tracker so that fixation and pupil dilation data could be collected while driving, and driving performance data were recorded by the simulator. In addition, drivers’ demographic information was collected through a pre-experiment survey. As a result, the magnitudes of visual and cognitive distraction were quantified and their dynamic changes over time explored. Drivers are more concentrated and maintain a higher level of takeover readiness under the “visual and audible” warning than under the “visual only” warning. The change in visual distraction was mathematically formulated as a function of time, and the change in visual distraction magnitude over time is explained from a driving psychology perspective. Visual distraction was also measured by direction, and hotspots of visual distraction were identified with regard to driving safety. For the magnitude of cognitive distraction, the driver’s age was identified as a contributing factor, and the HVI warning type contributes to a significant difference in the cognitive distraction acceleration rate. After drivers reach maximum visual distraction, cognitive distraction tends to increase continuously. This research also quantitatively reveals how visual and cognitive distraction, respectively, impact collision hazards. Moreover, it contributes to the literature by developing deep learning-based models for predicting a driver’s visual and cognitive distraction intensity from demographics, HVI warning type, and driving performance. As a solution to the safety issues caused by driver distraction, the AUCAS-L3 is proposed and validated with high accuracy in predicting (a) whether a driver is distracted and fails to perform takeover actions and (b) whether a crash happens if the driver does take over. After predicting the presence of driver distraction or a crash, AUCAS-L3 automatically applies the brake pedal for the driver as effective and efficient protection against distraction under automated driving. Finally, a conceptual prototype for predicting AV distraction and traffic conflict is proposed, which can predict collision hazards 0.82 seconds in advance, on average.
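    As an illustration of what such a surrogate (non-wearable-sensor) predictor could look like (the actual features, labels, and deep network architectures are not detailed in the abstract, so everything below is assumed, and a small scikit-learn network stands in for the dissertation's deep learning models), a regressor can map demographics, HVI warning type, and driving-performance measures to a distraction-intensity score that was originally derived from eye-tracker data:

        # Illustrative sketch only: a surrogate model predicting visual-distraction
        # intensity without eye-tracking input at prediction time. Features, the
        # synthetic label, and the model size are assumptions, not the dissertation's.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        n = 500
        age = rng.uniform(18, 70, n)             # driver age [years]
        audible = rng.integers(0, 2, n)          # 1 = "visual and audible", 0 = "visual only"
        lane_dev = rng.normal(0.3, 0.1, n)       # lane deviation [m] (driving performance)
        speed_var = rng.normal(5.0, 2.0, n)      # speed variance [km/h]
        # Synthetic intensity label standing in for the eye-tracker-derived ground
        # truth (fixation / pupil dilation), which is needed only at training time.
        intensity = 0.02 * age - 0.5 * audible + 1.5 * lane_dev + 0.1 * speed_var + rng.normal(0, 0.2, n)

        X = np.column_stack([age, audible, lane_dev, speed_var])
        surrogate = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
        )
        surrogate.fit(X[:400], intensity[:400])

        print("sample predicted intensities:", np.round(surrogate.predict(X[400:405]), 2))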