Worst-Case Communication Time Analysis for On-Chip Networks with Finite Buffers
Network-on-Chip (NoC) is the ideal interconnection architecture for many-core systems due to its superior scalability and performance. An NoC must deliver critical messages from a real-time application within specific deadlines. A violation of this requirement may compromise the entire system operation. Therefore, a series of experiments considering worst-case scenarios must be conducted to verify whether deadlines can be satisfied. However, simulation-based experiments are time-consuming, and one alternative is schedulability analysis. In this context, this work proposes a schedulability analysis to accelerate design space exploration for real-time applications on NoC-based systems. The proposed worst-case analysis estimates the maximum latency of traffic flows considering direct and indirect blocking. In addition, we take buffer sizes into account to reduce the pessimism of the analysis. We also present an extension of the analysis that includes self-blocking. We conduct a series of experiments to evaluate the proposed analysis using a cycle-accurate simulator. The experimental results show that the proposed solution yields tighter bounds and runs four orders of magnitude faster than simulation.
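Worst-case bounds of this kind are commonly obtained as a fixed-point (response-time) iteration over the interfering traffic flows. The sketch below is a minimal illustration of that iteration under a simple direct-interference-only model; the flow parameters and the recurrence used here are generic textbook assumptions, not the exact analysis proposed in the paper (which additionally handles indirect blocking, buffer sizes, and self-blocking).

    import math
    from dataclasses import dataclass

    @dataclass
    class Flow:
        basic_latency: float   # zero-load (contention-free) network latency
        period: float          # minimum inter-arrival time of packets

    def worst_case_latency(flow, interferers, max_iter=1000):
        """Fixed-point iteration R = C + sum(ceil(R / T_j) * C_j) over the
        higher-priority flows that share at least one link with `flow`."""
        r = flow.basic_latency
        for _ in range(max_iter):
            r_next = flow.basic_latency + sum(
                math.ceil(r / f.period) * f.basic_latency for f in interferers
            )
            if r_next == r:        # converged: r is the worst-case bound
                return r
            r = r_next
        return float("inf")        # no convergence: deemed unschedulable

    # Example: one flow contended by two higher-priority flows on shared links.
    f0 = Flow(basic_latency=20, period=200)
    print(worst_case_latency(f0, [Flow(5, 50), Flow(8, 100)]))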
Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification
The rapid emergence of advanced software systems, low-cost hardware, and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring, and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, confines the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task. Consequently, it results in lower accuracy and hence low reliability, which limits existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, in this research a novel audio-visual multi-modality driven hybrid feature learning model is developed for crowd analysis and classification. In this work, a hybrid feature extraction model was applied to extract deep spatio-temporal features using Gray-Level Co-occurrence Matrix (GLCM) statistics and the AlexNet transferable learning model. After extracting the different GLCM features and AlexNet deep features, horizontal concatenation was performed to fuse the different feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with static (fixed-size) sampling, pre-emphasis, block framing, and Hann windowing, followed by extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, Spectral Entropy, Spectral Flux, Spectral Slope, and Harmonics-to-Noise Ratio (HNR). Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was processed for classification using the random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
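The fusion-and-classification stage described above reduces to concatenating the visual and acoustic feature vectors and training a random forest on the fused set. A minimal sketch with scikit-learn, assuming the GLCM, AlexNet, and acoustic descriptors have already been extracted into per-sample arrays (the array names, dimensions, and random data below are placeholders, not the paper's pipeline):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples = 500

    # Placeholder feature blocks standing in for the extracted descriptors.
    glcm_feats = rng.random((n_samples, 24))       # texture statistics per clip
    alexnet_feats = rng.random((n_samples, 4096))  # deep features from the CNN backbone
    audio_feats = rng.random((n_samples, 60))      # GTCC/MFCC/spectral descriptors
    labels = rng.integers(0, 4, n_samples)         # multi-class crowd types

    # Horizontal concatenation fuses the visual and acoustic modalities.
    fused = np.hstack([glcm_feats, alexnet_feats, audio_feats])

    X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))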
IoT-Based Vehicle Monitoring and Driver Assistance System Framework for Safety and Smart Fleet Management
Curbing road accidents has always been one of the utmost priorities in every country. In Malaysia, the Traffic Investigation and Enforcement Department reported that the total number of road accidents has increased from 373,071 to 533,875 over the last decade. One of the significant causes of road accidents is drivers' behaviour. However, drivers' behaviour is challenging for enforcement teams or fleet operators to regulate, especially for heavy vehicles. We proposed adopting the Internet of Things (IoT) and its emerging technologies to monitor drivers' behavioural and driving patterns and issue alerts, thereby reducing road accidents. In this work, we proposed a lane tracking and iris detection algorithm that monitors and alerts the driver when the vehicle sways out of its lane or the driver is feeling drowsy, respectively. We implemented electronic devices such as cameras, a global positioning system (GPS) module, a global system for mobile communication module, and a microcontroller as an intelligent transportation system in the vehicle. We implemented face recognition for person identification using the same in-vehicle camera and recorded the working duration, for authentication and operational health monitoring, respectively. With the GPS module, we monitored the vehicle's speed and issued alerts when it exceeded the permissible limit. We integrated IoT into the system so that the fleet centre can monitor the driver's behavioural activities and receive alerts in real time through the user access portal. We validated the system successfully on Malaysian roads. The outcome of this pilot project benefits the safety of drivers, public road users, and passengers. The impact of this framework leads to new regulation by government agencies towards a merit and demerit system, real-time fleet monitoring in intelligent transportation systems, and socio-economic benefits such as cheaper health premiums. The collected big data can be used to predict drivers' behaviour in the future.
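The speed-monitoring part of such a framework amounts to periodically comparing the GPS-reported speed against the permissible limit and raising an alert on violation. A minimal sketch of that loop; the read_gps_speed_kmh helper is a hypothetical placeholder for the actual GPS-module interface, and the limit and cooldown values are illustrative only:

    import time

    SPEED_LIMIT_KMH = 90      # permissible speed for the monitored vehicle class
    ALERT_COOLDOWN_S = 30     # avoid flooding the driver and fleet centre with alerts

    def read_gps_speed_kmh():
        """Placeholder for querying the GPS module; returns current speed in km/h."""
        return 0.0

    def monitor_speed():
        last_alert = 0.0
        while True:
            speed = read_gps_speed_kmh()
            now = time.monotonic()
            if speed > SPEED_LIMIT_KMH and now - last_alert > ALERT_COOLDOWN_S:
                # In a deployment this would buzz the driver and push an IoT
                # event to the fleet centre's portal.
                print(f"ALERT: {speed:.0f} km/h exceeds the {SPEED_LIMIT_KMH} km/h limit")
                last_alert = now
            time.sleep(1.0)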
The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management
Floods are a devastating natural calamity that may seriously harm both infrastructure and people. Accurate flood forecasts and control are essential to lessen these effects and safeguard populations. By utilizing its capacity to handle massive amounts of data and provide accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. The current state of deep learning applications in flood forecasting and management is thoroughly reviewed in this work. The review discusses a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It assesses current approaches critically and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. The report also describes potential directions for deep learning research to enhance flood predictions and control. Incorporating uncertainty estimates into forecasts, integrating many data sources, developing hybrid models that mix deep learning with other methodologies, and enhancing the interpretability of deep learning models are a few of these. These research goals can help deep learning models become more precise and effective, which will result in better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on the topic of flood forecasting and management. By reviewing the current state of the art, emphasizing difficulties, and outlining potential areas for future study, it lays a solid foundation. Communities may better prepare for and lessen the destructive effects of floods by implementing cutting-edge deep learning algorithms, thereby protecting people and infrastructure.
A Low-Delay MAC for IoT Applications: Decentralized Optimal Scheduling of Queues without Explicit State Information Sharing
We consider a system of several collocated nodes sharing a time slotted
wireless channel, and seek a MAC (medium access control) that (i) provides low
mean delay, (ii) has distributed control (i.e., there is no central scheduler),
and (iii) does not require explicit exchange of state information or control
signals. The design of such MAC protocols must keep in mind the need for
contention access at light traffic, and scheduled access in heavy traffic,
leading to the long-standing interest in hybrid, adaptive MACs.
Working in the discrete time setting, for the distributed MAC design, we
consider a practical information structure where each node has local
information and some common information obtained from overhearing. In this
setting, "ZMAC" is an existing protocol that is hybrid and adaptive. We
approach the problem in two steps: (1) we show that it is sufficient for the
policy to be "greedy" and "exhaustive". Limiting the policy to this class
reduces the problem to obtaining a queue switching policy at queue emptiness
instants. (2) Formulating the delay-optimal scheduling problem as a POMDP (partially
observed Markov decision process), we show that the optimal switching rule is
Stochastic Largest Queue (SLQ).
Using this theory as the basis, we then develop a practical distributed
scheduler, QZMAC, which is also tunable. We implement QZMAC on standard
off-the-shelf TelosB motes and also use simulations to compare QZMAC with the
full-knowledge centralized scheduler, and with ZMAC. We use our implementation
to study the impact of false detection while overhearing the common
information, and the efficiency of QZMAC. Our simulation results show that the
mean delay with QZMAC is close to that of the full-knowledge centralized
scheduler.
Comment: 28 pages, 19 figures
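The greedy, exhaustive structure above means a node holds the channel until its queue empties, and only at such emptiness instants does service switch to another queue, ideally the one believed to be the longest (the SLQ rule). The following is a minimal discrete-time simulation sketch of that switching structure only; the arrival model and parameters are illustrative assumptions, and true queue lengths stand in for the belief state that QZMAC would infer from overheard information:

    import random

    random.seed(1)
    N, T = 4, 10_000                 # number of nodes, number of time slots
    p = [0.10, 0.15, 0.20, 0.05]     # Bernoulli arrival probability per node
    queues = [0] * N                 # queue lengths (stand-in for the belief state)
    served = 0                       # index of the queue currently holding the channel
    total_backlog = 0

    for t in range(T):
        # Arrivals.
        for i in range(N):
            if random.random() < p[i]:
                queues[i] += 1

        # Exhaustive service: keep the channel until the current queue empties.
        if queues[served] > 0:
            queues[served] -= 1
        else:
            # Switching instant: greedily move to the (estimated) largest queue.
            served = max(range(N), key=lambda i: queues[i])
            if queues[served] > 0:
                queues[served] -= 1

        total_backlog += sum(queues)

    print("mean backlog per slot:", total_backlog / T)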
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
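For a leaky-wave antenna of this kind, the fixed-frequency pointing direction follows the guided phase constant, which the DC bias shifts through the tunable LC permittivity. As a rough first-order textbook relation (not the paper's design equations): θ_beam ≈ arcsin(β(ε_r,LC) / k0), with k0 = 2πf/c, so sweeping the bias voltage sweeps β and hence the beam angle at a fixed frequency f.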
Machine learning and mixed reality for smart aviation: applications and challenges
The aviation industry is a dynamic and ever-evolving sector. As technology advances and becomes more sophisticated, the aviation industry must keep up with the changing trends. While some airlines have made investments in machine learning and mixed reality technologies, the vast majority of regional airlines continue to rely on inefficient strategies and lack digital applications. This paper investigates the state-of-the-art applications that integrate machine learning and mixed reality into the aviation industry. Smart aerospace engineering design, manufacturing, testing, and services are being explored to increase operator productivity. Autonomous systems, self-service systems, and data visualization systems are being researched to enhance passenger experience. This paper also investigates the safety, environmental, technological, cost, security, capacity, and regulatory challenges of smart aviation, as well as potential solutions to ensure future quality, reliability, and efficiency.
Sensing User's Activity, Channel, and Location with Near-Field Extra-Large-Scale MIMO
This paper proposes a grant-free massive access scheme based on the
millimeter wave (mmWave) extra-large-scale multiple-input multiple-output
(XL-MIMO) to support massive Internet-of-Things (IoT) devices with low latency,
high data rate, and high localization accuracy in the upcoming sixth-generation
(6G) networks. The XL-MIMO consists of multiple antenna subarrays that are
widely spaced over the service area to ensure line-of-sight (LoS)
transmissions. First, we establish the XL-MIMO-based massive access model
considering the near-field spatial non-stationary (SNS) property. Then, by
exploiting the block sparsity of subarrays and the SNS property, we propose a
structured block orthogonal matching pursuit algorithm for efficient active
user detection (AUD) and channel estimation (CE). Furthermore, different
sensing matrices are applied in different pilot subcarriers for exploiting the
diversity gains. Additionally, a multi-subarray collaborative localization
algorithm is designed for localization. In particular, the angle of arrival
(AoA) and time difference of arrival (TDoA) of the LoS links between active
users and related subarrays are extracted from the estimated XL-MIMO channels,
and then the coordinates of active users are acquired by jointly utilizing the
AoAs and TDoAs. Simulation results show that the proposed algorithms outperform
existing algorithms in terms of AUD and CE performance and can achieve
centimeter-level localization accuracy.
Comment: Submitted to IEEE Transactions on Communications, major revision. Codes will be open to all on https://gaozhen16.github.io/ soon.
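The structured recovery step described above is, at its core, a block variant of orthogonal matching pursuit: groups of columns are selected by correlation energy and a least-squares fit is refreshed on the growing support. Below is a minimal generic block-OMP sketch in NumPy; the dimensions, block partition, and stopping rule are illustrative, and the paper's algorithm additionally exploits the spatial non-stationarity and the multi-subcarrier sensing matrices:

    import numpy as np

    def block_omp(A, y, block_size, n_blocks_to_pick):
        """Recover a block-sparse x from y = A @ x + n, where the columns of A
        are grouped into contiguous blocks of length `block_size`."""
        m, n = A.shape
        n_blocks = n // block_size
        residual = y.copy()
        support_blocks = []
        for _ in range(n_blocks_to_pick):
            corr = A.conj().T @ residual
            # Correlation energy within each block.
            energies = [np.linalg.norm(corr[b * block_size:(b + 1) * block_size])
                        for b in range(n_blocks)]
            best = int(np.argmax(energies))
            if best not in support_blocks:
                support_blocks.append(best)
            cols = np.concatenate([np.arange(b * block_size, (b + 1) * block_size)
                                   for b in support_blocks])
            x_s, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            residual = y - A[:, cols] @ x_s
        x_hat = np.zeros(n, dtype=A.dtype)
        x_hat[cols] = x_s
        return x_hat, support_blocks

    # Small synthetic test: 2 active blocks (e.g., user-subarray pairs) out of 10.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100)) / np.sqrt(40)
    x = np.zeros(100)
    x[20:30] = rng.standard_normal(10)
    x[70:80] = rng.standard_normal(10)
    x_hat, blocks = block_omp(A, A @ x, block_size=10, n_blocks_to_pick=2)
    print("detected active blocks:", sorted(blocks))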
Joint optimization of platoon control and resource scheduling in cooperative vehicle-infrastructure system
Vehicle platooning technology is essential in achieving group consensus, on-road safety, and fuel saving. Meanwhile, Vehicle-to-Infrastructure (V2I) communication significantly facilitates the development of connected vehicles. However, the coupled effects of longitudinal vehicle mobility, platoon control, and V2I communication may result in an unreliable communication link between the platoon vehicles and the roadside unit; hence there is a trade-off between platoon control and communication reliability. In this paper, we investigate a bi-objective joint optimization problem in which the first objective is to maximize the success probability of data transmission (communication reliability) and the second is to minimize traffic flow oscillation. The mobility state of the platoon vehicles affects the channel capacity and transmission performance. In this context, we explore the relationship between control signals and resource scheduling and theoretically derive a closed-form expression of the optimal communication reliability objective. Using this closed-form expression, we transform the bi-objective model into a single-objective MPC model via the ϵ-constraint method. We design an efficient algorithm for solving the joint optimization model and prove its convergence. To verify the effectiveness of the proposed method, we evaluate the spacing error, speed error, and resource scheduling of platooning vehicles through simulation experiments in two experimental scenarios. The results show that the proposed control-communication co-design improves platoon control performance while maintaining highly reliable V2I communications.
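The ϵ-constraint step mentioned above converts the bi-objective problem into a single-objective one: one objective is optimized while the other is constrained to an acceptable level ϵ. A minimal sketch with SciPy, using toy stand-ins for the two objectives (the reliability and oscillation models in the paper are far more detailed, and the variable meanings here are purely illustrative):

    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    def reliability(x):
        """Toy stand-in for the transmission success probability (to be maximized)."""
        return 1.0 - 0.5 * np.exp(-x[0])        # improves with transmit power x[0]

    def oscillation(x):
        """Toy stand-in for traffic-flow oscillation (to be minimized)."""
        return (x[1] - 2.0) ** 2 + 0.1 * x[0]   # spacing error plus a power cost

    eps = 0.9   # required minimum reliability (the ϵ of the ϵ-constraint method)

    # Minimize oscillation subject to reliability(x) >= eps.
    res = minimize(oscillation, x0=np.array([1.0, 0.0]),
                   constraints=[NonlinearConstraint(reliability, lb=eps, ub=np.inf)],
                   bounds=[(0.0, 10.0), (-5.0, 5.0)])
    print("decision:", res.x, "oscillation:", res.fun, "reliability:", reliability(res.x))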
Reinforcement Learning Empowered Unmanned Aerial Vehicle Assisted Internet of Things Networks
This thesis aims at performance enhancement for unmanned aerial vehicle (UAV) assisted Internet of Things (IoT) networks. In this realm, novel reinforcement learning (RL) frameworks have been proposed for solving intricate joint optimisation scenarios. These scenarios include uplink, downlink, and combined transmissions. The multi-access technique utilised is non-orthogonal multiple access (NOMA), a key enabler in this regime. The outcomes of this research entail enhancement in key performance metrics, such as sum-rate and energy efficiency, and a consequent reduction in outage. For the scenarios involving uplink transmissions by IoT devices, adaptive and tandem reinforcement learning frameworks have been developed. The aim is to maximise capacity over a fixed UAV trajectory. The adaptive framework is utilised in a scenario wherein channel suitability is ascertained for uplink transmissions utilising a fixed clustering regime in NOMA. The tandem framework is utilised in a scenario wherein multiple-channel resource suitability is ascertained along with power allocation, dynamic clustering, and IoT node associations to NOMA clusters and channels. In scenarios involving downlink transmission to IoT devices, an ensemble RL (ERL) framework is proposed for sum-rate enhancement over a fixed UAV trajectory. For dynamic UAV trajectories, a hybrid decision framework (HDF) is proposed for energy efficiency optimisation. Downlink transmission power and bandwidth are managed for NOMA transmissions over fixed and dynamic UAV trajectories, facilitating IoT networks. In a UAV-enabled relaying scenario, for control system plants and their respective remotely deployed sensors, a head-start reinforcement learning framework based on deep learning is developed and implemented. NOMA is invoked in both uplink and downlink transmissions for the IoT network. Dynamic NOMA clustering, power management, and node association, along with UAV height control, are jointly managed. The primary aim is the enhancement of net sum-rate and its subsequent manifestation in facilitating the IoT-assisted use case. The simulation results relating to the aforesaid scenarios indicate enhanced sum-rate, energy efficiency, and reduced outage for UAV-assisted IoT networks. The proposed RL frameworks surpass existing benchmark frameworks in performance for the same scenarios. The simulation platforms utilised are MATLAB and Python, for network modeling, RL framework design, and validation.