STS-CCL: Spatial-Temporal Synchronous Contextual Contrastive Learning for Urban Traffic Forecasting
Efficiently capturing complex spatiotemporal representations from large-scale unlabeled traffic data remains a challenging task. To address this dilemma, this work employs advanced contrastive learning and proposes a novel Spatial-Temporal Synchronous Contextual Contrastive Learning (STS-CCL) model. First, we elaborate basic and strong augmentation methods for spatiotemporal graph data, which not only perturb the data in terms of graph structure and temporal characteristics, but also employ a learning-based dynamic graph view generator for adaptive augmentation. Second, we introduce a Spatial-Temporal Synchronous Contrastive Module (STS-CM) to simultaneously capture robust spatial-temporal dependencies and realize graph-level contrasting. To further discriminate individual nodes during negative filtering, a Semantic Contextual Contrastive method is designed based on semantic features and spatial heterogeneity, achieving node-level contrastive learning along with negative filtering. Finally, we present a hard mutual-view contrastive training scheme and extend the classic contrastive loss to an integrated objective function, yielding better performance. Extensive experiments and evaluations demonstrate that a predictor built upon the STS-CCL contrastive learning model achieves superior performance over existing traffic forecasting benchmarks. The proposed STS-CCL is highly suitable for large datasets with only a few labeled samples, and for other spatiotemporal tasks with data scarcity issues.
Comment: This work was accepted by the 49th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP 2024). We will present our work in Seoul, Korea.
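As a sketch of the graph-level contrasting idea described above, the following minimal example computes an InfoNCE-style contrastive loss between embeddings from two augmented views, treating matching rows as positive pairs. This is not the authors' implementation; the toy embeddings, the temperature value, and the `info_nce` helper are illustrative assumptions.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Minimal InfoNCE contrastive loss between two augmented views.

    z1, z2: (N, D) arrays of graph-level embeddings from the basic and
    strong augmented views; row i of z1 and z2 form a positive pair,
    all other rows act as negatives.
    """
    # L2-normalise so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature               # (N, N) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    # Softmax cross-entropy with the diagonal (positive pairs) as targets
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
view1 = anchor + 0.01 * rng.normal(size=(8, 16))  # "basic" augmented view
view2 = anchor + 0.01 * rng.normal(size=(8, 16))  # "strong" augmented view
loss_aligned = info_nce(view1, view2)
loss_random = info_nce(view1, rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned views give lower loss
```

Extending this classic loss to the paper's integrated objective (adding the node-level semantic contextual term) would amount to summing several such terms at different granularities.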
Language Semantic Graph Guided Data-Efficient Learning
Developing generalizable models that can effectively learn from limited data
and with minimal reliance on human supervision is a significant objective
within the machine learning community, particularly in the era of deep neural
networks. Therefore, to achieve data-efficient learning, researchers typically
explore approaches that can leverage more related or unlabeled data without
necessitating additional manual labeling efforts, such as Semi-Supervised
Learning (SSL), Transfer Learning (TL), and Data Augmentation (DA). SSL
leverages unlabeled data in the training process, while TL enables the transfer
of expertise from related data distributions. DA broadens the dataset by
synthesizing new data from existing examples. However, the significance of
additional knowledge contained within labels has been largely overlooked in
research. In this paper, we propose a novel perspective on data efficiency that
involves exploiting the semantic information contained in the labels of the
available data. Specifically, we introduce a Language Semantic Graph (LSG)
which is constructed from labels expressed as natural language descriptions.
Upon this graph, an auxiliary graph neural network is trained to extract
high-level semantic relations and then used to guide the training of the
primary model, enabling more adequate utilization of label knowledge. Across
image, video, and audio modalities, we utilize the LSG method in both TL and
SSL scenarios and illustrate its versatility in significantly enhancing
performance compared to other data-efficient learning approaches. Additionally,
our in-depth analysis shows that the LSG method also expedites the training
process.
Comment: Accepted by NeurIPS 202
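A toy sketch of how a label semantic graph might be built from label text follows. The bag-of-words `embed` function, the example labels, and the 0.4 similarity threshold are placeholder assumptions standing in for the paper's actual language model and auxiliary GNN; the final line is one step of mean-aggregation message passing over the resulting graph.

```python
import numpy as np

# Hypothetical label descriptions; in the paper these would be the
# dataset's class labels expressed as natural language.
labels = ["small dog", "large dog", "small cat", "sports car"]

# Toy stand-in for a language model: normalised bag-of-words embedding.
vocab = sorted({w for text in labels for w in text.split()})
def embed(text):
    v = np.array([text.split().count(w) for w in vocab], float)
    return v / np.linalg.norm(v)

Z = np.stack([embed(t) for t in labels])  # (num_labels, dim)
sim = Z @ Z.T                             # pairwise cosine similarity
A = (sim > 0.4).astype(float)             # threshold -> graph edges
np.fill_diagonal(A, 1.0)                  # keep self-loops

# One step of mean-aggregation message passing over the label graph,
# a stand-in for the auxiliary GNN that extracts label relations.
H = (A / A.sum(axis=1, keepdims=True)) @ Z
print(A.astype(int))
```

Here "small dog" ends up connected to both "large dog" and "small cat", while "sports car" stays isolated, which is the kind of semantic relation the auxiliary network can then use to guide the primary model.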
A fuzzy-clustering based approach for MADM handover in 5G ultra-dense networks
As global data traffic has significantly increased in recent years, the ultra-dense deployment of cellular networks (UDN) is being proposed as one of the key technologies in the fifth-generation mobile communications system (5G) to provide a much higher density of radio resources. The densification of small base stations can introduce much higher inter-cell interference and cause users to reach the edge of coverage more frequently. As the current handover scheme was originally designed for macro base stations, it can cause serious handover issues in UDN, e.g., ping-pong handovers, handover failures, and frequent handovers. To address these handover challenges and provide a high quality of service (QoS) to users in UDN, this paper proposes a novel handover scheme that integrates the advantages of both fuzzy logic and multiple attribute decision making (MADM) algorithms to ensure that the handover process is triggered at the right time and the connection is switched to the optimal neighbouring BS. To further enhance the performance of the proposed scheme, this paper also adopts the subtractive clustering technique, using historical data to define the optimal membership functions within the fuzzy system. Performance results show that the proposed handover scheme outperforms traditional approaches and can significantly reduce the number of handovers and the ping-pong handover rate while maintaining QoS at a relatively high level. © 2019, Springer Science+Business Media, LLC, part of Springer Nature
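The subtractive-clustering step can be sketched as below: historical measurements seed the centers of the fuzzy membership functions by repeatedly picking the densest point and suppressing its neighbourhood. The radius values and the synthetic RSSI samples are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def subtractive_clustering(x, n_centers=3, ra=0.5):
    """Minimal 1-D subtractive clustering: pick the point with the highest
    'potential' (local density), penalise its neighbourhood, repeat."""
    x = np.asarray(x, float)
    alpha = 4.0 / ra**2              # density kernel width
    beta = 4.0 / (1.5 * ra) ** 2     # suppression kernel width
    d2 = (x[:, None] - x[None, :]) ** 2
    pot = np.exp(-alpha * d2).sum(axis=1)   # potential of each point
    centers = []
    for _ in range(n_centers):
        i = int(np.argmax(pot))
        centers.append(x[i])
        # Revise potentials: penalise points near the chosen center
        pot = pot - pot[i] * np.exp(-beta * (x - x[i]) ** 2)
    return sorted(centers)

rng = np.random.default_rng(1)
# Hypothetical historical signal samples clustered around three levels
rssi = np.concatenate([rng.normal(m, 0.1, 50) for m in (-1.0, 0.0, 1.0)])
print(subtractive_clustering(rssi))  # roughly [-1, 0, 1]
```

The recovered centers could then parameterise, e.g., "weak / medium / strong" membership functions in the fuzzy handover trigger.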
A proactive mobile edge cache policy based on the prediction by partial matching
Proactive caching is an emerging approach to cost-effectively boost network capacity and reduce access latency; however, its performance relies heavily on content prediction. Therefore, this paper proposes a distributed proactive cache policy that considers the prediction of both content popularity and user location to minimise latency and maximise the cache hit rate. Here, a backpropagation neural network is applied to predict content popularity, and prediction by partial matching is chosen to predict user location. The simulation results reveal that the proposed cache policy improves the cache hit ratio by around 27%-60% and reduces the average latency by 14%-60%, compared with two conventional reactive policies, i.e., the LFU and LRU policies
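A minimal sketch of prediction by partial matching over a sequence of visited cell IDs follows: the predictor tries the longest matching context first and backs off to shorter ones. The class name, the order-2 setting, and the toy trajectory are assumptions for illustration, not the paper's model.

```python
from collections import Counter, defaultdict

class PPMPredictor:
    """Sketch of prediction by partial matching (PPM) over cell IDs:
    try the longest context first, fall back to shorter ones."""
    def __init__(self, max_order=2):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> successor counts

    def train(self, sequence):
        # Record successor counts for every context length up to max_order
        for order in range(self.max_order + 1):
            for i in range(order, len(sequence)):
                ctx = tuple(sequence[i - order:i])
                self.counts[ctx][sequence[i]] += 1

    def predict(self, history):
        # Back off from the longest matching context to the empty context
        for order in range(self.max_order, -1, -1):
            ctx = tuple(history[-order:]) if order else ()
            if self.counts[ctx]:
                return self.counts[ctx].most_common(1)[0][0]
        return None

ppm = PPMPredictor(max_order=2)
ppm.train(["A", "B", "C", "A", "B", "C", "A", "B", "D"])
print(ppm.predict(["A", "B"]))  # most frequent successor of context (A, B)
```

The predicted next cell can then be used to pre-fetch the user's likely content at that cell's edge cache before the handover happens.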
Deep convolutional neural networks for cardiovascular vulnerable plaque detection
In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in ultrasonic images of the cardiovascular system. Firstly, a Fully Convolutional Neural Network (FCN) named U-Net is used to segment the original Intravascular Optical Coherence Tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background in the original images. Secondly, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six-scale anchors (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. First, we present three problems in cardiovascular vulnerable plaque diagnosis, then we demonstrate how our method solves these problems. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the Recall rate, Precision rate, IoU (Intersection-over-Union) rate, and Total score are 0.94, 0.885, 0.913, and 0.913 respectively, higher than those of the 1st-place team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques
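The six-scale anchor scheme can be sketched as follows. The abstract specifies only the anchor areas (12² through 256²), so the aspect ratios and the helper name here are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def make_anchors(cx, cy, scales=(12, 16, 32, 64, 128, 256),
                 ratios=(0.5, 1.0, 2.0)):
    """Sketch of six-scale anchor generation (areas 12^2 ... 256^2) for a
    Faster R-CNN-style detector, centred on one feature-map location."""
    boxes = []
    for s in scales:
        area = float(s * s)
        for r in ratios:
            # Solve width * height = area with height / width = r
            w = np.sqrt(area / r)
            h = w * r
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)  # (len(scales) * len(ratios), 4) as (x1, y1, x2, y2)

anchors = make_anchors(0.0, 0.0)
print(anchors.shape)  # (18, 4)
```

Adding the 12² and 16² scales is what lets the region proposal network cover very small plaques that a standard three-scale setup would miss.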
Data-Centric Evolution in Autonomous Driving: A Comprehensive Survey of Big Data System, Data Mining, and Closed-Loop Technologies
The aspiration of the next generation of autonomous driving (AD) technology relies on the dedicated integration and interaction among intelligent perception, prediction, planning, and low-level control. There has been a significant bottleneck in the upper bound of autonomous driving algorithm performance; a consensus across academia and industry holds that the key to surmounting this bottleneck lies in data-centric autonomous driving technology. Recent advances in AD simulation, closed-loop model training, and AD big data engines have yielded valuable experience. However, there is a lack of systematic knowledge and deep understanding of how to build efficient data-centric AD technology for AD algorithm self-evolution and better AD big data accumulation. To fill these research gaps, this article focuses on reviewing state-of-the-art data-driven autonomous driving technologies, with an emphasis on a comprehensive taxonomy of autonomous driving datasets characterized by milestone generations, key features, data acquisition settings, etc. Furthermore, we provide a systematic review of the existing benchmark closed-loop AD big data pipelines from the industrial frontier, including the procedure of closed-loop frameworks, key technologies, and empirical studies. Finally, future directions, potential applications, limitations, and concerns are discussed to encourage efforts from both academia and industry to promote the further development of autonomous driving. The project repository is available at:
https://github.com/LincanLi98/Awesome-Data-Centric-Autonomous-Driving
Fuzzy-TOPSIS based optimal handover decision-making algorithm for fifth-generation of mobile communications system
With the increasing demand for higher bandwidth and data rates from mobile users, massive numbers of Base Stations (BSs) will be deployed in future wireless environments. Several issues can arise due to the dense deployment of BSs, e.g., the handover (HO) ping-pong effect, unnecessary HOs, and frequent HOs. To avoid these effects, handover decision-making strategies become extremely important for selecting the optimal BS among all detected BSs and ensuring QoS for each mobile user. In this paper, the author develops a fuzzy-TOPSIS based HO algorithm to minimise the ping-pong effect and the number of HOs. The proposed algorithm integrates the advantages of both fuzzy logic and TOPSIS. The Received Signal Strength Indicator (RSSI) and Signal to Noise Ratio (SNR) are considered as HO criteria in this approach. Simulation results show that the proposed HO algorithm can effectively reduce the ping-pong rate and the number of HOs compared with the conventional RSSI-based HO approach and classical Multi-Attribute Decision Making (MADM) HO methods, i.e., simple additive weighting (SAW) and TOPSIS
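A minimal TOPSIS ranking over RSSI and SNR can be sketched as follows. The candidate values and equal weights are illustrative assumptions, and the paper additionally fuzzifies the criteria before ranking, which this sketch omits.

```python
import numpy as np

def topsis(matrix, weights):
    """Minimal TOPSIS ranking: each row is a candidate BS, each column a
    criterion (e.g. RSSI, SNR). All criteria are treated as benefits here."""
    m = np.asarray(matrix, float)
    w = np.asarray(weights, float) / np.sum(weights)
    # Vector-normalise each criterion column, then apply the weights
    v = w * m / np.linalg.norm(m, axis=0)
    ideal, anti = v.max(axis=0), v.min(axis=0)  # ideal / anti-ideal solutions
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)              # closeness coefficient in [0, 1]

# Hypothetical candidates: rows are BSs, columns are (RSSI in dBm, SNR in dB)
candidates = [[-70, 15], [-60, 20], [-80, 10]]
scores = topsis(candidates, weights=[0.5, 0.5])
print(int(np.argmax(scores)))  # index of the optimal BS
```

The mobile would hand over to the BS with the highest closeness coefficient; in the fuzzy-TOPSIS variant, the raw RSSI/SNR columns would first pass through fuzzy membership functions.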