
    Weather Data Transmission Driven By Artificial Neural Network based Prediction

    Big data, which can be described as massive volumes of unstructured data, has become increasingly complicated to handle because it is difficult to process with traditional database and software techniques. As data grows in size, so does the demand for bandwidth to transmit it. Data transmission is important in communication because it delivers information to different locations. In this project we focus on big data transmission in the context of weather data. Weather data is important to meteorologists because it underpins weather prediction. Real-time weather prediction matters because it supports quick decisions in response to the environment and the planning of daily activities. The purpose of this project is to develop a real-time, low-bandwidth system for weather data transmission driven by an artificial neural network that performs weather forecasting using an Adaptive Forecasting Model. The project targets an offshore application context because data transmission from offshore to onshore is costly and requires high network bandwidth; moreover, offshore weather can change rapidly and delay offshore activities.
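    The abstract does not spell out the transmission scheme, but a common way a forecast can reduce bandwidth is dual prediction: sender and receiver run identical predictors, and a reading is transmitted only when the forecast misses it by more than a tolerance. A minimal sketch of that idea follows; the persistence-plus-trend predictor and all names are illustrative stand-ins, not the paper's Adaptive Forecasting Model.

```python
# Hypothetical sketch of a dual-prediction transmission scheme: the sender and
# receiver run identical forecasters, and a reading is transmitted only when the
# forecast misses it by more than a tolerance.
from collections import deque

def predict_next(history):
    """Toy stand-in predictor: persistence plus trend over the last two samples."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def transmit_with_prediction(readings, tolerance=0.5, window=16):
    """Yield (index, value) pairs that must actually be sent over the link."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if not history:
            history.append(value)
            yield i, value              # first sample is always transmitted
            continue
        predicted = predict_next(history)
        if abs(value - predicted) > tolerance:
            history.append(value)       # sender and receiver both store the real value
            yield i, value
        else:
            history.append(predicted)   # receiver reconstructs the same prediction

temps = [24.0, 24.1, 24.2, 24.2, 27.5, 27.6, 27.6, 27.7]
sent = list(transmit_with_prediction(temps))
print(f"transmitted {len(sent)} of {len(temps)} samples: {sent}")
```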

    Practical Commercial 5G Standalone (SA) Uplink Throughput Prediction

    While the 5G New Radio (NR) network promises a large uplift in uplink throughput, the improvement is only seen when the User Equipment (UE) is connected to the high-frequency millimeter wave (mmWave) band. With the rise of uplink-intensive smartphone applications, such as real-time transmission of UHD 4K/8K videos and Virtual Reality (VR)/Augmented Reality (AR) content, uplink throughput prediction plays a major role in maximizing users' quality of experience (QoE). In this paper, we propose a ConvLSTM-based neural network to predict future uplink throughput from past uplink throughput and RF parameters. The network is trained on data from real-world drive tests on commercial 5G SA networks conducted while riding commuter trains, covering various frequency bands, handovers, and blind spots. To ensure the model can be practically implemented, we restrict it to information available via the Android API and evaluate it on data from both commuter trains and other modes of transportation. The results show that our model reaches an average prediction accuracy of 98.9% with an average RMSE of 1.80 Mbps across all unseen evaluation scenarios.
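    A minimal sketch of a ConvLSTM-style throughput predictor, assuming a sliding window of the last 10 samples with 8 per-sample features (throughput plus RF parameters such as RSRP/RSRQ/SINR); shapes and hyperparameters are illustrative guesses, not the paper's architecture.

```python
# ConvLSTM over a short window of per-sample feature vectors, regressing the
# next uplink throughput value. All dimensions below are assumptions.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 10, 8

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES, 1)),   # (time, features, channels)
    tf.keras.layers.ConvLSTM1D(filters=32, kernel_size=3, padding="same",
                               return_sequences=False),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                              # next uplink throughput (Mbps)
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Dummy data with the assumed shapes, just to show the training call.
x = np.random.rand(256, WINDOW, FEATURES, 1).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```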

    Environmentally adaptive acoustic transmission loss prediction in turbulent and nonturbulent atmospheres

    An environmentally adaptive system for predicting acoustic transmission loss (TL) in the atmosphere is developed in this paper. The system uses several back-propagation neural network predictors, each corresponding to a specific environmental condition. The outputs of the expert predictors are combined using a fuzzy confidence measure and a nonlinear fusion system. This prediction methodology eliminates the computational intractability of traditional acoustic model-based approaches. The proposed TL prediction system is tested on two synthetic acoustic data sets covering a wide range of geometrical, source, and environmental conditions, including both nonturbulent and turbulent atmospheres. Test results showed root mean square (RMS) errors of 1.84 dB for the nonturbulent and 1.36 dB for the turbulent conditions, respectively, which are acceptable for near real-time performance. Additionally, the environmentally adaptive system demonstrated improved TL prediction accuracy at high frequencies and large horizontal separations between source and receiver.
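    A rough sketch of the expert-fusion idea: several regressors, each trained on one environmental regime, combined through soft confidence weights. The Gaussian membership function, sklearn MLPs, and toy data below are stand-ins for the paper's fuzzy confidence measure, back-propagation experts, and synthetic acoustic data sets.

```python
# Mixture of per-regime experts fused by a soft confidence weight; the data
# generator and membership centers are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy data: x = [frequency_kHz, range_km, turbulence_index]; y = TL in dB.
X = rng.uniform([0.1, 0.1, 0.0], [2.0, 10.0, 1.0], size=(600, 3))
y = 20 * np.log10(X[:, 1] * 1000) + 5 * X[:, 0] + 8 * X[:, 2] + rng.normal(0, 1, 600)

# One expert per regime, selected here by the turbulence index.
regimes = {"nonturbulent": X[:, 2] < 0.5, "turbulent": X[:, 2] >= 0.5}
centers = {"nonturbulent": 0.25, "turbulent": 0.75}
experts = {}
for name, mask in regimes.items():
    experts[name] = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                 random_state=0).fit(X[mask], y[mask])

def fused_prediction(x, width=0.2):
    """Weight each expert by a Gaussian membership of the turbulence index."""
    w = {n: np.exp(-((x[2] - c) / width) ** 2) for n, c in centers.items()}
    total = sum(w.values())
    return sum(w[n] * experts[n].predict([x])[0] for n in experts) / total

print(fused_prediction(np.array([1.0, 5.0, 0.6])))
```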

    Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory

    Split learning is a privacy-preserving distributed learning paradigm in which an ML model (e.g., a neural network) is split into two parts (i.e., an encoder and a decoder). The encoder shares a so-called latent representation, rather than raw data, for model training. In mobile-edge computing, network functions (such as traffic forecasting) can be trained via split learning, where an encoder resides in a user equipment (UE) and a decoder resides in the edge network. Based on the data processing inequality and information bottleneck (IB) theory, we present a new framework and training mechanism that dynamically balances transmission resource consumption against the informativeness of the shared latent representations, which directly impacts predictive performance. The proposed training mechanism yields an encoder-decoder neural network architecture featuring multiple modes of complexity-relevance tradeoffs, enabling tunable performance. This adaptability can accommodate varying real-time network conditions and application requirements, potentially reducing operational expenditure and enhancing network agility. As a proof of concept, we apply the training mechanism to a millimeter-wave (mmWave)-enabled throughput prediction problem. We also offer new insights and highlight challenges related to recurrent neural networks from the perspective of IB theory. Interestingly, we find a compression phenomenon across the temporal domain of the sequential model, in addition to the compression phase that occurs with the number of training epochs.
    Comment: Accepted to Proc. IEEE Globecom 202
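    A condensed sketch of a variational-IB-style split model: a UE-side encoder produces a stochastic latent representation, an edge-side decoder predicts throughput, and a beta-weighted KL term trades compression (transmission cost) against relevance. Dimensions, the Gaussian prior, and the beta value are assumptions, not the paper's design.

```python
# Variational-IB-flavored split model: beta scales the KL (compression) term
# against the prediction (relevance) loss. One training step on dummy data.
import torch
import torch.nn as nn

class UEEncoder(nn.Module):
    def __init__(self, in_dim=16, z_dim=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

class EdgeDecoder(nn.Module):
    def __init__(self, z_dim=4):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.head(z)

encoder, decoder = UEEncoder(), EdgeDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(32, 16)          # per-UE features (dummy batch)
target = torch.randn(32, 1)      # future throughput (dummy labels)
beta = 1e-2                      # larger beta -> more compression, less relevance

mu, logvar = encoder(x)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(dim=1).mean()
loss = nn.functional.mse_loss(decoder(z), target) + beta * kl
opt.zero_grad(); loss.backward(); opt.step()
```

    Sweeping beta over several values would yield the kind of multiple complexity-relevance operating modes the abstract describes.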

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity that optical networks have faced in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.

    Deep Learning of Microstructures

    The internal structure of a material, also called its microstructure, plays a critical role in the material's properties and performance. Chemical composition is one of the most important factors in changing the structure of materials, but it is not the only determining factor: a change in the production process can also significantly alter a material's structure. Therefore, many efforts have been made to discover and improve production methods to optimize the functional properties of materials. The most critical challenge in finding materials with enhanced properties is to understand and define the salient features of a material's structure that have the greatest impact on the desired property. In other words, through process-structure-property (PSP) linkages, the effect of changing process variables on material structure, and consequently on the property, can be examined and used as a powerful tool for designing materials with desirable characteristics. In particular, the construction of forward PSP linkages has received considerable attention thanks to sophisticated physics-based models. Recently, machine learning (ML) and data science have also been used as powerful tools to find PSP linkages in materials science. One key advantage of ML-based models is their ability to construct both forward and inverse PSP linkages. Early ML models in materials science were primarily focused on constructing process-property linkages; more recently, microstructures have been included in materials-design ML models. However, the inverse design of microstructures, i.e., the prediction of process and chemistry from a microstructure morphology image, has received limited attention. This is a critical knowledge gap, specifically for problems where the ideal microstructure or morphology, with the specific chemistry associated with the morphological domains, is known, but the chemistry and processing that would lead to that ideal morphology are unknown. In this study, we first propose a framework based on a deep learning approach that enables us to predict the chemistry and processing history just by reading the morphological distribution of one element. As a case study, we used a dataset from a spinodal decomposition simulation of an Fe-Cr-Co alloy created by the phase-field method. The mixed dataset, which includes both images, i.e., the morphology of the Fe distribution, and continuous data, i.e., the minimum and maximum Fe concentrations in the microstructures, is used as input, and the spinodal temperature and initial chemical composition are used as outputs to train the proposed deep neural network. The proposed convolutional layers were compared with pretrained EfficientNet convolutional layers, used as transfer learning, for microstructure feature extraction. The results show that the trained shallow network is effective for chemistry prediction, whereas accurate prediction of the processing temperature requires more complex feature extraction from the morphology of the microstructure. We benchmarked the model's predictive accuracy on a real alloy system using an Fe-Cr-Co transmission electron microscopy micrograph; the predicted chemistry and heat-treatment temperature were in good agreement with the ground truth. The treatment time was considered constant in this first study.
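    A minimal sketch of the fused-input idea, assuming a CNN branch that reads the morphology image and a dense branch that reads the two scalar concentration bounds, merged to predict temperature and composition; image size, layer widths, and output dimensions are illustrative assumptions.

```python
# Two-branch model: image features plus scalar features, two regression heads.
import tensorflow as tf

image_in = tf.keras.Input(shape=(128, 128, 1), name="fe_morphology")
scalar_in = tf.keras.Input(shape=(2,), name="fe_min_max_concentration")

x = tf.keras.layers.Conv2D(16, 3, activation="relu")(image_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

s = tf.keras.layers.Dense(16, activation="relu")(scalar_in)

merged = tf.keras.layers.Concatenate()([x, s])
merged = tf.keras.layers.Dense(64, activation="relu")(merged)

temperature = tf.keras.layers.Dense(1, name="spinodal_temperature")(merged)
composition = tf.keras.layers.Dense(3, name="initial_composition")(merged)  # Fe, Cr, Co

model = tf.keras.Model([image_in, scalar_in], [temperature, composition])
model.compile(optimizer="adam", loss="mse")
model.summary()
```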
In the second work, we propose a fused-data deep learning framework that can predict the heat-treatment time, as well as the temperature and initial chemical composition, by reading the morphology of the Fe distribution and its concentration. The results show that the trained deep neural network achieves the highest accuracy for chemistry, followed by time and temperature. We identified two scenarios for inaccurate predictions: 1) several processing paths can produce an identical microstructure, and 2) microstructures reach steady-state morphologies after long aging times. The error analysis shows that most of the "wrong" predictions are not truly wrong but are alternative correct answers. We validated the model successfully against an experimental Fe-Cr-Co transmission electron microscopy micrograph. Finally, since data generation by simulation is computationally expensive, we propose a fast and accurate Predictive Recurrent Neural Network (PredRNN) model for predicting microstructure evolution. Essentially, microstructure evolution prediction is a spatiotemporal sequence prediction problem in which predicting the material microstructure is difficult because of differing process histories and chemistries. As a case study, we used a dataset from a spinodal decomposition simulation of an Fe-Cr-Co alloy created by the phase-field method for training and for predicting future microstructures from previous observations. The results show that the trained network is capable of efficient prediction of microstructure evolution.
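    As a compact stand-in for the spatiotemporal prediction task, the sketch below uses a ConvLSTM next-frame model, a simpler relative of PredRNN, mapping a sequence of microstructure frames to the next frame; frame size and network depth are assumptions.

```python
# ConvLSTM next-frame predictor over microstructure image sequences.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 64, 64, 1)),   # (time, H, W, channels)
    tf.keras.layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True),
    tf.keras.layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),  # next frame
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Dummy phase-field-like sequences: 8 past frames in, 1 future frame out.
frames = np.random.rand(16, 9, 64, 64, 1).astype("float32")
model.fit(frames[:, :8], frames[:, 8], epochs=1, verbose=0)
```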

    6G White Paper on Machine Learning in Wireless Communication Networks

    The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible thanks to the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require substantial innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of how ML will impact wireless communication systems. We first survey the ML methods with the highest potential for use in wireless networks. Then, we discuss the problems that ML can solve in various layers of the network, such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect discussed in this paper. Finally, at the end of each section, we present important research questions that the section aims to answer.