
    Models and Protocols for Resource Optimization in Wireless Mesh Networks

    Wireless mesh networks are built on a mix of fixed and mobile nodes interconnected via wireless links to form a multihop ad hoc network. An emerging application area for wireless mesh networks is their evolution into a converged infrastructure used to share and extend, to mobile users, the wireless Internet connectivity of sparsely deployed fixed lines with heterogeneous capacity, ranging from ISP-owned broadband links to subscriber-owned low-speed connections. In this thesis we address several key research issues for this networking scenario. First, we propose an analytical predictive tool: a queuing network model capable of predicting the network capacity, which we use in a load-aware routing protocol to provide end users with throughput-based quality of service. We then extend this model into a multi-class queuing network model that analytically predicts the average end-to-end packet delay of the traffic flows between the mobile end users and the Internet. Both analytical models are validated against simulation. Second, we propose an address auto-configuration solution that extends the coverage of a wireless mesh network by interconnecting it to a mobile ad hoc network transparently for the infrastructure network (i.e., the legacy Internet interconnected to the wireless mesh network). Third, we implement two real testbed prototypes of the proposed solutions as proofs of concept, one for the load-aware routing protocol and one for the auto-configuration protocol. Finally, we discuss the issues related to adopting ad hoc networking technologies to address the fragility of our communication infrastructure and to build the next generation of dependable, secure, and rapidly deployable communication infrastructures.
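
    The thesis's multi-class model is not reproduced in the abstract; as a rough intuition for how a queuing network can predict capacity and end-to-end delay on a multihop path, here is a minimal open-network sketch that assumes independent M/M/1 hops with made-up per-hop service rates.

```python
# Minimal sketch of a queuing-network predictor for one multihop path.
# Illustrative Jackson-style approximation with M/M/1 hops; it is NOT the
# thesis's exact model, and all rates below are invented for the example.

def path_delay(arrival_rate, service_rates):
    """Average end-to-end packet delay over a chain of M/M/1 hops.

    arrival_rate: packets/s offered by the flow (lambda)
    service_rates: per-hop service rates in packets/s (mu_i)
    """
    delay = 0.0
    for mu in service_rates:
        if arrival_rate >= mu:
            raise ValueError("hop saturated: lambda >= mu")
        delay += 1.0 / (mu - arrival_rate)  # M/M/1 sojourn time per hop
    return delay

def path_capacity(service_rates):
    """Bottleneck capacity of the path (packets/s)."""
    return min(service_rates)

# Example: a 3-hop path toward the ISP-owned gateway.
print(path_delay(80.0, [200.0, 150.0, 300.0]))   # average delay in seconds
print(path_capacity([200.0, 150.0, 300.0]))      # 150.0 packets/s
```

    A load-aware routing protocol would evaluate such predictions per candidate path and steer new flows away from saturated gateways.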

    Computational classifiers for predicting the short-term course of multiple sclerosis

    The aim of this study was to assess the diagnostic accuracy (sensitivity and specificity) of clinical, imaging, and motor evoked potential (MEP) variables for predicting the short-term prognosis of multiple sclerosis (MS). METHODS: We obtained clinical data, MRI, and MEP from a prospective cohort of 51 patients and 20 matched controls followed for two years. The clinical end-points recorded were: 1) expanded disability status scale (EDSS), 2) disability progression, and 3) new relapses. We constructed computational classifiers, namely Bayesian classifiers, random decision trees, simple logistic (linear) regression, and neural networks, and calculated their accuracy by means of 10-fold cross-validation. We also validated our findings with a second cohort of 96 MS patients from another center. RESULTS: We found that disability at baseline, grey matter volume, and MEP were the variables that best correlated with the clinical end-points, although their individual diagnostic accuracy was low. However, classifiers combining the most informative variables, namely baseline disability (EDSS), MRI lesion load, and central motor conduction time (CMCT), were much more accurate in predicting future disability. Using the most informative variables (especially EDSS and CMCT) we developed a neural network (NNet) that attained good performance for predicting the EDSS change. The predictive ability of the neural network was validated in the independent cohort, obtaining similar accuracy (80%) for predicting the change in the EDSS two years later. CONCLUSIONS: The usefulness of individual clinical variables for predicting the course of MS on a per-patient basis is limited, despite their association with the disease course. By training a NNet with the most informative variables we achieved good accuracy for predicting short-term disability.
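
    As an illustration of the evaluation protocol described above, the sketch below scores a small neural network with 10-fold cross-validation. The feature set mirrors the variables named in the abstract (baseline EDSS, MRI lesion load, CMCT), but the data and network size here are placeholders, not the study's cohort or model.

```python
# Illustrative 10-fold cross-validation of a small neural network.
# Synthetic stand-in data; feature meanings are assumptions from the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder predictors: baseline EDSS, MRI lesion load, CMCT (ms).
X = rng.normal(size=(51, 3))
y = rng.integers(0, 2, size=51)  # 1 = disability worsened at two years

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```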

    Hybrid Neural Networks for Learning the Trend in Time Series

    The trend of a time series characterizes its intermediate upward and downward behaviour. Learning and forecasting the trend in time series data play an important role in many real applications, such as resource allocation in data centers and load scheduling in smart grids. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that learns local and global contextual features for predicting the trend of a time series. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of the time series. Meanwhile, considering the long-range dependency in the sequence of historical trends, TreNet uses a long short-term memory recurrent neural network (LSTM) to capture it. A feature fusion layer then learns a joint representation for predicting the trend. TreNet demonstrates its effectiveness by outperforming CNN, LSTM, the cascade of CNN and LSTM, a Hidden Markov Model based method, and various kernel-based baselines on real datasets.
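
    The architecture the abstract describes (a CNN branch over the local raw window, an LSTM branch over the historical trend sequence, and a fusion layer) can be sketched in a few lines of PyTorch. Layer sizes, the trend encoding as (slope, duration) pairs, and the window length are illustrative assumptions, not the paper's configuration.

```python
# Minimal TreNet-style hybrid: CNN for local raw data, LSTM for the trend
# history, and a linear fusion layer producing the next trend. Sketch only.
import torch
import torch.nn as nn

class TreNetSketch(nn.Module):
    def __init__(self, trend_feats=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # local raw-data features
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
        self.lstm = nn.LSTM(trend_feats, hidden, batch_first=True)
        self.fuse = nn.Linear(16 + hidden, trend_feats)  # feature fusion

    def forward(self, raw, trends):
        # raw: (B, 1, window) raw points; trends: (B, T, 2) (slope, duration)
        local = self.cnn(raw).squeeze(-1)         # (B, 16)
        _, (h, _) = self.lstm(trends)             # h: (1, B, hidden)
        joint = torch.cat([local, h[-1]], dim=1)  # joint representation
        return self.fuse(joint)                   # predicted (slope, duration)

model = TreNetSketch()
out = model(torch.randn(4, 1, 32), torch.randn(4, 10, 2))
print(out.shape)  # torch.Size([4, 2])
```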

    Pattern Reduction for Low-Traffic Speculative Video Transmission in Cloud Gaming System

    Ishioka T., Fukui T., Fujiwara T., et al. Pattern Reduction for Low-Traffic Speculative Video Transmission in Cloud Gaming System. IEEE Access 12, 8902 (2024); https://doi.org/10.1109/ACCESS.2024.3352435.

    Cloud gaming allows users to play high-quality games on low-end devices by offloading game processing to the cloud. However, network latency remains a significant issue affecting the gaming experience. Speculative execution is a promising approach to hiding network latency by predicting and transmitting future frames early, but existing methods generate excessive compute load and network traffic because of the many potential input patterns. This paper introduces a pattern reduction method that uses a bit-field representation of the input to enable efficient speculative execution in cloud games. The method combines two pattern reduction techniques: analyzing temporal patterns to detect frequent transitions, and using LSTM-based predictions to estimate input probabilities. Experiments using actual gaming data show that the proposed methods significantly reduce rendered frames and network traffic compared with prior speculative execution methods. The results demonstrate the method's effectiveness and scalability across diverse game genres.
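
    To make the bit-field idea concrete, the sketch below packs each frame's button state into an integer (one bit per button) and keeps only the most frequent next-state transitions observed in a trace, which is the kind of pattern reduction the abstract's first technique describes. The button set and the cutoff are assumptions for illustration.

```python
# Bit-field input encoding plus frequency-based transition pruning (sketch).
# Button names and the `keep` threshold are invented for the example.
from collections import Counter

BUTTONS = ["up", "down", "left", "right", "a", "b"]  # bits 0..5

def encode(pressed):
    """Pack a set of pressed buttons into a bit field."""
    state = 0
    for name in pressed:
        state |= 1 << BUTTONS.index(name)
    return state

def frequent_transitions(trace, keep=4):
    """Keep only the `keep` most frequent next-states per current state."""
    counts = Counter(zip(trace, trace[1:]))
    table = {}
    for (cur, nxt), _ in counts.most_common():
        table.setdefault(cur, [])
        if len(table[cur]) < keep:
            table[cur].append(nxt)
    return table  # speculatively render frames only for these next-states

trace = [encode(s) for s in [{"a"}, {"a"}, {"a", "right"}, {"right"}, {"a"}]]
print(frequent_transitions(trace))
```

    Pruning the speculation set this way bounds both the frames the server must render ahead of time and the traffic needed to ship them.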

    Machine Learning-based Classification of Combustion Events in an RCCI Engine Using Heat Release Rate Shapes

    Reactivity controlled compression ignition (RCCI) mode offers high thermal efficiency and low nitrogen oxide (NOx) and soot emissions. However, high cyclic variability at low engine loads and high pressure rise rates at high loads limit RCCI operation. It is therefore important to control the combustion event in an RCCI engine to prevent abnormal combustion. To this end, combustion in RCCI mode was studied by analyzing the heat release rates calculated from in-cylinder pressure data at 798 different operating conditions. Five distinct heat release shapes were identified. These heat release traces were characterized by start of combustion, burn duration, combustion phasing, maximum pressure rise rate, maximum heat release, and maximum in-cylinder gas temperature and pressure. Both supervised and unsupervised machine learning approaches were used to classify the different types of heat release rates. K-means clustering, an unsupervised algorithm, could not cluster the heat release traces distinctly. A convolutional neural network (CNN) and decision trees, supervised classification algorithms, were designed to classify the heat release rates. The CNN showed 70% accuracy in predicting the shapes of the heat release rates, while the decision tree achieved 74.5% accuracy.
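
    A minimal sketch of the supervised route, assuming each trace is summarized by the features the abstract lists (start of combustion, burn duration, phasing, maximum pressure rise rate, peak heat release, peak temperature and pressure) and labeled with one of the five shape classes. The data below is synthetic, not the 798 measured operating conditions.

```python
# Decision-tree classification of heat-release-rate shape classes (sketch).
# Synthetic stand-in for the engine data; 7 summary features per trace.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(798, 7))        # 798 operating conditions, 7 features
y = rng.integers(0, 5, size=798)     # five heat-release shape classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
tree = DecisionTreeClassifier(max_depth=5, random_state=1).fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, tree.predict(X_te)):.2f}")
```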

    Simultaneous Measurement Imputation and Outcome Prediction for Achilles Tendon Rupture Rehabilitation

    Achilles tendon rupture (ATR) is a common soft tissue injury. Rehabilitation after such a musculoskeletal injury remains a prolonged process with a highly variable outcome. Accurately predicting the rehabilitation outcome is crucial for treatment decision support. However, training an automatic method to predict the ATR rehabilitation outcome from treatment data is challenging, due to the large number of missing entries in the data recorded from ATR patients and the complex nonlinear relations between measurements and outcomes. In this work, we design an end-to-end probabilistic framework that imputes missing data entries and predicts rehabilitation outcomes simultaneously. We evaluate our model on a real-life ATR clinical cohort, comparing it with various baselines. The proposed method demonstrates a clear superiority over traditional methods, which typically perform imputation and prediction in two separate stages.
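
    The joint idea can be illustrated with a single network that takes measurements plus a missingness mask, reconstructs entries, and predicts the outcome under one combined loss. This is a deterministic stand-in for illustration only, not the paper's probabilistic framework; dimensions and loss weighting are invented.

```python
# Joint imputation + outcome prediction with a shared encoder (sketch).
import torch
import torch.nn as nn

class ImputePredict(nn.Module):
    def __init__(self, d=12, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU())
        self.recon = nn.Linear(hidden, d)      # imputation head
        self.outcome = nn.Linear(hidden, 1)    # rehabilitation-outcome head

    def forward(self, x, mask):
        # mask: 1 where a measurement was observed, 0 where missing
        h = self.enc(torch.cat([x * mask, mask], dim=1))
        return self.recon(h), self.outcome(h)

model = ImputePredict()
x, mask = torch.randn(8, 12), (torch.rand(8, 12) > 0.3).float()
y = torch.randn(8, 1)
x_hat, y_hat = model(x, mask)
# Reconstruction loss on observed entries plus outcome loss, trained jointly.
loss = ((x_hat - x)[mask.bool()] ** 2).mean() + nn.functional.mse_loss(y_hat, y)
loss.backward()
```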

    Neural network-based reduced-order modeling for nonlinear vertical sloshing with experimental validation

    In this paper, a nonlinear reduced-order model based on neural networks is introduced to model vertical sloshing in the presence of Rayleigh–Taylor instability of the free surface, for use in fluid–structure interaction simulations. A box partially filled with water, representative of a wing tank, is first subjected to vertical harmonic motion by a controlled electrodynamic shaker. Accelerometers and load cells at the interface between the tank and the shaker are employed to train a neural network-based reduced-order model for vertical sloshing. The model is then assessed for its capacity to consistently reproduce the dissipation associated with vertical sloshing under different fluid dynamic regimes. The identified tank is then experimentally attached at the free end of a cantilever beam to test the effectiveness of the neural network in predicting the sloshing forces when coupled with the overall structure. The experimental free responses and responses to random seismic excitation are compared with those obtained by simulating an equivalent virtual model into which the identified nonlinear reduced-order model is integrated to account for the effects of violent vertical sloshing.
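
    One way such a reduced-order model plugs into a coupled simulation is as a map from the recent history of tank vertical acceleration to the interface sloshing force, which then feeds back into the structural equations of motion. The architecture and history length below are assumptions for illustration, not the paper's identified model.

```python
# Neural-network reduced-order model for the vertical sloshing force (sketch).
import torch
import torch.nn as nn

class SloshingROM(nn.Module):
    def __init__(self, history=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history, 64), nn.Tanh(),
            nn.Linear(64, 1))  # scalar vertical sloshing force at interface

    def forward(self, accel_history):
        # accel_history: (B, history) recent tank accelerations [m/s^2]
        return self.net(accel_history)

rom = SloshingROM()
force = rom(torch.randn(2, 64))  # would be trained on shaker-test data
print(force.shape)               # torch.Size([2, 1])
```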

    Finding kernel function for stock market prediction with support vector regression

    Stock market prediction is one of the fascinating issues in stock market research. Accurate stock prediction is a major challenge in the investment industry because the distribution of stock data changes over time. Time series forecasting, neural networks (NN), and support vector machines (SVM) are commonly used for stock price prediction. In this study, the data mining operation called time series forecasting is implemented. A large amount of stock data collected from the Kuala Lumpur Stock Exchange (KLSE) is used in the experiments to test the validity of SVM regression. SVM is a machine learning technique based on the structural risk minimization principle, which gives it strong generalization ability and proven success in time series prediction. Two kernel functions, the radial basis function (RBF) and the polynomial kernel, are compared to find the more accurate prediction values. In addition, a backpropagation neural network is used to compare prediction performance. Several experiments are conducted and the experimental results analyzed. The results show that SVM with the polynomial kernel provides a promising alternative tool for KLSE stock market prediction.
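
    The kernel comparison described above can be sketched with support vector regression on a lagged price series: fit one model per kernel and compare out-of-sample error. The data below is a synthetic random walk, not KLSE records, and the lag length and hyperparameters are illustrative.

```python
# RBF vs. polynomial kernel for SVR on a lagged price series (sketch).
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=300)) + 100.0   # synthetic price walk
lags = 5
X = np.column_stack([prices[i:i - lags] for i in range(lags)])  # lag features
y = prices[lags:]                                  # next-step price target
X_tr, X_te, y_tr, y_te = X[:250], X[250:], y[:250], y[250:]

for kernel in ("rbf", "poly"):
    model = SVR(kernel=kernel, C=10.0).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{kernel}: test MSE = {mse:.3f}")
```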