688 research outputs found

    Stochastic user behaviour modelling and network simulation for resource management in cooperation with mobile telecommunications and broadcast networks

    The latest generations of telecommunications networks have been designed to deliver higher data rates than the widely used second-generation networks, providing flexible communication capabilities that can deliver high-quality video images. However, these new generations of networks are interference limited, which impairs their performance under heavy traffic and high usage. This restricts the services a telecommunications network operator can offer to those for which it is confident its network can meet demand. One way to lift this constraint is for the mobile telecommunications network operator to obtain the cooperation of a broadcast network operator, so that whenever demand for a service exceeds what the telecommunications network can carry, the service can be transferred to the broadcast network. In the United Kingdom, the most recent telecommunications networks on the market are third-generation UMTS networks, while the terrestrial digital broadcast networks are DVB-T networks. This paper proposes a way for UMTS network operators to forecast the traffic associated with high-demand services intended for deployment on the UMTS network and, when demand requires it, to transfer that traffic to a cooperating DVB-T network. The paper aims to justify to UMTS network operators the use of a DVB-T network as a support for a UMTS network by showing clearly how such cooperation can increase the revenue generated by their network.
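    The abstract describes a stochastic demand model used to decide when to hand a service over to the broadcast network. As a rough illustration of that idea only (no code or figures appear in the source; the arrival rate, capacity, and revenue values below are invented), a Poisson demand-and-offload simulation might look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical parameters -- not taken from the paper.
    ARRIVAL_RATE = 120.0        # mean service requests per hour
    UMTS_CAPACITY = 100         # concurrent sessions the UMTS cell can carry
    REVENUE_PER_SESSION = 0.05  # revenue per served session (arbitrary units)
    HOURS = 24 * 7              # simulate one week

    served_umts = served_dvbt = 0
    for _ in range(HOURS):
        demand = rng.poisson(ARRIVAL_RATE)    # stochastic user demand
        on_umts = min(demand, UMTS_CAPACITY)  # UMTS serves up to capacity
        served_umts += on_umts
        served_dvbt += demand - on_umts       # excess offloaded to DVB-T

    rev_alone = served_umts * REVENUE_PER_SESSION
    rev_coop = (served_umts + served_dvbt) * REVENUE_PER_SESSION
    print(f"UMTS alone : revenue {rev_alone:.1f}")
    print(f"With DVB-T : revenue {rev_coop:.1f} "
          f"({100 * (rev_coop / rev_alone - 1):.1f}% uplift)")
    ```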

    Configuration of Distributed Message Converter Systems using Performance Modeling

    Finding a configuration of a distributed system that satisfies performance goals is a complex search problem involving many design parameters, such as hardware selection, job distribution, and process configuration. Performance models are a powerful tool for analysing potential system configurations; however, their evaluation is expensive, so only a limited number of candidate configurations can be evaluated. In this paper we present a systematic method for finding a satisfactory configuration with feasible effort, based on a two-step approach: first, a hardware configuration is determined using performance estimates; then the software configuration is incrementally optimized by evaluating Layered Queueing Network models. We applied this method to the design of high-performance EDI converter systems in the financial domain, where growing message volumes must be handled due to the increasing importance of B2B interaction.
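    As a hedged sketch of the two-step approach, the toy below replaces the expensive Layered Queueing Network evaluation with an invented analytic stand-in; `evaluate_lqn`, `LOAD`, `GOAL_MS`, and the demand model are all assumptions for illustration, not from the paper:

    ```python
    from itertools import product

    LOAD = 50.0      # messages per second offered to the system (assumed)
    GOAL_MS = 30.0   # target mean response time (assumed)

    # Hypothetical stand-in for the expensive Layered Queueing Network
    # solver: a toy analytic model of an EDI converter pipeline.
    def evaluate_lqn(cpus: int, converters: int, parsers: int) -> float:
        # per-message demand shrinks with parallelism, plus process overhead
        demand = 40.0 / converters + 25.0 / parsers + 0.8 * (converters + parsers)
        rho = min(0.95, LOAD * demand / (1000.0 * cpus))  # utilisation
        return demand / (1.0 - rho)                       # response time (ms)

    # Step 1: choose hardware from coarse estimates (cheap screening).
    cpus = next(c for c in (2, 4, 8, 16) if evaluate_lqn(c, 2, 2) < 4 * GOAL_MS)

    # Step 2: incrementally optimise the software configuration, spending
    # model evaluations only in the neighbourhood of the incumbent.
    best, best_rt = (2, 2), evaluate_lqn(cpus, 2, 2)
    improved = True
    while improved:
        improved = False
        for dc, dp in product((-1, 0, 1), repeat=2):
            cand = (best[0] + dc, best[1] + dp)
            if min(cand) < 1:
                continue
            rt = evaluate_lqn(cpus, *cand)
            if rt < best_rt:
                best, best_rt, improved = cand, rt, True

    print(f"{cpus} CPUs, {best[0]} converters, {best[1]} parsers "
          f"-> {best_rt:.1f} ms (goal {GOAL_MS} ms)")
    ```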

    Mobile edge computing-based data-driven deep learning framework for anomaly detection

    5G is anticipated to embed artificial intelligence (AI) to adroitly plan, optimize, and manage the highly complex network by leveraging data generated at different points of the network architecture. Outages and situations leading to congestion in a cell pose severe hazards for the network. High false-alarm rates and inadequate accuracy are the major limitations of current approaches to detecting anomalies, i.e., outages and sudden surges in traffic activity that may lead to congestion, in mobile cellular networks. These shortcomings waste limited resources, ultimately elevating operational expenditure (OPEX) and degrading quality of service (QoS) and quality of experience (QoE). Motivated by the outstanding success of deep learning (DL), our study applies it to the detection of the above-mentioned anomalies. It also supports the mobile edge computing (MEC) paradigm, in which the core network (CN)'s computations are divided across the cellular infrastructure among different MEC servers (co-located with base stations) to relieve the CN. Each server monitors user activity in multiple cells and uses an L-layer feedforward deep neural network (DNN), fueled by a real call detail record (CDR) dataset, for anomaly detection. Our framework achieved 98.8% accuracy with a 0.44% false positive rate (FPR), notable improvements that address the deficiencies of earlier studies. The numerical results demonstrate the usefulness and superiority of our proposed detector.
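    A minimal sketch of the kind of L-layer feedforward DNN detector the abstract describes, not the authors' implementation; the feature count, layer widths, and synthetic stand-in data are all assumptions:

    ```python
    # Sketch of an L-layer feedforward DNN anomaly detector (assumed shapes).
    import numpy as np
    from tensorflow import keras

    N_FEATURES = 8   # e.g. per-cell CDR aggregates: SMS/call/data counts
    L = 4            # number of hidden layers (the "L-layer" DNN)

    model = keras.Sequential(
        [keras.Input(shape=(N_FEATURES,))]
        + [keras.layers.Dense(64, activation="relu") for _ in range(L)]
        + [keras.layers.Dense(1, activation="sigmoid")]  # anomaly probability
    )
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Stand-in data: random "normal" activity plus a shifted anomalous cluster.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (900, N_FEATURES)),
                   rng.normal(3, 1, (100, N_FEATURES))])
    y = np.concatenate([np.zeros(900), np.ones(100)])

    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
    ```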

    Drone-Delivery Network for Opioid Overdose -- Nonlinear Integer Queueing-Optimization Models and Methods

    We propose a new stochastic emergency network design model that uses a fleet of drones to quickly deliver naloxone in response to opioid overdoses. The network is represented as a collection of M/G/K queueing systems in which the capacity K of each system is a decision variable and the service time is modelled as a decision-dependent random variable. The model is an optimization-based queueing problem that locates fixed servers (drone bases) and mobile servers (drones) and determines the drone dispatching decisions; it takes the form of a nonlinear integer problem, which is intractable in its original form. We develop an efficient reformulation and algorithmic framework. Our approach reformulates the multiple nonlinearities (fractional, polynomial, exponential, and factorial terms) to give a mixed-integer linear programming (MILP) formulation. We demonstrate its generalizability and show that the problem of minimizing the average response time of a network of M/G/K queueing systems with unknown capacity K is always MILP-representable. We design two algorithms and demonstrate that the outer approximation branch-and-cut method is the most efficient and scales well. The analysis, based on real-life overdose data, reveals that in Virginia Beach drones can: 1) decrease the response time by 78%, 2) increase the survival chance by 432%, 3) save up to 34 additional lives per year, and 4) provide annually up to 287 additional quality-adjusted life years.
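    The queueing quantity being optimised can be illustrated with standard textbook formulas: the Erlang-C waiting probability for M/M/K and the common (1 + cv^2)/2 scaling used to approximate the M/G/K mean wait. This is a generic approximation, not the paper's MILP reformulation, and all parameter values below are hypothetical:

    ```python
    # Approximate mean response time of an M/G/K queue (textbook formulas).
    from math import factorial

    def erlang_c(lam: float, mu: float, k: int) -> float:
        """Probability an arrival must wait in an M/M/K queue (requires lam < k*mu)."""
        a = lam / mu                      # offered load (Erlangs)
        rho = a / k                       # server utilisation
        top = a**k / factorial(k) / (1 - rho)
        bottom = sum(a**n / factorial(n) for n in range(k)) + top
        return top / bottom

    def mgk_response_time(lam: float, mu: float, cv2: float, k: int) -> float:
        """Approximate mean response time W of an M/G/K queue."""
        wq_mmk = erlang_c(lam, mu, k) / (k * mu - lam)  # M/M/K mean wait
        return (1 + cv2) / 2 * wq_mmk + 1 / mu          # scale, add service time

    lam, mu, cv2 = 2.0, 0.5, 1.5   # arrivals/hr, service rate/hr, service SCV
    for k in range(5, 9):          # size the drone fleet K at one base
        print(k, round(mgk_response_time(lam, mu, cv2, k), 3))
    ```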

    Deep Learning Based Anomaly Detection for Fog-Assisted IoVs Network

    The Internet of Vehicles (IoVs) allows millions of vehicles to be connected and to share information for various purposes. The main applications of IoVs are traffic management, emergency message delivery, e-health, and traffic and temperature monitoring. On the other hand, IoVs lack location awareness and geographic distribution, which are critical for some IoVs applications such as smart traffic lights and information sharing between vehicles. To support these capabilities, fog computing was proposed as an appealing and novel paradigm and was integrated with IoVs to extend storage, computation, and networking. Unfortunately, the combination is also exposed to various security and privacy hazards, which are a serious concern for smart cities. We can therefore state that fog-assisted IoVs (Fa-IoVs) are challenged by security threats during information dissemination among mobile nodes. These security threats are treated as anomalies, a serious concern that must be addressed for smooth Fa-IoVs network communication, where smooth communication means less risk of important data loss, delay, communication overhead, and so on. This research work aims to identify research gaps in the Fa-IoVs network and presents a deep learning-based dynamic scheme named CAaDet (Convolutional autoencoder Aided anomaly detection) to detect anomalies. CAaDet exploits convolutional layers with a customized autoencoder for useful feature extraction and anomaly detection. The performance of the proposed scheme is evaluated using the F1-score metric, with experiments carried out on the benchmark NSL-KDD dataset. CAaDet also observes the behavior of fog nodes and hidden neurons and selects the best match to reduce false alarms and improve the F1-score. The proposed scheme achieves a significant improvement over existing anomaly detection schemes. The identified research gaps in Fa-IoVs can give future directions to researchers and attract more attention to this emerging area.
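    A minimal sketch of a 1-D convolutional autoencoder anomaly detector of the general kind CAaDet's name suggests, not the authors' code; the architecture, feature count, threshold, and synthetic data are assumptions. The model learns to reconstruct normal records, and records with high reconstruction error are flagged:

    ```python
    # Conv1D autoencoder: flag records with high reconstruction error.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    N_FEATURES = 40  # roughly the NSL-KDD feature count after preprocessing

    inp = keras.Input(shape=(N_FEATURES, 1))
    x = layers.Conv1D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling1D(2)(x)                     # encoder
    x = layers.Conv1D(8, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling1D(2)(x)                     # decoder
    out = layers.Conv1D(1, 3, activation="linear", padding="same")(x)
    autoencoder = keras.Model(inp, out)
    autoencoder.compile(optimizer="adam", loss="mse")

    # Stand-in data: train on "normal" records only.
    rng = np.random.default_rng(1)
    normal = rng.normal(0, 1, (1000, N_FEATURES, 1))
    autoencoder.fit(normal, normal, epochs=5, batch_size=32, verbose=0)

    def score(x):
        """Per-record mean squared reconstruction error."""
        return np.mean((autoencoder.predict(x, verbose=0) - x) ** 2, axis=(1, 2))

    threshold = np.percentile(score(normal), 99)      # assumed cutoff
    attack = rng.normal(2, 1, (50, N_FEATURES, 1))
    print(f"flagged {np.mean(score(attack) > threshold):.0%} of attack records")
    ```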

    Classification of traffic over collaborative IoT and Cloud platforms using deep learning recurrent LSTM

    Internet of Things (IoT) and cloud-based collaborative platforms have emerged as new infrastructures in recent decades. Classifying network traffic as benign or malevolent is indispensable for IoT-cloud collaborative platforms: it lets them use channel capacity optimally for transmitting benign traffic while blocking malicious traffic. The traffic classification mechanism should be dynamic and fast, so that malevolent traffic can be identified at an early stage and benign traffic can be channelled speedily to the destination nodes. In this paper, we present a deep learning recurrent LSTM-based technique to classify traffic over IoT-cloud platforms. Machine learning techniques (MLTs) have also been employed to compare their performance with the proposed LSTM RNet classification method. In the proposed research work, network traffic is classified into three classes: Tor-Normal, NonTor-Normal, and NonTor-Malicious. The results show that the proposed LSTM RNet classifies the traffic accurately and also helps to reduce network latency and to enhance the data transmission rate as well as network throughput.
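    A hedged sketch of a recurrent LSTM classifier for the three classes named above, not the paper's LSTM RNet; the sequence shape, layer sizes, and training data are invented for illustration:

    ```python
    # Two-layer LSTM classifier over per-flow feature sequences (assumed shapes).
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    TIMESTEPS, N_FEATURES, N_CLASSES = 20, 10, 3
    CLASS_NAMES = ["Tor-Normal", "NonTor-Normal", "NonTor-Malicious"]

    model = keras.Sequential([
        keras.Input(shape=(TIMESTEPS, N_FEATURES)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(32),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Stand-in data: one Gaussian cluster per class.
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(c, 1, (300, TIMESTEPS, N_FEATURES))
                   for c in range(N_CLASSES)])
    y = np.repeat(np.arange(N_CLASSES), 300)

    model.fit(X, y, epochs=3, batch_size=64, verbose=0)
    pred = model.predict(X[:1], verbose=0).argmax()
    print("predicted class:", CLASS_NAMES[pred])
    ```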

    Enabling Location-Based Services in Data Centers

    In this article, we explore services and capabilities that can be enabled by localizing various assets in a data center or IT environment. We also describe the underlying location estimation method and the protocol that enables localization. Finally, we present a management framework for these services, along with a few case studies that assess the benefits of location-based services in data centers.