
    Prediction-Based Energy Saving Mechanism in 3GPP NB-IoT Networks

    The current expansion of the Internet of Things (IoT) demands communication platforms that cover a wide area with low energy consumption. The 3rd Generation Partnership Project introduced narrowband IoT (NB-IoT) as an IoT communication solution. NB-IoT devices should remain operational for over 10 years without a battery replacement, so low energy consumption is essential for the successful deployment of this technology. Because the power amplifier consumes a large share of the device's energy during radio transmission, reducing the uplink transmission time is key to ensuring a long device lifespan. In this paper, we propose a prediction-based energy saving mechanism (PBESM) focused on enhanced uplink transmission. The mechanism consists of two parts: first, a network architecture that predicts uplink packet occurrence through deep packet inspection; second, an algorithm that predicts the processing delay and pre-assigns radio resources to enhance the scheduling request procedure. In this way, our mechanism reduces the number of random accesses and the energy consumed by radio transmission. Simulation results show that the energy consumption of the proposed PBESM is up to 34% lower than that of the conventional NB-IoT method.
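    The pre-assignment idea can be illustrated with a small sketch. The code below is not the paper's PBESM implementation; it assumes a hypothetical PBESMScheduler interface and a simple moving-average predictor to show how a predicted packet arrival plus a predicted processing delay could yield a pre-assigned uplink grant that bypasses the random access and scheduling request exchange.

```python
class PBESMScheduler:
    """Sketch of a prediction-based pre-allocation scheme; class and method
    names are illustrative, not the paper's."""

    def __init__(self, history_len=8):
        self.history_len = history_len
        self.inter_arrivals = []          # seconds between observed uplink packets

    def observe_uplink(self, inter_arrival_s):
        """Record the gap since the previous uplink packet, e.g. learned from
        deep packet inspection of the device's traffic pattern."""
        self.inter_arrivals.append(inter_arrival_s)
        self.inter_arrivals = self.inter_arrivals[-self.history_len:]

    def predict_next_uplink(self, now_s):
        """Predict the next uplink packet time with a simple moving average."""
        if not self.inter_arrivals:
            return None
        return now_s + sum(self.inter_arrivals) / len(self.inter_arrivals)

    def preassign_grant(self, now_s, predicted_processing_delay_s):
        """Pre-assign an uplink grant at the predicted packet time plus the
        predicted processing delay, so the device can transmit without first
        performing random access or a scheduling request."""
        t = self.predict_next_uplink(now_s)
        if t is None:
            return None                   # no history yet: fall back to normal RA
        return {"grant_time_s": t + predicted_processing_delay_s,
                "skip_random_access": True}
```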

    Performance Evaluation of LTE Random Access Procedure under Distributed Location Data Mining for Road Traffic Monitoring

    Widely deployed cellular systems such as LTE have large potential to serve vehicle-related connectivity, one of the most promising IoT markets in the near future. However, the connection-oriented nature of the LTE Random Access (RA) procedure remains a challenge: the RA procedure is prone to overload when a large number of mobile users request access simultaneously. Such an overload condition is likely to occur with synchronized location reporting for road traffic monitoring in an Intelligent Transportation System (ITS). This work studies the performance of the RA procedure under overload in terms of the probability of successful report delivery and the average report delivery delay. Two proposals are then made to improve performance: report time spreading and a truncated binary exponential decrease backoff mechanism. Both proposals are evaluated and compared with the existing configuration of the LTE RA procedure. Our simulation results show that both proposals increase the probability of successful report delivery while keeping the average report delivery delay low, which is necessary for accurate location reporting and road traffic monitoring.
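    Both proposals are simple enough to sketch. The snippet below is an illustration, not the paper's implementation: spread_report_time staggers reporting instants over the period, and backoff_window gives one plausible reading of a truncated binary exponential decrease backoff in which the contention window shrinks by half per retry down to a floor. Window sizes and function names are assumptions.

```python
import random

def backoff_window(retry, initial_window=1024, min_window=16):
    """One reading of a truncated binary exponential *decrease* backoff: the
    contention window is halved on each retry but never drops below min_window.
    Window sizes are illustrative."""
    return max(initial_window >> retry, min_window)

def backoff_delay_slots(retry):
    """Draw a uniform backoff delay (in RA slots) from the current window."""
    return random.randint(0, backoff_window(retry) - 1)

def spread_report_time(period_s, n_devices, device_id):
    """Report time spreading: stagger each device's reporting instant uniformly
    over the reporting period instead of letting all devices report at once."""
    return (device_id / n_devices) * period_s
```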

    5GAuRA. D3.3: RAN Analytics Mechanisms and Performance Benchmarking of Video, Time Critical, and Social Applications

    5GAuRA deliverable D3.3. This is the final deliverable of Work Package 3 (WP3) of the 5GAuRA project, providing a report on the project's developments on the topics of Radio Access Network (RAN) analytics and application performance benchmarking. The focus of this deliverable is to extend and deepen the methods and results provided in 5GAuRA deliverable D3.2 in the context of specific use scenarios of video, time critical, and social applications. In this respect, four major topics of WP3 of 5GAuRA are put forward: edge-cloud enhanced RAN architecture, a machine learning assisted Random Access Channel (RACH) approach, Multi-access Edge Computing (MEC) content caching, and active queue management. Specifically, this document provides a detailed discussion of the service level agreement (SLA) between tenant and service provider in the context of network slicing in Fifth Generation (5G) communication networks. Network slicing is considered a key enabler of the 5G communication system. Legacy telecommunication networks have provided various services to all kinds of customers through a single network infrastructure. In contrast, by deploying network slicing, operators are now able to partition one network into individual slices, each with its own configuration and Quality of Service (QoS) requirements. Many applications across industry open new business opportunities with new business models; every application instance requires an independent slice with its own network functions and features, and every single slice therefore needs an individual SLA. In D3.3, we propose a comprehensive end-to-end structure of the SLA between the tenant and the service provider of a sliced 5G network that balances the interests of both sides. The proposed SLA defines the reliability, availability, and performance of delivered telecommunication services in order to ensure that the right information is delivered to the right destination at the right time, safely and securely. We also discuss the metrics of a slice-based network SLA, such as throughput, penalty, cost, revenue, profit, and QoS-related metrics, which are, in the view of 5GAuRA, critical features of the agreement.
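    As a rough illustration of how such slice-level SLA terms and metrics might be represented, the sketch below uses assumed field names and a simplified profit calculation; it is not the schema defined in D3.3.

```python
from dataclasses import dataclass

@dataclass
class SliceSLA:
    """Illustrative container for slice-level SLA terms; field names and units
    are assumptions, not the deliverable's schema."""
    tenant: str
    guaranteed_throughput_mbps: float
    availability_pct: float            # e.g. 99.99
    max_latency_ms: float
    price_per_month: float             # revenue for the provider
    penalty_per_violation: float       # credit owed to the tenant per breach

    def provider_profit(self, violations: int, operating_cost: float) -> float:
        """Simplified per-period accounting: revenue minus cost and penalties."""
        return (self.price_per_month - operating_cost
                - violations * self.penalty_per_violation)
```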

    PROCESS FOR BREAKING DOWN THE LTE SIGNAL TO EXTRACT KEY INFORMATION

    The increasingly important role of Long Term Evolution (LTE) has raised security concerns among service providers and end users and made security of the network even more indispensable. The main thrust of this thesis is to investigate whether the LTE signal can be broken down in a methodical way to obtain information that would otherwise be private, e.g., the Global Positioning System (GPS) location of the user equipment/base station or the identity (ID) of the user. The study used signal simulators and software to analyze the LTE signal and to develop a method to remove noise, break down the LTE signal, and extract the desired information. From the simulation results, it was possible to extract key downlink information such as the Downlink Control Information (DCI), the Cell Radio Network Temporary Identifier (C-RNTI), and the Physical Cell Identity (Cell-ID). This information can be modified to cause service disruptions in the network within a reasonable amount of time and with modest computing resources. Defence Science and Technology Agency, Singapore. Approved for public release; distribution is unlimited.
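    For context, the Cell-ID mentioned above follows a fixed relation in LTE: the physical cell identity is derived from the two downlink synchronization signals. The snippet below shows only that standard relation; it is not the extraction method developed in the thesis.

```python
def physical_cell_id(n_id_1: int, n_id_2: int) -> int:
    """Standard LTE relation: the physical cell ID (0..503) combines the
    cell-identity group carried by the SSS (0..167) with the identity within
    the group carried by the PSS (0..2)."""
    if not (0 <= n_id_1 <= 167 and 0 <= n_id_2 <= 2):
        raise ValueError("out-of-range synchronization signal indices")
    return 3 * n_id_1 + n_id_2

# e.g. group 100 from the SSS and index 2 from the PSS give cell ID 302
assert physical_cell_id(100, 2) == 302
```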

    Adaptive and secured resource management in distributed and Internet systems

    The effectiveness of computer system resource management has always been determined by two major factors: (1) workload demands and management objectives, and (2) updates in computer technology. These two factors change dynamically, and resource management systems must adapt to the changes in a timely manner. This dissertation addresses several important and related resource management issues.

    We first study memory system utilization in centralized servers by improving the memory performance of sorting algorithms, which provides a fundamental understanding of memory system organization and its performance optimization for data-intensive workloads. To reduce different types of cache misses, we restructure the mergesort and quicksort algorithms by integrating tiling, padding, and buffering techniques and by repartitioning the data set. Our study shows substantial performance improvements from our new methods.

    We have further extended this work to improve load sharing for utilizing global memory resources in distributed systems. Aiming to reduce the memory resource contention caused by page faults and I/O activities, we have developed and examined load sharing policies that consider effective use of global memory in addition to CPU load balancing, in both homogeneous and heterogeneous clusters.

    Extending our research from clusters to Internet systems, we have further investigated memory and storage utilization in Web caching systems. We propose several novel management schemes to restructure and decentralize the existing caching system by exploiting data locality at different levels of the global memory hierarchy and by effectively sharing data objects among clients and their proxy caches.

    Data integrity and communication anonymity issues arise from our decentralized Web caching system design, and they are also security concerns for general peer-to-peer systems. We propose an integrity protocol to ensure data integrity, and several protocols to achieve mutual communication anonymity between an information requester and a provider.

    The potential impact and contributions of this dissertation are briefly stated as follows. (1) The two major research topics identified in this dissertation are fundamentally important for the growth and development of information technology and will continue to be demanding topics for a long time. (2) Our proposed cache-effective sorting methods bridge a serious gap between the analytical complexity of algorithms and their execution complexity in practice, caused by the increasingly deep memory hierarchy in computer systems. This approach can also be used to improve memory performance at other levels of the memory hierarchy, such as I/O and file systems. (3) Our load sharing principle of giving high priority to requests for data accesses in memory and I/O adapts to technology changes in a timely way and effectively responds to the increasing demand of data-intensive applications. (4) Our proposed decentralized Web caching framework and its resource management schemes present a comprehensive case study of the P2P model; our results and experience can be used in related and further studies in distributed computing. (5) The proposed data integrity and communication anonymity protocols address limitations and weaknesses of existing ones and lay a solid foundation for continuing our work in this important area.
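    The cache-conscious restructuring of mergesort can be sketched briefly. The code below is not the dissertation's algorithm and omits its padding and buffering details; it only illustrates the tiled idea of sorting cache-sized runs first and merging them afterwards, with tile_size as an assumed stand-in for cache capacity.

```python
def _merge(a, b):
    """Standard two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out

def tiled_mergesort(data, tile_size=4096):
    """Cache-conscious mergesort sketch: sort tiles small enough to fit in
    cache, then merge pairs of runs until one sorted run remains."""
    runs = [sorted(data[i:i + tile_size]) for i in range(0, len(data), tile_size)]
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs), 2):
            if i + 1 < len(runs):
                merged.append(_merge(runs[i], runs[i + 1]))
            else:
                merged.append(runs[i])      # odd run carried to the next pass
        runs = merged
    return runs[0] if runs else []
```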

    Performance measurement and evaluation of time-shared operating systems

    Time-shared, virtual memory systems are very complex, and changes in their performance may be caused by many factors: variations in the workload as well as changes in system configuration. The evaluation of these systems can thus best be carried out by linking results obtained from a planned programme of measurements, taken on the system, to some model of it. Such a programme of measurements is best carried out under conditions in which all the parameters likely to affect the system's performance are reproducible and under the control of the experimenter. For this to be possible, the workload used must be simulated and presented to the target system through some form of automatic workload driver. A case study of such a methodology is presented in which the system (in this case the Edinburgh Multi-Access System) is monitored during a controlled experiment (designed and analysed using standard techniques in common use in many other branches of experimental science), and the results so obtained are used to calibrate and validate a simple simulation model of the system. This model is then used in further investigation of the effect of certain system parameters upon system performance. The factors covered by this exercise include the effect of varying main memory size, the process loading algorithm, and secondary memory characteristics.
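    The role of a calibrated model in such a study can be illustrated with a toy example. The code below is not the Edinburgh Multi-Access System model from the thesis; it is a deliberately crude round-robin simulation in which the page-fault rate depends on main memory size, showing the kind of question (e.g. the effect of memory size on completion time) such a model is used to explore. All parameter names and values are assumptions.

```python
import random
from collections import deque

def simulate(n_processes=10, quantum_ms=50, mem_frames=64, working_set_pages=80,
             demand_ms=2000, fault_service_ms=20, seed=1):
    """Toy model of a time-shared, paged system: processes run round-robin, and
    each quantum may incur a page fault with a probability that grows as a
    process's share of main memory falls short of its working set.  Returns the
    mean completion time in milliseconds."""
    random.seed(seed)
    frames_per_process = mem_frames / n_processes
    p_fault = max(0.0, 1.0 - frames_per_process / working_set_pages)
    queue = deque((pid, demand_ms) for pid in range(n_processes))
    clock, completion = 0.0, {}
    while queue:
        pid, left = queue.popleft()
        run = min(quantum_ms, left)
        clock += run
        if random.random() < p_fault:      # page fault stalls the processor
            clock += fault_service_ms
        left -= run
        if left > 0:
            queue.append((pid, left))
        else:
            completion[pid] = clock
    return sum(completion.values()) / n_processes

# Example: effect of doubling main memory on mean completion time
print(simulate(mem_frames=64), simulate(mem_frames=128))
```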

    C-MOS array design techniques: SUMC multiprocessor system study

    The current capabilities of LSI techniques for speed and reliability, plus the possibility of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Three previous system studies for a space ultrareliable modular computer multiprocessing system are evaluated, and a new multiprocessing system is proposed that can be flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through the I/O processors, and multiport access to multiple and shared memory units.
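    The configuration envelope described above can be restated as a small validation sketch; the field names below are assumptions, and the limits simply repeat the abstract.

```python
from dataclasses import dataclass

@dataclass
class SumcConfiguration:
    """Illustrative encoding of the proposed system's configuration envelope."""
    central_processors: int      # up to four
    io_processors: int           # up to four
    main_memory_units: int       # up to sixteen, multiport and shareable

    def validate(self) -> "SumcConfiguration":
        if not 1 <= self.central_processors <= 4:
            raise ValueError("up to four central processors")
        if not 0 <= self.io_processors <= 4:
            raise ValueError("up to four I/O processors")
        if not 1 <= self.main_memory_units <= 16:
            raise ValueError("up to sixteen main memory units")
        return self

# e.g. a maximal configuration
SumcConfiguration(central_processors=4, io_processors=4, main_memory_units=16).validate()
```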