
    Millimeter-wave Wireless LAN and its Extension toward 5G Heterogeneous Networks

    Millimeter-wave (mmw) frequency bands, especially the 60 GHz unlicensed band, are considered a promising solution for gigabit short-range wireless communication systems. The IEEE 802.11ad standard, also known as WiGig, standardizes the use of the 60 GHz unlicensed band for wireless local area networks (WLANs). Such mmw WLANs can achieve multi-Gbps rates to support bandwidth-intensive multimedia applications. Beamforming (BF) based on exhaustive search is usually used to overcome the 60 GHz channel propagation loss and establish data transmissions in these mmw WLANs. Because of the short transmission range and high susceptibility to path blocking, multiple mmw access points (APs) are needed to fully cover a typical target environment for future high-capacity multi-Gbps WLANs. Coordination among mmw APs is therefore essential, both to overcome the packet collisions caused by uncoordinated exhaustive-search BF and to increase the total capacity of mmw WLANs. In this paper, we first describe the current status of mmw WLANs using our developed WiGig AP prototype. We then highlight coordinated transmission among mmw APs as a key enabler for future high-capacity mmw WLANs. Two types of coordinated mmw WLAN architecture are introduced: a distributed-antenna architecture that realizes centralized coordination, and an autonomously coordinated architecture assisted by legacy Wi-Fi signaling. Moreover, two heterogeneous network (HetNet) architectures are introduced to efficiently extend the coordinated mmw WLANs toward future 5th Generation (5G) cellular networks.
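    For readers unfamiliar with exhaustive-search BF, the sketch below shows the basic idea in Python: sweep every transmit/receive beam pair from fixed codebooks and keep the pair with the best measured link quality. The codebook sizes and the measure_snr() callback are illustrative assumptions, not the actual 802.11ad sector-sweep protocol.

# Illustrative sketch of exhaustive-search beamforming: the AP and station
# try all transmit/receive beam pairs from fixed codebooks and keep the
# pair with the highest measured SNR. The measure_snr() callback and the
# codebook sizes are assumptions for illustration only.

import itertools

def exhaustive_beam_search(tx_codebook, rx_codebook, measure_snr):
    """Return the (tx_beam, rx_beam) pair with the best link quality."""
    best_pair, best_snr = None, float("-inf")
    for tx_beam, rx_beam in itertools.product(tx_codebook, rx_codebook):
        snr = measure_snr(tx_beam, rx_beam)  # one training frame per pair
        if snr > best_snr:
            best_pair, best_snr = (tx_beam, rx_beam), snr
    return best_pair, best_snr

# Toy channel: beam indices 0..31 per side, synthetic SNR peaked at a
# hypothetical line-of-sight pair (12, 5).
if __name__ == "__main__":
    toy_snr = lambda t, r: -abs(t - 12) - abs(r - 5)
    pair, snr = exhaustive_beam_search(range(32), range(32), toy_snr)
    print(pair, snr)  # -> (12, 5) 0

    The quadratic cost of this sweep (|tx_codebook| x |rx_codebook| training frames per link) is precisely why uncoordinated exhaustive search becomes a collision and capacity problem once many APs share the environment.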

    Metamodel-based importance sampling for structural reliability analysis

    Structural reliability methods aim at computing the probability of failure of systems with respect to prescribed performance functions. In modern engineering, such functions usually require running an expensive-to-evaluate computational model (e.g. a finite element model). Simulation methods, which may require 10^3 to 10^6 runs, therefore cannot be used directly. Surrogate models such as quadratic response surfaces, polynomial chaos expansions, or kriging (built from a limited number of runs of the original model) are then introduced as substitutes for the original model to cope with the computational cost. In practice, though, it is almost impossible to quantify the error introduced by this substitution. In this paper we propose to use a kriging surrogate of the performance function to build a quasi-optimal importance sampling density. The probability of failure is then obtained as the product of an augmented probability, computed by substituting the metamodel for the original performance function, and a correction term that ensures the estimate is unbiased even if the metamodel is not fully accurate. The approach is applied to analytical and finite element reliability problems and proves efficient with up to 100 random variables.
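    A minimal numerical sketch of the idea, assuming a toy limit-state function and a plain Gaussian importance density in place of the paper's kriging-based quasi-optimal density; every name below is an illustration, not the authors' implementation:

# Estimate P_f = P[g(X) <= 0] by importance sampling. A cheap surrogate
# g_hat (standing in for the kriging model) locates the failure region;
# the true g is evaluated only on the importance sample, which is what
# removes the surrogate's bias (the "correction" role described above).

import numpy as np

rng = np.random.default_rng(0)

def g(x):          # "expensive" true performance function (toy example)
    return 3.0 - x.sum(axis=-1) / np.sqrt(x.shape[-1])

def g_hat(x):      # cheap surrogate standing in for the kriging model
    return 3.05 - x.sum(axis=-1) / np.sqrt(x.shape[-1])

dim, n = 2, 20_000

# 1. Many cheap surrogate calls locate a likely failure point.
cand = rng.standard_normal((200_000, dim))
center = cand[g_hat(cand) <= 0].mean(axis=0)

# 2. Sample around that point, reweight by f/h (standard normal over
#    shifted normal), and evaluate the TRUE g on the importance sample.
x = center + rng.standard_normal((n, dim))
log_w = -0.5 * (x**2).sum(axis=1) + 0.5 * ((x - center)**2).sum(axis=1)
p_f = np.mean((g(x) <= 0) * np.exp(log_w))

print(f"IS estimate of P_f: {p_f:.3e}")   # exact value: Phi(-3) ~ 1.35e-3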

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures, focusing on their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.

    Investigation on Design and Development Methods for Internet of Things

    This thesis focuses on development methodologies for the Internet of Things (IoT). A detailed literature survey discusses the various challenges in software development and in the design and deployment of hardware. The thesis addresses efficient development methodologies for deploying IoT systems: efficient hardware and software development reduces the risk of system bugs and faults, while the optimal placement of IoT devices is the major challenge for monitoring applications. Qualitative Spatial Reasoning (QSR) and Qualitative Temporal Reasoning (QTR) methodologies are proposed for building software systems. The proposed hybrid methodology combines the features of QSR, QTR, and traditional data-based methodologies, and directs software systems toward the specific goal of obtaining the outputs inherent to the process; it includes tool support, is detailed and integrated, fits the general proposal, and mirrors the structure of spatio-temporal reasoning goals. Object-oriented IoT device placement is the major goal of the proposed work: segmentation and object detection divide the region into sub-regions, and coverage and connectivity are maintained by optimally placing IoT devices using the RCC8 and TPCC algorithms.

    Over the years, the IoT has offered solutions in many areas and contexts, and the diversity of these challenges makes it hard to grasp the underlying principles of the different solutions and to design appropriate custom implementations in the IoT space. One major objective of this thesis is therefore to study numerous production-ready IoT offerings, extract recurring proven solution principles, and classify them into spatial patterns. Goal refinement is employed so that complex challenges are solved by breaking them into simple, achievable sub-goals: efficient coverage of the field, connectivity of the IoT devices, spatio-temporal aggregation of the data, and estimation of spatially connected regions of event detection. Methods are proposed to achieve each sub-goal for every type of spatial pattern, and the resulting spatial patterns can inform ongoing and future IoT research, in turn promoting better development of existing and new IoT devices.

    The next objective is to utilize the IoT network for an enterprise architecture (EA) based IoT application. EA defines the structure and operation of an organization in order to determine the most effective way to achieve its objectives. Digital transformation of EA is achieved through analysis, planning, design, and implementation, which translate enterprise goals into an IoT-enabled enterprise design; a blueprint is necessary for readying the IT resources that support business services and processes. A systematic approach is proposed for planning and developing EA for IoT applications. An Enterprise Interface (EI) layer is proposed to efficiently categorize the data, based on local and global factors; the clustered data is then utilized by end users. A novel four-tier structure is proposed for enterprise applications. We analyze the challenges, contextualize them, and offer solutions and recommendations.
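    As a concrete illustration of the coverage sub-goal, the sketch below greedily places devices at candidate sites until every grid cell lies within sensing range. The thesis reasons qualitatively with RCC8/TPCC; this sketch substitutes a plain metric coverage model, so the grid, radius, and greedy rule are all assumptions.

# Greedy coverage placement: pick sites one by one, each time choosing the
# site that newly covers the most uncovered cells. A classic heuristic for
# set-cover-style placement, used here purely for illustration.

import itertools
import math

def greedy_placement(region_cells, candidate_sites, radius):
    """Place devices so every cell is within `radius` of some device."""
    uncovered, chosen = set(region_cells), []
    while uncovered:
        best_site, best_gain = None, set()
        for site in candidate_sites:
            gain = {c for c in uncovered if math.dist(site, c) <= radius}
            if len(gain) > len(best_gain):
                best_site, best_gain = site, gain
        if best_site is None:        # remaining cells are unreachable
            break
        chosen.append(best_site)
        uncovered -= best_gain
    return chosen, uncovered

# Toy run: a 10x10 cell region, devices placeable on a coarse lattice.
cells = list(itertools.product(range(10), range(10)))
sites = list(itertools.product(range(0, 10, 3), range(0, 10, 3)))
placed, missed = greedy_placement(cells, sites, radius=3.0)
print(len(placed), "devices;", len(missed), "cells left uncovered")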
    The final objective of the thesis is an energy-efficient data-consistency method. Data consistency is a challenge when designing the energy-efficient medium access control protocols used in the IoT, and the proposed method makes the protocol suitable for low, medium, and high data-rate applications. Energy-efficient data consistency is combined with data aggregation, so the protocol utilizes the data rate efficiently while saving energy. An optimal sampling-rate selection method maintains the data consistency of continuous and periodic monitoring nodes in an energy-efficient manner. In the initial phase, nodes are classified into event-monitoring and continuous-monitoring nodes using a machine learning based logistic classification method. The sampling rate of continuous-monitoring nodes is then optimized during the setup phase by an optimized sampling-rate data aggregation algorithm. Furthermore, an energy-efficient time division multiple access (EETDMA) protocol is used for continuous monitoring of IoT devices, and an energy-efficient bit-map-assisted (EEBMA) protocol is proposed for event-driven nodes.
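    A small sketch of the node-classification step named above, assuming two synthetic features (mean reporting interval and reading variance) that the abstract does not specify:

# Logistic classification of IoT nodes into continuous-monitoring (0) and
# event-driven (1) classes. The feature choice and synthetic data are
# illustrative assumptions; only the use of logistic classification is
# taken from the abstract.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200

# Continuous nodes: short, regular reporting intervals, low variance.
cont = np.column_stack([rng.normal(1.0, 0.2, n), rng.normal(0.5, 0.1, n)])
# Event-driven nodes: long, bursty intervals, high variance.
evt = np.column_stack([rng.normal(8.0, 2.0, n), rng.normal(3.0, 0.8, n)])

X = np.vstack([cont, evt])
y = np.array([0] * n + [1] * n)

clf = LogisticRegression().fit(X, y)
print("event-node probability for interval=6s, var=2.5:",
      clf.predict_proba([[6.0, 2.5]])[0, 1].round(3))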

    SSthreshless Start: A Sender-Side TCP Intelligence for Long Fat Network

    Measurements show that 85% of TCP flows in the Internet are short-lived flows that spend most of their lifetime in the TCP startup phase. Many previous studies indicate, however, that the traditional TCP Slow Start algorithm does not perform well, especially in long fat networks. Two problems are known to impact Slow Start performance: the blind initial setting of the Slow Start threshold, and the aggressive increase of the probing rate during the startup phase regardless of the buffer sizes along the path. Existing efforts that tune the Slow Start threshold and/or the probing rate during the startup phase have not proved very effective, which prompted an investigation with a different approach. In this paper, we present a novel TCP startup method, called threshold-less Slow Start (SSthreshless Start), which needs no Slow Start threshold to operate. Instead, SSthreshless Start uses the backlog status at the bottleneck buffer to adaptively adjust the probing rate, allowing it to better seize the available bandwidth. Compared with the traditional startup method and other major modified ones, our simulation results show that SSthreshless Start achieves significant performance improvement during the startup phase. Moreover, it scales well across a wide range of buffer sizes, propagation delays, and network bandwidths, and it shows excellent friendliness when operating alongside the currently popular TCP NewReno connections.
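    The paper's exact estimator is not reproduced in the abstract, so the sketch below substitutes the classic Vegas-style backlog estimate cwnd * (1 - base_rtt/rtt) to illustrate threshold-less, backlog-driven window growth; every constant here is an assumption.

# One startup-phase step of a hypothetical threshold-less window update:
# probe aggressively while the bottleneck buffer looks empty, ease off as
# the estimated backlog approaches a limit, and hold once it exceeds it.

def startup_step(cwnd, base_rtt, rtt, backlog_limit=20):
    """Return the next cwnd (in packets) for one RTT of startup."""
    backlog = cwnd * (1.0 - base_rtt / rtt)   # packets queued at bottleneck
    if backlog < backlog_limit / 2:
        return cwnd * 2          # buffer nearly empty: probe aggressively
    elif backlog < backlog_limit:
        return cwnd + backlog_limit - backlog  # ease off as queue builds
    else:
        return cwnd              # queue full enough: hold and let it drain

# Toy trace: RTT inflates as the (hypothetical) bottleneck queue fills.
cwnd, base_rtt = 10, 0.100
for rtt in [0.100, 0.101, 0.105, 0.120, 0.150, 0.200]:
    cwnd = startup_step(cwnd, base_rtt, rtt)
    print(f"rtt={rtt*1000:.0f}ms -> cwnd={cwnd:.1f}")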

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks.
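    As a toy illustration of the reinforcement-learning theme, the sketch below uses a bandit-style Q-learning agent to pick a wireless channel; the channel model and hyperparameters are invented for illustration and are not from the survey.

# Epsilon-greedy Q-learning on a single-state problem (a bandit): the
# agent learns which of three channels has the best success probability.

import random

random.seed(0)
SUCCESS_PROB = [0.2, 0.9, 0.5]       # hidden quality of 3 channels
q = [0.0] * len(SUCCESS_PROB)        # Q-value per channel
alpha, epsilon = 0.1, 0.1

for step in range(5_000):
    if random.random() < epsilon:            # explore
        ch = random.randrange(len(q))
    else:                                     # exploit best estimate
        ch = max(range(len(q)), key=q.__getitem__)
    reward = 1.0 if random.random() < SUCCESS_PROB[ch] else 0.0
    q[ch] += alpha * (reward - q[ch])        # incremental Q update

print([round(v, 2) for v in q])              # converges near SUCCESS_PROB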

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, along with a range of techniques covering model-based approaches and both 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events; this approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to learn the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detects when a misuse has occurred; this approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse and the latter to capture unknown cases. In some systems these mechanisms even update each other, increasing detection rates and lowering false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation, and adaptation are more readily facilitated.
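    A minimal sketch of rule-based event correlation of the kind discussed: a rule fires when a given sequence of event types from one source occurs within a time window. The event schema, rule, and window length are illustrative assumptions; real misuse-detection rule bases are far richer, which is exactly the maintenance burden the report identifies.

# Correlate temporally distributed events per source against one rule:
# two failed logins, then a successful login, then a config change, all
# within WINDOW seconds, raises an alert.

from collections import defaultdict

RULE = ("login_fail", "login_fail", "login_ok", "config_change")
WINDOW = 60.0  # seconds

def correlate(events):
    """events: iterable of (timestamp, source, event_type), time-ordered."""
    history = defaultdict(list)           # per-source recent event types
    alerts = []
    for ts, src, etype in events:
        h = history[src]
        h.append((ts, etype))
        # keep only events inside the correlation window
        history[src] = h = [(t, e) for t, e in h if ts - t <= WINDOW]
        if tuple(e for _, e in h[-len(RULE):]) == RULE:
            alerts.append((ts, src))
    return alerts

trace = [(0, "host1", "login_fail"), (5, "host1", "login_fail"),
         (9, "host1", "login_ok"), (20, "host1", "config_change")]
print(correlate(trace))   # -> [(20, 'host1')]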