
    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, over the period 2002-2013, of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.

    Advanced Compression and Latency Reduction Techniques Over Data Networks

    Applications and services operating over Internet protocol (IP) networks often suffer from high latency and packet loss rates. These problems are attributed to data congestion resulting from the lack of network resources available to support the demand. The usage of IP networks is not only increasing but also highly dynamic. In order to alleviate the above-mentioned problems and to maintain a reasonable Quality of Service (QoS) for end users, two novel adaptive compression techniques are proposed to reduce packets' payload size. The proposed schemes exploit lossless compression algorithms to perform the compression process on the packets' payloads and thus decrease overall network congestion. The first adaptive compression scheme utilizes two key network performance indicators as design metrics: the varying round-trip time (RTT) and the number of dropped packets. The second compression scheme uses other network information, such as the incoming packet rate, the processing rate of intermediate nodes, the average packet waiting time within an intermediate node's queue, and the time required to perform the compression process. The performance of the proposed algorithms is evaluated through Network Simulator 3 (NS3). The simulation results show an improvement in network conditions, such as the number of dropped packets, network latency, and throughput.
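    The first scheme's trigger logic can be sketched as follows. This is a minimal illustration with hypothetical thresholds (the paper's actual RTT and drop-count criteria are not reproduced here), using zlib as a stand-in lossless compressor:

```python
import zlib

def should_compress(rtt_ms, dropped, rtt_threshold_ms=100.0, drop_threshold=5):
    """Heuristic congestion trigger: compress when RTT or the dropped-packet
    count exceeds a threshold. Thresholds are illustrative assumptions."""
    return rtt_ms > rtt_threshold_ms or dropped > drop_threshold

def adapt_payload(payload: bytes, rtt_ms: float, dropped: int) -> bytes:
    """Return a (possibly compressed) payload based on network conditions."""
    if should_compress(rtt_ms, dropped):
        compressed = zlib.compress(payload)
        # Only use the compressed form if it is actually smaller.
        if len(compressed) < len(payload):
            return compressed
    return payload
```

In an uncongested network the function passes payloads through untouched, so the compression cost is only paid when the performance indicators suggest it will help.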

    SecMon: End-to-End Quality and Security Monitoring System

    The Voice over Internet Protocol (VoIP) is becoming a more available and popular way of communicating for Internet users. This also applies to Peer-to-Peer (P2P) systems, and merging the two has already proven successful (e.g. Skype). Although existing VoIP standards provide for security and Quality of Service (QoS), these features are usually optional and supported by only a limited number of implementations. As a result, the lack of mandatory and widely applicable QoS and security guarantees makes contemporary VoIP systems vulnerable to attacks and network disturbances. In this paper we address these issues and propose the SecMon system, which simultaneously provides a lightweight security mechanism and improves the quality parameters of the call. SecMon is intended specifically for VoIP service over P2P networks, and its main advantage is that it provides authentication, data-integrity services, adaptive QoS, and (D)DoS attack detection. Moreover, the SecMon approach is a low-bandwidth solution that is transparent to users and possesses a self-organizing capability. The above-mentioned features are accomplished mainly by utilizing two information-hiding techniques: digital audio watermarking and network steganography. These techniques are used to create covert channels that serve as transport channels for lightweight QoS measurement results. Furthermore, these metrics are aggregated in a reputation system that enables best-route-path selection in the P2P network. The reputation system also helps to mitigate (D)DoS attacks, maximize performance, and increase transmission efficiency in the network. Comment: Paper was presented at the 7th international conference IBIZA 2008: On Computer Science - Research And Applications, Poland, Kazimierz Dolny, 31.01-2.02 2008; 14 pages, 5 figures.
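    The reputation aggregation described above can be sketched as an exponentially weighted moving average over per-path QoS measurements, with the best-scoring path selected for routing. The update rule and smoothing factor are illustrative assumptions, not SecMon's actual formulas:

```python
def update_reputation(current: float, measurement: float, alpha: float = 0.3) -> float:
    """EWMA reputation update: blend the latest QoS measurement (in [0, 1])
    into the running score. alpha is an illustrative smoothing factor."""
    return (1 - alpha) * current + alpha * measurement

def best_path(reputations: dict) -> str:
    """Pick the path identifier with the highest reputation score."""
    return max(reputations, key=reputations.get)
```

Keeping only a single running score per path matches the low-bandwidth goal: each covert-channel measurement updates one number rather than a full history.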

    Perceptually Important Points-Based Data Aggregation Method for Wireless Sensor Networks

    The transmitting and receiving of data consume the most resources in Wireless Sensor Networks (WSNs). The energy supplied by the battery is the most important resource impacting a WSN's lifespan at the sensor node. Therefore, because sensor nodes run on limited batteries, energy saving is essential. Data aggregation can be defined as a procedure applied to eliminate redundant transmissions; it provides fused information to the base stations, which in turn improves energy effectiveness and increases the lifespan of energy-constrained WSNs. In this paper, a Perceptually Important Points Based Data Aggregation (PIP-DA) method for wireless sensor networks is suggested to reduce redundant data before sending them to the sink. By utilizing the Intel Berkeley Research Lab (IBRL) dataset, the efficiency of the proposed method was measured. The experimental findings illustrate the benefits of the proposed method, as it reduces the overhead at the sensor-node level by up to 1.25% in remaining data and reduces energy consumption by up to 93% compared to the prefix frequency filtering (PFF) and ATP protocols.
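    For background, a common Perceptually Important Points selection procedure iteratively keeps the point farthest (here, by vertical distance) from the line joining its neighbouring selected points, so a long series is summarized by a few salient readings. This is a simplified sketch of the general PIP idea, not the paper's exact PIP-DA procedure:

```python
def perceptually_important_points(series, k):
    """Select k perceptually important point indices from a 1-D series.
    Uses vertical distance to the line through neighbouring PIPs, one
    common PIP distance measure. Endpoints are always kept."""
    n = len(series)
    if k >= n:
        return list(range(n))
    pips = [0, n - 1]
    while len(pips) < k:
        pips.sort()
        best_idx, best_dist = None, -1.0
        for left, right in zip(pips, pips[1:]):
            for i in range(left + 1, right):
                # Linear interpolation between the two bounding PIPs.
                t = (i - left) / (right - left)
                interp = series[left] + t * (series[right] - series[left])
                d = abs(series[i] - interp)
                if d > best_dist:
                    best_dist, best_idx = d, i
        pips.append(best_idx)
    return sorted(pips)
```

A sensor node could transmit only the selected readings, which is the redundancy-elimination effect the abstract describes.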

    Distributed top-k aggregation queries at large

    Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g. in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments with three different real-life datasets, using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
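    For context, the pruning idea in the TPUT framework mentioned above can be sketched as follows: each node reports its local top-k scores, the coordinator forms partial sums per item, and the k-th highest partial sum yields a threshold below which a node's remaining items cannot affect the result. This is a simplified single-machine sketch of the first phase, not the full distributed protocol:

```python
def tput_threshold(node_lists, k):
    """Phase 1 of a TPUT-style distributed top-k query. node_lists is a
    list of per-node {item: score} dicts; returns the per-node pruning
    threshold tau1/m used to bound what each node must send in phase 2."""
    m = len(node_lists)
    partial = {}
    for node in node_lists:
        # Each node contributes only its local top-k (item, score) pairs.
        local_top = sorted(node.items(), key=lambda kv: -kv[1])[:k]
        for item, score in local_top:
            partial[item] = partial.get(item, 0.0) + score
    # tau1 = k-th highest partial sum; items scoring below tau1/m at
    # every node cannot reach the global top-k.
    tau1 = sorted(partial.values(), reverse=True)[k - 1]
    return tau1 / m
```

The threshold shrinks the second-round transfers, which is where the distributed cost savings come from.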

    Communication-Efficient Split Learning via Adaptive Feature-Wise Compression

    This paper proposes a novel communication-efficient split learning (SL) framework, named SplitFC, which reduces the communication overhead required for transmitting intermediate feature and gradient vectors during the SL training process. The key idea of SplitFC is to leverage the different dispersion degrees exhibited in the columns of these matrices. SplitFC incorporates two compression strategies: (i) adaptive feature-wise dropout and (ii) adaptive feature-wise quantization. In the first strategy, the intermediate feature vectors are dropped with adaptive dropout probabilities determined based on the standard deviation of these vectors. Then, by the chain rule, the intermediate gradient vectors associated with the dropped feature vectors are also dropped. In the second strategy, the non-dropped intermediate feature and gradient vectors are quantized using adaptive quantization levels determined based on the ranges of the vectors. To minimize the quantization error, the optimal quantization levels of this strategy are derived in a closed-form expression. Simulation results on the MNIST, CIFAR-10, and CelebA datasets demonstrate that SplitFC provides more than a 5.6% increase in classification accuracy compared to state-of-the-art SL frameworks, while requiring 320 times less communication overhead than the vanilla SL framework without compression.
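    The two strategies can be illustrated with a small sketch: feature columns are dropped with probabilities tied to their standard deviation, and the survivors are uniformly quantized per column over their own range. The rate and level formulas below are illustrative placeholders, not SplitFC's actual closed-form solutions:

```python
import numpy as np

def adaptive_dropout(features, base_rate=0.5, seed=0):
    """Drop feature columns with probability inversely related to their
    standard deviation (low-variance columns carry less information).
    The probability formula is an illustrative assumption."""
    stds = features.std(axis=0)
    probs = base_rate * (1 - stds / (stds.max() + 1e-12))
    rng = np.random.default_rng(seed)
    keep = rng.random(features.shape[1]) >= probs
    return features[:, keep], keep

def adaptive_quantize(x, n_levels):
    """Uniformly quantize each column to n_levels points spanning that
    column's own [min, max] range."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    scale = (hi - lo) / (n_levels - 1 + 1e-12)
    q = np.round((x - lo) / (scale + 1e-12))
    return q * scale + lo
```

The chain-rule step in the abstract means the gradient matrix reuses the same keep mask, so the server never sends gradients for columns the client discarded.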

    Performance evaluation of MPEG-4 video streaming over UMTS networks using an integrated tool environment

    Universal Mobile Telecommunications System (UMTS) is a third-generation mobile communications system that supports wireless wideband multimedia applications. This paper investigates the video quality attained in streaming MPEG-4 video over UMTS networks using an integrated tool environment, which comprises an MPEG-4 encoder/decoder, a network simulator, and video quality evaluation tools. The benefit of such an integrated tool environment is that it allows the evaluation of real video sources compressed using an MPEG-4 encoder. Simulation results show that the UMTS Radio Link Control (RLC) acknowledged mode outperforms the unacknowledged mode. The latter provides timely delivery but no error recovery. The acknowledged mode can deliver excellent perceived video quality for RLC block error rates of up to 30% by utilizing a playback buffer at the streaming client. Based on the analysis of the performance results, a self-adaptive RLC acknowledged-mode protocol is proposed.
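    The contrast between the two RLC modes can be shown with a toy retransmission model: acknowledged mode retries a lost block up to a retransmission budget, while unacknowledged mode corresponds to a budget of zero. The function and its parameters are hypothetical illustrations, not the RLC specification:

```python
def deliver_acknowledged(num_blocks, lost, max_retx=3):
    """Toy RLC acknowledged-mode model. lost(block, attempt) -> bool says
    whether that transmission attempt is lost. Returns a tuple of
    (blocks delivered, total transmissions). Setting max_retx=0 models
    unacknowledged mode: one shot per block, no error recovery."""
    delivered, transmissions = 0, 0
    for block in range(num_blocks):
        for attempt in range(max_retx + 1):
            transmissions += 1
            if not lost(block, attempt):
                delivered += 1
                break  # block acknowledged, move on
    return delivered, transmissions
```

Retransmissions add delay, which is why the abstract's playback buffer at the client matters: it absorbs the extra latency that error recovery introduces.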