
    Compression-based Data Reduction Technique for IoT Sensor Networks

    Energy saving is very important in IoT sensor networks because IoT sensor nodes operate on their limited batteries. Data transmission is very costly in IoT sensor nodes and wastes most of the energy, while the energy consumption for data processing is considerably lower. There are several energy-saving strategies and principles, mostly dedicated to reducing data transmission. Therefore, by minimizing data transfers in IoT sensor networks, a considerable amount of energy can be conserved. In this research, a Compression-Based Data Reduction (CBDR) technique is proposed that works at the level of IoT sensor nodes. CBDR includes two stages of compression: a lossy SAX quantization stage, which reduces the dynamic range of the sensor data readings, followed by a lossless LZW stage, which compresses the quantized output. Quantizing the sensor node data readings down to the SAX alphabet size lowers the number of distinct readings, which plays to the strengths of LZW and yields greater compression in the LZW stage. A further improvement to CBDR, Dynamic Transmission CBDR (DT-CBDR), is also proposed to decrease both the total amount of data sent to the gateway and the processing required. The OMNeT++ simulator, together with real sensory data gathered at Intel Lab, is used to evaluate the proposed technique. The simulation experiments illustrate that the proposed CBDR technique provides better performance than the other techniques in the literature.
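The two-stage pipeline described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the equal-width binning, the four-letter alphabet, and the sample readings are assumptions (SAX proper uses Gaussian breakpoints over a normalized series).

```python
def quantize_sax(readings, alphabet="abcd"):
    """Lossy stage: map each reading into one of len(alphabet) bins."""
    lo, hi = min(readings), max(readings)
    width = (hi - lo) / len(alphabet) or 1.0  # guard against a flat series
    symbols = []
    for r in readings:
        idx = min(int((r - lo) / width), len(alphabet) - 1)
        symbols.append(alphabet[idx])
    return "".join(symbols)

def lzw_compress(text):
    """Lossless stage: classic LZW over the quantized symbol string."""
    dictionary = {ch: i for i, ch in enumerate(sorted(set(text)))}
    w, codes = "", []
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = len(dictionary)  # learn the new phrase
            w = ch
    if w:
        codes.append(dictionary[w])
    return codes

# Invented sample readings (e.g. temperatures): 8 values shrink to 6 codes.
readings = [19.8, 20.1, 20.0, 23.5, 23.6, 23.4, 27.9, 28.0]
symbols = quantize_sax(readings)   # "aaabbbdd"
codes = lzw_compress(symbols)      # [0, 3, 1, 5, 2, 2]
```

The small alphabet produces long runs of repeated symbols, which is exactly the kind of input on which LZW's phrase dictionary pays off.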

    The Uniformization Process of the Fast Congestion Notification (FN)

    Fast Congestion Notification (FN) is a proactive queue management mechanism that practices congestion avoidance, helping to avert the onset of congestion by marking or dropping packets before the router's queue gets full, and that exercises congestion control, when congestion avoidance fails, by increasing the rate of packet marking or dropping. Technically, FN avoids queue overflow by keeping the instantaneous queue size below the optimal queue size, and controls congestion by keeping the average arrival rate close to the outgoing link capacity. Upon arrival of each packet, FN uses the instantaneous queue size and the average arrival rate to calculate the packet marking or dropping probability. FN marks or drops packets at fairly regular intervals to avoid long intermarking intervals and clustered packet marks or drops. Too many marked or dropped packets close together can cause global synchronization, while too long an intermarking time between marked or dropped packets can cause large queue sizes and congestion. This paper shows how FN controls the queue size, avoids congestion, and reduces global synchronization by uniformizing marked or dropped packet intervals.
    Comment: 5 pages, IEEE format, International Journal of Computer Science and Information Security, IJCSIS 2009, ISSN 1947-5500, Impact Factor 0.423, http://sites.google.com/site/ijcsis
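The per-packet decision described above can be illustrated as follows. The abstract does not give FN's exact formulas, so the base probability below is an assumed blend of the two signals it names (instantaneous queue size versus optimal size, average arrival rate versus link capacity), and the interval uniformization follows the RED-style count mechanism; treat all of it as a sketch, not FN's actual algorithm.

```python
def base_probability(queue_len, optimal_queue, arrival_rate, link_capacity):
    """Assumed base mark/drop probability: grows as the instantaneous queue
    exceeds its optimal size and as average arrivals exceed link capacity.
    optimal_queue and link_capacity are assumed positive."""
    q_excess = max(queue_len - optimal_queue, 0) / optimal_queue
    r_excess = max(arrival_rate - link_capacity, 0) / link_capacity
    return min(q_excess + r_excess, 1.0)

def uniformized_probability(p, count):
    """Spread marks at near-regular intervals; count is the number of
    packets accepted since the last mark (RED-style uniformization)."""
    denom = 1.0 - count * p
    return 1.0 if denom <= 0 else min(p / denom, 1.0)
```

Raising the effective probability as `count` grows makes a mark increasingly likely the longer one is overdue, which shortens the tail of intermarking intervals and breaks up the clusters that cause global synchronization.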

    Experimental Investigation of Self-compacting High Performance Concrete Containing Calcined Kaolin Clay and Nano Lime

    The aim of this research is to investigate the effect of pozzolanic materials and nano particles on improving the strength characteristics of a self-compacting high-performance concrete (SCHPC) that includes calcined clay (CC) with nano lime (NL). Two blend systems are studied: binary and ternary. For the binary mixtures, test samples were prepared with 5% CC, 10% CC, 15% CC, and 3% NL by partial replacement of the cement weight. For the ternary mixtures, samples were prepared with 5% CC + 3% NL, 10% CC + 3% NL, and 15% CC + 3% NL by partial substitution of the cement weight. Fresh-state tests were conducted on the mixes, including slump flow diameter, V-funnel, L-box, and segregation resistance. Compressive strength was determined at 7, 28, and 56 days, and splitting tensile strength at 7 and 28 days, for the SCHPC produced in the study. It was concluded that replacing cement with CC and NL in the binary SCHPC mixes kept the fresh-state results adequate for SCHPC production and gave a general improvement in the compressive and splitting tensile strength properties of the SCHPC mixture. SCHPC with 10% CC partial replacement of cement showed higher compressive and splitting tensile strengths than the reference SCHPC mixture at all ages, and was thus considered the best. Moreover, the strength of the ternary cement mixtures was better than that of the binary and control systems at the same replacement levels at 7, 28, and 56 days.

    Adaptive Backtracking Search Strategy to Find Optimal Path for Artificial Intelligence Purposes

    There are numerous Artificial Intelligence (AI) search strategies used for finding a solution path to a specific problem, but many of them produce a single solution path without regard to whether it is optimal. The aim of our work is to achieve optimality by finding a direct path from the start node to the goal node such that it is the shortest path with minimum cost. In this paper, an adaptive backtracking algorithm is presented to find the optimal solution path: all paths in the tree graph of the search problem that may contain the optimal solution are tested, and a heuristic function related to the actual cost of moving from one node to another is used to reduce the search computation time. The adaptive algorithm ignores any path that is not useful in finding the optimal solution path. Our adaptive algorithm was implemented using Visual Prolog 5.1, evaluated on a tree diagram, and produced good results in finding the optimal solution path, with an efficient search time of O(b^(d/2)) and space complexity O(bd). Keywords: Backtracking Algorithm, Optimal Solution Path, Heuristic Function, Dead End, Shortest Path, Minimum Cost
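The pruning idea can be sketched as follows. The paper's implementation is in Visual Prolog 5.1; this Python version, with a hypothetical graph and heuristic, only illustrates the branch-and-bound style of abandoning any partial path that cannot beat the best complete path found so far (safe when the heuristic never overestimates the remaining cost).

```python
def backtrack_optimal(graph, heuristic, start, goal):
    """Backtracking search over all branches, pruning hopeless ones."""
    best = {"cost": float("inf"), "path": None}

    def explore(node, path, cost):
        # Prune: with an admissible heuristic, this branch cannot win.
        if cost + heuristic.get(node, 0) >= best["cost"]:
            return
        if node == goal:
            best["cost"], best["path"] = cost, path
            return
        for nxt, edge_cost in graph.get(node, []):
            if nxt not in path:  # avoid revisiting nodes (dead ends, cycles)
                explore(nxt, path + [nxt], cost + edge_cost)

    explore(start, [start], 0)
    return best["path"], best["cost"]

# Hypothetical search tree: node -> list of (child, edge cost).
graph = {
    "S": [("A", 2), ("B", 5)],
    "A": [("G", 6), ("C", 1)],
    "B": [("G", 1)],
    "C": [("G", 2)],
}
heuristic = {"S": 4, "A": 3, "B": 1, "C": 2, "G": 0}
path, cost = backtrack_optimal(graph, heuristic, "S", "G")  # (['S','A','C','G'], 5)
```

Once the S-A-C-G path of cost 5 is found, the branch through B is cut at entry because its cost-so-far plus heuristic (5 + 1) already equals the best cost, so the search never expands it.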

    Mobile ad hoc networks under wormhole attack: A simulation study

    Security has become the main concern in granting protected communication between mobile nodes in an unfriendly environment. A wireless ad hoc network might be unprotected against attacks by malicious nodes. This paper evaluates the impact of some adversary attacks on a mobile ad hoc network (MANET) system, tested using the QualNet simulator. Moreover, it investigates active and passive attacks on MANETs. At the same time, it measures the performance of a MANET with and without these attacks. The simulation is done on the data link layer and network layer of mobile nodes in a wireless ad hoc network. The results of this evaluation are very important for estimating the deployment of MANET nodes for security. Furthermore, this study analyzes the performance of MANETs and performs "what-if" analyses to optimize them.

    Perceptually Important Points-Based Data Aggregation Method for Wireless Sensor Networks

    The transmitting and receiving of data consume most of the resources in Wireless Sensor Networks (WSNs). The energy supplied by the battery is the most important resource impacting a WSN's lifespan at the sensor node. Therefore, because sensor nodes run on their limited batteries, energy saving is necessary. Data aggregation can be defined as a procedure applied to eliminate redundant transmissions; it provides fused information to the base stations, which in turn improves energy effectiveness and increases the lifespan of energy-constrained WSNs. In this paper, a Perceptually Important Points Based Data Aggregation (PIP-DA) method for wireless sensor networks is suggested to reduce redundant data before sending them to the sink. The efficiency of the proposed method was measured using the Intel Berkeley Research Lab (IBRL) dataset.
The experimental findings illustrate the benefits of the proposed method, as it reduces the overhead at the sensor node level to as little as 1.25% in remaining data and reduces energy consumption by up to 93% compared to the prefix frequency filtering (PFF) and ATP protocols.
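The PIP selection underlying PIP-DA can be sketched as follows. This is an assumed textbook formulation of Perceptually Important Points (the vertical-distance variant), not the paper's code, and the sample readings are invented: a window of readings is reduced to the k points that best preserve its shape, and only those are transmitted.

```python
def vertical_distance(series, i, left, right):
    """Distance from point i to the straight line through points left, right."""
    y_line = series[left] + (series[right] - series[left]) * (i - left) / (right - left)
    return abs(series[i] - y_line)

def find_pips(series, k):
    """Reduce a window of readings to the indices of its k most
    perceptually important points (endpoints are always kept)."""
    pips = [0, len(series) - 1]
    while len(pips) < k:
        best_i, best_d = None, -1.0
        # For each gap between adjacent PIPs, find the most deviant point.
        for left, right in zip(pips, pips[1:]):
            for i in range(left + 1, right):
                d = vertical_distance(series, i, left, right)
                if d > best_d:
                    best_i, best_d = i, d
        if best_i is None:  # fewer candidates than k
            break
        pips.append(best_i)
        pips.sort()
    return pips  # indices of the readings worth transmitting

# Invented window of 8 readings with two spikes; keep 4 points of 8.
series = [1.0, 1.1, 5.0, 1.2, 1.3, 1.2, 4.8, 1.1]
kept = find_pips(series, 4)
```

The points dropped between consecutive PIPs are those the sink could roughly reconstruct by linear interpolation, which is what makes the transmitted subset a good proxy for the full window.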