15 research outputs found

    Servicing Delay Sensitive Pervasive Communication Through Adaptable Width Channelization for Supporting Mobile Edge Computing

    Over the last fifteen years, wireless local area networks (WLANs) have been populated by a large variety of pervasive devices hosting heterogeneous applications. Pervasive edge computing has encouraged more distributed network applications for these devices, eliminating the long round trip and helping to approach the dream of zero latency. However, these applications require significantly variable data rates to function effectively, especially in pervasive computing. The static bandwidth of frequency channelization in current WLANs strictly restricts the maximum data rate achievable by a network station. This static behavior spawns two major drawbacks: under-utilization of scarce spectrum resources and poor support for delay-sensitive applications such as voice and video. If computing is moved to the edge of the WLAN to reduce the frequency of communication, pervasive devices can be provided with better services during communication and networking. We therefore aim to distribute spectrum resources among pervasive edge nodes based on the delay sensitivity of their applications while simultaneously maintaining the fair channel-access semantics of the WLAN medium access control (MAC) layer, so that ultra-low latency, efficiency, and reliable use of spectrum resources can be assured. In this paper, two novel algorithms are proposed for adaptive channelization that offer a rational distribution of spectrum resources among pervasive edge nodes based on their bandwidth requirements and assorted ambient conditions. The proposed algorithms have been implemented on a real test bed of commercially available universal software radio peripheral (USRP) devices. Thorough investigations have been carried out to quantify the effect of dynamic bandwidth channelization on parameters such as medium utilization, achievable throughput, service delay, channel-access fairness, and bit error rate. The empirical results demonstrate that channels of adaptable bandwidth can enhance network-wide throughput by almost 30%.
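    As a rough illustration of the adaptive-channelization idea (a minimal sketch under assumed inputs, not the paper's algorithms), the snippet below splits a band among stations in proportion to their demanded rate and a delay-sensitivity weight, snapping each share to a hypothetical set of allowed channel widths:

```python
# Illustrative sketch only: proportional channel-width allocation driven by
# per-station demand and delay sensitivity. Widths and weights are assumptions.

ALLOWED_WIDTHS_MHZ = [5, 10, 20, 40]   # hypothetical set of adaptable channel widths

def allocate_widths(stations, total_mhz=80):
    """stations: list of (name, demanded_mbps, delay_weight) tuples."""
    # Weight each station by its demanded rate scaled by its delay sensitivity.
    weights = {name: mbps * w for name, mbps, w in stations}
    total_weight = sum(weights.values())
    allocation, remaining = {}, total_mhz
    # Serve the most demanding stations first so delay-sensitive flows get wider channels.
    for name, _, _ in sorted(stations, key=lambda s: weights[s[0]], reverse=True):
        share = total_mhz * weights[name] / total_weight
        # Snap the proportional share down to the widest allowed width that still fits;
        # every station gets at least the narrowest width.
        width = max((w for w in ALLOWED_WIDTHS_MHZ if w <= min(share, remaining)),
                    default=ALLOWED_WIDTHS_MHZ[0])
        allocation[name] = width
        remaining = max(remaining - width, 0)
    return allocation

if __name__ == "__main__":
    demo = [("voice", 2, 5.0), ("video", 25, 3.0), ("bulk", 50, 1.0)]
    print(allocate_widths(demo))   # -> {'video': 40, 'bulk': 20, 'voice': 5}
```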

    Graphs Resemblance based Software Birthmarks through Data Mining for Piracy Control

    The emergence of software artifacts greatly emphasizes the need to protect intellectual property rights (IPR), which are hampered by software piracy; effective measures for piracy control are therefore required. Software birthmarking aims to counter software ownership theft by identifying the similarity of program origins. A novel birthmarking approach is proposed in this paper, based on a hybrid of text-mining and graph-mining techniques. The code elements of a program and their relations with other elements are identified through their properties (i.e., code constructs) and transformed into Graph Manipulation Language (GML). The software birthmarks generated by exploiting graph-theoretic properties (through the clustering coefficient) are used to classify two programs as similar or dissimilar. The proposed technique has been evaluated over the metrics of credibility, resilience, method theft, modified-code detection, and self-copy detection, asserting the effectiveness of the proposed approach against software ownership theft. A comparative analysis with contemporary approaches shows better results, owing to the use of the properties and relations of program nodes and of dynamic graph-mining techniques, without adding any overhead (such as increased program size or processing cost).
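    The sketch below shows one way such a graph-based birthmark could be computed and compared (an assumed workflow using the networkx library; the toy graphs stand in for GML exports of real programs, and this is not the paper's implementation):

```python
# Minimal sketch (assumed workflow, not the paper's implementation): treat a program's
# element-relation graph as input and use the distribution of clustering coefficients
# as a birthmark; two programs are flagged as similar when the distributions overlap.
import networkx as nx

def birthmark(graph):
    """Sorted clustering-coefficient vector of a program graph (e.g. loaded via nx.read_gml)."""
    return sorted(nx.clustering(nx.Graph(graph)).values())

def similarity(bm_a, bm_b, tol=1e-3):
    """Crude overlap score in [0, 1] between two birthmark vectors."""
    matches = sum(1 for a, b in zip(bm_a, bm_b) if abs(a - b) < tol)
    return matches / max(len(bm_a), len(bm_b), 1)

# Toy graphs standing in for GML exports of an original program, a lightly modified
# copy, and a structurally different program (a chain with no triangles at all).
original = nx.erdos_renyi_graph(30, 0.2, seed=1)
copied = original.copy()
copied.add_edge(0, 29)
different = nx.path_graph(30)

# The modified copy is expected to score noticeably higher than the different program.
print(similarity(birthmark(original), birthmark(copied)))
print(similarity(birthmark(original), birthmark(different)))
```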

    Context aware ontology‐based hybrid intelligent framework for vehicle driver categorization

    In public vehicles, one of the major concerns is the driver's level of expertise, which is directly related to passenger safety. Hence, before a driver is assigned to a certain type of vehicle, they should be thoroughly evaluated and categorized with respect to several parameters rather than the one-time metric of holding a driving license. These aspects may include the driver's expertise, vigilance, aptitude, years of experience, cognition, driving style, formal education, terrain, region, minor violations, major accidents, and age group. The purpose of this categorization is to ascertain the suitability of a driver for certain vehicle type(s) so as to ensure passengers' safety. Currently, no driver categorization technique fully comprehends the implicit as well as explicit characteristics of drivers dynamically. In this paper, a machine learning-based dynamic and adaptive technique named D-CHAIT (driver categorization through hybrid of artificial intelligence techniques) is proposed for driver categorization, with an objective focus on driver attributes modeled in DriverOntology. A supervised mode of learning has been employed on a labeled dataset containing diverse driver profiles with attributes pertinent to demographics, behaviors, expertise, and inclinations. A comparative analysis of D-CHAIT with three other machine learning techniques (fuzzy logic, case-based reasoning, and artificial neural networks) is also presented. The efficacy of all techniques was empirically measured while categorizing drivers based on their profiles, using the metrics of accuracy, precision, recall, F-measure, and associated costs. These empirical quantifications assert that D-CHAIT is a better technique than contemporary ones. The novelty of the proposed technique lies in the preprocessing of feature attributes, the quality of data, the training of the machine learning model on more relevant data, and its adaptivity.
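    For illustration only, the following sketch trains a neural-network baseline, one of the compared techniques, on hypothetical driver-profile attributes and reports accuracy, precision, recall, and F-measure; D-CHAIT's ontology-driven preprocessing is not reproduced here:

```python
# Hedged sketch: supervised driver categorization on synthetic profile features,
# in the spirit of the ANN baseline compared in the paper. All features, labels,
# and the labeling rule below are made-up assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical features: [experience_years, minor_violations, major_accidents, vigilance_score]
X = rng.random((300, 4)) * [30, 10, 3, 1]
# Toy labels from a made-up rule: more experience and higher vigilance -> higher category.
y = (X[:, 0] > 10).astype(int) + (X[:, 3] > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
# Reports the same family of metrics (precision, recall, F-measure) used in the comparison.
print(classification_report(y_te, clf.predict(X_te)))
```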

    Ontology Evolution for Personalized and Adaptive Activity Recognition

    Ontology-based, knowledge-driven Activity Recognition (AR) models play a vital role in the realm of the Internet of Things (IoT). However, these models suffer from a static nature, an inability to self-evolve, and a lack of adaptivity. Moreover, AR models cannot be made comprehensive enough to cater for all activities, and smart-home inhabitants are not restricted to only those activities contained in the AR model, so AR models may fail to correctly recognize or infer new activities. In this paper, a framework is proposed for dynamically capturing new knowledge from activity patterns in order to evolve behavioural changes in an AR model (i.e., an ontology-based model). This ontology-based framework adapts by learning specialized and extended activities from existing user-performed activity patterns. Moreover, it can identify new activity patterns previously unknown to the AR model, adapt new properties in existing activity models, and enrich the ontology model by capturing the change representation. The proposed framework has been evaluated comprehensively over the metrics of accuracy, statistical heuristics, and the Kappa coefficient. A well-known dataset named DAMSH has been used to gain empirical insight into the effectiveness of the proposed framework, which shows a significant level of accuracy for AR models.
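    A minimal sketch of the evolution idea follows, under assumed data structures rather than the framework's ontology machinery: recurring sensor-event patterns that no known activity explains are promoted to candidate new activities.

```python
# Illustrative sketch (assumed, not the paper's framework): detect sensor-event
# patterns that no known activity explains and stage them as candidate new activities
# for model evolution.
from collections import Counter

known_activities = {               # hypothetical activity model: activity -> required events
    "MakeTea": {"kettle_on", "cup_taken", "tea_bag"},
    "WatchTV": {"tv_on", "sofa_pressure"},
}

def infer(events):
    """Return the best-matching known activity, or None if nothing explains the pattern."""
    events = set(events)
    scored = [(len(events & sig) / len(sig), name) for name, sig in known_activities.items()]
    score, name = max(scored)
    return name if score >= 0.8 else None

def evolve(observed_patterns, min_support=3):
    """Promote frequently repeated unexplained patterns to new (candidate) activities."""
    unexplained = Counter(frozenset(p) for p in observed_patterns if infer(p) is None)
    for pattern, support in unexplained.items():
        if support >= min_support:
            known_activities[f"NewActivity_{len(known_activities)}"] = set(pattern)

logs = [["kettle_on", "cup_taken", "tea_bag"]] + [["door_open", "jacket_rfid"]] * 3
evolve(logs)
print(known_activities.keys())     # the repeated unknown pattern becomes a new activity
```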

    Shared Hybrid ARQ with Incremental Redundancy (SHARQ-IR) in Overloaded MIMO Systems to support Energy-Efficient Transmissions

    Multiple Input and Multiple Output (MIMO) is a technology through which data is transmitted over the channel using multiple antennas. However, during its deployment and implementation, pragmatic issues arise, such as interference, multipath fading, and noise, which lead to potential packet losses and consume substantial energy. To address such issues, hybrid ARQ transmissions provide an effective means of error correction, especially over a noisy wireless channel. More often than not, only a few bits of a packet are found to be in error, and it is unnecessary to use the entire MIMO channel for the retransmission that corrects the remaining errors. A novel approach is therefore proposed in this paper, Shared Hybrid ARQ with Incremental Redundancy (SHARQ-IR), which uses a piggyback technique in overloaded MIMO systems, where the transmitting antennas (Nt) outnumber the receiving antennas (Nr), and applies a simple retransmission method to transform an overloaded MIMO system into a critically loaded (Nt = Nr) or under-loaded (Nt < Nr) one. Simulation results show that the proposed scheme outperforms contemporary approaches through reduced BER, and a 20% throughput gain is observed in the simulation analyses, which ultimately supports energy-efficient transmissions for green IoT applications.
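    The scheduling-level sketch below conveys the shared-retransmission idea in simplified form (an assumed formulation, not the SHARQ-IR protocol itself): only sub-packets that failed their CRC are retransmitted, and the antennas freed up can piggyback new data so the retransmission round is no longer overloaded.

```python
# Illustrative scheduling sketch (an assumption, not the SHARQ-IR protocol): retransmit
# only the erroneous sub-packets and piggyback fresh data on the remaining antennas,
# so the effective number of streams in the retransmission round stays within Nr.
def schedule_retransmission(crc_ok, nr):
    """crc_ok: per-sub-packet CRC results from the first round; nr: receive antennas."""
    failed = [i for i, ok in enumerate(crc_ok) if not ok]
    # Antennas left over after serving retransmissions can carry new data, but never
    # more streams than the receiver can separate (any excess waits for the next round).
    spare = max(nr - len(failed), 0)
    return {"retransmit": failed[:nr], "piggyback_new_streams": spare}

# Example: 6 transmit antennas, 4 receive antennas (overloaded), two sub-packets failed.
print(schedule_retransmission([True, False, True, True, False, True], nr=4))
# -> {'retransmit': [1, 4], 'piggyback_new_streams': 2}
```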

    Reliable Data Analysis through Blockchain based Crowdsourcing in Mobile Ad-hoc Cloud

    A Mobile Ad-hoc Cloud (MAC) is a constellation of nearby mobile devices that serves the heavy computational needs of resource-constrained edge devices. One of the major challenges of a MAC is convincing mobile devices to offer their limited resources to the shared computational pool. A credit-based reward system is considered an effective way of incentivizing arbitrary mobile devices to join the MAC network and earn credits through computational crowdsourcing. The next challenge is obtaining reliable computation, since the incentives also attract malicious devices that submit fake computational results to claim rewards; we therefore use a blockchain-based reputation system to identify the malicious participants of the MAC. This paper presents a malicious-node identification algorithm integrated within an Iroha-based permissioned blockchain. Iroha is a Hyperledger project focused on mobile devices and is therefore lightweight in nature. It is used to keep track of the reward and reputation system driven by the malicious-node detection algorithm. Experiments were conducted to evaluate the implemented test bed, and the results show the effectiveness of the algorithm in identifying malicious devices and enabling reliable data analysis through blockchain-based computational crowdsourcing in a MAC.
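    As a hedged illustration of the reputation mechanism (the paper's detection algorithm and its Iroha integration are not reproduced here), the sketch below flags devices whose submitted result disagrees with the majority for a task and lowers their reputation:

```python
# Illustrative sketch: flag workers whose crowdsourced result disagrees with the majority
# for the same task and adjust their reputation. In the described system these updates
# would be recorded on the permissioned blockchain; here a plain dict stands in for it.
from collections import Counter, defaultdict

reputation = defaultdict(lambda: 1.0)   # hypothetical starting reputation per device

def verify_task(submissions, penalty=0.2, reward=0.05):
    """submissions: mapping device_id -> result for one crowdsourced task."""
    majority, _ = Counter(submissions.values()).most_common(1)[0]
    malicious = []
    for device, result in submissions.items():
        if result == majority:
            reputation[device] = min(reputation[device] + reward, 1.0)
        else:
            reputation[device] = max(reputation[device] - penalty, 0.0)
            malicious.append(device)
    return majority, malicious

result, flagged = verify_task({"dev_a": 42, "dev_b": 42, "dev_c": 41})
print(result, flagged, dict(reputation))   # -> 42 ['dev_c'] and updated reputations
```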

    Towards Reliable Computation Offloading in Mobile Ad-Hoc Clouds Using Blockchain

    Mobile Ad-hoc Cloud (MAC) refers to offloading the computation of a mobile device onto multiple co-located mobile devices. However, it is difficult to convince randomly participating mobile devices to offer their resources for performing the computation offloading of other mobile devices. These devices can be convinced to share resources by limiting the computation a device may offload to roughly the amount of computation that the same device has already performed for other mobile devices. This, however, cannot be achieved without establishing trust among the randomly co-located mobile devices. Blockchain has already proven effective for establishing trust between multiple independent stakeholders. However, to the best of our knowledge, no one has used blockchain for reliable computation offloading among the independently operating co-located mobile devices of a MAC. In this position paper, we propose a mapping of blockchain concepts for the realization of reliable computation offloading in a MAC. We also identify future research directions for improving the proposed integration of blockchain and MAC.
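    A minimal sketch of the reciprocity rule described above is shown below; it is an assumed formulation, and in the proposed mapping the balances would be recorded on the blockchain rather than in an in-memory object:

```python
# Sketch of the reciprocity rule: a device may offload roughly as much computation as it
# has already performed for others. The ledger class below is a stand-in for on-chain state.
class OffloadLedger:
    def __init__(self, tolerance=1.0):
        self.contributed = {}     # CPU-seconds a device has executed for others
        self.consumed = {}        # CPU-seconds of its own work offloaded to others
        self.tolerance = tolerance

    def record_contribution(self, device, cpu_seconds):
        self.contributed[device] = self.contributed.get(device, 0.0) + cpu_seconds

    def can_offload(self, device, cpu_seconds):
        """Allow offloading only while consumption stays near the device's contribution."""
        balance = self.contributed.get(device, 0.0) - self.consumed.get(device, 0.0)
        return cpu_seconds <= balance + self.tolerance

    def record_offload(self, device, cpu_seconds):
        self.consumed[device] = self.consumed.get(device, 0.0) + cpu_seconds

ledger = OffloadLedger()
ledger.record_contribution("phone_a", 5.0)
print(ledger.can_offload("phone_a", 4.0), ledger.can_offload("phone_b", 4.0))  # True False
```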

    EkmEx - An Extended Framework for Labeling an Unlabeled Fault Dataset

    Software fault prediction (SFP) is a quality-assurance process that identifies whether certain modules are fault-prone (FP) or not fault-prone (NFP), thereby minimizing the testing effort incurred in terms of cost and time. Supervised machine learning techniques have the capacity to spot FP modules. However, such techniques require fault information from previous versions of the software product. This information, accumulated over the life cycle of the software, may be neither readily available nor reliable. Currently, clustering combined with experts' opinions is a prudent choice for labeling modules that lack any fault information. However, this technique may not fully address important aspects such as the selection of experts, conflicts in expert opinions, and catering for the diverse expertise of domain experts. In this paper, we propose a comprehensive framework named EkmEx that extends conventional fault prediction approaches while providing a mathematical foundation for aspects not addressed so far. EkmEx guides the selection of experts, furnishes an objective solution for resolving verdict conflicts, and manages the problem of diversity in the expertise of domain experts. We performed expert-assisted module labeling through EkmEx and conventional clustering on seven public NASA datasets. The empirical outcomes exhibit the significant potential of the proposed framework in identifying FP modules across all seven datasets.
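    The sketch below gives a rough picture of a cluster-then-label workflow with expertise-weighted resolution of conflicting verdicts (hypothetical weights, votes, and data; EkmEx's mathematical formulation is not reproduced here):

```python
# Hedged sketch: cluster unlabeled modules by their metrics, then resolve conflicting
# expert verdicts per cluster with an expertise-weighted vote to label clusters FP or NFP.
import numpy as np
from sklearn.cluster import KMeans

def label_modules(metrics, expert_votes, expert_weights, n_clusters=2):
    """metrics: (n_modules, n_features); expert_votes[e][c] in {'FP', 'NFP'} for cluster c."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(metrics)
    labels = {}
    for c in range(n_clusters):
        # Expertise-weighted vote resolves disagreement between experts for this cluster.
        score = sum(w if votes[c] == "FP" else -w
                    for votes, w in zip(expert_votes, expert_weights))
        labels[c] = "FP" if score > 0 else "NFP"
    return [labels[c] for c in clusters]

# Synthetic module metrics forming two well-separated groups, plus made-up expert verdicts.
X = np.vstack([np.random.default_rng(1).normal(0, 1, (20, 3)),
               np.random.default_rng(2).normal(5, 1, (20, 3))])
votes = [{0: "NFP", 1: "FP"}, {0: "FP", 1: "FP"}, {0: "NFP", 1: "NFP"}]
print(label_modules(X, votes, expert_weights=[0.5, 0.3, 0.2])[:5])
```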

    DocsChain: Blockchain based IoT Solution for Verification of Degree Documents

    Degree verification is the process of verifying the academic credentials of successfully graduated students, and universities annually spend millions of dollars handling it. Hence, there is a dire need to minimize the cost of degree verification, and the Massachusetts Institute of Technology has introduced Blockcerts, a blockchain-based solution for handling degree verification requests free of charge. Although Blockcerts eliminates the cost of degree verification, it also alters the existing workflow of degree issuance and verification. Blockcerts is primarily focused on facilitating students, and there is room for improvement from the perspective of educational institutes. The contribution of this paper is DocsChain, a solution focused on facilitating both students and educational institutes while maintaining the existing social workflow of degree issuance and verification. In contrast to Blockcerts, DocsChain allows educational institutes to submit multiple degree documents in bulk. For verification, it operates over photocopies of the degree documents and collects digital information from these copies using OCR (Optical Character Recognition)-enabled IoT/WoT cameras. The extracted digital information is then passed to a one-way hashing algorithm to obtain the equivalent hash, which is then used to search DocsChain using the concept of Proof of Existence (PoE).
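    A minimal sketch of this verification path is shown below, assuming the OCR text has already been extracted by the camera side and standing in for the DocsChain ledger with a plain set of previously anchored hashes:

```python
# Sketch of the hash-then-lookup flow described above. The set of issued hashes is an
# in-memory stand-in for the hashes anchored on the blockchain at degree-issuance time.
import hashlib

def document_hash(extracted_text):
    """Normalize OCR output and return a one-way hash used as the PoE lookup key."""
    normalized = " ".join(extracted_text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical degree record anchored at issuance time.
issued_hashes = {document_hash("John Doe BSc Computer Science 2018 University X")}

def verify(ocr_text):
    """Proof-of-Existence style check: the degree is genuine only if its hash was anchored."""
    return document_hash(ocr_text) in issued_hashes

print(verify("John  Doe  BSc Computer Science 2018 University X"))   # True
print(verify("Jane Doe BSc Computer Science 2018 University X"))     # False
```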

    Securing IoT based Maritime Transportation System through Entropy-based Dual-Stack Machine Learning Framework

    The Internet of Things (IoT) is envisaged to widely capture the realm of logistics and transportation services in the future. The applications of ubiquitous IoT have been extended to Maritime Transportation Systems (MTS), which has spawned increasing security threats, posing serious fiscal concerns to the stakeholders involved. Among these threats, the Distributed Denial of Service (DDoS) attack is ranked very high and can wreak havoc on the IoT artefacts of an MTS network. Timely and effective detection of such attacks is imperative for the necessary mitigation. Conventional approaches exploit the entropy of attributes in network traffic to detect DDoS attacks. However, the majority of these approaches are static in nature and evaluate only a few network traffic parameters, limiting detection to a few attack types and intensities. In the current research, a novel framework named "Dual Stack Machine Learning (S2ML)" is proposed to calculate distinct entropy-based 10-Tuple (T) features from network traffic features, three window sizes, and the associated Rate of Exponent Separation (RES). These features are exploited to develop an intelligent model over MTS-IoT datasets that successfully detects multiple types of DDoS attacks in MTS. S2ML is an efficient framework that overcomes the shortcomings of prevalent DDoS detection approaches, as evident from the comparison with Multi-layer Perceptron (MLP), Alternating Decision Tree (ADT), and Simple Logistic Regression (SLR) over different evaluation metrics (confusion matrices, ROC curves). The proposed S2ML technique outperforms the prevalent ones, with 1.5% better results than the asserted approaches on the distribution of normal/attack traffic. We look forward to enhancing the model's performance through dynamic windowing, measuring packet drop rates, and the infrastructure of Software-Defined Networks (SDNs).
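    The following sketch illustrates the kind of windowed entropy feature such approaches compute, using a single header attribute; the exact 10-Tuple features, window sizes, and RES computation of S2ML are not reproduced here:

```python
# Illustrative feature-extraction sketch (assumed, not the S2ML framework): Shannon entropy
# of a packet-header attribute (here, source IP) over fixed-size windows. A sharp entropy
# collapse toward a single source is a typical DDoS signature fed to a downstream classifier.
import math
from collections import Counter

def shannon_entropy(values):
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def window_entropies(src_ips, window=100):
    """Entropy of source IPs per non-overlapping window of `window` packets."""
    return [shannon_entropy(src_ips[i:i + window])
            for i in range(0, len(src_ips) - window + 1, window)]

# Synthetic example: diverse benign traffic followed by a single-source flood.
benign = [f"10.0.0.{i % 50}" for i in range(300)]
attack = ["203.0.113.7"] * 300
print([round(h, 2) for h in window_entropies(benign + attack)])
# Entropy drops to 0 in the flood windows; these per-window values become model features.
```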