52 research outputs found

    An efficient scalable scheduling MAC protocol for underwater sensor networks

    Underwater Sensor Networks (UWSNs) utilise acoustic waves, which suffer comparatively lower loss and achieve longer range than electromagnetic waves. However, energy remains a challenging issue, in addition to long latency, high bit error rates, and limited bandwidth. Thus, collisions and retransmissions should be handled efficiently at the Medium Access Control (MAC) layer in order to reduce the energy cost and to improve throughput and fairness across the network. In this paper, we propose a new reservation-based distributed MAC protocol called ED-MAC, which employs a duty-cycle mechanism to address spatial-temporal uncertainty and the hidden-node problem, thereby effectively avoiding collisions and retransmissions. ED-MAC is a conflict-free protocol in which each sensor schedules itself independently using local information. Hence, ED-MAC can guarantee conflict-free transmission and reception of data packets. Compared with other conflict-free MAC protocols, ED-MAC is distributed and more reliable, i.e., it schedules according to the priority of sensor nodes, which is based on their depth in the network. We then evaluate design choices and protocol performance through extensive simulation to study load effects and network scalability in each protocol. The results show that ED-MAC outperforms contention-based MAC protocols and achieves a significant improvement in terms of successful delivery ratio, throughput, energy consumption, and fairness under varying offered traffic and numbers of nodes.
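    The abstract does not spell out the scheduling rule, but the depth-based priority idea can be illustrated. Below is a minimal Python sketch, not the published ED-MAC algorithm: each node derives a duty-cycle slot from purely local information (its depth and id). The mapping direction, slot count, and tie-breaking are assumptions for illustration only.

```python
def assign_slot(node_id: int, depth_m: float, max_depth_m: float,
                slots_per_frame: int) -> int:
    """Map a node's depth to a transmission slot using only local info.

    Deeper nodes map to later slots in this toy version; the node id
    breaks ties so the mapping stays deterministic. Real conflict-freedom
    would additionally require coordination within each neighbourhood.
    """
    priority = depth_m / max_depth_m                 # 0.0 (surface) .. 1.0 (deepest)
    base = int(priority * (slots_per_frame - 1))     # depth-derived slot
    return (base + node_id) % slots_per_frame        # deterministic tie-breaking

# Example: three nodes at different depths, 16 slots per duty-cycle frame.
for nid, depth in [(1, 10.0), (2, 250.0), (3, 480.0)]:
    print("node", nid, "-> slot", assign_slot(nid, depth, 500.0, 16))
```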

    Applications of ontology in the Internet of Things: a systematic analysis

    Ontology has been increasingly implemented to facilitate Internet of Things (IoT) activities, such as tracking and information discovery, storage, information exchange, and object addressing. However, a complete understanding of using ontology in the IoT mechanism remains lacking. The main goal of this research is to recognize the use of ontology in the IoT process and investigate the services of ontology in IoT activities. A systematic literature review (SLR) is conducted using predefined protocols to analyze the literature on the usage of ontologies in IoT. The following conclusions are obtained from the SLR. (1) The primary studies (115 selected articles) have addressed the need to use ontologies in IoT for industry and academia, especially to address the interoperability and integration of IoT devices. (2) About 31.30% of the extant literature discussed ontology development concerning the IoT interoperability issue, while IoT privacy and integration issues are only partially discussed. (3) IoT styles of modeling ontologies are diverse, with 35.65% of the studies adopting the OWL style. (4) Thirty-two articles (27.83% of the studies) reused IoT ontologies to handle diverse IoT methodologies. (5) A total of 45 IoT ontologies are well acknowledged, but none has been widely utilized by the IoT community. An in-depth analysis of different IoT ontologies suggests that the existing ontologies are beneficial for designing new IoT ontologies or achieving the three main requirements of the IoT field: interoperability, integration, and privacy. This SLR concludes by identifying numerous validity threats and future directions.
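    As a concrete illustration of the ontology reuse the survey discusses, the following Python sketch uses rdflib together with the W3C SOSA vocabulary (one of the widely cited IoT ontologies) to describe a sensor; the ex: namespace and instance names are hypothetical.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

SOSA = Namespace("http://www.w3.org/ns/sosa/")      # real W3C vocabulary
EX = Namespace("http://example.org/iot#")           # hypothetical device namespace

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

# Declare a temperature sensor and the observable property it measures.
g.add((EX.tempSensor1, RDF.type, SOSA.Sensor))
g.add((EX.roomTemperature, RDF.type, SOSA.ObservableProperty))
g.add((EX.tempSensor1, SOSA.observes, EX.roomTemperature))
g.add((EX.tempSensor1, RDFS.label, Literal("Room temperature sensor")))

print(g.serialize(format="turtle"))
```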

    Longitudinal performance analysis of machine learning based Android malware detectors

    This paper presents a longitudinal study of the performance of machine learning classifiers for Android malware detection. The study is undertaken using features extracted from Android applications first seen between 2012 and 2016. The aim is to investigate the extent of performance decay over time for various machine learning classifiers trained with static features extracted from date-labelled benign and malware application sets. Using date-labelled apps allows for true mimicking of zero-day testing, thus providing a more realistic view of performance than conventional evaluation methods that do not take the date of appearance into account. In this study, all the investigated machine learning classifiers showed progressively diminishing performance when tested on sets of samples from a later time period. Overall, it was found that the false positive rate (misclassifying benign samples as malicious) increased more substantially than the true positive rate (correct classification of malicious apps) fell when older models were tested on newer app samples.
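    The evaluation idea, training on apps from an early period and testing on later ones, can be sketched as follows. The data here is synthetic with an injected distribution drift; it mimics only the shape of the experiment, not the paper's actual features or classifiers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

def synthetic_year(n, drift):
    """Fake static-feature vectors whose distribution drifts over time."""
    X = rng.normal(loc=drift, scale=1.0, size=(n, 20))
    y = rng.integers(0, 2, size=n)    # 0 = benign, 1 = malware
    X[y == 1] += 0.8                  # malware offset, fixed across years
    return X, y

# Train on the earliest cohort ("2012"), then test on later cohorts.
X_train, y_train = synthetic_year(2000, drift=0.0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

for year, drift in [(2013, 0.2), (2014, 0.4), (2015, 0.6), (2016, 0.8)]:
    X_test, y_test = synthetic_year(1000, drift)
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    print(year, "TPR=%.2f" % (tp / (tp + fn)), "FPR=%.2f" % (fp / (fp + tn)))
```

    As the benign distribution drifts toward what the old model learned as "malware", the false positive rate climbs, which is the qualitative effect the paper reports.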

    Improved capacity and fairness of massive machine type communications in millimetre wave 5G network

    In the Fifth Generation (5G) wireless standard, the Internet of Things (IoT) will interconnect billions of Machine Type Communications (MTC) devices. Fixed and mobile wearable devices and sensors are expected to contribute the majority of IoT traffic. MTC device mobility has been considered at three speeds, namely zero (fixed), medium (30 km/h), and high (100 km/h). Different values of device mobility are used to simulate its impact on MTC traffic. This work demonstrates the gain of using distributed antennas for MTC traffic in terms of spectral efficiency and fairness among MTC devices, which affects the number of devices that can be successfully connected. The combined use of Distributed Base Stations (DBS) with Remote Radio Units (RRU) and the adoption of the millimetre wave band, particularly the 26 GHz range, have been considered the key enabling technologies for addressing MTC traffic growth. An algorithm has been devised to schedule this type of traffic and to determine whether MTC devices completed their traffic upload or failed to do so within the required margin. The gains of the new architecture are demonstrated in terms of spectral efficiency, data throughput, and the fairness index.
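    The abstract does not name its fairness metric; Jain's index is the usual choice and is sketched below together with a toy round-robin upload scheduler. Demands, slot capacity, and the scheduling discipline are illustrative assumptions, not the paper's algorithm.

```python
def jains_index(throughputs):
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = one device gets all."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(t * t for t in throughputs)) if total else 0.0

def round_robin_upload(demands_bits, slot_capacity_bits, num_slots):
    """Serve devices cyclically; return bits actually delivered per device."""
    served = [0.0] * len(demands_bits)
    i = 0
    for _ in range(num_slots):
        remaining = demands_bits[i] - served[i]
        served[i] += min(slot_capacity_bits, remaining)
        i = (i + 1) % len(demands_bits)
    return served

served = round_robin_upload([5e6, 2e6, 8e6], slot_capacity_bits=1e6, num_slots=10)
print(served, "fairness =", round(jains_index(served), 3))
```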

    Survey of Transportation of Adaptive Multimedia Streaming service in Internet

    The World Wide Web is one of the greatest boons of modern technological advancement. Using the Internet globally, anywhere and at any time, users can access live and on-demand video services. Streaming media systems such as YouTube, Netflix, and Apple Music dominate the multimedia world and enjoy wide popularity among users. A key concern for video streaming applications over the Internet is the Quality of Experience (QoE) that users perceive. Because changing network conditions, bit rate, and initial delay can cause playback to freeze or deliver poor video quality to end users, researchers across industry and academia have explored HTTP Adaptive Streaming (HAS), which splits video content into multiple segments and offers them to clients at varying qualities. The video player at the client side plays a vital role in buffer management and in choosing the appropriate bit rate for each segment of video to be transmitted. A video transmitted at too high a bit rate pauses intermittently, whereas one at too low a bit rate lacks quality, so a tradeoff between them is required. The need of the hour is to adaptively vary the bit rate and video quality to match the transmission conditions. The main aim of this paper is to give an overview of state-of-the-art HAS techniques across the multimedia and networking domains. A detailed survey was conducted to analyze challenges and solutions in adaptive streaming algorithms, QoE, network protocols, buffering, and more. It also focuses on various QoE influence factors under fluctuating network conditions, which are often ignored in present HAS methodologies. Furthermore, this survey will give network and multimedia researchers a fair understanding of the latest developments in adaptive streaming and the improvements that can be incorporated in future work.
    Abdullah, M. T. A.; Lloret, J.; Canovas Solbes, A.; García-García, L. (2017). Survey of Transportation of Adaptive Multimedia Streaming service in Internet. Network Protocols and Algorithms, 9(1-2), 85-125. doi:10.5296/npa.v9i1-2.12412
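    As a minimal illustration of the client-side rate adaptation the survey covers, the sketch below picks the highest rendition that fits a smoothed throughput estimate, with conservative behaviour when the buffer runs low. The bitrate ladder, safety factor, and thresholds are invented for the example and are not taken from any specific player.

```python
BITRATE_LADDER_KBPS = [350, 750, 1500, 3000, 6000]   # illustrative renditions

def choose_bitrate(throughput_kbps_history, buffer_s,
                   safety=0.8, low_buffer_s=5.0):
    """Throughput-based HAS adaptation for the next segment."""
    # Harmonic mean damps the effect of short throughput spikes.
    n = len(throughput_kbps_history)
    est = n / sum(1.0 / t for t in throughput_kbps_history)
    if buffer_s < low_buffer_s:       # nearly stalled: halve the estimate
        est *= 0.5
    budget = est * safety
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

# ~4.6 Mbps harmonic-mean estimate, healthy buffer -> picks the 3000 kbps tier.
print(choose_bitrate([4000, 5200, 4800], buffer_s=12.0))
```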

    HW/SW Co-design and Prototyping Approach for Embedded Smart Camera: ADAS Case Study

    In 1968, Volkswagen integrated an electronic circuit as a new fuel-injection control system, called the "Little Black Box"; it is considered the first embedded system in the automotive industry. Currently, automobile manufacturers integrate several embedded systems into each of their new vehicle models. Behind these automotive electronic systems lies a sophisticated Hardware/Software (HW/SW) architecture based on heterogeneous components and multiple CPUs. At present, such systems are increasingly oriented toward vision-based approaches using tiny embedded smart cameras. Meeting real-time constraints in these vision-based systems is one of the most challenging issues, especially in the domain of automotive applications. On the design side, one of the optimal solutions adopted by embedded-system designers for system performance is to associate CPUs and hardware accelerators in the same design, in order to reduce the computational burden on the CPU and to speed up data processing. In this paper, we present a hardware platform-based design approach for fast embedded smart Advanced Driver Assistance System (ADAS) design and prototyping, as an alternative to the purely simulation-based technique, which is time-consuming. Based on a Multi-CPU/FPGA platform, we introduce a new methodology/flow to design the different HW and SW parts of the ADAS. We then share our experience in designing and prototyping a HW/SW vision-based smart embedded system as an ADAS that helps increase driver safety. We present a real HW/SW prototype of the vision ADAS based on a Zynq FPGA. The system detects the fatigue/drowsiness state of the driver by monitoring eye closure and generates a real-time alert. A new HW skin-segmentation step to locate the eyes/face is proposed. Our approach migrates the skin-segmentation step from the processing system (SW) to the programmable logic (HW), taking advantage of a High-Level Synthesis (HLS) tool flow to accelerate the implementation and prototyping of the vision-based ADAS on a hardware platform.
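    To make the HW-friendly nature of the skin-segmentation step concrete, here is a small software model in Python using the classic YCbCr skin thresholds from the literature; the authors' exact thresholds and HLS implementation are not reproduced. The purely per-pixel logic is what makes this step a good candidate for a streaming FPGA pipeline.

```python
import numpy as np

def skin_mask_ycbcr(ycbcr: np.ndarray) -> np.ndarray:
    """ycbcr: HxWx3 uint8 image in YCbCr order -> boolean skin mask.

    Pure per-pixel thresholding: no inter-pixel dependencies, just one
    comparison tree per pixel, so it maps naturally to a pixel-streaming
    hardware kernel generated with HLS.
    """
    cb = ycbcr[..., 1].astype(np.int16)
    cr = ycbcr[..., 2].astype(np.int16)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Example on a random frame; a real pipeline would feed camera data.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
print("skin pixels:", int(skin_mask_ycbcr(frame).sum()))
```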

    DroidFusion: A Novel Multilevel Classifier Fusion Approach for Android Malware Detection

    Android malware has continued to grow in volume and complexity, posing significant threats to the security of mobile devices and the services they enable. This has prompted increasing interest in employing machine learning to improve Android malware detection. In this paper, we present a novel classifier fusion approach based on a multilevel architecture that enables effective combination of machine learning algorithms for improved accuracy. The framework, called DroidFusion, generates a model by training base classifiers at a lower level and then applies a set of ranking-based algorithms to their predictive accuracies at the higher level in order to derive a final classifier. The induced multilevel DroidFusion model can then be utilized as an improved-accuracy predictor for Android malware detection. We present experimental results on four separate datasets to demonstrate the effectiveness of our proposed approach. Furthermore, we demonstrate that the DroidFusion method can also effectively enable the fusion of ensemble learning algorithms for improved accuracy. Finally, we show that the prediction accuracy of DroidFusion, despite only utilizing a computational approach at the higher level, can outperform stacked generalization, a well-known classifier fusion method that employs a meta-classifier approach at its higher level.
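    The multilevel idea, base classifiers below and a ranking-based combiner above, can be sketched as follows; the rank-derived vote weights are an illustrative stand-in for the paper's ranking-based algorithms, and the data is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Lower level: train heterogeneous base classifiers.
bases = [DecisionTreeClassifier(random_state=0), GaussianNB(),
         LogisticRegression(max_iter=1000)]
accs = [clf.fit(X_tr, y_tr).score(X_val, y_val) for clf in bases]

# Higher level: rank-derived weights (best validation accuracy -> largest weight).
order = np.argsort(accs)                     # worst .. best
weights = np.empty(len(bases))
weights[order] = np.arange(1, len(bases) + 1)

def predict_fused(X_new):
    votes = np.array([clf.predict(X_new) for clf in bases], dtype=float)
    score = weights @ votes / weights.sum()  # weighted average of 0/1 votes
    return (score >= 0.5).astype(int)

print("fused validation accuracy:", (predict_fused(X_val) == y_val).mean())
```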