33 research outputs found

    Multiple streaming at the network edge

    Get PDF
    Streaming video over the Internet, including cellular networks, has become commonplace. Network operators typically use multicasting or variations of multiple unicasting to deliver streams to user terminals in a controlled fashion. An emerging alternative is P2P streaming, which is theoretically more scalable but suffers from other issues arising from the dynamic nature of the system: users' terminals become streaming nodes, but these are not constantly connected. Another issue is that such systems are based on logical overlays, which are not optimized for the physical underlay infrastructure. An important proposition is that of finding effective ways to increase the resilience of the overlay while not conflicting with the network. In this article we look at the combination of two techniques, multi-streaming (redundancy) and locality (network efficiency), in the context of both live and video-on-demand streaming. We introduce a new technique and assess it via a comparative, simulation-based study. We find that redundancy affects network utilization only marginally if traffic is kept at the edges via localization techniques.
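    The locality idea above can be sketched as a peer-selection rule: prefer candidates in the requester's own network region (e.g., the same AS), so redundant traffic stays at the edge. A minimal sketch, with the peer list and AS numbers purely illustrative:

```python
def select_peers(candidates, local_as, want):
    """Locality-aware peer selection (illustrative sketch): prefer peers
    whose AS matches the requester's, so extra redundancy traffic stays at
    the network edge; fall back to remote peers only to fill the quota."""
    local = [p for p in candidates if p[1] == local_as]
    remote = [p for p in candidates if p[1] != local_as]
    return (local + remote)[:want]

# (peer_id, as_number) pairs -- hypothetical values
candidates = [("a", 100), ("b", 200), ("c", 100), ("d", 300)]
print(select_peers(candidates, local_as=100, want=3))
# -> [('a', 100), ('c', 100), ('b', 200)]
```

    Only one remote peer is pulled in here, because two local candidates already cover most of the quota; redundancy is added without leaving the edge unless necessary.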

    Crops leaf diseases recognition: a framework of optimum deep learning features

    Get PDF
    Manual diagnosis of crop diseases is not an easy process; thus, computerized methods are widely used. Over the past few years, advances in machine learning, particularly deep learning, have shown substantial success. However, these methods still face challenges such as similarity in disease symptoms and extraction of irrelevant features. In this article, we propose a new deep learning architecture with an optimization algorithm for cucumber and potato leaf disease recognition. The proposed architecture consists of five steps. In the first step, data augmentation is performed to increase the number of training samples. In the second step, a pre-trained DarkNet19 model is selected and fine-tuned for the target task through transfer learning. In the next step, deep features are extracted from the global pooling layer and refined using an improved cuckoo search algorithm. The best selected features are finally classified using machine learning classifiers such as SVM. The proposed architecture is tested on publicly available datasets (Cucumber National Dataset and Plant Village) and achieved accuracies of 100.0%, 92.9%, and 99.2%. A comparison with recent techniques is also performed, revealing that the proposed method achieves improved accuracy while consuming less computational time.
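    The feature-refinement step can be illustrated with a toy, mask-based selector. This is not the authors' improved cuckoo search: the fitness function (class-mean separation), the single-bit "nest" perturbation, and the synthetic data are all stand-ins chosen only to keep the sketch self-contained:

```python
import random

def fitness(mask, X, y):
    # toy fitness: mean absolute class-mean separation over selected features
    sel = [i for i, m in enumerate(mask) if m]
    if not sel:
        return 0.0
    m0 = [sum(x[i] for x, c in zip(X, y) if c == 0) / y.count(0) for i in sel]
    m1 = [sum(x[i] for x, c in zip(X, y) if c == 1) / y.count(1) for i in sel]
    return sum(abs(a - b) for a, b in zip(m0, m1)) / len(sel)

def cuckoo_select(X, y, n_iter=200, seed=1):
    """Mask-based feature selection, cuckoo-search style: a candidate
    'nest' (binary mask) is perturbed and kept only if fitter."""
    rng = random.Random(seed)
    n = len(X[0])
    best = [rng.random() < 0.5 for _ in range(n)]
    best_fit = fitness(best, X, y)
    for _ in range(n_iter):
        cand = best[:]
        cand[rng.randrange(n)] ^= True   # flip one feature in/out
        f = fitness(cand, X, y)
        if f > best_fit:                 # keep the better nest
            best, best_fit = cand, f
    return best

# synthetic samples: feature 0 separates the classes, feature 1 is noise
X = [[1.0, 0.5], [1.1, 0.4], [0.0, 0.5], [0.1, 0.6]]
y = [0, 0, 1, 1]
print(cuckoo_select(X, y))  # the discriminative feature survives
```

    The selected mask would then feed an SVM (or another ML classifier), as in the architecture's final step.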

    Two-stream deep learning architecture-based human action recognition

    Get PDF
    Human action recognition (HAR) based on artificial intelligence reasoning is one of the most important research areas in computer vision. Big breakthroughs have been observed in this field in the last few years, and research interest continues to grow in areas such as understanding of actions and scenes, analysis of human joints, and human posture recognition. Many HAR techniques have been introduced in the literature. Nonetheless, redundant and irrelevant features reduce recognition accuracy, and existing methods also face challenges such as differing viewpoints, environmental conditions, and temporal variations. In this work, a framework based on deep learning and an improved whale optimization algorithm is proposed for HAR. The proposed framework consists of a few core stages: initial preprocessing of frames, fine-tuning of pre-trained deep learning models through transfer learning (TL), feature fusion using a modified serial-based approach, and improved-whale-optimization-based selection of the best features for final classification. Two pre-trained deep learning models, InceptionV3 and ResNet101, are fine-tuned via TL on action recognition datasets. The fusion process increases the length of the feature vectors; therefore, an improved whale optimization algorithm is proposed to select the best features. The best selected features are finally classified using machine learning (ML) classifiers. Four publicly accessible datasets, UT-Interaction, Hollywood, IXMAS (Free Viewpoint Action Recognition using Motion History Volumes), and UCF Sports, are employed, achieving testing accuracies of 100%, 99.9%, 99.1%, and 100%, respectively. Compared with state-of-the-art (SOTA) techniques, the proposed method shows improved accuracy.
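    The fusion and selection stages can be sketched minimally. Serial fusion is plain concatenation of the two streams' feature vectors; the selector below is a simple variance-based stand-in for the paper's improved whale optimization, shown only to mark where selection fits in the pipeline:

```python
def serial_fuse(f1, f2):
    # serial fusion: append the second stream's features to the first,
    # producing one longer vector per sample
    return list(f1) + list(f2)

def top_k_by_variance(samples, k):
    # stand-in selector (NOT the improved whale optimization): keep the k
    # feature indices with the highest variance across samples
    n = len(samples[0])
    means = [sum(s[i] for s in samples) / len(samples) for i in range(n)]
    var = [sum((s[i] - means[i]) ** 2 for s in samples) / len(samples)
           for i in range(n)]
    return sorted(sorted(range(n), key=lambda i: var[i], reverse=True)[:k])

# hypothetical per-sample features from the two streams
fused = [serial_fuse(a, b) for a, b in [([1, 2], [0, 9]), ([1, 4], [0, 1])]]
print(top_k_by_variance(fused, 2))  # -> [1, 3]
```

    Constant features (indices 0 and 2) are pruned, shrinking the fused vector back down before classification.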

    Resilient P2P streaming

    No full text
    P2P streaming offers an alternative way of broadcasting media to end users. It is theoretically more scalable than its client-server counterpart but suffers from other issues arising from the dynamic nature of the system. Such systems are built on top of the Internet by forming an overlay network, in which end users (peers) are the main resources, sharing their bandwidth, storage and memory. Peers join and leave freely, which dramatically affects both QoS and QoE. Furthermore, the interconnections among peers are based on logical overlays, which are not harmonized with the physical underlay infrastructure. This article presents combinations of different techniques, namely stream redundancy, multi-source streaming and locality awareness (network efficiency), in the context of live and video-on-demand broadcasting. A new technique is introduced to improve P2P performance and is assessed via a comparative, simulation-based study. It is found that redundancy affects network utilization only marginally if traffic is kept at the edges via localization techniques, and that multi-source streaming improves throughput and delay and reduces streaming time. Keywords: P2P; Multimedia; Redundancy; Multi-source; Locality-awareness
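    Multi-source streaming can be sketched as striping chunks across several source peers, so a departing peer interrupts only its own sub-stream. The round-robin plan below is an illustrative scheduling policy, not the article's exact scheduler:

```python
def assign_substreams(chunks, sources):
    # stripe chunks round-robin over the source peers: each peer carries
    # one sub-stream, so a single departure stalls only part of the stream
    plan = {s: [] for s in sources}
    for i, chunk in enumerate(chunks):
        plan[sources[i % len(sources)]].append(chunk)
    return plan

print(assign_substreams(list(range(6)), ["peer1", "peer2", "peer3"]))
# -> {'peer1': [0, 3], 'peer2': [1, 4], 'peer3': [2, 5]}
```

    If peer2 leaves, chunks 1 and 4 can be re-requested from the remaining (preferably local) peers while the other sub-streams continue uninterrupted.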

    Streaming layered video over P2P networks

    No full text
    Peer-to-Peer streaming has been increasingly deployed in recent years, owing to its ability to convey a stream over the IP network to a large number of end users (or peers). However, due to the heterogeneity among peers, some of them will not be capable of relaying or uploading the original stream because of bandwidth limitations. Internet connections today are initiated from diverse devices such as 3G mobile phones or WiFi-connected PDAs. Most existing P2P streaming systems are based on video coding techniques that cannot cope with this level of heterogeneity at the network and terminal level. Layered video coding techniques are being introduced in simple streaming scenarios due to their ability to deliver streams at different scales (temporal, spatial and SNR). This eases transmission in the case of limited bandwidth, as devices can pick and decode the minimum-bit-rate base layer. Layered coding is preferred over single-layer coding for its flexibility to be transmitted over heterogeneous networks. In this paper we take a step further and analyze layered video in the context of P2P. We study such an approach in combination with simple cross-layer optimization techniques, comparing the resulting performance with a state-of-the-art P2P TV platform. We identify considerable benefits in terms of latency, jitter, throughput, and packet loss.
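    The layer-selection logic described above (devices decode as much as their bandwidth allows, starting from the base layer) can be sketched as follows; the layer bit rates are hypothetical:

```python
def select_layers(layer_rates_kbps, budget_kbps):
    """Pick the base layer plus as many enhancement layers as fit the
    bandwidth budget. Layers must be taken in order, because each
    enhancement layer depends on all layers below it."""
    chosen, used = [], 0
    for i, rate in enumerate(layer_rates_kbps):
        if used + rate > budget_kbps:
            break
        chosen.append(i)
        used += rate
    return chosen

# hypothetical rates: base 300 kbps, enhancements 200 and 400 kbps
print(select_layers([300, 200, 400], 600))  # mid-bandwidth device -> [0, 1]
print(select_layers([300, 200, 400], 250))  # below the base rate  -> []
```

    A 3G handset might take only layer 0 while a wired PC takes all three, which is exactly the heterogeneity layered coding is meant to absorb.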

    Flexible macroblock ordering for video over P2P

    No full text
    Peer-to-peer (P2P) is a promising technology for video streaming, offering advantages in terms of re-configurability and scalability. It draws on and shares the resources owned by end users distributed around the Internet, and offers an alternative to the limitations of the traditional client-server approach. However, due to peer churn, video quality issues such as packet loss arise, which in turn degrade QoS and, consequently, QoE. Moreover, under current networking conditions, congestion and bottlenecks cannot easily be circumvented due to the growth of Internet traffic. This paper therefore introduces a novel combination of two well-known techniques: locality awareness and Flexible Macroblock Ordering (FMO). Locality awareness plays a vital role in reducing the transmission cost among peers, whilst FMO has been shown to be superior to other error resilience techniques in the case of packet loss; however, these two approaches have not previously been studied in conjunction. A comparative simulation-based study has been carried out for the proposed approach against a benchmark system, i.e., one without any error resilience technique. The results show better performance of the proposed approach in terms of end-to-end delay and video quality, as measured by PSNR.
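    FMO's error-resilience benefit comes from how macroblocks are mapped to slice groups. The dispersed (checkerboard) pattern, one of the standard H.264 FMO map types, can be sketched as:

```python
def checkerboard_map(width_mb, height_mb):
    # dispersed FMO: adjacent macroblocks land in different slice groups,
    # so losing one group's packet leaves every lost MB surrounded by
    # received neighbours that error concealment can interpolate from
    return [[(x + y) % 2 for x in range(width_mb)] for y in range(height_mb)]

for row in checkerboard_map(4, 3):
    print(row)
# [0, 1, 0, 1]
# [1, 0, 1, 0]
# [0, 1, 0, 1]
```

    With two slice groups, a packet carrying group 1 can be lost entirely and the decoder still has all four neighbours of each missing macroblock available for concealment.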

    Load balancing efficiency in P2P media streaming

    No full text
    Recently, P2PTV has emerged as a prospective alternative to well-known client-server applications such as IPTV, video-on-demand and real-time TV services, because it overcomes fundamental client-server issues and introduces new features that help improve performance. The success of P2PTV platforms has led to several commercial applications showing good performance in terms of Quality of Service (QoS) and Quality of Experience (QoE). Most P2PTV systems provide both types of service (i.e., live TV and video-on-demand); examples include Zattoo, Joost, Sopcast, and PPlive. Native P2P support gives these applications extra scalability, resilience, and the ability to harness computational resources. However, since end users contribute their bandwidth, storage and memory to participate in the P2P network, peers become increasingly loaded as they serve many other peers, so an even distribution of load needs to be considered. In this paper, we survey the best-known P2PTV applications in terms of load distribution and balancing between computing and network resources. Based on the findings, we propose techniques to maintain load balance between end users. The proposed approach has been assessed using the ns-2 simulator. Initial results show that the proposed technique is effective in distributing the load evenly.
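    One simple policy for the even load distribution discussed above is to send each request to the peer with the lowest load relative to its upload capacity. This greedy rule is an illustrative sketch, not the paper's proposed technique:

```python
def assign_least_loaded(requests, capacities):
    # greedy balancing: each request goes to the peer whose current
    # load/capacity ratio is lowest, so higher-capacity peers take more work
    load = {p: 0 for p in capacities}
    plan = []
    for size in requests:
        peer = min(capacities, key=lambda p: load[p] / capacities[p])
        load[peer] += size
        plan.append(peer)
    return plan

# hypothetical capacities: peer "a" can upload twice as much as peer "b"
print(assign_least_loaded([1, 1, 1], {"a": 2, "b": 1}))
# -> ['a', 'b', 'a']
```

    Normalizing by capacity matters in P2PTV, where a DSL peer and a fibre peer should not be handed the same number of downstream neighbours.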

    Improving P2P streaming methods for IPTV

    No full text
    Peer-to-Peer (P2P) IPTV applications have increasingly been considered as a potential approach to online broadcasting. They overcome fundamental client-server issues and introduce new self-management features that help improve performance. Recently, many applications such as PPlive, PPStream, Sopcast and Joost have been deployed to deliver live and video-on-demand streaming via P2P. However, the P2P approach has also shown some points of failure and limitations. In this paper we analyze, assess and compare two popular live and video-on-demand P2P streaming applications, Sopcast and Joost. Fundamental shortcomings of existing applications are then countered by our approach, which employs a cross-layer method aimed at improving network efficiency and quality of experience. We show how simple modifications to existing protocols can lead to significant benefits in terms of latency, jitter, throughput, packet loss, and PSNR.

    Scalable P2P video streaming

    No full text
    P2P networks are a technology able to deliver real-time and video-on-demand services over IP networks. Layered video coding techniques are being introduced due to their ability to deliver streams at different scales (temporal, spatial and SNR), thereby addressing the heterogeneity problem. This eases transmission in the case of limited bandwidth, as devices can pick and decode the minimum-bit-rate base layer. Existing work examines layered video in client-server scenarios. In contrast, this paper analyzes scalable H.264/SVC coding over P2P networks based on an SNR-temporal codec. Due to the interdependency between the different SVC layers, issues of reliability and quality of experience arise unless proper measures are taken to protect the base layer. The authors explore the effectiveness of a combination of P2P strategies, for example hybrid P2P architecture, P2P locality, and P2P redundancy, to assess the viability and benefits of scalable video coding over P2P. The resulting performance is compared with a state-of-the-art P2P TV platform.
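    The layer interdependency mentioned above means a received enhancement layer is useless unless every layer below it also arrived, which is why the base layer needs extra protection. A minimal decodability check:

```python
def decodable_layers(received_layers):
    # SVC dependency rule: layers are usable only as an unbroken prefix
    # starting at the base layer (layer 0); a gap cuts off everything above
    usable = []
    for layer in sorted(received_layers):
        if layer != len(usable):
            break
        usable.append(layer)
    return usable

print(decodable_layers({0, 1, 3}))  # layer 2 lost: layer 3 is undecodable
print(decodable_layers({1, 2}))     # base layer lost: nothing is decodable
```

    Losing the base layer wipes out the whole picture, whereas losing a top enhancement layer only lowers quality; redundancy and hybrid-architecture protection are therefore best spent on the lowest layers.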