
    SoftCast

    The focus of this demonstration is the performance of streaming video over the mobile wireless channel. We compare two schemes: the standard approach to video, which transmits an H.264/AVC-encoded stream over an 802.11-like PHY, and SoftCast, a clean-slate design for wireless video in which the source transmits one video stream that each receiver decodes to a video quality commensurate with its specific instantaneous channel quality.

    SoftCast: Clean-slate Scalable Wireless Video

    Video broadcast and mobile video challenge the conventional wireless design. In broadcast and mobile scenarios the bit rate supported by the channel differs across receivers and varies quickly over time. The conventional design, however, forces the source to pick a single bit rate and degrades sharply when the channel cannot support the chosen bit rate. This paper presents SoftCast, a clean-slate design for wireless video in which the source transmits one video stream that each receiver decodes to a video quality commensurate with its specific instantaneous channel quality. To do so, SoftCast ensures the samples of the digital video signal transmitted on the channel are linearly related to the pixels' luminance. Thus, when channel noise perturbs the transmitted signal samples, the perturbation naturally translates into approximation in the original video pixels. Hence, a receiver with a good channel (low noise) obtains a high-fidelity video, and a receiver with a bad channel (high noise) obtains a low-fidelity video. We implement SoftCast using the GNURadio software and the USRP platform. Results from a 20-node testbed show that SoftCast improves the average video quality (i.e., PSNR) across broadcast receivers in our testbed by up to 5.5 dB. Even for a single receiver, it eliminates video glitches caused by mobility and increases robustness to packet loss by an order of magnitude.
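
    The linear mapping described in this abstract can be illustrated with a few lines of NumPy. The sketch below is not the SoftCast implementation (which adds a decorrelating transform, power allocation, packetization, and an OFDM PHY); it only shows how, when pixel luminance is sent as linearly scaled channel samples, the same transmission yields a higher-fidelity frame at a low-noise receiver and a coarser one at a high-noise receiver. The frame contents and SNR values are made up for the example.

```python
import numpy as np

# Toy illustration of the linear idea described above (not the authors' code):
# channel samples are a scaled copy of pixel luminance, so additive channel
# noise translates directly into pixel-level approximation error.

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in luminance frame

def psnr(ref, rec):
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

scale = np.sqrt(np.mean(frame ** 2))   # normalize so transmitted power is ~1
tx = frame / scale

for snr_db in (25, 5):                                   # good receiver vs. bad receiver
    noise = rng.normal(0, np.sqrt(10 ** (-snr_db / 10)), tx.shape)
    rec = np.clip((tx + noise) * scale, 0, 255)          # linear decode: just rescale
    print(f"channel SNR {snr_db:2d} dB -> video PSNR {psnr(frame, rec):.1f} dB")
```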

    Routing and video streaming in drone networks

    Drones can be used for several civil applications, including search and rescue, coverage, and aerial imaging, and newer applications such as construction and the delivery of goods are also emerging. Performing tasks as a team of drones is often beneficial but requires coordination through communication. In this thesis, the communication requirements of video-streaming drone applications are studied on the basis of existing works. Existing communication technologies are then analyzed to determine whether the requirements posed by these drone applications can be met by the available technologies. The shortcomings of existing technologies with respect to drone applications are identified, and potential requirements for future technologies are suggested.
    Existing communication and routing protocols, including ad-hoc on-demand distance vector (AODV), location-aided routing (LAR), and greedy perimeter stateless routing (GPSR), are studied to identify their limitations in the context of drone networks. An application scenario is considered in which a team of drones covers multiple areas of interest, following known trajectories and transmitting continuous streams of sensed traffic (images or video) to a ground station. A route switching (RS) algorithm is proposed that utilizes both the location and the trajectory information of the drones to schedule and update routes, avoiding route-discovery and route-error overhead. Simulation results show that the RS scheme outperforms LAR and AODV in terms of throughput and delay.
    Video-streaming drone applications such as search and rescue, surveillance, and disaster management benefit from multicast wireless video streaming to transmit identical data to multiple users. Video multicast streaming using IEEE 802.11 poses challenges of reliability, performance, and fairness under tight delay bounds. Because of the mobility of the video sources and the high data rate of the videos, the transmission rate should be adapted based on the receivers' link conditions. Rate-adaptive video multicast streaming in IEEE 802.11 requires wireless link estimation as well as frequent feedback from multiple receivers. A further contribution of this thesis is an application-layer rate-adaptive video multicast streaming framework using an 802.11 ad-hoc network that is applicable when both the sender and the receiver nodes are mobile. The receiver nodes of a multicast group are dynamically assigned roles based on their link conditions. An application-layer video multicast gateway (ALVM-GW) adapts the transmission rate and the video encoding rate based on the received feedback. Role switching between receiver nodes (designated nodes) caters for mobility, and rate adaptation addresses the challenges of performance and fairness. The reliability challenge is addressed through retransmission of lost packets, while delays within given bounds are achieved through adaptation of the video encoding rate. Emulation and experimental results show that the proposed approach outperforms legacy multicast in terms of packet loss and video quality.
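
    The trajectory-aware route scheduling idea described in this abstract can be sketched briefly. The fragment below is a hedged illustration, not the thesis' RS algorithm: it assumes straight-line trajectories, an arbitrary radio range, and simple shortest-hop (BFS) routing toward the ground station, and all node names and numbers are invented. Because trajectories are known in advance, routes can be precomputed for future instants instead of being discovered on demand.

```python
import numpy as np
from collections import deque

COMM_RANGE = 120.0   # assumed radio range in metres (illustrative)
GROUND = "GS"

def position(node, t, trajectories):
    """Straight-line motion model: start + velocity * t."""
    start, velocity = trajectories[node]
    return start + velocity * t

def routes_at(t, trajectories):
    """Shortest-hop next-hop table towards the ground station at time t (BFS)."""
    nodes = list(trajectories)
    pos = {n: position(n, t, trajectories) for n in nodes}
    neigh = {n: [m for m in nodes
                 if m != n and np.linalg.norm(pos[n] - pos[m]) <= COMM_RANGE]
             for n in nodes}
    next_hop, queue = {GROUND: GROUND}, deque([GROUND])
    while queue:                         # grow the routing tree outward from the ground station
        cur = queue.popleft()
        for nb in neigh[cur]:
            if nb not in next_hop:
                next_hop[nb] = cur       # nb forwards its video towards GS via cur
                queue.append(nb)
    return next_hop

# Example: a loitering relay drone (D1) and a drone flying away from the ground station (D2).
trajectories = {
    GROUND: (np.array([0.0, 0.0]), np.array([0.0, 0.0])),
    "D1":   (np.array([100.0, 0.0]), np.array([0.0, 0.0])),
    "D2":   (np.array([110.0, 0.0]), np.array([10.0, 0.0])),
}
for t in (0.0, 10.0):   # at t = 10 s, D2 is out of direct range and switches to relaying via D1
    print(f"t = {t:4.1f} s, next hops: {routes_at(t, trajectories)}")
```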

    Exposing a waveform interface to the wireless channel for scalable video broadcast

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 157-167).
    Video broadcast and mobile video challenge the conventional wireless design. In broadcast and mobile scenarios the bit-rate supported by the channel differs across receivers and varies quickly over time. The conventional design, however, forces the source to pick a single bit-rate and degrades sharply when the channel cannot support it. This thesis presents SoftCast, a clean-slate design for wireless video in which the source transmits one video stream that each receiver decodes to a video quality commensurate with its specific instantaneous channel quality. To do so, SoftCast ensures the samples of the digital video signal transmitted on the channel are linearly related to the pixels' luminance. Thus, when channel noise perturbs the transmitted signal samples, the perturbation naturally translates into approximation in the original video pixels. Hence, a receiver with a good channel (low noise) obtains a high-fidelity video, and a receiver with a bad channel (high noise) obtains a low-fidelity video. SoftCast's linear design in essence resembles the traditional analog approach to communication, which was abandoned in most major communication systems, as it enjoys neither the theoretical optimality of the digital separate design in point-to-point channels nor its effectiveness at compressing the source data. In this thesis, I show that in combination with decorrelating transforms common to modern digital video compression, the analog approach can achieve performance competitive with the prevalent digital design in a wide variety of practical point-to-point scenarios, and outperforms it in the broadcast and mobile scenarios. Since the conventional bit-pipe interface of the wireless physical layer (PHY) forces the separation of source and channel coding, realizing SoftCast requires architectural changes to the wireless PHY. This thesis discusses the design of RawPHY, a reorganization of the PHY which exposes a waveform interface to the channel while shielding the designers of the higher layers from much of the perplexity of the wireless channel. I implement SoftCast and RawPHY using the GNURadio software and the USRP platform. Results from a 20-node testbed show that SoftCast improves the average video quality (i.e., PSNR) across diverse broadcast receivers in our testbed by up to 5.5 dB in comparison to conventional single- or multi-layer video. Even for a single receiver, it eliminates video glitches caused by mobility and increases robustness to packet loss by an order of magnitude.
    by Szymon Kazimierz Jakubczak. Ph.D.
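
    The thesis' point that a decorrelating transform makes the linear scheme competitive can be sketched by extending the earlier example. The fragment below is a simplified illustration, not SoftCast itself: a 2D DCT decorrelates a synthetic frame, coefficients are grouped into chunks (bands of rows here), each chunk is scaled in proportion to variance^(-1/4) before being sent as raw samples, and the receiver simply inverts the scaling. SoftCast's LLSE decoder, metadata stream, packetization, and the real RawPHY/OFDM path are omitted, and all numbers are invented.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
frame = 255 * (0.5 + 0.5 * np.sin(6 * xx) * np.cos(4 * yy))   # smooth synthetic "frame"

coeffs = dctn(frame, norm="ortho")            # decorrelating transform (2D DCT)
chunks = coeffs.reshape(16, 256)              # 16 chunks = bands of 4 DCT rows each
gain = (chunks.var(axis=1, keepdims=True) + 1e-9) ** -0.25   # linear power allocation
norm = np.sqrt(np.mean((chunks * gain) ** 2))
tx = chunks * gain / norm                     # raw channel samples, unit average power

def receive(snr_db):
    noise = rng.normal(0, np.sqrt(10 ** (-snr_db / 10)), tx.shape)
    est = (tx + noise) * norm / gain          # invert the scaling (LLSE decoder omitted)
    rec = idctn(est.reshape(64, 64), norm="ortho")
    return 10 * np.log10(255.0 ** 2 / np.mean((frame - rec) ** 2))

for snr_db in (25, 5):                        # one transmission, two receiver qualities
    print(f"channel SNR {snr_db:2d} dB -> PSNR {receive(snr_db):.1f} dB")
```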

    Smart Sensor Technologies for IoT

    The recent development in wireless networks and devices has led to novel services that will utilize wireless communication on a new level. Much effort and many resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent new trends in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of mobile devices is often obtained using Global Navigation Satellite System (GNSS) chips that are integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions to position estimation should be investigated and implemented in IoT applications. This Special Issue, "Smart Sensor Technologies for IoT", aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of smart sensor technologies for IoT.
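
    One common family of GNSS alternatives alluded to above is radio-based positioning against fixed anchors. The sketch below is a generic illustration of RSSI ranging with a log-distance path-loss model followed by linearized least-squares trilateration; it is not drawn from any paper in the Special Issue, and all constants (anchor layout, path-loss exponent, reference RSSI) are assumptions for the example.

```python
import numpy as np

# Generic RSSI-based positioning sketch: convert RSSI to distance with a
# log-distance path-loss model, then solve a linearized least-squares
# trilateration against known anchor positions.

P0, N_EXP = -40.0, 2.2                      # RSSI at 1 m and path-loss exponent (assumed)

def rssi_to_distance(rssi_dbm):
    return 10 ** ((P0 - rssi_dbm) / (10 * N_EXP))

def trilaterate(anchors, dists):
    """Linear least squares: subtract the last anchor's circle equation from the others."""
    a_last, d_last = anchors[-1], dists[-1]
    A = 2 * (anchors[:-1] - a_last)
    b = (d_last ** 2 - dists[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(a_last ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
true_pos = np.array([7.0, 12.0])
true_d = np.linalg.norm(anchors - true_pos, axis=1)
rssi = P0 - 10 * N_EXP * np.log10(true_d) + np.random.default_rng(2).normal(0, 1.0, 4)
print("estimated position:", trilaterate(anchors, rssi_to_distance(rssi)))
```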

    Towards video streaming in IoT environments: vehicular communication perspective

    Multimedia-oriented Internet of Things (IoT) enables pervasive and real-time communication of video, audio, and image data among devices in their immediate surroundings. Today's vehicles are capable of supporting real-time multimedia acquisition. Vehicles with high-illumination infrared cameras and customized sensors can communicate with other on-road devices using dedicated short-range communication (DSRC) and 5G-enabled communication technologies. Real-time incidents in both urban and highway vehicular traffic environments can be captured and transmitted using vehicle-to-vehicle and vehicle-to-infrastructure communication modes. Video streaming in vehicular IoT (VSV-IoT) environments is still at a growing stage, with several challenges that need to be addressed: limited resources in IoT devices, intermittent connectivity in vehicular networks, heterogeneous devices, dynamism and scalability in video encoding, bandwidth underutilization in video delivery, and attaining application-precise quality of service in video streaming. In this context, this paper presents a comprehensive review of video streaming in IoT environments from a vehicular communication perspective. Specifically, the significance of video streaming in vehicular IoT environments is highlighted, focusing on the integration of vehicular communication with 5G-enabled IoT technologies and on smart-city-oriented application areas for VSV-IoT. A taxonomy is presented for the classification of related literature on video streaming in vehicular network environments. Following the taxonomy, a critical review of the literature is performed, focusing on functional models, strengths, and weaknesses. Metrics for video streaming in vehicular IoT environments are derived and comparatively analyzed in terms of their usage and evaluation capabilities. Open research challenges in VSV-IoT are identified as future directions of research in the area. The survey would benefit both IoT and vehicle-industry practitioners and researchers by augmenting their understanding of vehicular video streaming and its IoT-related trends and issues.

    Robust and Scalable Transmission of Arbitrary 3D Models over Wireless Networks

    We describe the transmission of 3D objects represented by texture and mesh over unreliable networks, extending our earlier work on regular mesh structures to arbitrary meshes and considering linear versus cubic interpolation. Our approach to arbitrary meshes stripifies the mesh and distributes nearby vertices into different packets, combined with a strategy that does not require texture or mesh packets to be retransmitted. Only the valence (connectivity) packets need to be retransmitted; however, storing the valence information requires only 10% of the space needed for the vertices, and even less compared to photorealistic texture. Thus, less than 5% of the packets may need to be retransmitted in the worst case to allow our algorithm to reconstruct an acceptable object under severe packet loss. Even though packet loss during transmission has received limited research attention in the past, this topic is important for improving quality under lossy conditions created by shadowing and interference. Results from an implementation of the proposed approach using linear, cubic, and Laplacian interpolation are presented, and the mesh reconstruction strategy is compared with other methods.
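
    The packet-interleaving idea described in this abstract can be sketched in a few lines. The toy version below is not the paper's algorithm: vertices are spread round-robin across packets so that a lost packet removes isolated vertices rather than a whole surface patch, and each missing vertex is then estimated from its surviving neighbours along the vertex sequence (a linear estimate; the paper also evaluates cubic and Laplacian interpolation). The packet count, mesh, and loss pattern are invented.

```python
import numpy as np

NUM_PACKETS = 4   # illustrative packet count

def packetize(vertices):
    """Round-robin interleave: vertex i goes to packet i % NUM_PACKETS."""
    return {p: [(i, vertices[i]) for i in range(p, len(vertices), NUM_PACKETS)]
            for p in range(NUM_PACKETS)}

def reconstruct(num_vertices, received_packets):
    """Fill missing vertices from the nearest surviving neighbours in the vertex order."""
    known = {i: v for pkt in received_packets for i, v in pkt}
    verts = np.full((num_vertices, 3), np.nan)
    for i, v in known.items():
        verts[i] = v
    for i in range(num_vertices):
        if np.isnan(verts[i]).any():
            prev = next((verts[j] for j in range(i - 1, -1, -1)
                         if not np.isnan(verts[j]).any()), None)
            nxt = next((verts[j] for j in range(i + 1, num_vertices)
                        if not np.isnan(verts[j]).any()), None)
            neighbours = [v for v in (prev, nxt) if v is not None]
            verts[i] = np.mean(neighbours, axis=0)   # linear estimate between neighbours
    return verts

vertices = np.cumsum(np.random.default_rng(3).normal(0, 0.1, (32, 3)), axis=0)  # toy vertex path
packets = packetize(vertices)
survived = [pkt for p, pkt in packets.items() if p != 2]   # pretend packet 2 is lost
rec = reconstruct(len(vertices), survived)
print("mean vertex error:", np.linalg.norm(rec - vertices, axis=1).mean())
```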