4,632 research outputs found

    Precise and fast error tracking for error-resilient transmission of H.263 video


    Robust Adaptive Intra Refresh for Multiview Video


    Error resilient packet switched H.264 video telephony over third generation networks.

    Real-time video communication over wireless networks is a challenging problem because wireless channels suffer from fading, additive noise and interference, which translate into packet loss and delay. Since modern video encoders deliver video packets with decoding dependencies, packet loss and delay can significantly degrade the video quality at the receiver. Many error resilience mechanisms have been proposed to combat packet loss in wireless networks, but only a few were specifically designed for packet-switched video telephony over Third Generation (3G) networks.

    The first part of the thesis presents an error resilience technique for packet-switched video telephony that combines application-layer Forward Error Correction (FEC) using rateless codes, Reference Picture Selection (RPS) and cross-layer optimization. Rateless codes have lower encoding and decoding computational complexity than traditional error-correcting codes, so they can be used on complexity-constrained hand-held devices; moreover, their redundancy does not need to be fixed in advance, and any number of encoded symbols can be generated on the fly. Reference picture selection limits spatio-temporal error propagation, which results in better video quality. Cross-layer optimization minimizes the data loss at the application layer when data is lost at the data link layer. Experimental results on a High Speed Packet Access (HSPA) network simulator for H.264-compressed standard video sequences show that the proposed technique achieves significant Peak Signal to Noise Ratio (PSNR) and Percentage Degraded Video Duration (PDVD) improvements over a state-of-the-art error resilience technique known as Interactive Error Control (IEC), a combination of Error Tracking and feedback-based Reference Picture Selection. The improvement is obtained at the cost of higher end-to-end delay.

    The proposed technique is then improved by making the FEC (rateless code) redundancy channel adaptive: Automatic Repeat Request (ARQ) is used to adjust the redundancy of the rateless codes according to the channel conditions. Experimental results show that the channel-adaptive scheme achieves significant PSNR and PDVD improvements over the static scheme for a simulated Long Term Evolution (LTE) network. In the third part of the thesis, the performance of the previous two schemes is improved by making the transmitter predict when rateless decoding will fail; in this case, reference picture selection is invoked early and transmission of encoded symbols for that source block is aborted. Simulations for an LTE network show that this results in video quality improvement and bandwidth savings. In the last part of the thesis, the performance of the adaptive technique is improved by exploiting the history of the wireless channel. In a Rayleigh-fading wireless channel, the RLC-PDU losses are correlated under certain conditions; this correlation is exploited to adjust the redundancy of the rateless code, resulting in a higher rateless decoding success rate and higher video quality. Simulations for an LTE network show that the improvement is significant when the packet loss rate on the two wireless links is 10%.

    To facilitate the implementation of the proposed error resilience techniques in practical scenarios, RTP/UDP/IP-level packetization schemes are also proposed for each technique. Compared to existing work, the proposed error resilience techniques provide better video quality, and more emphasis is given to implementation issues in 3G networks.
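    As a rough illustration of the channel-adaptive redundancy control described above, the following Python sketch sizes the number of rateless-encoded symbols for a source block from a feedback-derived loss estimate, and tests whether decoding is likely to fail so that reference picture selection could be invoked early. It is a minimal sketch under assumed parameters; the name `RedundancyPlanner`, the 5% decoder overhead and the feedback window size are illustrative and not taken from the thesis.

```python
import math
from collections import deque


class RedundancyPlanner:
    """Toy model of channel-adaptive rateless-code redundancy.

    Keeps a sliding window of recent link-layer delivery reports (1 = delivered,
    0 = lost), estimates the loss rate, and sizes the number of encoded symbols
    for the next source block so that enough symbols are expected to survive.
    """

    def __init__(self, window=200, decode_overhead=0.05, safety_margin=0.02):
        self.reports = deque(maxlen=window)      # recent delivery outcomes
        self.decode_overhead = decode_overhead   # extra symbols a rateless decoder typically needs
        self.safety_margin = safety_margin       # cushion against estimation error

    def record(self, delivered: bool) -> None:
        self.reports.append(1 if delivered else 0)

    def loss_rate(self) -> float:
        if not self.reports:
            return 0.0
        return 1.0 - sum(self.reports) / len(self.reports)

    def symbols_to_send(self, k: int) -> int:
        """Number of encoded symbols for a source block of k symbols."""
        p = min(self.loss_rate() + self.safety_margin, 0.9)
        needed = k * (1.0 + self.decode_overhead)   # symbols the decoder must receive
        return math.ceil(needed / (1.0 - p))        # inflate for expected losses

    def decoding_likely_to_fail(self, k: int, received: int, still_in_flight: int) -> bool:
        """Early-abort test: if even the expected extra arrivals cannot reach the
        decoding threshold, the sender can stop this block and request
        reference picture selection instead."""
        expected_extra = still_in_flight * (1.0 - self.loss_rate())
        return received + expected_extra < k * (1.0 + self.decode_overhead)


if __name__ == "__main__":
    import random

    random.seed(1)
    planner = RedundancyPlanner()
    # Simulated link-layer feedback with roughly 10% loss.
    for _ in range(200):
        planner.record(random.random() > 0.10)

    print("estimated loss rate:", round(planner.loss_rate(), 3))
    print("symbols for k=40 block:", planner.symbols_to_send(40))
    print("abort early?", planner.decoding_likely_to_fail(k=40, received=20, still_in_flight=10))
```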

    NASA Tech Briefs Index, 1977, volume 2, numbers 1-4

    Announcements of new technology derived from the research and development activities of NASA are presented. Abstracts and indexes by subject, personal author, originating center, and Tech Brief number are presented for 1977.

    Object-based video representations: shape compression and object segmentation

    Object-based video representations are considered useful for easing the process of multimedia content production and for enhancing user interactivity in multimedia productions. Object-based video presents several new technical challenges, however. Firstly, as with conventional video representations, compression of the video data is a requirement. For object-based representations, it is necessary to compress the shape of each video object as it moves in time, which amounts to the compression of moving binary images. This is achieved with a technique called context-based arithmetic encoding. The technique is applied to rectangular pixel blocks and as such is consistent with the standard tools of video compression. The block-based application also facilitates the exploitation of temporal redundancy in the sequence of binary shapes. For the first time, context-based arithmetic encoding is used in conjunction with motion compensation to provide inter-frame compression. The method described in this thesis has been thoroughly tested throughout the MPEG-4 core experiment process and, owing to favourable results, has been adopted as part of the MPEG-4 video standard.

    The second challenge lies in the acquisition of the video objects. Under normal conditions, a video sequence is captured as a sequence of frames and there is no inherent information about what objects are in the sequence, let alone information relating to the shape of each object. Some means of segmenting semantic objects from general video sequences is required. For this purpose, several image analysis tools may be of help and, in particular, it is believed that video object tracking algorithms will be important. A new tracking algorithm is developed based on piecewise polynomial motion representations and statistical estimation tools, namely the expectation-maximisation method and the minimum description length principle.
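    The core idea of context-based arithmetic encoding is that each shape pixel is coded with a probability conditioned on a template of already-coded neighbours. The Python sketch below illustrates only the context-modelling side: it uses a simplified four-pixel causal template (the MPEG-4 coder uses a larger template and a normative binary arithmetic coder) and reports the ideal code length such a model would achieve on a binary alpha block. The function names and the template are illustrative, not the standardized algorithm.

```python
import math


def causal_context(mask, x, y):
    """Context index from four causal neighbours (W, NW, N, NE).
    MPEG-4 CAE uses a larger 10-pixel template; this is a simplified stand-in."""
    h, w = len(mask), len(mask[0])

    def px(i, j):
        return mask[j][i] if 0 <= i < w and 0 <= j < h else 0

    ctx = 0
    for bit in (px(x - 1, y), px(x - 1, y - 1), px(x, y - 1), px(x + 1, y - 1)):
        ctx = (ctx << 1) | bit
    return ctx  # 0..15


def context_code_length(mask):
    """Adaptive per-context probability model; returns the ideal code length in
    bits that a binary arithmetic coder driven by this model would produce."""
    counts = {c: [1, 1] for c in range(16)}  # Laplace-smoothed (zeros, ones) per context
    bits = 0.0
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            ctx = causal_context(mask, x, y)
            zeros, ones = counts[ctx]
            p_one = ones / (zeros + ones)
            bit = mask[y][x]
            bits += -math.log2(p_one if bit else 1.0 - p_one)
            counts[ctx][bit] += 1  # update the model after coding the pixel
    return bits


if __name__ == "__main__":
    # A 16x16 binary alpha block containing a simple square object.
    block = [[1 if 4 <= x < 12 and 4 <= y < 12 else 0 for x in range(16)] for y in range(16)]
    print(f"raw: {16 * 16} bits, context-modelled: {context_code_length(block):.1f} bits")
```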

    Survey and Systematization of Secure Device Pairing

    Secure Device Pairing (SDP) schemes have been developed to facilitate secure communications among smart devices, both personal mobile devices and Internet of Things (IoT) devices. Comparison and assessment of SDP schemes is troublesome because each scheme makes different assumptions about out-of-band channels and adversary models, and is driven by its particular use cases; a conceptual model that facilitates meaningful comparison among SDP schemes is missing. We provide such a model. In this article, we survey and analyze a wide range of SDP schemes described in the literature, including a number that have been adopted as standards. A system model and consistent terminology for SDP schemes are built on the foundation of this survey, and are then used to classify existing SDP schemes into a taxonomy that, for the first time, enables their meaningful comparison and analysis. The existing SDP schemes are analyzed using this model, revealing common systemic security weaknesses among the surveyed schemes that should become priority areas for future SDP research, such as improving the integration of privacy requirements into the design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another, and help prevent the weaknesses common to the current generation of SDP schemes from persisting.
    Comment: 34 pages, 5 figures, 3 tables, accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99)
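    For readers unfamiliar with the pairing mechanics such schemes build on, the sketch below shows one common SDP pattern: the devices exchange commitments and public contributions over the insecure in-band channel, then derive a short authenticated string that a user compares over an out-of-band channel (here, reading two displays). This is a simplified, generic illustration in the spirit of numeric-comparison pairing, not a protocol from the surveyed literature; the random 32-byte values stand in for real key-agreement public keys.

```python
import hashlib
import secrets


def commit(value: bytes, nonce: bytes) -> bytes:
    """Hash commitment to a device's public contribution."""
    return hashlib.sha256(value + nonce).digest()


def short_auth_string(a_pub: bytes, b_pub: bytes) -> str:
    """Derive a 6-digit short authenticated string (SAS) from both contributions."""
    digest = hashlib.sha256(a_pub + b_pub).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"


class Device:
    def __init__(self, name: str):
        self.name = name
        # Stand-in for a Diffie-Hellman public key; a real scheme would run a key agreement.
        self.pub = secrets.token_bytes(32)
        self.nonce = secrets.token_bytes(16)

    def commitment(self) -> bytes:
        return commit(self.pub, self.nonce)


if __name__ == "__main__":
    phone, speaker = Device("phone"), Device("speaker")

    # 1. Exchange commitments over the insecure in-band channel.
    c_phone, c_speaker = phone.commitment(), speaker.commitment()

    # 2. Reveal contributions and nonces; each side checks the peer's commitment,
    #    which prevents an attacker from choosing its value after seeing the peer's.
    assert commit(speaker.pub, speaker.nonce) == c_speaker
    assert commit(phone.pub, phone.nonce) == c_phone

    # 3. Both devices display the SAS; the user compares them over the
    #    out-of-band channel (their eyes) and confirms the pairing if they match.
    print("phone shows:  ", short_auth_string(phone.pub, speaker.pub))
    print("speaker shows:", short_auth_string(phone.pub, speaker.pub))
```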

    Content-Aware Multimedia Communications

    The demands for fast, economical and reliable dissemination of multimedia information are steadily growing within our society. While people and the economy increasingly rely on communication technologies, engineers still struggle with their growing complexity. Complexity in multimedia communication originates from several sources. The most prominent is the unreliability of packet networks like the Internet. Recent advances in scheduling and error control mechanisms for streaming protocols have shown that the quality and robustness of multimedia delivery can be improved significantly when protocols are aware of the content they deliver. However, the proposed mechanisms require close cooperation between transport systems and application layers, which increases the overall system complexity. Current approaches also require expensive metrics and focus on specific encoding formats only. A general and efficient model has been missing so far.

    This thesis presents efficient and format-independent solutions to support cross-layer coordination in system architectures. In particular, the first contribution of this work is a generic dependency model that enables transport layers to access content-specific properties of media streams, such as dependencies between data units and their importance. The second contribution is the design of a programming model for streaming communication and its implementation as a middleware architecture. The programming model hides the complexity of protocol stacks behind simple programming abstractions, but exposes cross-layer control and monitoring options to application programmers. For example, our interfaces allow programmers to choose appropriate failure semantics at design time, while error protection and the visibility of low-level errors can be refined at run time. Based on several examples we show how our middleware simplifies the integration of stream-based communication into large-scale application architectures. An important result of this work is that, despite cross-layer cooperation, neither application nor transport protocol designers experience an increase in complexity. Application programmers can even reuse existing streaming protocols, which effectively increases system robustness.

    Our society's demand for low-cost and reliable communication is growing steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for rapid introduction of new products and master the growing complexity of the systems. The transmission of multimedia content such as video and audio data, in particular, is not trivial. One of the most prominent reasons for this is the unreliability of today's networks, such as the Internet. Packet losses and fluctuating delays can severely degrade presentation quality. As recent developments in the area of streaming protocols show, however, the quality and robustness of a transmission can be controlled efficiently when streaming protocols exploit information about the content of the data they transport. Existing approaches that describe the content of multimedia data streams are, however, mostly specialized to individual compression schemes and use computationally intensive metrics, which considerably reduces their practical usefulness. Moreover, the information exchange requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be created; the danger of both approaches, however, is that they can further increase a system's complexity. The central goal of this dissertation is therefore to achieve cross-layer coordination while at the same time reducing complexity. The work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relationships within a data stream; transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for the transmission and control of multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select suitable failure semantics and communication topologies and then refine and control the concrete error protection at run time.
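    To make the idea of a transport-visible dependency model concrete, the following Python sketch models data units carrying an importance value and decoding dependencies, together with a content-aware scheduler that spends a limited transmission budget on the most important units whose references are available. The `DataUnit` structure, the importance values and the budget-based policy are hypothetical illustrations, not the dependency model or Noja interfaces defined in the thesis.

```python
from dataclasses import dataclass, field


@dataclass
class DataUnit:
    """One media data unit (e.g., a packetized frame) as seen by the transport layer."""
    uid: int
    importance: float                               # higher = more valuable to presentation quality
    depends_on: set = field(default_factory=set)    # uids this unit needs for decoding


def schedule(units, budget):
    """Content-aware selection: send the most important units first, but never
    spend budget on a unit whose references will not be available."""
    delivered = set()
    for unit in sorted(units.values(), key=lambda u: -u.importance):
        if budget <= 0:
            break
        if all(dep in delivered for dep in unit.depends_on):
            delivered.add(unit.uid)
            budget -= 1
    return delivered


if __name__ == "__main__":
    # A tiny I-P-P-B style dependency chain with per-unit importance.
    units = {u.uid: u for u in [
        DataUnit(0, importance=1.0),                     # intra frame
        DataUnit(1, importance=0.6, depends_on={0}),     # P frame
        DataUnit(2, importance=0.5, depends_on={1}),     # P frame
        DataUnit(3, importance=0.2, depends_on={1, 2}),  # B frame
    ]}
    print("send with budget 3:", sorted(schedule(units, budget=3)))
```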

    Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey

    The Internet of Underwater Things (IoUT) is an emerging communication ecosystem developed for connecting underwater objects in maritime and underwater environments. IoUT technology is intricately linked with intelligent boats and ships, smart shores and oceans, automatic marine transportation, positioning and navigation, underwater exploration, disaster prediction and prevention, as well as intelligent monitoring and security. The IoUT has an influence at scales ranging from a small scientific observatory, to a mid-sized harbor, to global oceanic trade. The network architecture of the IoUT is intrinsically heterogeneous and should be sufficiently resilient to operate in harsh environments, which creates major challenges in terms of underwater communications while relying on limited energy resources. Additionally, the volume, velocity, and variety of data produced by sensors, hydrophones, and cameras in the IoUT are enormous, giving rise to the concept of Big Marine Data (BMD), which has its own processing challenges. Hence, conventional data processing techniques will falter, and bespoke Machine Learning (ML) solutions have to be employed for automatically learning the specific behavior and features of BMD, facilitating knowledge extraction and decision support. The motivation of this paper is to comprehensively survey the IoUT, BMD, and their synthesis. It also aims to explore the nexus of BMD with ML. We set out from underwater data collection and then discuss the family of IoUT data communication techniques with an emphasis on the state-of-the-art research challenges. We then review the suite of ML solutions suitable for BMD handling and analytics. We treat the subject deductively from an educational perspective, critically appraising the material surveyed.
    Comment: 54 pages, 11 figures, 19 tables, IEEE Communications Surveys & Tutorials, peer-reviewed academic journal
    • 

    corecore