80 research outputs found

    High throughput image compression and decompression on GPUs

    Get PDF
    This work investigates possibilities to create a high-throughput, GPU-friendly, intra-only, wavelet-based video compression algorithm optimized for visually lossless applications. Addressing the key observation that JPEG 2000's entropy coder is a bottleneck and may be overly complex for high-bit-rate scenarios, various algorithmic alterations are proposed. First, JPEG 2000's Selective Arithmetic Coding mode is realized on the GPU, but the resulting throughput gains are shown to be limited. Instead, two independent, non-standard-compliant alterations are proposed that (1) process each bit plane in a single pass, giving up the concept of intra-bit-plane truncation points (single-pass mode), and (2) introduce a true raw-coding mode that is parallelizable sample by sample and does not require any context modeling. Next, an alternative block coder from the literature, the Bitplane Coder with Parallel Coefficient Processing (BPC-PaCo), is evaluated. Since it trades signal adaptiveness for increased parallelism, it is shown here how a stationary probability model averaged over a set of test sequences yields competitive compression efficiency. A combination of BPC-PaCo with the single-pass mode is proposed and shown to increase the speedup with respect to the original JPEG 2000 entropy coder from 2.15x (BPC-PaCo with two passes) to 2.6x (BPC-PaCo with single-pass mode), at the marginal cost of increasing the PSNR penalty by 0.3 dB, to at most 1 dB.
Furthermore, a parallel algorithm is presented that determines the optimal code block bit stream truncation points (given an available bit rate budget) and builds the entire code stream on the GPU, reducing the amount of data that has to be transferred back into host memory to a minimum. A theoretical runtime model is formulated that allows the runtime of a kernel on one GPU to be predicted from benchmarking results obtained on another. Lastly, the first JPEG XS GPU decoder realization is presented. JPEG XS was designed to be a low-complexity codec and, for the first time, explicitly demanded GPU-friendliness in the call for proposals. At bit rates above 1 bpp, the decoder is around 2x faster than the original JPEG 2000 and 1.5x faster than JPEG 2000 with the fastest evaluated entropy coder (BPC-PaCo with single-pass mode). With a GeForce GTX 1080, a decoding throughput of around 200 fps is achieved for a UHD 4:4:4 sequence.
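The runtime model itself is not detailed in this abstract. As a hedged illustration only, the sketch below assumes a simple roofline-style model: a kernel is characterized by the bytes it moves and the operations it executes, its runtime is taken as the maximum of memory time and compute time, and each GPU's effective peak figures are calibrated once by benchmarking. The class names, the peak numbers, and the roofline assumption are all illustrative, not the thesis' actual model.

```python
from dataclasses import dataclass

@dataclass
class GpuSpec:
    # Effective (benchmarked) peaks, not datasheet values; the numbers
    # used below are illustrative assumptions.
    name: str
    mem_bandwidth_gbs: float  # effective memory bandwidth in GB/s
    compute_gops: float       # effective arithmetic throughput in Gop/s

def predict_runtime_ms(bytes_moved: int, ops: int, gpu: GpuSpec) -> float:
    """Roofline-style estimate: the kernel is bound by whichever of
    memory traffic or arithmetic saturates first."""
    t_mem = bytes_moved / (gpu.mem_bandwidth_gbs * 1e9)
    t_cmp = ops / (gpu.compute_gops * 1e9)
    return max(t_mem, t_cmp) * 1e3

# Characterize a kernel's byte/op counts by benchmarking on one GPU,
# then predict its runtime on another GPU from that GPU's peaks.
gtx1080 = GpuSpec("GeForce GTX 1080", mem_bandwidth_gbs=320.0, compute_gops=8000.0)
gtx1070 = GpuSpec("GeForce GTX 1070", mem_bandwidth_gbs=256.0, compute_gops=6500.0)

kernel_bytes, kernel_ops = int(3.2e9), 10_000_000
for gpu in (gtx1080, gtx1070):
    print(f"{gpu.name}: {predict_runtime_ms(kernel_bytes, kernel_ops, gpu):.2f} ms")
```

For this memory-bound example the prediction simply scales with the bandwidth ratio; the model in the work is formulated per routine and empirically validated, which this sketch does not attempt.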

    A Note on the Wide-Band Gaussian Broadcast Channel

    Get PDF
    Recently, Posner noted that on a wide-band Gaussian broadcast channel, ordinary time-shared coding performs almost as well as more sophisticated broadcast coding strategies. In this note, we shall give a quantitative version of Posner's result and argue that for certain realistic broadcast channels time sharing may suffice.
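As a hedged numeric illustration of this claim (using the standard two-user degraded Gaussian broadcast channel rate formulas, not the note's actual derivation), the sketch below compares superposition coding against plain time sharing. It assumes each user transmits at full power during its slot under time sharing, and the power and noise values are arbitrary.

```python
import math

def superposition_rates(P, N1, N2, alpha):
    # Degraded Gaussian broadcast channel with noise powers N1 < N2:
    # the strong receiver decodes and cancels the weak user's message.
    R1 = 0.5 * math.log2(1 + alpha * P / N1)
    R2 = 0.5 * math.log2(1 + (1 - alpha) * P / (alpha * P + N2))
    return R1, R2

def timeshare_rates(P, N1, N2, lam):
    # Ordinary time sharing: each user gets the channel for a fraction
    # of the time, at full power P (a per-slot power constraint).
    R1 = lam * 0.5 * math.log2(1 + P / N1)
    R2 = (1 - lam) * 0.5 * math.log2(1 + P / N2)
    return R1, R2

# Wide-band / low-SNR regime: the two strategies nearly coincide.
print(superposition_rates(0.01, 1.0, 2.0, 0.5))
print(timeshare_rates(0.01, 1.0, 2.0, 0.5))
# High-SNR regime: the achievable rate pairs visibly differ.
print(superposition_rates(100.0, 1.0, 2.0, 0.5))
print(timeshare_rates(100.0, 1.0, 2.0, 0.5))
```

At P = 0.01 the two rate pairs agree to within a fraction of a percent, which is the quantitative content of the time-sharing argument in the low-SNR (wide-band) regime; at P = 100 the gap is substantial.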

    Development of functional safety applications for Autec products. Study of protocols: CANopen, CANopen Safety, FSOE and ProfiSafe

    Get PDF
    This thesis has the principal goal of developing intrinsic safety applications in distributed real-time industrial systems, mainly based on fieldbuses and RTE networks. To achieve this objective, the first part of the thesis introduces the principal protocols used for safety-relevant applications in the automation environment, such as CANopen Safety, Fail Safe over EtherCAT (FSoE), and PROFIsafe, analysing their properties, history, and industrial use.

    Joint source and channel coding

    Get PDF

    Design and application of variable-to-variable length codes

    Get PDF
    This work addresses the design of minimum-redundancy variable-to-variable length (V2V) codes and studies their suitability for use in the probability interval partitioning entropy (PIPE) coding concept as an alternative to binary arithmetic coding. Several properties and new concepts for V2V codes are discussed, and a polynomial-based principle for designing V2V codes is proposed. Various minimum-redundancy V2V codes are derived and combined with the PIPE coding concept. Their redundancy is compared to that of the binary arithmetic coder of the video compression standard H.265/HEVC.
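To make the V2V idea concrete, here is a toy example (an illustrative code constructed for this note, not one of the minimum-redundancy codes derived in the work): a complete prefix-free parsing set of source words is mapped to a prefix-free set of codewords, and the resulting code rate is compared against the source entropy.

```python
import math

# Toy V2V code for a binary memoryless source with Pr(bit = 0) = q:
# the parsing set {00, 01, 1} is mapped to the codeword set {0, 10, 11}.
code = {"00": "0", "01": "10", "1": "11"}

def v2v_rate(q: float) -> float:
    """Expected code bits per source bit for the toy code above."""
    prob = {"00": q * q, "01": q * (1 - q), "1": 1 - q}
    avg_src = sum(p * len(w) for w, p in prob.items())
    avg_code = sum(p * len(code[w]) for w, p in prob.items())
    return avg_code / avg_src

def entropy(q: float) -> float:
    # Binary entropy function in bits.
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

q = 0.8
print(f"rate = {v2v_rate(q):.4f} bits/bit, entropy = {entropy(q):.4f} bits/bit")
```

For q = 0.8 the toy code spends about 0.756 bits per source bit against an entropy of about 0.722, i.e. roughly 0.034 bits per bit of redundancy; minimum-redundancy V2V designs and the PIPE combination aim to shrink exactly this gap.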

    An Introduction to Computer Networks

    Get PDF
    An open textbook for undergraduate and graduate courses on computer networks.

    DOCSIS 3.1 cable modem and upstream channel simulation in MATLAB

    Get PDF
    The cable television (CATV) industry has grown significantly since its inception in the late 1940s. Originally, a CATV network comprised several homes connected to community antennae via a network of coaxial cables. The only signal processing was done by an analogue amplifier, and transmission occurred in only one direction (i.e. from the antennae/head-end to the subscribers). However, as CATV grew in popularity, demand for services such as pay-per-view television increased, which led to supporting transmission in the upstream direction (i.e. from the subscriber to the head-end). This greatly increased the signal processing to include frequency diplexers. CATV service providers began to expand the bandwidth of their networks in the late 1990s by switching from analogue to digital technology. In an effort to regulate the manufacturing of new digital equipment and ensure interoperability of products from different manufacturers, several cable service providers formed a not-for-profit consortium to develop a data-over-cable service interface specification (DOCSIS). The consortium, named CableLabs, released the first DOCSIS standard in 1997. The DOCSIS standard has been upgraded over the years to keep up with increased consumer demand for large bandwidths and faster transmission speeds, particularly in the upstream direction. The latest version of the standard, DOCSIS 3.1, utilizes orthogonal frequency-division multiple access (OFDMA) technology to provide upstream transmission speeds of up to 1 Gbps. As cable service providers begin the process of upgrading their upstream receivers to comply with the new DOCSIS 3.1 standard, they require a means of testing the various functions that an upstream receiver may employ. It is convenient for service providers to employ a cable modem (CM) plus channel emulator to perform these tests in-house during the product development stage. 
Constructing the emulator in digital technology is an attractive option for testing. This thesis approaches digital emulation by developing a digital model of the CMs and upstream channel in a DOCSIS 3.1 network. The first step in building the emulator is to simulate its operations in MATLAB, specifically upstream transmission over the network. The MATLAB model is capable of simulating transmission from multiple CMs, each of which transmits using a specific "transmission mode." The three transmission modes described in the DOCSIS 3.1 standard are included in the model. These modes are "traffic mode," which is used during regular data transmission; "fine ranging mode," which is used to perform fine timing and power offset corrections; and "probing mode," which is presumably used for estimating the frequency response of the channel, but is also used to further correct the timing and power offsets. The MATLAB model is also capable of simulating the channel impairments a signal may encounter when traversing the upstream channel. Impairments that are specific to individual CMs include integer and fractional timing offsets, micro-reflections, carrier phase offset (CPO), fractional carrier frequency offset (CFO), and network gain/attenuation. Impairments common to all CMs include carrier hum modulation, AM/FM ingress noise, and additive white Gaussian noise (AWGN). It is hoped that the MATLAB scripts that make up the simulation will be translated to Verilog HDL to implement the emulator on a field-programmable gate array (FPGA) in the near future. In the event that an FPGA implementation is pursued, research was conducted into designing efficient fractional delay filters (FDFs), which are essential in the simulation of micro-reflections. 
After performing an FPGA implementation cost analysis of various FDF designs, it was determined that a Kaiser-windowed sinc function FDF with roll-off parameter β = 3.88 was the most cost-efficient choice, requiring a total of 24 multipliers when implemented using an optimized structure.
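The fractional delay filter lends itself to a compact sketch. The snippet below builds a Kaiser-windowed sinc FDF with the β = 3.88 quoted above and, as an assumption for illustration, 24 taps; it uses a plain series expansion for the Bessel function and does not attempt to reproduce the optimized multiplier structure from the thesis.

```python
import math

def bessel_i0(x: float, terms: int = 25) -> float:
    # Series expansion of the zeroth-order modified Bessel function I0.
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / (2 * k)) ** 2
        total += term
    return total

def kaiser_sinc_fdf(num_taps: int, delay: float, beta: float = 3.88):
    """Tap coefficients of a Kaiser-windowed sinc fractional delay filter.

    `delay` is the total desired delay in samples (integer part plus
    fraction); beta = 3.88 follows the cost analysis quoted above,
    while the tap count is an assumption for illustration.
    """
    taps = []
    for n in range(num_taps):
        x = n - delay
        sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
        # Kaiser window spanning the filter length
        arg = 2.0 * n / (num_taps - 1) - 1.0
        w = bessel_i0(beta * math.sqrt(max(0.0, 1.0 - arg * arg))) / bessel_i0(beta)
        taps.append(sinc * w)
    return taps

def fir(taps, signal):
    # Direct-form causal FIR filtering.
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if i - k >= 0:
                acc += h * signal[i - k]
        out.append(acc)
    return out

# Delay a sinusoid by 11.9 samples (11 whole samples + 0.9 fractional).
taps = kaiser_sinc_fdf(24, 11.9)
x = [math.sin(2 * math.pi * 0.05 * i) for i in range(80)]
y = fir(taps, x)
```

Once the steady state is reached, `y` approximates the input sinusoid delayed by 11.9 samples; an FPGA version would precompute the 24 taps per fractional offset and spend one multiplier per tap, matching the multiplier count quoted above.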

    Data Communications and Network Technologies

    Get PDF
    This open access book is written according to the examination outline for Huawei HCIA-Routing Switching V2.5 certification, aiming to help readers master the basics of network communications and use Huawei network devices to set up enterprise LANs and WANs, wired networks, and wireless networks, ensure network security for enterprises, and grasp cutting-edge computer network technologies. The content of this book includes: network communication fundamentals, the TCP/IP protocol, the Huawei VRP operating system, IP addresses and subnetting, static and dynamic routing, Ethernet networking technology, ACL and AAA, network address translation, DHCP servers, WLAN, IPv6, the WAN PPP and PPPoE protocols, typical networking architecture and design cases of campus networks, the SNMP protocol used by network management, operation and maintenance, the network time protocol NTP, SDN and NFV, programming, and automation. As the world's leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.

    Secure CAN logging and data analysis

    Get PDF
    Fall 2020. Includes bibliographical references. Controller Area Network (CAN) communications are an essential element of modern vehicles, particularly heavy trucks. However, CAN protocols are vulnerable from a cybersecurity perspective in that they have no mechanism for authentication or authorization. Attacks on vehicle CAN systems present a risk to driver privacy and possibly driver safety. Therefore, developing new tools and techniques to detect cybersecurity threats within CAN networks is a critical research topic. A key component of this research is compiling a large database of representative CAN data from operational vehicles on the road. This database will be used to develop methods for detecting intrusions or other potential threats. In this paper, an open-source CAN logger was developed that used hardware and software following industry security standards to securely log and transmit heavy vehicle CAN data. A hardware prototype demonstrated the ability to encrypt data at over 6 Megabits per second (Mbps) and successfully log all data at 100% bus load on a 1 Mbps baud CAN network in a laboratory setting. An AES-128 Cipher Block Chaining (CBC) encryption mode was chosen. A Hardware Security Module (HSM) was used to generate and securely store asymmetric key pairs for cryptographic communication with a third-party cloud database. It also implemented Elliptic-Curve Cryptography (ECC) algorithms to perform key exchange and sign the data for integrity verification. This solution ensures secure data collection and transmission because only encrypted data is ever stored or transmitted, and communication with the third-party cloud server uses shared, asymmetric secret keys as well as Transport Layer Security (TLS).
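The 6 Mbps encryption figure can be sanity-checked against a fully loaded 1 Mbps bus with some back-of-the-envelope arithmetic. The frame-size and record-size numbers below are illustrative assumptions, not measurements from this work.

```python
# Can an encryption engine running at 6 Mbps keep up with a fully
# loaded 1 Mbps CAN bus?  (Illustrative sanity check.)

BUS_BITRATE = 1_000_000          # 1 Mbps CAN bus
ENC_THROUGHPUT = 6_000_000       # demonstrated AES-128-CBC rate, bits/s

# A worst-case CAN 2.0B extended frame with 8 data bytes occupies
# roughly 150-160 bits on the wire including stuff bits; assume ~154.
FRAME_BITS_ON_WIRE = 154
LOGGED_BITS_PER_FRAME = 16 * 8   # assume a 16-byte logged record per frame

frames_per_sec = BUS_BITRATE / FRAME_BITS_ON_WIRE
log_bitrate = frames_per_sec * LOGGED_BITS_PER_FRAME
print(f"{frames_per_sec:.0f} frames/s -> {log_bitrate / 1e6:.2f} Mbps to encrypt")
assert log_bitrate < ENC_THROUGHPUT  # 6 Mbps leaves ample headroom
```

Under these assumptions the logger must encrypt well under 1 Mbps of record data even at 100% bus load, consistent with the headroom reported above.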