
    Challenges and solutions in H.265/HEVC for integrating consumer electronics in professional video systems


    Guided Transcoding for Next-Generation Video Coding (HEVC)

    Video content is the dominant traffic type on mobile networks today, and this portion is only expected to increase in the future. In this thesis we investigate ways of reducing bit rates for adaptive streaming applications in the latest video coding standard, H.265 / High Efficiency Video Coding (HEVC). Current models for offering different-resolution versions of video content in a dynamic way, so-called adaptive streaming, require either large amounts of storage, where full encodings of the material are kept at all times, or extremely high computational power to regenerate content on demand. Guided transcoding aims at finding a middle ground where we can store and transmit less data, at full or near-full quality, while still keeping computational complexity low. This is achieved by shifting the computationally heavy operations to a preprocessing step where so-called side-information is generated. The side-information can then be used to quickly reconstruct sequences on demand, even when running on generic, non-specialized hardware. Two methods for generating side-information, pruning and deflation, are compared on a varying set of standardized HEVC test sequences, and the respective upsides and downsides of each method are discussed.

    By discarding certain information from a compressed video and then recreating the sequence in real time, we can reduce the storage required for adaptive video streaming by 20-30%, with fully preserved picture quality or only minor degradations.

    Adaptive streaming: Streaming is a popular way of sending video over the internet in which a sequence is split into short segments that are sent continuously to the user. These segments can be sent at varying quality, and a model that automatically senses the network load and dynamically adapts the quality is called adaptive streaming. This is the system used by SVT Play, TV4 Play and YouTube. HD or UltraHD video must be compressed to be sent over a network; it simply takes up too much space otherwise. Video coded with the latest compression standard, HEVC/H.265, becomes up to 700 times smaller with minimal loss of picture quality. A ten-second segment that would take 1.5 GB to send in raw form can then be compressed to just over 2 MB. To offer the viewer a video sequence, a film or a TV programme, at varying quality, several encodings of the material are created. In general, the quality of a sequence cannot be changed after the fact (re-encoding even a short HD video takes hours), so for adaptive streaming to work in practice, all versions are generated in advance and stored. But this requires a large amount of storage.

    Guided transcoding: Guided transcoding offers a way to reduce storage requirements by discarding certain information and recreating it on demand at a later stage. We do this for every lower-quality sequence but keep the highest quality as it is. A pruned low-quality video can then be used together with the highest-quality video to reconstruct the sequence exactly. This process is very fast compared with ordinary re-encoding, so we can generate video encodings of varying quality at short notice. We have investigated two methods for removing and recreating video information: pruning and deflation. The former causes small degradations in picture quality and saves close to 30% of storage space; the latter has no impact on picture quality but saves only just over 20%.
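    To make the storage/complexity trade-off concrete, the sketch below mimics the flow on toy data. The quantization stand-ins are illustrative assumptions; real pruning and deflation operate on HEVC syntax elements (residuals, modes, motion vectors) rather than raw samples, so only the shape of the pipeline carries over.

```python
import numpy as np

rng = np.random.default_rng(0)
source = rng.normal(size=100_000)                # stand-in for raw content

def encode(signal, qstep):
    """Stand-in for an encoder: coarser qstep = lower quality, fewer bits."""
    return np.round(signal / qstep).astype(np.int32)

def decode(coeffs, qstep):
    return coeffs * qstep

hq_qstep, lq_qstep = 0.1, 1.0

# Conventional adaptive streaming: store every quality rung in full.
hq = encode(source, hq_qstep)
lq_full = encode(source, lq_qstep)

# Guided transcoding: keep only the HQ encoding plus tiny side-information
# for the LQ rung (here just its qstep; in HEVC, pruned modes and motion).
side_info = {"qstep": lq_qstep}

# On-demand reconstruction from HQ + side-information. This skips the
# expensive analysis an encoder normally repeats (mode decision, motion
# search), which is what makes the on-demand step fast in practice.
lq_on_demand = encode(decode(hq, hq_qstep), side_info["qstep"])

# The regenerated rung agrees with the directly encoded one almost
# everywhere; the rare differences come from HQ quantization error.
print("agreement:", np.mean(lq_on_demand == lq_full))
```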

    Spatiotemporal adaptive quantization for the perceptual video coding of RGB 4:4:4 data

    Due to the spectral sensitivity phenomenon of the Human Visual System (HVS), the color channels of raw RGB 4:4:4 sequences contain significant psychovisual redundancies; these redundancies can be perceptually quantized. The default quantization systems in the HEVC standard are known as Uniform Reconstruction Quantization (URQ) and Rate Distortion Optimized Quantization (RDOQ); URQ and RDOQ are not perceptually optimized for the coding of RGB 4:4:4 video data. In this paper, we propose a novel spatiotemporal perceptual quantization technique named SPAQ. With application to RGB 4:4:4 video data, SPAQ exploits HVS spectral sensitivity-related color masking in addition to spatial masking and temporal masking; SPAQ operates at the Coding Block (CB) level and the Prediction Unit (PU) level. The proposed technique perceptually adjusts the Quantization Step Size (QStep) at the CB level if high-variance spatial data is detected in the G, B and R CBs and if high motion vector magnitudes are detected in the PUs. Compared with anchor 1 (HEVC HM 16.17 RExt), SPAQ considerably reduces bitrates, with a maximum reduction of approximately 80%. The Mean Opinion Scores (MOS) from the subjective evaluations, together with the SSIM scores, show that SPAQ successfully achieves perceptually lossless compression compared with the anchors.
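    A minimal sketch of the kind of CB-level QStep adjustment the abstract describes is given below. The thresholds and scaling factors are illustrative assumptions, not the SPAQ model from the paper.

```python
import numpy as np

VAR_THRESHOLD = 400.0   # "high variance" cutoff (assumed)
MV_THRESHOLD = 16.0     # "high motion" cutoff in quarter-pel units (assumed)

def perceptual_qstep(base_qstep, cb_g, cb_b, cb_r, pu_motion_vectors):
    """Raise QStep where spatial and temporal masking hide distortion.

    cb_g/cb_b/cb_r: 2-D arrays holding the G, B, R samples of one Coding Block.
    pu_motion_vectors: iterable of (mv_x, mv_y) for the Prediction Units.
    """
    # Spatial masking: high-variance (textured) blocks tolerate coarser
    # quantization than flat blocks.
    high_variance = all(np.var(cb) > VAR_THRESHOLD for cb in (cb_g, cb_b, cb_r))

    # Temporal masking: fast motion lowers sensitivity to coding error.
    high_motion = any(np.hypot(mx, my) > MV_THRESHOLD
                      for mx, my in pu_motion_vectors)

    scale = 1.0
    if high_variance:
        scale *= 1.5    # assumed masking gain
    if high_motion:
        scale *= 1.5
    return base_qstep * scale

# Example: a noisy (textured) 16x16 CB with one fast-moving PU.
rng = np.random.default_rng(1)
cb = [rng.integers(0, 256, (16, 16)).astype(float) for _ in range(3)]
print(perceptual_qstep(26.0, *cb, pu_motion_vectors=[(20, 12)]))
```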

    Pinging the brain to reveal hidden working memory states

    Maintaining information for short periods of time in working memory, without it being present in the outside world, is crucial for everyday life, allowing us to move beyond simple, reflexive actions and towards complex, goal-directed behaviours. The consensus has been that the continuous activity of specific neurons keeps this information “online” until it is no longer required. More recently, however, this classic theory has been questioned. Working memories that are not actively rehearsed seem to be maintained in an “activity-silent” network, eliciting no measurable neural activity, which suggests that short-term changes in the neural wiring patterns are responsible for their maintenance. These memories are thus hidden from conventional measuring techniques, making them difficult to study. This thesis proposes an approach to reveal hidden working memories that is analogous to active sonar: hidden structures can be inferred from the echo of a “ping”. Similarly, by pushing a wave of activity through the silent neural network via external stimulation (for example, a white flash), the resulting recording patterns expose the previously hidden memories held in that network. This approach is demonstrated in a series of experiments in which both visual and auditory working memories are revealed. It is also used to reconstruct specific working memories with high fidelity after different maintenance periods, showing that the maintenance of even a single piece of information is by no means perfect: it tends to transform randomly and gradually within 1 to 2 seconds (for example, purple becomes blue).
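    As a conceptual illustration of the ping idea (a toy, not a model from the thesis), the simulation below stores a memory purely in transiently modified synaptic weights, so the network produces no activity until an unspecific impulse makes its evoked response betray the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
memories = {name: rng.normal(size=n) for name in ("purple", "blue")}

def silent_network(stored):
    """A Hebbian-style weight change encodes the memory; activity stays at 0."""
    return np.outer(stored, stored) / n

def ping(weights, strength=1.0):
    """Drive the network with an unspecific impulse (e.g. a white flash)."""
    impulse = rng.normal(size=n) * strength
    return weights @ impulse          # evoked response, measurable

W = silent_network(memories["purple"])   # "purple" is held, silently
response = ping(W)

# Decode: the evoked response correlates strongly with the hidden memory
# and only weakly with patterns that are not being maintained.
for name, pattern in memories.items():
    r = abs(np.corrcoef(response, pattern)[0, 1])
    print(f"{name}: |correlation| with evoked response = {r:.2f}")
```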

    Spectral-PQ : a novel spectral sensitivity-orientated perceptual compression technique for RGB 4:4:4 video data

    There exists an intrinsic relationship between the spectral sensitivity of the Human Visual System (HVS) and colour perception; these intertwined phenomena are often overlooked in perceptual compression research. In general, most previously proposed visually lossless compression techniques exploit luminance (luma) masking, including luma spatiotemporal masking, luma contrast masking and luma texture/edge masking. The perceptual relevance of colour in a picture is often overlooked, which constitutes a gap in the literature. With regard to the spectral sensitivity phenomenon of the HVS, the colour channels of raw RGB 4:4:4 data contain significant colour-based psychovisual redundancies. These perceptual redundancies can be quantized via colour channel-level perceptual quantization. In this paper, we propose a novel spatiotemporal visually lossless coding method named Spectral Perceptual Quantization (Spectral-PQ). With application to RGB 4:4:4 video data, Spectral-PQ exploits HVS spectral sensitivity-related colour masking in addition to spatial masking and temporal masking; the proposed method operates at the Coding Block (CB) level and the Prediction Unit (PU) level in the HEVC standard. Spectral-PQ perceptually adjusts the Quantization Step Size (QStep) at the CB level if high-variance spatial data is detected in the G, B and R CBs and if high motion vector magnitudes are detected in the PUs. Compared with anchor 1 (HEVC HM 16.17 RExt), Spectral-PQ considerably reduces bitrates, with a maximum reduction of approximately 81%. The Mean Opinion Scores (MOS) from the subjective evaluations show that Spectral-PQ successfully achieves perceptually lossless quality.
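    Complementing the SPAQ sketch above, the snippet below illustrates the channel-level aspect specific to this paper: the HVS is most sensitive to green and least sensitive to blue, so quantization can be coarser in the less sensitive channels. The per-channel scale factors are assumptions for illustration only, not values from Spectral-PQ.

```python
# Assumed per-channel sensitivity scaling (G most sensitive, B least).
SPECTRAL_SCALE = {"G": 1.0, "R": 1.4, "B": 1.8}

def channel_qstep(base_qstep: float, channel: str) -> float:
    """Per-channel QStep for RGB 4:4:4 coding under spectral masking."""
    return base_qstep * SPECTRAL_SCALE[channel]

for ch in ("G", "R", "B"):
    print(ch, channel_qstep(26.0, ch))
```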

    High dynamic range video compression exploiting luminance masking


    Multi-frame reconstruction using super-resolution, inpainting, segmentation and codecs

    In this thesis, different aspects of video and light field reconstruction are considered, such as super-resolution, inpainting, segmentation and codecs. Each of these strategies is analyzed with a specific goal and a specific database; databases relevant to the film industry, sports videos, light fields and hyperspectral videos are used. The thesis is constructed around six related manuscripts, in which several approaches are proposed for multi-frame reconstruction. First, a novel multi-frame reconstruction strategy is proposed for light field super-resolution, in which graph-based regularization is applied along with edge-preserving filtering to improve the spatio-angular quality of the light field. Second, a novel video reconstruction method is proposed, built on compressive sensing (CS), Gaussian mixture models (GMM) and sparse 3D transform-domain block matching; the motivation is to improve the visual quality of the video frames and to decrease the reconstruction error compared with earlier video reconstruction methods. In the next approach, Student-t mixture models and edge-preserving filtering are applied for video super-resolution; the Student-t mixture model has a heavy tail, which makes it robust and well suited as a video frame patch prior, and rich in terms of log-likelihood for information retrieval. In another approach, a hyperspectral video database is considered, and a Bayesian dictionary learning process is used for hyperspectral video super-resolution; to that end, a Beta process is used in the Bayesian dictionary learning to generate a sparse coding. The spatial super-resolution is followed by a spectral video restoration strategy, and the whole process leverages two different dictionaries, the first trained for spatial super-resolution and the second for spectral restoration. Furthermore, a novel framework is proposed for automatically replacing advertisement content in soccer videos using deep learning. For this purpose, a U-Net architecture (a convolutional neural network for image segmentation) is applied for content segmentation and detection. After reconstructing the segmented content in the video frames (accounting for the apparent loss in detection), the unwanted content is replaced with new content using a homography mapping procedure. Finally, a novel video compression framework is presented using autoencoder networks that encode and decode videos using less chroma information than luma information. Instead of converting Y'CbCr 4:2:2/4:2:0 videos to and from RGB 4:4:4, the video is kept in Y'CbCr 4:2:2/4:2:0, and the luma and chroma channels are merged after the luma is downsampled to match the chroma size; the decoder performs the inverse operations. The performance of these models is evaluated using the CPSNR, MS-SSIM and VMAF metrics. The experiments reveal that, compared with video compression involving conversion to and from RGB 4:4:4, the proposed method increases video quality by about 5.5% for Y'CbCr 4:2:2 and 8.3% for Y'CbCr 4:2:0, while reducing the amount of computation by nearly 37% for Y'CbCr 4:2:2 and 40% for Y'CbCr 4:2:0.

    The thread that ties these approaches together is the reconstruction of video and light field frames under different problems: loss of information, blur in the frames, noise remaining after reconstruction, unwanted content, excessive data size and high computational overhead. In three of the proposed approaches, we use the Plug-and-Play ADMM model for the first time for the reconstruction of videos and light fields, in order to address information retrieval in the frames and the removal of noise and blur at the same time. In two of the proposed models, we apply sparse dictionary learning to reduce the data dimension and to represent the data as an efficient linear combination of basis frame patches. Two of the proposed approaches were developed in collaboration with industry, using deep learning frameworks to handle large sets of features and to learn high-level features from the data.
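    As an illustration of the autoencoder input packing described above, the sketch below keeps a frame in Y'CbCr 4:2:0 and merges the planes after downsampling luma to the chroma grid. The choice of a 2x2 box filter for the downsampling is an assumption made here for illustration.

```python
import numpy as np

def pack_ycbcr420(y: np.ndarray, cb: np.ndarray, cr: np.ndarray) -> np.ndarray:
    """Merge Y (H, W) with Cb/Cr (H/2, W/2) into a (H/2, W/2, 3) tensor."""
    h, w = y.shape
    # 2x2 box filter (assumed) brings luma down to the chroma grid.
    y_ds = y.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.stack([y_ds, cb, cr], axis=-1)

# The decoder-side inverse would upsample the reconstructed luma back to
# (H, W); any losses there show up in the CPSNR/MS-SSIM/VMAF scores.
frame = pack_ycbcr420(np.zeros((1080, 1920)),
                      np.zeros((540, 960)), np.zeros((540, 960)))
print(frame.shape)    # (540, 960, 3)
```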