144 research outputs found

    Video Coding Methods for Bit-Depth, Color-Gamut and Perceptual-Quality Scalability

    Waseda University dissertation; degree certificate number 新8421.

    Local Inverse Tone Curve Learning for High Dynamic Range Image Scalable Compression

    This paper presents a scalable high dynamic range (HDR) image coding scheme in which the base layer is a low dynamic range (LDR) version of the image that may have been generated by an arbitrary Tone Mapping Operator (TMO). No restriction is imposed on the TMO, which can be either global or local, so as to fully respect the artistic intent of the producer. Our method successfully handles the case of complex local TMOs thanks to a block-wise and non-linear approach. A novel template-based Inter-Layer Prediction (ILP) is designed to perform the inverse tone mapping of a block without the need to transmit any additional parameters to the decoder. This method enables the use of a more accurate inverse tone mapping model than the simple linear regression commonly used for block-wise ILP. In addition, this paper shows that a linear adjustment of the initially predicted block can further improve the overall coding performance by using an efficient encoding scheme for the scaling parameters. Our experiments have shown an average bitrate saving of 47% on the HDR enhancement layer, compared to previous local ILP methods.
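
    The template-based ILP idea can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact formulation: the polynomial curve model, the template shape and the function names are assumptions. The key property shown is that the curve is fitted only on already-decoded pixel pairs, so the decoder can repeat the fit and no curve parameters need to be transmitted; only the optional scale/offset refinement would be signalled.

        import numpy as np

        def predict_hdr_block(ldr_block, template_ldr, template_hdr, order=2):
            # Fit a non-linear (polynomial) inverse tone curve on the template:
            # causal, already-decoded LDR/HDR pixel pairs around the block.
            coeffs = np.polyfit(template_ldr.ravel(),
                                template_hdr.ravel(), deg=order)
            # Apply the fitted inverse tone curve to the current LDR block.
            return np.polyval(coeffs, ldr_block.astype(np.float64))

        def linear_adjustment(pred, target):
            # Encoder-side refinement: fit a scale/offset between the initial
            # prediction and the true HDR block; unlike the tone curve, these
            # two parameters would be coded and sent to the decoder.
            a, b = np.polyfit(pred.ravel(), target.ravel(), deg=1)
            return a * pred + b, (a, b)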

    A reduced reference video quality assessment method for provision as a service over SDN/NFV-enabled networks

    The proliferation of multimedia applications and services has generated a noteworthy upsurge in network traffic carrying video content and has created the need for trustworthy service quality assessment methods. Network Function Virtualization (NFV), Software Defined Networking (SDN) and 5G mobile networks equipped with small cells currently hold a predominant position among the technological trends in telecommunication networks. Video Quality Assessment (VQA) methods are a very useful tool for both content providers and network operators: they help in understanding how users perceive quality, in studying the feasibility of potential services, and in adapting the available network resources to satisfy user requirements.
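
    The reduced-reference principle behind such a method can be illustrated with a small Python sketch. This is a generic illustration, not the thesis's specific metric: the feature (mean gradient magnitude per frame) and the distance-to-score mapping are assumptions chosen for brevity. Only the compact feature vector, not the full reference video, travels with the stream.

        import numpy as np

        def rr_features(frames):
            # Compact per-frame feature: mean gradient magnitude, a rough
            # proxy for spatial detail. Sending only this small vector is
            # what makes the method "reduced reference".
            feats = []
            for f in frames:  # f: 2-D luma array
                gy, gx = np.gradient(f.astype(np.float64))
                feats.append(np.mean(np.hypot(gx, gy)))
            return np.asarray(feats)

        def rr_quality_score(ref_feats, received_frames):
            # Compare sender-side features with features recomputed on the
            # received (possibly degraded) video; a smaller distance maps
            # to a higher quality score in (0, 1].
            dist = np.abs(ref_feats - rr_features(received_frames)).mean()
            return 1.0 / (1.0 + dist)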

    Development of Some Spatial-domain Preprocessing and Post-processing Algorithms for Better 2-D Up-scaling

    Image super-resolution has been an area of great interest in recent years and is extensively used in applications like video streaming, multimedia, internet technologies, consumer electronics, and the display and printing industries. Image super-resolution is the process of increasing the resolution of a given image without losing its integrity. Its most common application is to provide a better visual effect after resizing a digital image for display or printing. One method of improving image resolution is 2-D interpolation. An up-scaled image should retain all the image details with a minimal degree of blurring for better visual quality. Many efficient 2-D interpolation schemes in the literature preserve image details well in the up-scaled images, particularly in regions with edges and fine details. Nevertheless, these existing interpolation schemes still introduce blurring in the up-scaled images due to high-frequency (HF) degradation during the up-sampling process. There is therefore sufficient scope to develop efficient but simple pre-processing, post-processing and composite schemes that effectively restore the HF content of up-scaled images for various online and off-line applications. The efficient and widely used Lanczos-3 interpolation is taken as the baseline for performance improvement through the incorporation of the proposed algorithms.

    The term pre-processing refers to processing the low-resolution input image prior to up-scaling. The pre-processing algorithms proposed in this thesis are: the Laplacian of Laplacian based global pre-processing (LLGP) scheme; hybrid global pre-processing (HGP); iterative Laplacian of Laplacian based global pre-processing (ILLGP); unsharp masking based pre-processing (UMP); iterative unsharp masking (IUM); and an error-based up-sampling (EU) scheme. LLGP, HGP and ILLGP are spatial-domain pre-processing algorithms based on 4th-, 6th- and 8th-order derivatives, respectively, to alleviate non-uniform blurring in up-scaled images. They obtain the high-frequency extracts of an image by employing higher-order derivatives and perform precise sharpening on a low-resolution image to alleviate the blurring in its 2-D up-sampled counterpart. In the UMP scheme, a blurred version of a low-resolution image is used for HF extraction from the original version through image subtraction; a weighted version of the HF extract is superimposed on the original image to produce a sharpened image prior to up-scaling, countering blurring effectively. IUM uses many iterations to generate an unsharp mask that contains very-high-frequency (VHF) components; the VHF extract is the result of signal decomposition into sub-bands using the concept of an analysis filter bank. Since the degradation of VHF components is greatest, restoring such components produces much better restoration performance. EU is another pre-processing scheme in which the HF degradation due to up-scaling is extracted as a prediction error containing the lost high-frequency components; when this error is superimposed on the low-resolution image prior to up-sampling, blurring is considerably reduced in the up-scaled images.

    The term post-processing refers to processing the high-resolution up-scaled image. The post-processing algorithms proposed in this thesis are: the local adaptive Laplacian (LAL); the fuzzy weighted Laplacian (FWL); and a Legendre functional link artificial neural network (LFLANN). LAL is a non-fuzzy, locally based scheme: local regions of an up-scaled image with high variance are sharpened more than regions with moderate or low variance by employing a local adaptive Laplacian kernel, whose weights vary with the normalized local variance so as to apply stronger HF enhancement to high-variance regions and effectively counter the non-uniform blurring. The FWL post-processing scheme introduces a higher degree of non-linearity to further improve on LAL: being a fuzzy mapping scheme, it is highly non-linear and resolves the blurring problem more effectively than LAL, which employs a linear mapping. The LFLANN-based post-processing scheme minimizes a cost function so as to reduce the blurring in a 2-D up-scaled image; Legendre polynomials are used for functional expansion of the input pattern vector and provide a high degree of non-linearity, so the requirement for multiple layers can be replaced by a single-layer LFLANN architecture that reduces the cost function effectively for better restoration performance. With its single-layer architecture, it has reduced computational complexity and hence is suitable for various real-time applications.

    The stand-alone pre-processing and post-processing schemes can be improved further by combining them into composite schemes. Two spatial-domain composite schemes, CS-I and CS-II, are proposed to tackle non-uniform blurring in an up-scaled image. CS-I combines the global iterative Laplacian (GIL) pre-processing scheme with the LAL post-processing scheme. CS-II, a highly non-linear composite scheme, combines ILLGP with the fuzzy weighted Laplacian post-processing scheme for better performance than the stand-alone schemes. Finally, it is observed that the proposed algorithms ILLGP, IUM, FWL, LFLANN and CS-II are the best algorithms in their respective categories for effectively reducing blurring in up-scaled images.
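
    As a concrete illustration of one pre-processing/post-processing pair, the sketch below applies unsharp-masking pre-processing before Lanczos-3 up-scaling, followed by a variance-guided Laplacian sharpening pass. It is a minimal Python approximation of the UMP and LAL ideas described above: the Gaussian blur, window size and weights are illustrative assumptions, not the thesis's exact kernels.

        import numpy as np
        from PIL import Image
        from scipy.ndimage import gaussian_filter, laplace, uniform_filter

        def ump_then_upscale(lr, factor=2, weight=0.6, sigma=1.0):
            # UMP: extract the HF component by subtracting a blurred copy,
            # then superimpose a weighted version before interpolation so
            # less high-frequency detail is lost during up-sampling.
            lr = lr.astype(np.float64)
            hf = lr - gaussian_filter(lr, sigma)
            sharp = np.clip(lr + weight * hf, 0, 255).astype(np.uint8)
            h, w = sharp.shape
            img = Image.fromarray(sharp)
            # PIL's LANCZOS filter is a Lanczos-3 kernel.
            return np.asarray(img.resize((w * factor, h * factor),
                                         Image.LANCZOS))

        def lal_postprocess(up, k=0.8, win=5):
            # LAL: sharpen high-variance regions more than flat ones by
            # weighting a Laplacian with the normalized local variance.
            up = up.astype(np.float64)
            mean = uniform_filter(up, win)
            var = np.maximum(uniform_filter(up * up, win) - mean * mean, 0)
            wmap = var / (var.max() + 1e-12)
            return np.clip(up - k * wmap * laplace(up), 0, 255).astype(np.uint8)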

    Research on digital image watermark encryption based on hyperchaos

    The digital watermarking technique embeds meaningful information, hidden as one or more watermark images, into a host image, which is then known as a secret carrier. It is difficult for a hacker to extract or remove a hidden watermark from an image, and especially to crack a so-called digital watermark. Combining the digital watermarking technique with traditional image encryption techniques can greatly improve anti-hacking capability, which makes it a good method for preserving the integrity of the original image. The research works contained in this thesis include: (1) a literature review, which found the hyperchaotic watermarking technique to be comparatively advantageous, so it became the main subject of this programme; (2) the theoretical foundation of watermarking technologies, including the human visual system (HVS), colour space transforms, the discrete wavelet transform (DWT), the main watermark embedding algorithms, and the mainstream methods for improving watermark robustness and for evaluating watermark embedding performance; (3) the devised hyperchaotic scrambling technique, which has been applied to colour image watermarks and helps to improve image encryption and anti-cracking capabilities. The experiments in this research demonstrate the robustness and other advantages of the proposed technique. This thesis focuses on combining chaotic scrambling and wavelet watermark embedding to achieve a hyperchaotic digital watermark for encrypting digital products, with the human visual system (HVS) and other factors taken into account. This research is of significant importance and has industrial application value.
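
    The chaotic scrambling component can be illustrated with a short Python sketch. The thesis uses a hyperchaotic system; for brevity this sketch substitutes a one-dimensional logistic map, which is enough to show the mechanism: a key-seeded chaotic orbit defines a pixel permutation that only the key can reproduce. The parameters and function names are illustrative assumptions.

        import numpy as np

        def logistic_sequence(n, x0=0.61, r=3.99):
            # Iterate x <- r*x*(1-x); with r near 4 the orbit is chaotic
            # and extremely sensitive to the key (x0, r).
            xs = np.empty(n)
            x = x0
            for i in range(n):
                x = r * x * (1.0 - x)
                xs[i] = x
            return xs

        def scramble(img, key=(0.61, 3.99)):
            # Permute pixel positions by the rank order of the chaotic
            # sequence; the same key regenerates the same permutation.
            flat = img.reshape(-1)
            perm = np.argsort(logistic_sequence(flat.size, *key))
            return flat[perm].reshape(img.shape), perm

        def descramble(scrambled, perm):
            # Invert the permutation to recover the original image.
            flat = np.empty_like(scrambled.reshape(-1))
            flat[perm] = scrambled.reshape(-1)
            return flat.reshape(scrambled.shape)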

    Designing for adaptability in architecture

    The research is framed on the premise that designing buildings that can adapt, by accommodating change more easily and cost-effectively, provides an effective means to a desired end: a more sustainable built environment. In this context, adaptability can be viewed as a means to decrease the amount of new construction (reduce), (re)activate underused or vacant building stock (reuse) and enhance disassembly/deconstruction of components (reuse, recycle), prolonging the useful life of buildings (reduce, reuse, recycle). The aim of the research is to gain a holistic overview of the concept of adaptability in the construction industry and to provide an improved framework to design for, deploy and implement adaptability. An over-arching research question was posited to guide the inquiry: how can architects understand, communicate, design for and test the concept of adaptability in the context of the design process? The research followed Dubois and Gadde's (2002) systematic combining as an over-arching approach that continuously moves between the empirical world and theoretical models, allowing the co-evolution of data collection and theory from the beginning as part of a non-linear process, with the objective of matching theory with reality. An initial framework was abducted from a preliminary collection of data, from which a set of mixed research methods was deployed to explore adaptability (interviews, building case studies, dependency structure matrices, practitioner surveys and a workshop). Emergent from the data is an expanded and revised theory on designing for adaptability consisting of concepts, models and propositions. The models illustrate many of the causal links between the physical design structure of the building (e.g. plan depth, storey height) and the soft contingencies of a messy design/construction/occupation process (e.g. procurement route, funding methods, stakeholder mindsets). In an effort to enhance building adaptability, the abducted propositions suggest a shift in the way the industry values buildings and conducts aspects of the design process, and in how designers approach designing for adaptability.