11 research outputs found

    Adaptive-Rate Sparse Signal Reconstruction With Application in Compressive Foreground Subtraction

    Get PDF
    We propose and analyze an online algorithm for reconstructing a sequence of signals from a limited number of linear measurements. The signals are assumed sparse, with unknown support, and evolve over time according to a generic nonlinear dynamical model. Our algorithm, based on recent theoretical results for ℓ1-ℓ1 minimization, is recursive and computes the number of measurements to be taken at each time on-the-fly. As an example, we apply the algorithm to compressive video background subtraction, a problem that can be stated as follows: given a set of measurements of a sequence of images with a static background, simultaneously reconstruct each image while separating its foreground from the background. The performance of our method is illustrated on sequences of real images: we observe that it allows a dramatic reduction in the number of measurements with respect to state-of-the-art compressive background subtraction schemes. Index Terms—State estimation, compressive video, background subtraction, sparsity, ℓ1 minimization, motion estimation
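The ℓ1-ℓ1 minimization underlying the method, min ‖x‖₁ + β‖x − w‖₁ subject to Ax = y, where w is the side information (e.g. the previous reconstructed foreground), can be cast as a linear program. The sketch below is purely illustrative and not the authors' implementation; the problem sizes, β = 1, and the use of scipy's `linprog` are my assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def l1_l1_reconstruct(A, y, w, beta=1.0):
    """Solve  min ||x||_1 + beta*||x - w||_1  s.t.  A x = y
    as a linear program in the stacked variable [x, u, v],
    where u >= |x| and v >= |x - w|."""
    m, n = A.shape
    I, Z = np.eye(n), np.zeros((n, n))
    # Inequalities encoding the absolute values:
    #   x - u <= 0,  -x - u <= 0,  x - v <= w,  -x - v <= -w
    A_ub = np.block([[I, -I, Z], [-I, -I, Z], [I, Z, -I], [-I, Z, -I]])
    b_ub = np.concatenate([np.zeros(2 * n), w, -w])
    A_eq = np.hstack([A, np.zeros((m, 2 * n))])
    c = np.concatenate([np.zeros(n), np.ones(n), beta * np.ones(n)])
    bounds = [(None, None)] * n + [(0, None)] * (2 * n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n]

# Toy example: sparse signal whose side information w is a close estimate.
rng = np.random.default_rng(0)
n, m = 20, 10
x_true = np.zeros(n); x_true[[2, 7, 13]] = [1.0, -2.0, 0.5]
w = x_true + 0.01 * rng.standard_normal(n)   # good prior estimate
A = rng.standard_normal((m, n))
y = A @ x_true
x_hat = l1_l1_reconstruct(A, y, w)
```

With good side information, far fewer measurements m than the ambient dimension n suffice, which is exactly the lever the adaptive-rate scheme exploits.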

    Adaptively Weighted Vector-Median Filters For Motion-Fields Smoothing

    No full text
    In the field of video coding, recent issues of backward prediction and standards conversion have drawn increasing attention to techniques for effective estimation of the true interframe motion. In this paper the problem of restoring motion vector fields computed by a standard block matching algorithm is addressed. The restoration must be carried out carefully, exploiting both the spatial correlation of the vector field and the significance of the obtained vectors as measures of the reliability of the previous estimation step. In this paper a novel approach meeting both of the above requirements is presented: based on the theory of vector-median filters, an adaptive scheme is developed and results are discussed. 1. INTRODUCTION Motion estimation and compensation is performed in order to reduce the redundancy due to inter-frame correlation [1]. However, in a system featuring a high-compression capability, the coding of the displacement vector field should not be…
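The vector-median filter at the core of this approach picks, from a neighbourhood of motion vectors, the one minimising the summed distances to all the others. A minimal sketch (the adaptive weighting of the paper is omitted; neighbourhood contents are invented for illustration):

```python
import numpy as np

def vector_median(vectors):
    """Vector-median filter: return the vector from the set that
    minimises the sum of Euclidean distances to all other vectors.
    Unlike a component-wise median, the output is always one of
    the input motion vectors."""
    V = np.asarray(vectors, dtype=float)
    # Pairwise distances, summed per candidate vector.
    d = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1).sum(axis=1)
    return V[np.argmin(d)]

# A 3x3 neighbourhood of motion vectors containing one outlier.
neigh = [(1, 0), (1, 0), (1, 1), (0, 1), (9, -7),
         (1, 0), (1, 1), (0, 0), (1, 0)]
print(vector_median(neigh))  # the outlier (9, -7) is rejected
```

Because the output is drawn from the input set, the filter smooths the field without inventing vectors that point at no real match.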

    MAP based motion field refinement methods for motion-compensated frame interpolation

    Get PDF
    Ph.D. dissertation, Department of Electrical and Computer Engineering, Graduate School of Seoul National University, February 2014 (advisor: Taejeong Kim). In this dissertation, maximum a posteriori probability (MAP) based motion refinement methods are proposed for block-based motion-compensated frame interpolation (MCFI). The first method, called the single hypothesis Bayesian approach (SHBA), aims to estimate the true MVF of a video frame from its observed MVF, the result of a block-based motion estimation (BME), by maximizing the posterior probability of the true MVF. For the estimation, the observed MVF is assumed to be a version of the true MVF degraded by locally stationary additive Gaussian noise (AGN), so the variance of the noise represents the unreliability of the observed MV. The noise variance is estimated directly from the observation vector and its selected neighbors. The prior distribution of the true MVF is designed to rely on the distances between each MV and its neighbors and to properly smooth false MVs in the observation. The second algorithm, called the multiple hypotheses Bayesian approach (MHBA), estimates the true MVF of a video frame from multiple observations by maximizing the posterior probability of the true MVF. The multiple observations, the results of a BME incorporating blocks of different sizes for matching, are assumed to be versions of the true MVF degraded by locally stationary AGN. The noise variances of the observations are first estimated independently and then adaptively adjusted by block-matching errors in order to solve the motion boundary problem. Finally, a method called the single hypothesis Bayesian approach in a bidirectional framework (SHBA-BF) is proposed, which simultaneously estimates the true forward and backward MVFs of two consecutive frames from the observed forward and backward MVFs. The observed MVFs are assumed to be versions of the corresponding true MVFs degraded by locally stationary AGN.
The true forward and backward MVFs are assumed to follow the proposed joint prior distribution, which is designed to rely adaptively not only on the resemblance between spatially neighboring MVs but also on the resemblance between each MV and its dual MV, so that the proposed simultaneous estimation can fully exploit the duality of the MVF. Experimental results show that the proposed algorithms achieve better or comparable performance compared to state-of-the-art BME algorithms at much lower computational complexity.
Contents:
1 Introduction
  1.1 Motion-Compensated Frame Interpolation
  1.2 Previous Works
    1.2.1 Exploit spatio-temporal correlation during ME
    1.2.2 Utilize multiple block sizes for matching in ME
    1.2.3 Correct false motion vectors in given MVFs
  1.3 Motivation
2 Single Hypothesis Bayesian Approach
  2.1 Problem Formulation
    2.1.1 Proposed observation likelihood
    2.1.2 New prior distribution for true motion vector field
  2.2 Estimation of AGN Variance
    2.2.1 Proposed covariance matrix estimation method
    2.2.2 Performance of the proposed reliability measure
  2.3 Solution to MAP
  2.4 Relation to Previous Works
  2.5 Properties of Proposed Prior Distribution
3 Multiple Hypotheses Bayesian Approach
  3.1 Problem Formulation
    3.1.1 Proposed observation likelihood
    3.1.2 Prior distribution of true motion vector field
  3.2 MAP Solution
  3.3 Adaptive Adjustment of Estimated Noise Variances
4 Single Hypothesis Bayesian Approach in a Bidirectional Framework
  4.1 Problem Formulation
    4.1.1 Observation likelihood
    4.1.2 Joint prior distribution of true motion vector fields
  4.2 MAP Solution
5 Experimental Results
  5.1 Experimental Settings
  5.2 Performance Evaluation
    5.2.1 Performance of SHBA
    5.2.2 Performance of MHBA
    5.2.3 Performance of SHBA-BF
6 Conclusion
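The dissertation's actual likelihood and prior are more elaborate (the prior depends nonlinearly on inter-vector distances), but the core idea of weighting an observed MV against its neighbours by its estimated noise variance can be illustrated with a fully Gaussian simplification, where the MAP estimate has a closed form as a precision-weighted average. Everything below is my simplified sketch, not the SHBA method itself:

```python
import numpy as np

def map_refine(v_obs, sigma2, neighbours, tau2):
    """Simplified MAP refinement of a single motion vector.

    Likelihood: v_obs  ~ N(v_true, sigma2 * I)   (AGN, local variance)
    Prior:      v_true ~ N(mean of neighbours, tau2 * I)

    With both Gaussian, the MAP estimate is a precision-weighted
    average: an unreliable observation (large sigma2) is pulled
    toward the neighbourhood mean, a reliable one is kept."""
    v_obs = np.asarray(v_obs, float)
    nb_mean = np.mean(neighbours, axis=0)
    w_obs, w_pri = 1.0 / sigma2, 1.0 / tau2
    return (w_obs * v_obs + w_pri * nb_mean) / (w_obs + w_pri)

# A reliable vector is barely changed...
print(map_refine((5, 0), sigma2=0.1, neighbours=[(4, 0), (6, 0)], tau2=1.0))
# ...while an unreliable outlier is smoothed toward its neighbours.
print(map_refine((20, -9), sigma2=10.0, neighbours=[(4, 0), (6, 0)], tau2=1.0))
```

This captures why estimating the AGN variance per vector matters: it is the knob that decides how strongly each observed MV is smoothed.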

    Motion estimation based frame rate conversion hardware designs

    Get PDF
    Frame Rate Up-Conversion (FRC) is the conversion of a lower-frame-rate video signal into a higher-frame-rate video signal. FRC algorithms using Motion Estimation (ME) obtain better-quality results. Among the block matching ME algorithms, Full Search (FS) achieves the best performance, since it searches all locations in a given search range. However, its computational complexity, especially for the recently available High Definition (HD) video formats, is very high. Therefore, in this thesis, we proposed new ME algorithms for real-time processing of HD video and designed efficient hardware architectures for implementing these ME algorithms. These algorithms perform very close to FS while searching far fewer locations than the FS algorithm. We implemented the proposed hardware architectures in VHDL and mapped them to a Xilinx FPGA. ME for FRC requires finding the true motion among consecutive frames. In order to find the true motion, a Vector Median Filter (VMF) is used to smooth the motion vector field obtained by block matching ME. However, VMFs are difficult to implement in real time due to their high computational complexity. Therefore, in this thesis, we proposed several techniques to reduce the computational complexity of VMFs by using a data-reuse methodology and by exploiting the spatial correlations in the vector field. In addition, we designed an efficient VMF hardware architecture including these computation reduction techniques. We implemented the proposed hardware architecture in Verilog and mapped it to a Xilinx FPGA. ME-based FRC requires interpolation of frames using the motion vectors found by ME. Frame interpolation algorithms also have high computational complexity. Therefore, in this thesis, we proposed a low-cost hardware architecture for real-time implementation of frame interpolation algorithms.
The proposed hardware architecture is reconfigurable and allows adaptive selection of the frame interpolation algorithm for each macroblock. We implemented the proposed hardware architecture in VHDL and mapped it to a low-cost Xilinx FPGA.
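The Full Search baseline the thesis compares against is simple to state in software: exhaustively test every candidate displacement in the search range and keep the one with minimum sum of absolute differences (SAD). A software sketch (frame sizes, block position and the sign convention for the returned vector are my choices, not the thesis hardware):

```python
import numpy as np

def full_search(cur, ref, bx, by, bsize=8, srange=4):
    """Full-search block matching: test every displacement in
    [-srange, srange]^2 and return the motion vector (dx, dy) into
    the reference frame with minimum SAD, together with that SAD."""
    h, w = ref.shape
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + bsize > h or x0 + bsize > w:
                continue  # candidate falls outside the reference frame
            cand = ref[y0:y0 + bsize, x0:x0 + bsize].astype(int)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv, best

# Synthetic check: shift a random frame so content moves by (dx=3, dy=-2);
# the recovered MV points back to the block's origin in the reference,
# i.e. it is the negative of the content motion.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, shift=(-2, 3), axis=(0, 1))
mv, sad = full_search(cur, ref, bx=12, by=12)
print(mv, sad)
```

The nested loop makes the complexity obvious: (2·srange+1)² candidate positions, each costing bsize² absolute differences, which is exactly what motivates the reduced-search algorithms and hardware of the thesis.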

    Research and developments of distributed video coding

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suitable for applications such as wireless/wired video sensor networks and mobile cameras, where the traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical realisation of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises how to improve the system's coding performance but neglects the huge complexity incurred at the decoder, which has a direct influence on the system output. The first stage of this research targets optimising the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed: optimising the input block size, the side information generation, the side information refinement process and the feedback channel. Transform-domain WZ video coding (TDWZ) has distinctly superior performance to plain PDWZ owing to the exploitation of the spatial direction during encoding. However, since there is no motion estimation at the encoder in WZ video coding, the temporal correlation is not exploited at the encoder in any current WZ video coding scheme. In the middle stage of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and the temporal direction and thus provide even higher coding performance. In the next step of this research, the performance of transform-domain Distributed Multiview Video Coding (DMVC) is also investigated.
In particular, three types of transform-domain DMVC framework are investigated: DMVC using TDWZ based on 2D DCT, DMVC using TDWZ based on 3D DCT, and residual DMVC using TDWZ based on 3D DCT. One of the important applications of the WZ coding principle is error resilience. There have been several attempts to apply WZ error-resilient coding to current video coding standards, e.g. H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises protection of the Region of Interest (ROI) area. Efficient bandwidth utilisation is achieved by the mutual efforts of WZ coding and sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding. First, it builds an efficient PDWZ with an optimised decoder. Second, it builds an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise an advanced transform-domain DMVC. Finally, it designs an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
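The appeal of the 3D DCT in TDWZ is energy compaction across time as well as space: when consecutive frames are highly correlated, almost all signal energy collapses into the low temporal-frequency coefficient planes. A small numerical illustration of that effect (using scipy's `dctn`; the toy "video" and sizes are my invention, not the thesis codec):

```python
import numpy as np
from scipy.fft import dctn, idctn

# A tiny 8-frame, 8x8 "video" whose frames differ only by small noise:
# strong temporal correlation for the 3D DCT to exploit.
rng = np.random.default_rng(2)
base = rng.standard_normal((8, 8))
video = np.stack([base + 0.01 * rng.standard_normal((8, 8)) for _ in range(8)])

coeffs = dctn(video, norm="ortho")          # 3D DCT over (t, y, x)
energy = np.square(coeffs)
frac_in_dc_plane = energy[0].sum() / energy.sum()
print(f"energy in temporal-DC plane: {frac_in_dc_plane:.3f}")

# The orthonormal transform inverts exactly.
assert np.allclose(idctn(coeffs, norm="ortho"), video)
```

With near-static content, well over 99% of the energy sits in the temporal-DC plane, so the temporal-AC planes can be coded very coarsely: this is the redundancy removal in the temporal direction that a 2D DCT per frame cannot provide.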

    Advanced distributed video coding techniques

    Get PDF
    EThOS - Electronic Theses Online Service. United Kingdom.

    New model of partial filtering in implementation of algorithms for edge detection and digital image segmentation

    Get PDF
    This dissertation is a contribution to digital image analysis and processing. The problems treated in the dissertation cover the areas of quality assessment, edge detection, restoration, cluster filtering, classification, super-resolution, filter design, and digital image filtering. For application in all the mentioned areas, a new method of partial digital image filtering, the mosaic method, was developed and is described in detail in the dissertation. An edge-detection model, the hybrid method, which forms an integral part of the mosaic method, is also presented. Quality-assessment parameters are analysed in detail, so the results of the dissertation are presented in an adequate way that is comparable with other works. For a precise assessment of filtering, a model for per-channel image similarity assessment, CSI, was developed. The results obtained in the dissertation were evaluated numerically using relevant parameters for assessing the quality of multimedia signals, such as PSNR, MSE, SNR, entropy, LoD, SSIM, MSSIM, DSSIM and CSI. Based on a detailed analysis of edge-detection algorithms, a hybrid edge-detection method is proposed as a further contribution of the dissertation. Using the mosaic method, digital image restoration was performed with various cluster filterings. Results are shown on images captured at low illumination levels, as well as on defocused and blurred images. Through adequate analysis and processing, segments were classified with respect to the level-of-detail parameter. A practical application was carried out on BI-RADS medical images. Digital image super-resolution was performed by segmentation and classification of segments within the mosaic method. By analysing the statistical values of pixel neighbourhoods, a model for estimating the concentration of Snow & Rain noise was proposed, and filters for Snow & Rain and Salt & Pepper noise were designed. The models described in the dissertation were tested using the latest versions of software solutions such as Matlab, VCDemo, CVIPTools, Gimp, ImageQualityMeasurement, NeatImagePro and SofAS.
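Among the quality metrics the dissertation reports, MSE and PSNR are the most basic and have standard definitions. A minimal sketch of both (the test image is my invention):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two images of equal shape."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference. Infinite for identical images."""
    m = mse(ref, test)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

clean = np.tile(np.arange(256, dtype=float), (16, 1))   # simple gradient image
noisy = np.clip(clean + np.random.default_rng(3).normal(0, 5, clean.shape), 0, 255)
print(f"MSE = {mse(clean, noisy):.1f}, PSNR = {psnr(clean, noisy):.1f} dB")
```

Gaussian noise with standard deviation 5 gives an MSE near 25 and hence a PSNR around 34 dB, which is the kind of scale on which filtering results such as those in the dissertation are usually compared.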

    Microarray image processing : a novel neural network framework

    Get PDF
    Due to the vast success of bioengineering techniques, a series of large-scale analysis tools has been developed to discover the functional organization of cells. Among them, cDNA microarray has emerged as a powerful technology that enables biologists to study thousands of genes simultaneously within an entire organism, and thus obtain a better understanding of the gene interaction and regulation mechanisms involved. Although microarray technology has been developed so as to offer high tolerances, there exists high signal irregularity across the surface of the microarray image. Imperfections in the microarray image generation process cause noise of many types, which contaminates the resulting image. These errors and noise propagate down through, and can significantly affect, all subsequent processing and analysis. Therefore, to realize the potential of this technology it is crucial to obtain high-quality image data that indeed reflects the underlying biology in the samples. One of the key steps in extracting information from a microarray image is segmentation: identifying which pixels within an image represent which gene. This area of spotted microarray image analysis has received relatively little attention compared to the advances in subsequent analysis stages, yet the lack of advanced image analysis, including segmentation, results in sub-optimal data being used in all downstream analysis methods. Although much research has recently been devoted to microarray image analysis and many methods have been proposed, some methods produce better results than others. In general, the most effective approaches require considerable run-time (processing) power to process an entire image.
Furthermore, there has been little progress on developing sufficiently fast yet efficient and effective algorithms for the segmentation of microarray images using a highly sophisticated framework such as Cellular Neural Networks (CNNs). It is, therefore, the aim of this thesis to investigate and develop novel methods for processing microarray images. The goal is to produce results that outperform the currently available approaches in terms of PSNR, k-means and ICC measurements. EThOS - Electronic Theses Online Service. Aleppo University, Syria; United Kingdom.
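The segmentation task described here, deciding which pixels of a spot patch are foreground, can be illustrated with a much simpler baseline than the CNN framework the thesis develops: a two-cluster intensity split (a 1D k-means on pixel values). This sketch and its synthetic spot are my own illustration, not the thesis method:

```python
import numpy as np

def segment_spot(patch, iters=10):
    """Two-cluster intensity segmentation of a microarray spot patch:
    alternately threshold at the midpoint of the two cluster means and
    re-estimate the means; pixels above the final threshold are foreground."""
    p = np.asarray(patch, float)
    lo, hi = p.min(), p.max()          # initial cluster centres
    for _ in range(iters):
        thr = (lo + hi) / 2.0
        fg, bg = p[p > thr], p[p <= thr]
        if fg.size == 0 or bg.size == 0:
            break
        lo, hi = bg.mean(), fg.mean()  # update centres
    return p > (lo + hi) / 2.0

# Synthetic spot: bright disc on a dim background with mild noise.
yy, xx = np.mgrid[:15, :15]
disc = (yy - 7) ** 2 + (xx - 7) ** 2 <= 16
patch = np.where(disc, 200.0, 30.0) + np.random.default_rng(4).normal(0, 5, (15, 15))
mask = segment_spot(patch)
print(mask.sum(), "foreground pixels")
```

On clean, well-separated spots this baseline already works; the hard cases motivating more sophisticated frameworks are irregular spots, bleed-over between neighbours, and the surface-wide signal irregularity mentioned above.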