
    Target Detection through Robust Motion Segmentation and Tracking Restrictions in Aerial FLIR images

    An efficient automatic moving target detection and tracking system for airborne forward-looking infrared (FLIR) imagery is presented in this paper. Due to camera ego-motion, these detection and tracking tasks are challenging problems. Moreover, previously proposed techniques are not suitable for aerial images, as the predominant regions are non-textured. The proposed system efficiently estimates both the camera motion and the target motion by means of an accurate motion vector field computation and a robust motion parameter estimation technique. This information allows each target to be accurately segmented and tracked with ego-motion compensation. Verification of tracking restrictions helps detect true targets while significantly reducing the false alarm rate. Excellent results have been obtained on real FLIR sequences.
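    The abstract does not spell out the estimator itself, so the following Python sketch shows only one plausible form of the robust motion-parameter estimation step: a 2-D affine ego-motion model fitted to the block motion vector field with a RANSAC-style loop that discards vectors belonging to independently moving targets. The function name, the affine model choice and the thresholds are illustrative assumptions, not the paper's method.

        import numpy as np

        def estimate_global_affine(positions, vectors, n_iter=100, inlier_tol=1.0, seed=0):
            # Robustly fit the camera-induced (ego) motion as a 2-D affine model
            # mapping block centres to their displaced positions; a RANSAC-style
            # loop rejects vectors that belong to independently moving targets.
            rng = np.random.default_rng(seed)
            pos = np.asarray(positions, dtype=float)          # (N, 2) block centres
            dst = pos + np.asarray(vectors, dtype=float)      # centres displaced by their vectors
            A = np.hstack([pos, np.ones((len(pos), 1))])      # [x, y, 1] design matrix
            best_inliers = np.zeros(len(pos), dtype=bool)
            for _ in range(n_iter):
                idx = rng.choice(len(pos), size=3, replace=False)
                M, *_ = np.linalg.lstsq(A[idx], dst[idx], rcond=None)   # 3x2 affine matrix
                err = np.linalg.norm(A @ M - dst, axis=1)
                inliers = err < inlier_tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            if best_inliers.sum() < 3:          # degenerate case: keep everything
                best_inliers[:] = True
            M, *_ = np.linalg.lstsq(A[best_inliers], dst[best_inliers], rcond=None)
            return M, best_inliers              # affine parameters and the background mask

    The inlier mask would then mark background blocks whose motion can be compensated before the targets are segmented and tracked.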

    Review And Comparative Study Of Motion Estimation Techniques To Reduce Complexity

    ABSTRACT: Block matching motion estimation is a key component in video compression, and it is also responsible for much of the computational complexity; the motion estimation process has become a bottleneck in many video applications. Typical applications include HDTV, multimedia communications, video conferencing, etc. Motion estimation is useful for estimating the motion of any object. It has conventionally been used in video encoding, but nowadays researchers from fields other than video encoding are turning to motion estimation to solve various real-life problems in their respective fields. In this paper, we present a review of block-matching-based motion estimation algorithms and reduced-complexity motion estimation techniques, together with a comparative study across the different algorithms. The aim of this study is also to give the reader a feel for the relative performance of the algorithms, with particular attention to the important trade-off between computational complexity, prediction quality, result quality and suitability for various applications. Keywords: Fixed size block motion estimation (FSBME), Block-based motion estimation (BMME), Peak Signal-to-Noise Ratio (PSNR), Hybrid block matching algorithm (HBMA).

    I. INTRODUCTION: Motion compensated transform coding forms the basis of the existing video compression standards H.261/H.262 and MPEG-1/MPEG-2, where the compression algorithm tries to exploit temporal and spatial redundancies by using some form of motion compensation followed by transform coding, respectively. The key step in removing temporal redundancy is motion estimation, where a motion vector is predicted between the current frame and a reference frame. Following motion estimation, a motion compensation stage is applied to obtain the residual image, i.e. the pixel differences between the current frame and the reference frame. This residual is later compressed using transform coding or a combination of transform and entropy coding. The above video compression standards employ block motion estimation techniques. The main advantages of FSBME (fixed size block motion estimation) are the simplicity of the algorithm and the fact that no segmentation information needs to be transmitted.

    In block motion compensated video coding, image frames are first divided into square blocks of fixed size. The next step is to apply a three-step procedure consisting of motion detection, motion estimation and motion compensation. Motion detection classifies blocks as moving or non-moving based on a predefined distance or similarity measure, usually the minimum mean square error (MSE) criterion or the minimum sum of absolute differences (SAD) criterion. The output of the motion estimation algorithm comprises the motion vector for each block and the pixel value differences between the blocks in the current frame and the "matched" blocks in the reference frame. We call this difference signal the motion compensation error, or simply the block error. Many techniques have been proposed for motion estimation for video compression so far. All of them pursue one or more of three aims: (1) reducing computational complexity, (2) representing true motion (providing good quality), and (3) reducing the bit rate (achieving a high compression ratio).
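    As a concrete illustration of the full-search baseline that reduced-complexity algorithms try to beat, the Python sketch below implements exhaustive fixed-size block matching with the SAD criterion. The function names, the 16-pixel block size and the ±8-pixel search range are illustrative choices, not values taken from the paper.

        import numpy as np

        def sad(block_a, block_b):
            # Sum of absolute differences between two equally sized blocks.
            return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

        def full_search(current, reference, block_size=16, search_range=8):
            # Exhaustive block matching: for each fixed-size block of the current
            # frame, test every displacement within +/- search_range pixels in the
            # reference frame and keep the one that minimizes the SAD criterion.
            h, w = current.shape
            vectors = np.zeros((h // block_size, w // block_size, 2), dtype=np.int32)
            for by in range(0, h - block_size + 1, block_size):
                for bx in range(0, w - block_size + 1, block_size):
                    block = current[by:by + block_size, bx:bx + block_size]
                    best_cost, best_mv = np.inf, (0, 0)
                    for dy in range(-search_range, search_range + 1):
                        for dx in range(-search_range, search_range + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block_size > h or x + block_size > w:
                                continue
                            cost = sad(block, reference[y:y + block_size, x:x + block_size])
                            if cost < best_cost:
                                best_cost, best_mv = cost, (dy, dx)
                    vectors[by // block_size, bx // block_size] = best_mv
            return vectors

    Reduced-complexity algorithms such as the three-step search examine far fewer candidate displacements per block, trading a small loss in prediction quality for a large reduction in computation.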

    A noise detection, noise-motion separation and a cancer recognition theory and algorithm

    In this thesis we describe a noise detection and a motion-noise separation algorithm, as well as the stochastic properties of the noise. For pixels subject to the first type of noise, the difference between corresponding pixels of two frames has a mean vector equal to (0, 0, 0) and a variance-covariance matrix with relatively small variances for the (R, G, B) difference values. The other type of noise results from a disturbance of the light equilibrium due to motion in neighboring or nearby pixels; for this type of noise, the mean of the difference is non-zero. Every pixel not included in either type of noise is part of the motion set between the two frames. The pixels are organized in macroblocks, so motion estimation and motion compensation methods are applied first to macroblocks containing pixels with motion, and the difference between the corresponding macroblocks of the two frames is obtained subsequently. This thesis furthermore describes a cancer recognition algorithm for ultrasound images. (Abstract shortened by UMI.)
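    As a rough illustration of the decision described above (not the thesis' actual decision rule), the Python sketch below tests the per-macroblock (R, G, B) difference statistics against the stated model: a near-zero mean vector together with small variances indicates the first noise type, and everything else is passed on as candidate motion. The thresholds and the function name are invented for the example.

        import numpy as np

        def classify_difference(block_a, block_b, mean_tol=2.0, var_tol=25.0):
            # block_a, block_b: corresponding (H, W, 3) RGB macroblocks of two frames.
            # The thresholds are illustrative; the thesis derives its rule from the
            # stochastic properties of the noise.
            diff = (block_a.astype(np.float64) - block_b.astype(np.float64)).reshape(-1, 3)
            mean_vec = diff.mean(axis=0)     # ~(0, 0, 0) for the first noise type
            variances = diff.var(axis=0)     # relatively small for the first noise type
            if np.all(np.abs(mean_vec) < mean_tol) and np.all(variances < var_tol):
                return "zero-mean noise"
            # A non-zero mean may still be noise caused by a disturbed light
            # equilibrium near moving pixels; separating that case from true motion
            # needs the neighborhood analysis described in the thesis.
            return "candidate motion"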

    Estimation of the rigid-body motion from three-dimensional images using a generalized center-of-mass points approach

    We present an analytical method for the estimation of rigid-body motion in sets of three-dimensional (3-D) SPECT and PET slices. This method utilizes mathematically defined generalized center-of-mass points in images, requiring no segmentation. It can be applied to compensation of the rigid-body motion in both SPECT and PET, once a series of 3-D tomographic images is available. We generalized the formula for the center of mass to obtain a family of points comoving with the object's rigid-body motion. From the family of possible points we chose the best three points, which resulted in the minimum root-mean-square difference between images, as the generalized center-of-mass points for use in estimating motion. The estimated motion was used to sum the sets of tomographic images, or incorporated in the iterative reconstruction to correct for motion during reconstruction of the combined projection data. For comparison, the principal-axes method was also applied to estimate the rigid-body motion from the same tomographic images. To evaluate our method for different noise levels, we performed simulations with the MCAT phantom. We observed that, although noise degraded the motion-detection accuracy, our method helped in reducing the motion artifact both visually and quantitatively. We also acquired four sets of emission and transmission data of the Data Spectrum Anthropomorphic Phantom positioned at four different locations and/or orientations. From these we generated a composite acquisition simulating periodic phantom movements during acquisition. The simulated motion was calculated from the generalized center-of-mass points computed from the tomographic images reconstructed from the individual acquisitions. We determined that motion compensation greatly reduced the motion artifact. Finally, in a simulation with the gated MCAT phantom, an exaggerated rigid-body motion was applied to the end-systolic frame. The motion was estimated from the end-diastolic and end-systolic images, and used to sum them into a summed image without obvious artifact. Compared to the principal-axes method, in two of the three comparisons with anthropomorphic phantom data our method estimated the motion in closer agreement with the Polaris system, while the principal-axes method gave a more accurate estimation of motion in most cases for the MCAT simulations. As an image-driven approach, our method assumes angularly complete data sets for each state of motion. We expect this method to be applied in the correction of respiratory motion in respiratory-gated SPECT, and of respiratory or other rigid-body motion in PET. © 2006 IEEE
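    The abstract does not give the exact weighting family or the paper's analytical solution, so the Python sketch below only illustrates the general idea: compute intensity-weighted "generalized" center-of-mass points (here assuming weights that are powers of the voxel intensity) and recover the rigid-body rotation and translation from three such corresponding points with a standard least-squares (SVD) fit, used as a stand-in for the paper's estimator.

        import numpy as np

        def generalized_center_of_mass(volume, power):
            # Weight each voxel position by its intensity raised to a given power
            # (power = 1 gives the ordinary center of mass). The specific weighting
            # family is an assumption for this example.
            w = (np.clip(volume, 0, None) ** power).ravel()
            coords = np.indices(volume.shape).reshape(3, -1).T    # (N, 3) voxel positions
            return (coords * w[:, None]).sum(axis=0) / w.sum()

        def rigid_motion_from_points(p_ref, p_mov):
            # Least-squares rigid transform (rotation R, translation t) mapping the
            # reference points onto the moved points, via the standard SVD solution.
            p_ref, p_mov = np.asarray(p_ref, float), np.asarray(p_mov, float)
            c_ref, c_mov = p_ref.mean(axis=0), p_mov.mean(axis=0)
            H = (p_ref - c_ref).T @ (p_mov - c_mov)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against a reflection solution
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = c_mov - R @ c_ref
            return R, t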

    Video frame interpolation based on bilateral motion estimation

    The topic of this final paper is video frame interpolation based on bilateral motion estimation in the process of frame rate upscaling. The paper describes methods of motion interpolation using motion estimation and motion compensation, as well as the search algorithms used for adequate motion detection and for obtaining motion vectors. It explains the key differences between bilateral and unilateral interpolation. As the practical part of this final thesis, code for frame interpolation with bilateral motion estimation was written in Matlab. The quality of the frames interpolated by the implemented method is measured with the PSNR metric on 5 video sequences with different content. These results are compared with the results of interpolation produced by the ffmpeg software using two methods, frame averaging and MCI-EPZS. The results show that the success of motion interpolation depends on the content of the video sequences, but also on the methods for image quality improvement that are applied after the interpolation itself.
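    A minimal Python sketch of the two ingredients named above, bilateral block matching and the PSNR score, is given below; the thesis implementation itself is in Matlab, and the block size, search range and function names here are illustrative assumptions.

        import numpy as np

        def bilateral_block_vector(prev_f, next_f, by, bx, bs=16, search=8):
            # Bilateral search: each candidate vector is applied symmetrically to the
            # previous and next frames, so the two matched blocks stay centred on the
            # block position of the frame being interpolated between them.
            h, w = prev_f.shape
            best_cost, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by - dy, bx - dx          # block in the previous frame
                    y1, x1 = by + dy, bx + dx          # mirrored block in the next frame
                    if min(y0, x0, y1, x1) < 0 or max(y0, y1) + bs > h or max(x0, x1) + bs > w:
                        continue
                    a = prev_f[y0:y0 + bs, x0:x0 + bs].astype(np.int32)
                    b = next_f[y1:y1 + bs, x1:x1 + bs].astype(np.int32)
                    cost = np.abs(a - b).sum()         # SAD matching cost
                    if cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
            return best_mv    # the interpolated block averages the two matched blocks

        def psnr(a, b, peak=255.0):
            # PSNR metric used to score interpolated frames against the originals.
            mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
            return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)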

    Motion Correction and Pharmacokinetic Analysis in Dynamic Positron Emission Tomography

    This thesis will focus on two important aspects of dynamic Positron Emission Tomography (PET): (i) motion compensation, and (ii) pharmacokinetic analysis (also called parametric imaging) of dynamic PET images. Both are required to enable fully quantitative PET imaging, which is increasingly finding applications in the clinic.

    Motion compensation in dynamic brain PET imaging: Dynamic PET images are degraded by inter-frame and intra-frame motion artifacts that can affect the quantitative and qualitative analysis of acquired PET data. We propose a Generalized Inter-frame and Intra-frame Motion Correction (GIIMC) algorithm that unifies in one framework the inter-frame motion correction capability of Multiple Acquisition Frames and the intra-frame motion correction feature of maximum-likelihood expectation-maximization (MLEM)-type deconvolution methods. GIIMC employs a fairly simple but new approach of using a time-weighted average of attenuation sinograms to reconstruct dynamic frames. Extensive validation studies show that the GIIMC algorithm outperforms conventional techniques, producing images with superior quality and quantitative accuracy.

    Parametric myocardial perfusion PET imaging: We propose a novel framework of robust kinetic parameter estimation applied to absolute flow quantification in dynamic PET imaging. Kinetic parameter estimation is formulated as a nonlinear least squares problem with spatial constraints, where the spatial constraints are computed from a physiologically driven clustering of the dynamic images and are used to reduce noise contamination. The proposed framework is shown to improve the quantitative accuracy of myocardial perfusion (MP) PET imaging and, in turn, has the long-term potential to enhance the capabilities of MP PET in the detection, staging and management of coronary artery disease.
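    The abstract does not specify the kinetic model or the exact form of the spatial constraint, so the Python sketch below assumes a one-tissue compartment model and a simple quadratic penalty pulling each voxel's parameters toward the mean of its physiological cluster; it is meant only to show how such a constrained nonlinear least-squares fit can be set up, not to reproduce the thesis' framework.

        import numpy as np
        from scipy.optimize import least_squares

        def one_tissue_model(params, t, c_plasma):
            # Assumed one-tissue compartment model: C_t(t) = K1 * exp(-k2 t) (*) C_p(t),
            # with the convolution approximated on a uniform time grid starting at zero.
            k1, k2 = params
            dt = np.gradient(t)
            kernel = np.exp(-k2 * t)
            c_tissue = np.array([np.sum(c_plasma[:i + 1] * kernel[:i + 1][::-1] * dt[:i + 1])
                                 for i in range(len(t))])
            return k1 * c_tissue

        def fit_voxel(tac, t, c_plasma, cluster_mean, weight=0.1, x0=(0.5, 0.1)):
            # Nonlinear least squares with a quadratic spatial constraint that pulls
            # the voxel's kinetic parameters toward the mean of its physiological
            # cluster to reduce noise contamination (weight is an assumed knob).
            def residuals(params):
                fit_res = one_tissue_model(params, t, c_plasma) - tac
                prior_res = np.sqrt(weight) * (np.asarray(params) - np.asarray(cluster_mean))
                return np.concatenate([fit_res, prior_res])
            return least_squares(residuals, x0, bounds=([0, 0], [5, 5])).x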

    Refined estimation of time-varying baseline errors in airborne SAR interferometry

    The processing of airborne synthetic aperture radar (SAR) data requires a precise compensation of the deviations of the platform movement from a straight line. This is usually carried out by recording the trajectory with a high-precision navigation system and correcting the deviations during SAR focusing. However, due to the limited accuracy of current navigation systems, residual motion errors persist in the images. Such residual motion errors are mainly noticeable in repeat-pass systems, where they cause time-varying baseline errors that are visible as artefacts in the derived phase maps. In this letter, a refined method for the estimation of time-varying baseline errors is presented. An improved multisquint processing approach is used to obtain robust estimates of higher-order baseline errors over the entire scene, even if parts of the scene are heavily decorrelated. In a subsequent step, the proposed method incorporates an external digital elevation model to detect the linear and constant components of the baseline error along azimuth. Calibration targets in the scene are not necessary.
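    Purely as an illustration of the last step described above, and not the letter's actual estimator, the Python sketch below fits the constant and linear baseline-error components along azimuth once a residual baseline estimate has been derived from the comparison with the external digital elevation model; the input array and the polynomial degree are assumptions.

        import numpy as np

        def fit_baseline_error_trend(azimuth, residual_baseline, degree=1):
            # residual_baseline: assumed per-azimuth residual baseline error obtained
            # after multisquint processing and comparison with the DEM-predicted phase.
            # A degree-1 polynomial captures the constant + linear trend along azimuth.
            coeffs = np.polyfit(azimuth, residual_baseline, degree)
            return np.poly1d(coeffs)    # callable trend model over azimuth position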

    Aerial moving target detection based on motion vector field analysis

    An efficient automatic detection strategy for aerial moving targets in airborne forward-looking infrared (FLIR) imagery is presented in this paper. Airborne cameras induce a global motion over all objects in the image, which invalidates motion-based segmentation techniques designed for static cameras. To overcome this drawback, previous works compensate the camera ego-motion. However, that approach depends heavily on the quality of the ego-motion compensation and tends towards over-detection. In this work, the proposed strategy estimates a robust motion vector field, free of erroneous vectors. Motion vectors are classified into different independent moving objects, corresponding to background objects and aerial targets. The aerial targets are directly segmented using their associated motion vectors. This detection strategy has a low computational cost, since no compensation process or motion-based technique needs to be applied. Excellent results have been obtained on real FLIR sequences.
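    The abstract does not describe the classification rule, so the Python sketch below is only a crude stand-in: it takes the camera-induced background motion as the robust median of the motion vector field and flags blocks whose vectors deviate strongly from it as candidate aerial targets. A full implementation would cluster the vectors into multiple independent moving objects, as the paper describes.

        import numpy as np

        def split_background_and_targets(vectors, positions, tol=3.0):
            # vectors: (N, 2) block motion vectors; positions: (N, 2) block centres.
            # The median vector approximates the dominant, camera-induced motion;
            # blocks whose vectors deviate strongly from it are candidate targets.
            vectors = np.asarray(vectors, dtype=np.float64)
            background = np.median(vectors, axis=0)
            deviation = np.linalg.norm(vectors - background, axis=1)
            spread = np.median(np.abs(deviation - np.median(deviation))) + 1e-6
            target_mask = deviation > np.median(deviation) + tol * spread
            return background, np.asarray(positions)[target_mask]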