
    Block Matching Algorithms for the Estimation of Motion in Image Sequences: Analysis

    Several video coding standards and techniques have been introduced for multimedia applications, most notably the H.26x series for video processing. These standards employ motion estimation to reduce the amount of data required to store or transmit video. Motion estimation is an inextricable part of video coding, as it removes the temporal redundancy between successive frames of a video sequence. This paper examines these motion estimation algorithms: their search procedures, complexity, advantages, and limitations. A survey of motion estimation algorithms is presented, covering the exhaustive full search, many fast block-based algorithms, and fast full-search block-based algorithms. An evaluation of up-to-date motion estimation algorithms, based on empirical results on several test video sequences, is presented as well.
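The exhaustive full search that such surveys take as their baseline can be sketched as follows. This is a generic illustration, not code from the paper: the name `full_search`, the sum-of-absolute-differences (SAD) cost, the 8x8 block size, and the +/-4-pixel search window are all assumptions.

```python
import numpy as np

def full_search(ref, cur, block_xy, block_size=8, search_range=4):
    """Exhaustive (full-search) block matching: find the displacement that
    minimizes the SAD between a block in the current frame and every
    candidate block in the reference frame's search window."""
    y, x = block_xy
    block = cur[y:y + block_size, x:x + block_size]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            ry, rx = y + dy, x + dx
            # Skip candidates that fall outside the reference frame.
            if ry < 0 or rx < 0 or ry + block_size > ref.shape[0] \
                    or rx + block_size > ref.shape[1]:
                continue
            cand = ref[ry:ry + block_size, rx:rx + block_size]
            sad = np.abs(block.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Fast algorithms trade this exhaustive scan for a pruned set of candidate positions, which is where their complexity savings (and occasional suboptimal matches) come from.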

    Two-Dimensional Gel Electrophoresis Image Registration Using Block-Matching Techniques and Deformation Models

    Block-matching techniques have been widely used in the task of estimating displacement in medical images, and they represent the best approach in scenes with deformable structures such as tissues, fluids, and gels. In this article, a new iterative block-matching technique (based on successive deformation, search, fitting, filtering, and interpolation stages) is proposed to measure elastic displacements in two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) images. The proposed technique uses different deformation models to correlate proteins in real 2D electrophoresis gel images, obtaining an accuracy of 96.6% and improving on the results obtained with other techniques. This technique represents a general solution that is easy to adapt to different 2D deformable cases and provides an experimental reference for block-matching algorithms.
    Funding: Galicia. Consellería de Economía e Industria; 10MDS014CT. Galicia. Consellería de Economía e Industria; 10SIN105004PR. Instituto de Salud Carlos III; PI13/0028.
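The filtering stage of an iterative block-matching pipeline is often a median filter over the field of block displacements, which rejects isolated mismatches while preserving the smooth deformation. A minimal generic sketch, not the authors' code; the name `median_filter_field` and the 3x3 window are assumptions.

```python
import numpy as np

def median_filter_field(field, k=3):
    """Median-filter a displacement field (H x W x 2, one 2-D vector per
    block) componentwise over a k x k neighbourhood, suppressing outlier
    block matches before the interpolation stage."""
    pad = k // 2
    H, W, _ = field.shape
    padded = np.pad(field, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.empty_like(field, dtype=float)
    for i in range(H):
        for j in range(W):
            win = padded[i:i + k, j:j + k].reshape(-1, 2)
            out[i, j] = np.median(win, axis=0)  # per-component median
    return out
```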

    Data Hiding in Digital Video

    With the rapid development of digital multimedia technologies, an old method called steganography has been revisited as a solution for data hiding applications such as digital watermarking and covert communication. Steganography is the art of secret communication using a cover signal (e.g., video, audio, or image), whereas the counter-technique of detecting the existence of such a channel, typically through a statistically trained classifier, is called steganalysis. State-of-the-art data hiding algorithms utilize features of the cover signal, such as Discrete Cosine Transform (DCT) coefficients, pixel values, and motion vectors, to convey the message to the receiver side. The goal of an embedding algorithm is to maximize the number of bits sent to the decoder side (embedding capacity) with maximum robustness against attacks, while keeping the perceptual and statistical distortions (security) low. Data hiding schemes are thus characterized by three conflicting requirements: security against steganalysis, robustness against channel-associated and/or intentional distortions, and capacity in terms of the embedded payload. Depending upon the application, it is the designer's task to find an optimum trade-off among them. The goal of this thesis is to develop a novel data hiding scheme that establishes a covert channel satisfying statistical and perceptual invisibility, with moderate rate capacity and robustness against steganalysis-based detection. The idea behind the proposed method is to alter Video Object (VO) trajectory coordinates, conveying the message to the receiver side by perturbing the centroid coordinates of the VO. First, the VO is selected by the user and tracked through the frames using a simple region-based search strategy and morphological operations.
    After the trajectory coordinates are obtained, the perturbation of the coordinates is implemented through a non-linear embedding function, such as a polar quantizer in which both the magnitude and phase of the motion are used. The perturbations made to the motion magnitude and phase are kept small to preserve the semantic meaning of the object motion trajectory. The proposed method is well suited to video sequences in which VOs have smooth motion trajectories. Examples can be found in sports videos, in which the ball is the focus of attention and exhibits various motion types, e.g., rolling on the ground, flying in the air, or being possessed by a player. Different sports video sequences have been tested using the proposed method. The experimental results show that the proposed method achieves both statistical and perceptual invisibility with moderate rate embedding capacity under an AWGN channel with varying noise variances. This achievement is important because the first step for both active and passive steganalysis is detecting the existence of a covert channel. This work makes multiple contributions to the field of data hiding. First, it is the first example of a data hiding method in which the trajectory of a VO is used. Second, it contributes towards improving steganographic security by providing new features: the coordinate location and the semantic meaning of the object.
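A polar quantizer of the kind described can be sketched with quantization-index-modulation on the motion magnitude: the magnitude is snapped to an even or odd multiple of a quantization step to carry one bit, while the phase is preserved, keeping the perturbation small. This is a hedged illustration, not the thesis implementation; the names `embed_bit`/`extract_bit` and the step size are assumptions.

```python
import math

def embed_bit(dx, dy, bit, step=0.5):
    """Embed one bit into a motion vector by quantizing its magnitude to a
    multiple of `step` whose parity encodes the bit; the phase is kept, so
    the direction of motion (and its semantic meaning) is preserved."""
    mag = math.hypot(dx, dy)
    phase = math.atan2(dy, dx)
    q = round(mag / step)
    if q % 2 != bit:
        q += 1  # move to the nearest multiple with the right parity
    mag_q = q * step
    return mag_q * math.cos(phase), mag_q * math.sin(phase)

def extract_bit(dx, dy, step=0.5):
    """Recover the bit from the parity of the quantized magnitude."""
    return round(math.hypot(dx, dy) / step) % 2
```

Because the magnitude moves by at most one quantization step, the embedding distortion is bounded, which is what keeps the perturbed trajectory perceptually and statistically close to the original.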

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen unprecedented expansion in recent years. The consumer can now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technologies are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but due to the analogue medium, each subsequent copy suffered an inherent loss in quality. This was a natural limit on the multiple copying of video material. With digital technology, this barrier disappears: it is possible to make as many copies as desired, without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust mark and ensure its invisibility. The combination of these methods led to a major improvement, yet the system was still not robust to several important geometrical attacks. To reach this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain.
    By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of an attack and revert it. Once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks.
    Funding: BBC Research & Development.
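Additive spread-spectrum casting of the kind mentioned can be sketched as follows: a key-seeded pseudo-noise sequence is added to the cover with its sign carrying the message bit, and blind detection correlates the received signal against the regenerated sequence. This is a generic sketch, not the thesis system; the names, the embedding strength `alpha`, and the lack of a perceptual mask are all simplifications.

```python
import numpy as np

def ss_embed(cover, key, bit, alpha=2.0):
    """Additive spread-spectrum embedding: superimpose a key-seeded +/-1
    pseudo-noise sequence on the cover, with its sign carrying the bit."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=cover.shape)
    return cover + alpha * (1 if bit else -1) * pn, pn

def ss_detect(signal, key):
    """Blind detection: regenerate the pseudo-noise sequence from the key
    and decide the bit from the sign of the correlation."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=signal.shape)
    corr = float(np.mean(signal * pn))
    return 1 if corr > 0 else 0
```

The cover acts as noise in the correlator, so reliability grows with the sequence length; this is why spread-spectrum schemes trade capacity for robustness, and why the thesis pairs the technique with error correction and visual models.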

    Power Quality

    Electrical power is becoming one of the most dominant factors in our society. Power generation, transmission, distribution and usage are undergoing significant changes that will affect the electrical quality and performance needs of our 21st-century industry. One major aspect of electrical power is its quality and stability, so-called Power Quality. The view on Power Quality has changed over the past few years. Power Quality is becoming a more important term in the academic world dealing with electrical power, and it is becoming more visible in all areas of commerce and industry, because of the ever-increasing industrial automation using sensitive electrical equipment on the one hand, and the dramatic change in our global electrical infrastructure on the other. For the past century, grid stability was maintained by a limited number of major generators with a large amount of rotational inertia, so the rate of change of phase angle was slow. Unfortunately, this no longer works with renewable energy sources, such as wind turbines or PV modules, adding their share to the grid. Although the basic idea of using renewable energies is great and will be our path into the next century, it comes with a curse for the power grid, as power flow stability will suffer. It is not only the source side that is about to change; we have also seen significant changes on the load side. Industry uses machines and electrical products, such as AC drives or PLCs, that are sensitive to the slightest change in power quality, and at home we use more and more electrical products with switching power supplies, or are starting to plug in our electric cars to charge batteries. In addition, many of us have begun installing our own distributed generation systems on our rooftops using the latest solar panels. So we looked for a way to address this severe impact on our distribution network.
    To match supply and demand, we are about to create a new, intelligent and self-healing electric power infrastructure: the Smart Grid. The basic idea is to maintain the necessary balance between generators and loads on a grid; in other words, to make sure we have a good grid balance at all times. But the key question you should ask yourself is: does it also improve Power Quality? Probably not. Furthermore, the way Power Quality is measured is going to change. Traditionally, each country had its own Power Quality standards and defined its own power quality instrument requirements, but more and more international harmonization efforts can be seen, such as IEC 61000-4-30, an excellent standard that ensures that all compliant power quality instruments, regardless of manufacturer, will produce comparable measurements, so that they can also be used in volume applications and even directly embedded into sensitive loads. But work still has to be done. We still use Power Quality standards that were written decades ago and no longer match today's technology, such as flicker standards that use parameters defined by the behavior of 60-watt incandescent light bulbs, which are becoming extinct. Almost all experts are in agreement: although we will see an improvement in metering and control of the power flow, Power Quality will suffer. This book gives an overview of how power quality might impact our lives today and tomorrow, introduces new ways to monitor power quality, and informs us about interesting possibilities to mitigate power quality problems. Regardless of any enhancements of the power grid, "Power Quality is just compatibility", as my good old friend and teacher Alex McEachern used to say. Power Quality will always remain an economic compromise between supply and load.
    The power available on the grid must be sufficiently clean for the loads to operate correctly, and the loads must be sufficiently strong to tolerate normal disturbances on the grid.

    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005. This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix.
    In summary, this thesis advances the current state-of-the-art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues including scalability, 6 degree of freedom motion, unstructured environments, and visual perception. This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by a NDSEG Fellowship awarded through the Department of Defense.
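The sparsity of the Gaussian canonical (information) form can be illustrated on a toy 1-D pose graph: folding a relative-pose constraint between poses i and j into the information matrix Lambda and vector eta touches only the (i,i), (i,j), (j,i), (j,j) entries, leaving all other entries untouched. This is a minimal sketch of the general principle, not the thesis code; the names and the scalar-pose simplification are assumptions.

```python
import numpy as np

def add_relative_constraint(Lam, eta, i, j, z, w):
    """Fold a measurement z ~ (x_j - x_i), with information weight
    w = 1/sigma^2, into the canonical form (Lambda, eta) in place.
    Only the blocks linking poses i and j are modified -- the exact
    sparsity a view-based delayed-state SLAM filter exploits."""
    Lam[i, i] += w
    Lam[j, j] += w
    Lam[i, j] -= w
    Lam[j, i] -= w
    eta[i] -= w * z
    eta[j] += w * z
    return Lam, eta
```

The posterior mean is then recovered by solving Lambda x = eta; poses that never share a constraint keep an exactly zero entry in Lambda, which is the "inherent sparsity" the abstract contrasts with pruning-based approximations.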

    Power conversion for a modular lightweight direct-drive wind turbine generator

    A power conversion system for a modular lightweight direct-drive wind turbine generator has been proposed, based on a modular cascaded multilevel voltage-source inverter. Each module of the inverter is connected to two generator coils, which eliminates the problem of DC-link voltage balancing found in multilevel inverters with a large number of levels. The slotless design of the generator, and the modular inverter, mean that a high output voltage can be achieved from the inverter while using standard components in the modules. Analysis of the high-voltage issues shows that isolating the modules to a high voltage is easily possible, but insulating the generator coils could result in a significant increase in the airgap size, reducing the generator efficiency. A boost rectifier input to the modules was calculated to have the highest electrical efficiency of all the rectifier systems tested, as well as the highest annual power extraction, while having a competitive cost. A rectifier control system, based on estimating the generator EMF from the coil current and drawing a sinusoidal current in phase with the EMF, was developed. The control system can mitigate the problem of airgap eccentricity, likely to be present in a lightweight generator. A laboratory test rig was developed, based on two 2.5 kW generators with 12 coils each. A single phase of the inverter, with 12 power modules, was implemented, with each module featuring its own microcontroller. The system is able to produce a good-quality AC voltage waveform and is able to tolerate the fault of a single module during operation. A decentralised inverter control system was developed, based on all modules estimating the grid voltage position and synchronising their estimates. Distributed output current limiting was also implemented, and the system is capable of riding through grid faults.
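The EMF-estimation idea can be sketched from a coil's R-L circuit model: given the measured terminal voltage v and coil current i, the back-EMF is approximately e = v + R*i + L*di/dt. This is a generic textbook sketch under an assumed generator sign convention and a simple backward-difference derivative, not the control law from the thesis; the name `estimate_emf` and all parameter values are illustrative.

```python
def estimate_emf(v, i_now, i_prev, R, L, dt):
    """Back-EMF estimate for one generator coil from its R-L model:
    e = v + R*i + L*di/dt (generator convention: the EMF drives the
    current out through the coil resistance R and inductance L).
    di/dt is approximated by a backward difference over one sample."""
    di_dt = (i_now - i_prev) / dt
    return v + R * i_now + L * di_dt
```

A controller can then phase-lock a sinusoidal current reference to this EMF estimate per coil, which is how a per-module scheme could tolerate the coil-to-coil EMF differences caused by airgap eccentricity.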