A PatchMatch-based Dense-field Algorithm for Video Copy-Move Detection and Localization
We propose a new algorithm for the reliable detection and localization of
video copy-move forgeries. Discovering well-crafted video copy-moves may be
very difficult, especially when some uniform background is copied to occlude
foreground objects. To reliably detect both additive and occlusive copy-moves
we use a dense-field approach, with invariant features that guarantee
robustness to several post-processing operations. To limit complexity, a
suitable video-oriented version of PatchMatch is used, with a multiresolution
search strategy, and a focus on volumes of interest. Performance assessment
relies on a new dataset, designed ad hoc, with realistic copy-moves and a wide
variety of challenging situations. Experimental results show that the proposed
method detects and localizes video copy-moves with good accuracy even in
adverse conditions.
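The abstract only names the matcher's ingredients, so the following is a minimal, single-scale, 2-D sketch of the underlying PatchMatch nearest-neighbour-field search (random initialisation, propagation, random search), restricted to matching an image against itself with trivial near-zero shifts excluded, as copy-move matching requires. It uses raw SSD patch costs in plain NumPy; the paper's actual method is video-oriented, multiresolution, and built on post-processing-invariant features, so all names and parameters here are illustrative assumptions.

import numpy as np

def _ssd(img, y1, x1, y2, x2, p):
    # Sum of squared differences between the p x p patches at (y1,x1), (y2,x2).
    d = img[y1:y1+p, x1:x1+p] - img[y2:y2+p, x2:x2+p]
    return float((d * d).sum())

def patchmatch_self(img, p=8, iters=4, min_shift=16, seed=0):
    """Approximate nearest-neighbour field of `img` against itself,
    excluding near-identity matches (Manhattan shift < min_shift),
    which is the matching core of dense-field copy-move detection."""
    rng = np.random.default_rng(seed)
    img = img.astype(np.float64)
    H = img.shape[0] - p + 1          # number of valid patch rows
    W = img.shape[1] - p + 1
    # Random initial offset field: nnf[y, x] = coordinates of the match.
    nnf = np.stack([rng.integers(0, H, (H, W)),
                    rng.integers(0, W, (H, W))], axis=-1)
    cost = np.full((H, W), np.inf)

    def try_improve(y, x, cy, cx):
        cy, cx = int(np.clip(cy, 0, H - 1)), int(np.clip(cx, 0, W - 1))
        if abs(cy - y) + abs(cx - x) < min_shift:
            return                    # skip trivial self-matches
        c = _ssd(img, y, x, cy, cx, p)
        if c < cost[y, x]:
            cost[y, x] = c
            nnf[y, x] = (cy, cx)

    for y in range(H):                # score the random initialisation
        for x in range(W):
            try_improve(y, x, *nnf[y, x])

    for it in range(iters):
        # Alternate scan direction so good matches propagate both ways.
        step = 1 if it % 2 == 0 else -1
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        xs = range(W) if step == 1 else range(W - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: adopt the shifted match of the scan neighbour.
                for dy, dx in ((step, 0), (0, step)):
                    ny, nx = y - dy, x - dx
                    if 0 <= ny < H and 0 <= nx < W:
                        try_improve(y, x, nnf[ny, nx][0] + dy,
                                    nnf[ny, nx][1] + dx)
                # Random search: sample offsets in a shrinking window.
                r = max(H, W)
                while r >= 1:
                    try_improve(y, x,
                                nnf[y, x][0] + rng.integers(-r, r + 1),
                                nnf[y, x][1] + rng.integers(-r, r + 1))
                    r //= 2
    return nnf, cost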
Zernike velocity moments for sequence-based description of moving features
The increasing interest in processing sequences of images motivates the development of techniques for sequence-based object analysis and description. Accordingly, new velocity moments have been developed to allow a statistical description of both shape and associated motion through an image sequence. Through a generic framework, motion information is determined using the established centralised moments, enabling statistical moments to be applied to motion-based time-series analysis. The translation-invariant Cartesian velocity moments suffer from highly correlated descriptions due to their non-orthogonality. The new Zernike velocity moments overcome this by using orthogonal spatial descriptions through the proven orthogonal Zernike basis. Further, they are translation and scale invariant. To illustrate their benefits and application, the Zernike velocity moments have been applied to gait recognition, an emergent biometric. Good recognition results have been achieved on multiple datasets using relatively few spatial and/or motion features and basic feature selection and classification techniques. The prime aim of this new technique is to allow the generation of statistical features which encode shape and motion information, with generic application capability. Applied performance analyses illustrate the properties of the Zernike velocity moments, which exploit temporal correlation to improve a shape's description. It is demonstrated how the temporal correlation improves the performance of the descriptor under more generalised application scenarios, including reduced-resolution imagery and occlusion.
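As a rough sketch of the idea (not the authors' exact formulation), a Cartesian velocity moment can be read as a centralised spatial moment of each frame, weighted by powers of the frame-to-frame centroid velocity and accumulated over the sequence; the Zernike variant replaces the monomial spatial kernel with orthogonal Zernike polynomials on the unit disc. A minimal NumPy version of the Cartesian case, with assumed function names:

import numpy as np

def centroid(frame):
    """Intensity centroid (x̄, ȳ) of one image frame."""
    m00 = frame.sum()
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    return (xs * frame).sum() / m00, (ys * frame).sum() / m00

def velocity_moment(frames, p, q, alpha, beta):
    """Cartesian velocity moment: the centralised spatial moment of order
    (p, q) of each frame, weighted by the centroid velocity raised to
    (alpha, beta), summed from the second frame onward. With
    alpha = beta = 0 it reduces to a sum of ordinary centralised moments."""
    vm = 0.0
    prev = centroid(frames[0])
    for frame in frames[1:]:
        cx, cy = centroid(frame)
        ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
        spatial = (((xs - cx) ** p) * ((ys - cy) ** q) * frame).sum()
        velocity = (cx - prev[0]) ** alpha * (cy - prev[1]) ** beta
        vm += velocity * spatial
        prev = (cx, cy)
    return vm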
On the validation of solid mechanics models using optical measurements and data decomposition
Engineering simulation has a significant role in the process of design and analysis of most
engineered products at all scales and is used to provide elegant, light-weight, optimized
designs. A major step in achieving high confidence in computational models with good predictive
capabilities is model validation. It is normal practice to validate simulation models
by comparing their numerical results to experimental data. However, current validation
practices tend to focus on identifying hot-spots in the data and checking that the experimental
and modeling results have a satisfactory agreement in these critical zones. Often
the comparison is restricted to a single or a few points where the maximum stress/strain
is predicted by the model. The objective of the present paper is to demonstrate a step-by-step
approach for performing model validation by combining full-field optical measurement
methodologies with computational simulation techniques. Two important issues of
the validation procedure are discussed, i.e. effective techniques to perform data compression
using the principles of orthogonal decomposition, as well as methodologies to quantify
the quality of simulations and make decisions about model validity. An I-beam with open holes under three-point bending loading is selected as an exemplar of the methodology.
Orthogonal decomposition by Zernike shape descriptors is performed to compress large amounts of numerical and experimental data in selected regions of interest (ROIs)
by reducing their dimensionality while preserving information; and different comparison techniques, including traditional error norms, a linear comparison methodology and a concordance correlation coefficient, are used in order to make decisions about the validity of the simulation.
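A compact way to see the two validation ingredients discussed above: project both the measured and the simulated full-field maps onto a common orthonormal basis over the ROI, then compare the short coefficient vectors, e.g. with Lin's concordance correlation coefficient. In the sketch below the basis is built by QR-orthogonalising 2-D monomials as a simple stand-in for the paper's Zernike shape descriptors; the function names and orders are assumptions, not the paper's implementation.

import numpy as np

def orthonormal_basis(points, max_order):
    """Orthonormal polynomial basis over the (N, 2) array of sampled ROI
    points, built by QR-orthogonalising 2-D monomials (a stand-in for
    Zernike descriptors, which are orthogonal on the unit disc)."""
    x, y = points[:, 0], points[:, 1]
    cols = [x**i * y**j for i in range(max_order + 1)
                        for j in range(max_order + 1 - i)]
    Q, _ = np.linalg.qr(np.stack(cols, axis=1))
    return Q                      # columns are orthonormal over the ROI

def decompose(field, Q):
    """Compress a full-field map (length-N vector over the ROI points)
    to a short vector of basis coefficients."""
    return Q.T @ field

def concordance_cc(a, b):
    """Lin's concordance correlation coefficient between two coefficient
    vectors; 1 means perfect agreement of simulation and experiment."""
    cov = ((a - a.mean()) * (b - b.mean())).mean()
    return 2 * cov / (a.var() + b.var() + (a.mean() - b.mean()) ** 2)

With Q built once for the ROI sampling, decompose(sim_field, Q) and decompose(exp_field, Q) reduce many thousands of data points to a handful of directly comparable coefficients.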
An Evaluation of Popular Copy-Move Forgery Detection Approaches
A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and
Zernike features perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
Comment: Main paper: 14 pages, supplemental material: 12 pages; main paper appeared in IEEE Transactions on Information Forensics and Security.
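For readers unfamiliar with the common pipeline being evaluated, a toy block-based variant looks roughly as follows: overlapping blocks are reduced to low-frequency DCT features, lexicographically sorted so that similar blocks become sort-neighbours, and candidate pairs are filtered by a minimum spatial shift. This is a bare sketch with assumed parameters, not any of the 15 evaluated methods, which add further filtering, outlier removal and affine-transformation estimation.

import numpy as np
from scipy.fft import dctn

def copy_move_matches(img, b=16, sim_thresh=5.0, min_shift=24):
    """Toy block-based copy-move detector: one low-frequency DCT feature
    per overlapping block, lexicographic sorting, comparison of
    sort-neighbours, and a minimum-shift filter against trivial matches."""
    H, W = img.shape
    feats, coords = [], []
    for y in range(0, H - b + 1, 2):          # stride 2 to limit block count
        for x in range(0, W - b + 1, 2):
            c = dctn(img[y:y+b, x:x+b].astype(float), norm='ortho')
            feats.append(c[:4, :4].ravel())   # keep 16 low-frequency terms
            coords.append((y, x))
    feats = np.array(feats)
    coords = np.array(coords)
    order = np.lexsort(feats.T[::-1])         # lexicographic sort of rows
    matches = []
    for i, j in zip(order[:-1], order[1:]):   # compare sort-neighbours
        shift = coords[i] - coords[j]
        if np.hypot(*shift) < min_shift:
            continue                          # skip near-identical positions
        if np.linalg.norm(feats[i] - feats[j]) < sim_thresh:
            matches.append((tuple(coords[i]), tuple(coords[j])))
    return matches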
Moment-based metrics for molecules computable from cryo-EM images
Single particle cryogenic electron microscopy (cryo-EM) is an imaging
technique capable of recovering the high-resolution 3-D structure of biological
macromolecules from many noisy and randomly oriented projection images. One
notable approach to 3-D reconstruction, known as Kam's method, relies on the
moments of the 2-D images. Inspired by Kam's method, we introduce a
rotationally invariant metric between two molecular structures, which does not
require 3-D alignment. Further, we introduce a metric between a stack of
projection images and a molecular structure, which is invariant to rotations
and reflections and does not require performing 3-D reconstruction.
Additionally, the latter metric does not assume a uniform distribution of
viewing angles. We demonstrate uses of the new metrics on synthetic and
experimental datasets, highlighting their ability to measure structural
similarity.
Comment: 21 pages, 9 figures, 2 algorithms, and 3 tables.
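As a loose, toy analogue of such moment-based invariants (not the paper's metric, which builds on Kam's theory and handles non-uniform viewing distributions), one can compare image stacks through magnitudes of complex moments, which are unchanged by in-plane rotation and, for real images, by reflection. The sketch below assumes centred, square images; all names are illustrative.

import numpy as np

def complex_moment_invariants(img, max_order=4):
    """Rotation/reflection-invariant features |c_pq| of one centred image,
    where c_pq = sum f(x, y) (x + iy)^p (x - iy)^q on unit-scaled
    coordinates; an in-plane rotation by theta multiplies c_pq by
    exp(i (p - q) theta), so the magnitudes are invariant."""
    n = img.shape[0]                          # assumes a square image
    ax = np.linspace(-1, 1, n)
    xs, ys = np.meshgrid(ax, ax)
    z = xs + 1j * ys
    return np.array([np.abs((img * z**p * np.conj(z)**q).sum())
                     for p in range(max_order + 1)
                     for q in range(p + 1)])  # q <= p, since |c_pq| = |c_qp|

def stack_invariant_distance(stack_a, stack_b, max_order=4):
    """Compare two projection-image stacks by the Euclidean distance
    between their stack-averaged invariant vectors, with no 2-D or 3-D
    alignment step."""
    fa = np.mean([complex_moment_invariants(im, max_order) for im in stack_a], axis=0)
    fb = np.mean([complex_moment_invariants(im, max_order) for im in stack_b], axis=0)
    return np.linalg.norm(fa - fb)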
Neural network correction of astrometric chromaticity
In this paper we deal with the problem of chromaticity, i.e. the apparent
position variation of stellar images with their spectral distribution, using
neural networks to analyse and process astronomical images. The goal is to
remove this relevant source of systematic error in the data reduction of high
precision astrometric experiments, like Gaia. This task can be accomplished
thanks to the capability of neural networks to solve a nonlinear approximation
problem, i.e. to construct a hypersurface that approximates a given set of
scattered data pairs. Images are encoded by associating each of them with
conveniently chosen moments, evaluated along the y axis. The proposed
technique, in the current framework, reduces an initial chromaticity of a few
milliarcseconds to values of a few microarcseconds.
Comment: 9 pages, 8 figures. Accepted by Monthly Notices of the Royal Astronomical Society.
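A minimal sketch of the approach's shape, under stated assumptions: encode each stellar image profile by a few moments along the y axis and train a small feed-forward network to regress the chromatic centroid shift, which would then be subtracted from the measured position. The profile model, moment orders and shift law below are invented for illustration and are not the paper's data or network.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def y_moments(profile, orders=(2, 3, 4)):
    """Central moments of a 1-D image profile along the y (scan) axis,
    used here as the network's input encoding; the odd orders carry the
    asymmetry that drives the chromatic shift."""
    y = np.arange(profile.size, dtype=float)
    m0 = profile.sum()
    c = (y * profile).sum() / m0              # centroid
    return np.array([((y - c) ** k * profile).sum() / m0 for k in orders])

# Synthetic training set (illustrative only): skewed Gaussian-like profiles
# whose asymmetry stands in for the spectrum-dependent PSF distortion, with
# the induced centroid shift as the regression target.
X, t = [], []
for _ in range(2000):
    skew = rng.uniform(-0.5, 0.5)
    y = np.arange(64.0)
    prof = np.exp(-0.5 * ((y - 32) / 3.0) ** 2) * (1 + skew * (y - 32) / 32)
    X.append(y_moments(prof))
    t.append(skew * 0.8)          # toy chromatic shift, proportional to skew

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(np.array(X), np.array(t))
# model.predict(...) gives the shift to subtract from a measured centroid.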