242 research outputs found

    On the detection of self-similarities in vibro-acoustic signals

    Get PDF
    In this paper, a novel way of organizing the search for similar fragments in vibro-acoustic signals (and beyond) is proposed. The key point of the proposal is the task-oriented use of signal smoothness parameter values (smoothness estimates). First, the notion of signal smoothness is introduced; second, an iterative procedure for finding signal smoothness estimates is presented; finally, some important properties of these estimates are formulated and proved. Special attention is paid to the necessary signal similarity condition: it is proved that small fragmentary changes in a vibro-acoustic signal lead to small changes in the signal smoothness parameter value. Preliminary experimental results showed that the necessary signal similarity condition may be of service in gear fault diagnosis, in fractal forecasting of real acoustic time series, in speeding up computational processes associated with interpolation of vibro-acoustic signals, in data mining, etc.
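
    The abstract does not spell out the smoothness estimator itself, so the following Python sketch is purely illustrative: a simple second-difference proxy stands in for the paper's iterative estimate, and the necessary similarity condition plays its pruning role by discarding fragment pairs whose smoothness estimates differ by more than a tolerance before any full comparison.

        import numpy as np

        def smoothness(x):
            # Toy smoothness proxy: inverse normalized second-difference energy.
            # Stands in for the paper's iterative estimator, which is not given here.
            x = np.asarray(x, float)
            d2 = np.diff(x, n=2)
            return 1.0 / (1.0 + np.sum(d2 ** 2) / max(np.sum(x ** 2), 1e-12))

        def candidate_pairs(signal, frag_len, tol=0.05):
            # Necessary-condition pruning: a cheap smoothness comparison filters
            # out fragment pairs before any expensive full similarity measure.
            frags = [signal[i:i + frag_len]
                     for i in range(0, len(signal) - frag_len + 1, frag_len)]
            s = [smoothness(f) for f in frags]
            return [(i, j)
                    for i in range(len(frags))
                    for j in range(i + 1, len(frags))
                    if abs(s[i] - s[j]) <= tol]   # survivors go to full comparison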

    Comparative Analysis of Techniques Used to Detect Copy-Move Tampering for Real-World Electronic Images

    Get PDF
    The evolution of powerful computers, the easy availability of innovative editing software, and high-definition image-capture tools have made it effortless to produce image forgeries. Threats from the misuse and misinterpretation of digital images have existed for a long time, and much research has gone into developing techniques to authenticate digital images. The research in this area is not limited to checking the validity of digital photos, however; it also explores specific signs of distortion or forgery. Such analysis requires no prior knowledge of the image's intrinsic content and no prior embedding of watermarks. In this paper, recent progress in digital image tampering detection is discussed, together with a benchmarking study presenting qualitative and quantitative results. Covering a variety of methodologies and concepts, different applications of forgery detection are discussed with their corresponding outcomes, especially methods based on machine and deep learning, with the aim of developing efficient automated forgery detection systems. Future applications and the development of advanced soft-computing techniques for detecting digital image tampering are also discussed.
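
    As one concrete instance of the block-matching family such surveys cover, a classic copy-move detector can be sketched as follows. This is a generic textbook scheme, not any specific method from the paper; the block size, the 3 x 3 low-frequency feature corner, the quantisation step, and the shift threshold are all assumptions.

        import numpy as np
        from scipy.fft import dctn

        def copy_move_candidates(img, b=8, min_shift=16):
            # Block-based copy-move sketch: each overlapping b x b block is
            # reduced to its 3 x 3 low-frequency DCT corner, blocks are sorted
            # lexicographically so duplicated regions become neighbours, and
            # neighbour pairs far enough apart spatially are flagged.
            H, W = img.shape
            feats, pos = [], []
            for y in range(H - b + 1):
                for x in range(W - b + 1):
                    c = dctn(img[y:y + b, x:x + b].astype(float), norm='ortho')
                    feats.append(np.round(c[:3, :3].ravel(), 1))  # quantised features
                    pos.append((y, x))
            feats = np.array(feats)
            order = np.lexsort(feats.T[::-1])          # sort blocks by feature vector
            matches = []
            for k in range(len(order) - 1):
                i, j = order[k], order[k + 1]
                if np.array_equal(feats[i], feats[j]):
                    dy, dx = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                    if dy * dy + dx * dx >= min_shift ** 2:  # skip trivial overlaps
                        matches.append((pos[i], pos[j]))
            return matches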

    Connected Attribute Filtering Based on Contour Smoothness

    Get PDF

    Novel Texture-based Probabilistic Object Recognition and Tracking Techniques for Food Intake Analysis and Traffic Monitoring

    Get PDF
    More complex image understanding algorithms are increasingly practical in a host of emerging applications. Object tracking has value in surveillance and data farming, and object recognition has applications in surveillance, data management, and industrial automation. In this work we introduce an object recognition application in automated nutritional intake analysis and a tracking application intended for surveillance in low-quality videos. Automated food recognition is useful for personal health applications as well as for nutritional studies used to improve public health or inform lawmakers. We introduce a complete, end-to-end system for automated food intake measurement. Images taken by a digital camera are analyzed, plates and food are located, food type is determined by a neural network, the distance and angle of the food are determined and 3D volume estimated, the results are cross-referenced with a nutritional database, and before- and after-meal photos are compared to determine nutritional intake. We compare against contemporary systems and provide detailed experimental results of our system's performance. Our tracking systems consider the problem of car and human tracking in potentially very low-quality surveillance videos, from fixed cameras or high-flying unmanned aerial vehicles (UAVs). Our agile framework switches among different simple trackers to find the most applicable tracker based on the object and video properties. Our MAPTrack is an evolution of the agile tracker that uses soft switching to optimize between multiple pertinent trackers, and tracks objects based on motion, appearance, and positional data. In both cases we provide comparisons against trackers intended for similar applications, i.e., trackers that stress robustness in bad conditions, with competitive results.
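
    Independently of the authors' implementation, the soft-switching idea can be sketched as below. This skeleton is hypothetical, not MAPTrack itself; the propose() interface and the cue weights are invented for illustration. Each candidate tracker proposes a box with motion, appearance, and positional confidence cues, and the output blends proposals by normalized confidence instead of hard-switching.

        import numpy as np

        class SoftSwitchTracker:
            # Hypothetical soft-switching skeleton (not the authors' MAPTrack).
            # Each tracker must expose propose(frame) -> (box, cues), where box is
            # (x, y, w, h) and cues are (motion, appearance, position) in [0, 1].
            def __init__(self, trackers, weights=(0.4, 0.4, 0.2)):
                self.trackers = trackers
                self.w = np.asarray(weights, float)  # motion/appearance/position

            def update(self, frame):
                boxes, confs = [], []
                for t in self.trackers:
                    box, cues = t.propose(frame)
                    boxes.append(box)
                    confs.append(float(self.w @ np.asarray(cues, float)))
                confs = np.asarray(confs)
                confs = confs / max(confs.sum(), 1e-9)   # soft weights over trackers
                # Blend proposals rather than hard-switching to the single best one.
                return tuple(np.average(np.asarray(boxes, float),
                                        axis=0, weights=confs))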

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Get PDF
    Bibliography: p. 208-225.
    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
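
    The block-based representation described here can be made concrete with a minimal PIFS-style encoder sketch, a generic construction rather than anything from the dissertation: each range block is approximated by a downsampled domain block under an affine intensity map s*D + o fitted by least squares. Spatial isometries, quadtree partitioning, and entropy coding are omitted, and the brute-force domain search is quadratic in the number of blocks.

        import numpy as np

        def encode_pifs(img, r=8):
            # Minimal PIFS-style encoder sketch. Each r x r range block is
            # matched against every 2r x 2r domain block, downsampled by 2x2
            # averaging and mapped through an affine intensity transform
            # s*D + o fitted by least squares; the best triple is stored.
            img = np.asarray(img, float)
            H, W = img.shape
            doms = []
            for y in range(0, H - 2 * r + 1, r):
                for x in range(0, W - 2 * r + 1, r):
                    d = img[y:y + 2 * r, x:x + 2 * r]
                    d = d.reshape(r, 2, r, 2).mean(axis=(1, 3))  # 2x2 downsample
                    doms.append(((y, x), d))
            code = []
            for y in range(0, H - r + 1, r):
                for x in range(0, W - r + 1, r):
                    rb = img[y:y + r, x:x + r]
                    best = None
                    for dpos, d in doms:
                        A = np.vstack([d.ravel(), np.ones(d.size)]).T
                        s, o = np.linalg.lstsq(A, rb.ravel(), rcond=None)[0]
                        s = float(np.clip(s, -0.9, 0.9))  # keep the map contractive
                        err = float(np.sum((s * d + o - rb) ** 2))
                        if best is None or err < best[0]:
                            best = (err, dpos, s, float(o))
                    code.append(((y, x), best[1], best[2], best[3]))
            return code  # (range pos, domain pos, contrast, offset) per block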

    Physically based adaptive preconditioning for early vision

    Full text link

    Multiscale Geometric Methods for Data Sets II: Geometric Multi-Resolution Analysis

    Get PDF
    Data sets are often modeled as point clouds in R^D, for D large. It is often assumed that the data has some interesting low-dimensional structure, for example that of a d-dimensional manifold M, with d much smaller than D. When M is simply a linear subspace, one may exploit this assumption for encoding the data efficiently by projecting onto a dictionary of d vectors in R^D (for example found by SVD), at a cost (n+D)d for n data points. When M is nonlinear, there are no "explicit" constructions of dictionaries that achieve a similar efficiency: typically one uses either random dictionaries, or dictionaries obtained by black-box optimization. In this paper we construct data-dependent multi-scale dictionaries that aim at efficient encoding and manipulation of the data. Their construction is fast, and so are the algorithms that map data points to dictionary coefficients and vice versa. In addition, data points are guaranteed to have a sparse representation in terms of the dictionary. We think of dictionaries as the analogue of wavelets, but for approximating point clouds rather than functions.
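
    The linear case is easy to make concrete: when the data lie near a d-dimensional subspace, the top d right singular vectors serve as the dictionary, and storing n coefficient vectors plus d atoms costs (n+D)d numbers instead of nD. A minimal numpy sketch follows; the paper's multiscale, nonlinear construction is not attempted here.

        import numpy as np

        def svd_dictionary(X, d):
            # Linear-case sketch: X is an n x D point cloud assumed to lie near
            # a d-dimensional affine subspace. The top d right singular vectors
            # form the dictionary; storing n coefficient vectors plus d atoms
            # costs (n + D) * d numbers instead of n * D.
            X = np.asarray(X, float)
            mu = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
            V = Vt[:d]                   # dictionary: d atoms in R^D
            coeffs = (X - mu) @ V.T      # encode: d coefficients per point
            recon = coeffs @ V + mu      # decode: approximate reconstruction
            return V, coeffs, recon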