
    Adaptive multiresolution search: How to beat brute force?

    Multiresolution and wavelet-based search methods are suited to problems for which acceptable solutions lie in regions of high average local fitness. In this paper, two different approaches are presented. In the Markov-based approach, the sampling resolution is chosen adaptively depending on the fitness of the last sample(s). The advantage of this method, besides its simplicity, is that it allows the discovery probability of a target sample to be computed for quite large search spaces. This makes it possible to “reverse-engineer” search-and-optimization problems: starting from some prototypical examples of fitness functions, the discovery rate can be computed as a function of the free parameters. The second approach is a wavelet-based multiresolution search that uses a memory to store local average values of the fitness function. The sampling probability density is chosen, by design, proportional to a low-resolution approximation of the fitness function: regions of high average fitness are sampled more often, and at a higher resolution, than regions of low average fitness. If splines are used as scaling (mother) functions, a fuzzy description of the search strategy can be given within the framework of the Takagi–Sugeno model.
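    As an illustration of the Markov-based idea of adapting the sampling resolution to the fitness of the last sample, the following minimal Python sketch uses a fine step when the previous fitness was high and a coarse step otherwise. The fitness function, threshold and step sizes are illustrative assumptions, not the paper's actual parameterization.

        import random

        def fitness(x):
            # Illustrative fitness: a broad region of high average fitness around x = 0.3
            return max(0.0, 1.0 - 10.0 * abs(x - 0.3))

        def adaptive_search(n_samples=10_000, coarse=0.25, fine=0.01, threshold=0.2, seed=0):
            """Markov-style adaptive sampling: the step size (resolution) used for the
            next sample depends only on the fitness of the last sample."""
            rng = random.Random(seed)
            x = rng.random()
            best_x, best_f = x, fitness(x)
            for _ in range(n_samples):
                step = fine if fitness(x) > threshold else coarse   # adapt the resolution
                x = min(1.0, max(0.0, x + rng.uniform(-step, step)))
                f = fitness(x)
                if f > best_f:
                    best_x, best_f = x, f
            return best_x, best_f

        print(adaptive_search())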

    Multiscale Transforms for Signals on Simplicial Complexes

    Our previous multiscale graph basis dictionaries/graph signal transforms -- the Generalized Haar-Walsh Transform (GHWT), the Hierarchical Graph Laplacian Eigen Transform (HGLET), the Natural Graph Wavelet Packets (NGWPs), and their relatives -- were developed for analyzing data recorded on the nodes of a given graph. In this article, we propose their generalization for analyzing data recorded on edges, faces (i.e., triangles), or, more generally, κ-dimensional simplices of a simplicial complex (e.g., a triangle mesh of a manifold). The key idea is to use the Hodge Laplacians and their variants for hierarchical partitioning of the set of κ-dimensional simplices in a given simplicial complex, and then to build localized basis functions on these partitioned subsets. We demonstrate their usefulness for data representation on both illustrative synthetic examples and real-world simplicial complexes generated from a co-authorship/citation dataset and an ocean current/flow dataset.
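    The first step of the construction, hierarchical partitioning driven by a Hodge Laplacian, can be sketched in a few lines of Python: build the Hodge 1-Laplacian of a tiny simplicial complex from its boundary matrices and split the edge set by the sign of a Fiedler-style eigenvector. The complex, the orientations and the Fiedler-style split are illustrative assumptions; the recursion into GHWT/HGLET-style dictionaries is not reproduced here.

        import numpy as np

        # Tiny simplicial complex: 4 nodes, 5 edges, 1 filled triangle (0, 1, 2).
        edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
        triangles = [(0, 1, 2)]
        n_nodes = 4

        # B1: node-to-edge boundary matrix (rows = nodes, cols = oriented edges)
        B1 = np.zeros((n_nodes, len(edges)))
        for j, (u, v) in enumerate(edges):
            B1[u, j] = -1.0
            B1[v, j] = +1.0

        # B2: edge-to-triangle boundary matrix (rows = edges, cols = triangles)
        B2 = np.zeros((len(edges), len(triangles)))
        for j, (a, b, c) in enumerate(triangles):
            for (u, v), sign in (((a, b), +1.0), ((b, c), +1.0), ((a, c), -1.0)):
                B2[edges.index((u, v)), j] = sign

        # Hodge 1-Laplacian acting on edge signals
        L1 = B1.T @ B1 + B2 @ B2.T

        # Bipartition the edge set by the sign of the eigenvector belonging to the
        # second-smallest eigenvalue (a Fiedler-style split); one would then recurse.
        vals, vecs = np.linalg.eigh(L1)
        part = vecs[:, 1] >= 0
        print("edge partition:",
              [edges[i] for i in np.where(part)[0]], "|",
              [edges[i] for i in np.where(~part)[0]])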

    Fractional-order system modeling and its applications

    In order to control or operate any system in closed loop, it is important to know its behavior in the form of mathematical models. In the last two decades, fractional-order models have received increasing attention in system identification compared with classical integer-order transfer-function models. Recent literature shows that techniques based on fractional calculus and fractional-order models have made valuable contributions to modeling real-world processes and achieved better results. These developments have motivated extensions of the classical identification techniques to advanced fields of science and engineering. This article surveys recent methods in the field and the related challenges of implementing fractional-order derivatives and reconciling them with conventional integer-order approaches. The comprehensive discussion of the available literature should help readers grasp the concept of fractional-order modeling and facilitate future investigations. The paper presents recent advances in fractional-order modeling and points to further opportunities for research.
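    For readers unfamiliar with fractional-order models, one common numerical handle on a fractional derivative is the Grünwald-Letnikov definition. The short sketch below, with an illustrative test function, is a generic example of that definition and is not taken from the surveyed literature.

        import numpy as np

        def gl_fractional_derivative(f, alpha, h):
            """Approximate the order-alpha Grünwald-Letnikov derivative of samples f
            taken on a uniform grid with spacing h."""
            n = len(f)
            # Binomial weights w_k = (-1)^k * C(alpha, k), computed recursively
            w = np.empty(n)
            w[0] = 1.0
            for k in range(1, n):
                w[k] = w[k - 1] * (k - 1 - alpha) / k
            out = np.empty(n)
            for i in range(n):
                out[i] = np.dot(w[: i + 1], f[i::-1]) / h ** alpha
            return out

        # Example: the half-order derivative of f(t) = t is 2*sqrt(t/pi)
        t = np.linspace(0, 2, 400)
        approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
        print(np.max(np.abs(approx - 2 * np.sqrt(t / np.pi))))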

    Application of Wavelet Packet Transform to detect genetic polymorphisms by the analysis of inter-Alu PCR patterns

    Background: The analysis of inter-Alu PCR patterns obtained from human genomic DNA samples is a promising technique for the simultaneous analysis of many genomic loci flanked by Alu repetitive sequences, in order to detect the presence of genetic polymorphisms. Inter-Alu PCR products may be separated and analyzed by capillary electrophoresis using an automatic sequencer, which generates a complex pattern of peaks. We propose an algorithmic method based on the Haar-Walsh Wavelet Packet Transform (WPT) for the efficient detection of fingerprint-type patterns generated by PCR-based methodologies. We tested our approach on inter-Alu patterns obtained from the genomic DNA of three pairs of monozygotic twins, expecting that the patterns within each twin pair would differ only because of unavoidable experimental variability, whereas the differences among samples from different pairs should also reflect genetic variability. Our goal is to automatically detect regions of the inter-Alu pattern that are likely associated with the presence of genetic polymorphisms.

    Results: We show that the WPT algorithm provides a reliable tool for identifying sample-to-sample differences in complex peak patterns, reducing the errors and limitations associated with subjective evaluation. The redundant decomposition of the WPT allows a best-basis selection procedure that maximizes the pattern differences at the lowest possible scale. Our analysis points out a few classifying signal regions that could indicate the presence of genetic polymorphisms.

    Conclusions: The WPT algorithm based on the Haar-Walsh wavelet is an efficient tool for the non-supervised classification of inter-Alu patterns produced by a genetic analyzer, even though it was not possible to estimate the power and false-positive rate owing to the lack of a suitable database. The identification of non-reproducible peaks is usually accomplished by comparing different experimental replicates of each sample. Moreover, although the algorithm was developed and optimized for patterns obtained through inter-Alu PCR, the method is in principle applicable to any fingerprint-type pattern obtained by analyzing anonymous DNA fragments through capillary electrophoresis, and it could be usefully applied to a wide range of fingerprint-type methodologies.
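    The core signal-processing step can be sketched with PyWavelets: decompose the difference of two traces into a Haar wavelet packet tree and select the minimum-cost (best) basis by comparing each parent node with its children. The synthetic traces, the entropy cost and the decomposition depth below are illustrative assumptions; the published preprocessing of the electropherograms is not reproduced here.

        import numpy as np
        import pywt

        def shannon_cost(coeffs):
            """Coifman-Wickerhauser-style entropy cost of a coefficient vector."""
            p = coeffs[coeffs != 0] ** 2
            return -np.sum(p * np.log(p)) if p.size else 0.0

        def best_basis(wp, path=""):
            """Return the node paths of the minimum-cost basis and its total cost."""
            if len(path) == wp.maxlevel:
                return [path], shannon_cost(wp[path].data)
            left, cl = best_basis(wp, path + "a")
            right, cr = best_basis(wp, path + "d")
            own = shannon_cost(wp[path].data) if path else np.inf  # root is always split
            if own <= cl + cr:
                return [path], own
            return left + right, cl + cr

        # Two electropherogram-like traces (synthetic stand-ins for inter-Alu patterns)
        t = np.linspace(0, 1, 512)
        x1 = np.exp(-((t - 0.3) / 0.01) ** 2) + np.exp(-((t - 0.7) / 0.01) ** 2)
        x2 = x1 + 0.4 * np.exp(-((t - 0.5) / 0.01) ** 2)   # extra peak = candidate polymorphism

        wp = pywt.WaveletPacket(data=x1 - x2, wavelet="haar", maxlevel=5)
        basis, cost = best_basis(wp)
        print("best-basis nodes:", basis)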

    A single-step identification strategy for the coupled TITO process using fractional calculus

    The reliable performance of a complete control system depends on accurate model information being available for each subsystem. The identification and modelling of multivariable systems are complex and challenging because of cross-coupling; such systems may require multiple steps and decentralized testing to obtain full system models. In this paper, a direct identification strategy is proposed for the coupled two-input two-output (TITO) system with measurable input–output signals. A well-known closed-loop relay test is used to generate a set of input–output data from a single run. Based on the collected data, four individual fractional-order transfer functions, two for the main paths and two for the cross-paths, are estimated from the single-run test signals. An orthogonal-series-based algebraic approach, namely the Haar wavelet operational matrix, is adopted to handle the fractional derivatives of the signals in a simple manner. The single-step strategy yields faster identification with accurate estimates. Simulation and experimental studies demonstrate the efficiency and applicability of the proposed identification technique. The results on the twin-rotor multiple-input multiple-output (MIMO) system (TRMS) clearly show that the presented idea works well for this highly coupled system, even in the presence of measurement noise.
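    The central algebraic tool, the Haar wavelet operational matrix of fractional integration, can be sketched numerically as follows: a matrix P_alpha is assembled so that the fractional integral of the Haar basis vector is approximated by P_alpha times that vector on a collocation grid. The grid size, the test signal and the closed-form integrals of the Haar pieces are assumptions for illustration; the relay test and the full identification procedure are not shown.

        import numpy as np
        from math import gamma

        def haar_basis(m):
            """Haar functions h_0..h_{m-1} sampled at collocation points t_k = (k - 0.5)/m,
            plus the breakpoints (lo, mid, hi) of each wavelet (m a power of two)."""
            t = (np.arange(1, m + 1) - 0.5) / m
            H = np.zeros((m, m))
            H[0] = 1.0                       # scaling function
            pieces = [None]                  # h_0 has no breakpoints
            i, j = 1, 0
            while i < m:
                for k in range(2 ** j):
                    lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
                    H[i] = (np.where((t >= lo) & (t < mid), 1.0, 0.0)
                            - np.where((t >= mid) & (t < hi), 1.0, 0.0))
                    pieces.append((lo, mid, hi))
                    i += 1
                j += 1
            return H, t, pieces

        def frac_int_haar(alpha, m):
            """Operational matrix P_alpha with I^alpha H(t) ~= P_alpha H(t) on the grid."""
            H, t, pieces = haar_basis(m)
            ramp = lambda s: np.maximum(s, 0.0) ** alpha / gamma(alpha + 1)
            F = np.zeros((m, m))
            F[0] = ramp(t)                                   # I^alpha of the constant 1
            for i, (lo, mid, hi) in enumerate(pieces[1:], start=1):
                F[i] = ramp(t - lo) - 2 * ramp(t - mid) + ramp(t - hi)
            return F @ np.linalg.inv(H)

        # Example: half-order integral of f(t) = t via its Haar coefficients.
        m, alpha = 32, 0.5
        H, t, _ = haar_basis(m)
        c = np.linalg.solve(H.T, t)          # coefficients: t ~= c^T H(t)
        approx = c @ frac_int_haar(alpha, m) @ H
        exact = t ** 1.5 / gamma(2.5)
        print(np.max(np.abs(approx - exact)))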

    Techniques for enhancing digital images

    The images obtained from either research studies or optical instruments are often corrupted by noise. Image denoising involves the manipulation of image data to produce a visually high-quality image. This thesis reviews the existing denoising algorithms and the filtering approaches available for enhancing images and/or data transmission. Spatial-domain and transform-domain digital image filtering algorithms have been used in the past to suppress different noise models, which can be either additive or multiplicative. Selection of the denoising algorithm is application dependent: it is necessary to know which kind of noise is present in the image in order to select the appropriate denoising algorithm. Noise models include Gaussian noise, salt-and-pepper noise, speckle noise and Brownian noise. The wavelet transform is related to the Fourier transform but uses a very different set of basis functions: wavelets are localized in both time and frequency, whereas the sinusoidal basis functions of the standard Fourier transform are localized only in frequency. Wavelet analysis consists of breaking up the signal into shifted and scaled versions of the original (or mother) wavelet. The Wiener filter, which minimizes the mean squared estimation error, can be implemented as an LMS (least mean squares) filter, an RLS (recursive least squares) filter, or a Kalman filter. Quantitative comparison of the denoising algorithms is provided by calculating the Peak Signal-to-Noise Ratio (PSNR), the Mean Square Error (MSE) and the Mean Absolute Error (MAE). A combination of these metrics is often required to assess model performance clearly.
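    The evaluation metrics mentioned above (PSNR, MSE and MAE) are straightforward to compute. The following sketch applies them to a synthetic image and a naive smoothing filter purely for illustration; it does not correspond to any experiment in the thesis.

        import numpy as np

        def mse(ref, test):
            """Mean squared error between a reference image and a test image."""
            return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

        def mae(ref, test):
            """Mean absolute error."""
            return np.mean(np.abs(ref.astype(np.float64) - test.astype(np.float64)))

        def psnr(ref, test, peak=255.0):
            """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
            err = mse(ref, test)
            return np.inf if err == 0 else 10.0 * np.log10(peak ** 2 / err)

        # Toy example: a clean ramp image, a noisy version, and a crude "denoised" one.
        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0, 255, 256), (256, 1))
        noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)
        denoised = (noisy + np.roll(noisy, 1, axis=1)) / 2       # naive neighbour averaging

        for name, img in [("noisy", noisy), ("denoised", denoised)]:
            print(f"{name}: PSNR={psnr(clean, img):.2f} dB, "
                  f"MSE={mse(clean, img):.1f}, MAE={mae(clean, img):.2f}")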

    Optimisation via encodings: a renormalisation group perspective

    The traditional way of tackling discrete optimization problems is to use local search on suitably defined cost or fitness landscapes. Such approaches are, however, limited by the slowing down that occurs when local minima, a feature of the typically rugged landscapes encountered, arrest the progress of the search process. Another way of tackling optimization problems is to use heuristic approximations to estimate a global cost minimum. Here we present a combination of these two approaches based on cover-encoding maps, which map processes from a larger search space to subsets of the original search space. The key idea is to construct cover-encoding maps with the help of suitable heuristics that single out near-optimal solutions and yield landscapes on the larger search space that no longer exhibit trapping local minima. The processes typically employed involve some form of coarse-graining, and we suggest here that they can be viewed as avatars of renormalisation group transformations.
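    As a loose, toy illustration of searching through an encoding combined with coarse-graining, the sketch below runs local search over coarse variables that a simple block decoder expands into the original bit string. Note the direction is inverted with respect to the paper: cover-encoding maps act on a larger search space and are built from problem-specific heuristics, so everything here (the landscape, the decoder, the block size) is an assumption for illustration only.

        import random

        BLOCK = 4                        # coarse-graining factor
        N_BLOCKS = 8                     # length of the coarse string
        TARGET = [random.Random(1).randint(0, 1) for _ in range(BLOCK * N_BLOCKS)]

        def fitness(bits):
            """Original-space fitness: agreement with a hidden target string."""
            return sum(b == t for b, t in zip(bits, TARGET))

        def decode(coarse):
            """Toy decoder: each coarse variable is expanded into a constant block of
            BLOCK original bits (a crude stand-in for a heuristic encoding map)."""
            return [c for c in coarse for _ in range(BLOCK)]

        def coarse_local_search(steps=2000, seed=0):
            rng = random.Random(seed)
            coarse = [rng.randint(0, 1) for _ in range(N_BLOCKS)]
            best = fitness(decode(coarse))
            for _ in range(steps):
                i = rng.randrange(N_BLOCKS)
                coarse[i] ^= 1                       # flip one coarse variable
                f = fitness(decode(coarse))
                if f >= best:
                    best = f
                else:
                    coarse[i] ^= 1                   # reject the move
            return best, fitness(TARGET)             # best found vs. global optimum

        print(coarse_local_search())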