
    Practical implementation of nonlinear time series methods: The TISEAN package

    Nonlinear time series analysis is becoming an increasingly reliable tool for the study of complicated dynamics from measurements. The concept of low-dimensional chaos has proven fruitful in the understanding of many complex phenomena, despite the fact that very few natural systems have actually been found to be low-dimensional and deterministic in the sense of the theory. In order to evaluate the long-term usefulness of the nonlinear time series approach as inspired by chaos theory, it will be important that the corresponding methods become more widely accessible. This paper, while not a proper review of nonlinear time series analysis, tries to contribute to this process by describing the actual implementation of the algorithms and their proper usage. Most of the methods require the choice of certain parameters for each specific time series application, and we try to give guidance in this respect. The scope and selection of topics in this article, as well as the implementational choices that have been made, correspond to the contents of the software package TISEAN, which is publicly available from http://www.mpipks-dresden.mpg.de/~tisean . In fact, this paper can be seen as an extended manual for the TISEAN programs. It fills the gap between the technical documentation and the existing literature, providing the necessary entry points for a more thorough study of the theoretical background.
    Comment: 27 pages, 21 figures, downloadable software at http://www.mpipks-dresden.mpg.de/~tisean
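    Many of the TISEAN routines build on delay-coordinate (Takens) reconstruction of a scalar measurement into state-space vectors. The sketch below is not TISEAN code; the function name `delay_embed` and all parameter values are illustrative:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a scalar series x into dim-dimensional delay vectors
    (x[i], x[i+tau], ..., x[i+(dim-1)*tau]), Takens-style."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy example: a noiseless sine embedded in 2-D traces a closed curve.
t = np.linspace(0, 20 * np.pi, 2000)
vectors = delay_embed(np.sin(t), dim=2, tau=25)
print(vectors.shape)  # (1975, 2)
```

The choice of `dim` and `tau` is exactly the kind of parameter selection the paper gives guidance on.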

    Multi-resolution dental image registration based on genetic algorithm

    The Automated Dental Identification System (ADIS) is a postmortem dental identification system. This thesis presents dental image registration, required for the preprocessing steps of the image comparison component of ADIS. We propose a multi-resolution dental image registration based on genetic algorithms. The main objective of this research is to develop techniques for registering extracted subject regions of interest with corresponding reference regions of interest. We investigated and implemented registration using two multi-resolution techniques, namely image sub-sampling and wavelet decomposition. Multi-resolution techniques help reduce the search data, since initial registration is carried out at lower resolutions and the results are refined as the level of resolution increases. We adopted edges as the image features to be aligned. Affine transformations, which are known to capture complex image distortions, were selected to transform the subject dental region of interest for better alignment with the reference region of interest. The similarity between subject and reference images is computed using the Oriented Hausdorff Similarity measure, which is robust to severe noise and image degradation. A genetic algorithm searches for the transformation parameters that give the maximum similarity score. Testing shows that the developed registration algorithm yields reasonably accurate results for dental test cases containing slight misalignments: the relative percentage errors between the known and estimated transformation parameters were less than 20% with a termination criterion of a ten-minute time limit. Further research is needed for dental cases that contain a high degree of misalignment, noise, and distortion.
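    The search step can be sketched as a small genetic algorithm over transformation parameters. Everything below is a toy stand-in: a similarity transform instead of the thesis's full affine model, a mean nearest-neighbour distance instead of the Oriented Hausdorff Similarity measure, and hypothetical names (`ga_register`, `fitness`) and constants:

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(pts, p):
    """Apply a similarity transform (angle a, scale s, shift tx, ty) to
    Nx2 points; a stand-in for the thesis's full affine model."""
    a, s, tx, ty = p
    R = s * np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return pts @ R.T + np.array([tx, ty])

def fitness(p, subject, reference):
    """Negative mean nearest-neighbour distance after transforming the
    subject; a simple proxy for the Oriented Hausdorff Similarity."""
    d = np.linalg.norm(transform(subject, p)[:, None] - reference[None], axis=2)
    return -d.min(axis=1).mean()

def ga_register(subject, reference, pop=60, gens=80, sigma=0.1):
    """Tiny genetic algorithm: truncation selection plus Gaussian mutation."""
    P = rng.normal(0, 1, (pop, 4)) * [0.4, 0.2, 2.0, 2.0] + [0.0, 1.0, 0.0, 0.0]
    for _ in range(gens):
        scores = np.array([fitness(p, subject, reference) for p in P])
        elite = P[np.argsort(scores)[-pop // 4:]]        # keep the top quarter
        P = elite[rng.integers(len(elite), size=pop)] \
            + rng.normal(0, sigma, (pop, 4))             # mutated offspring
        P[0] = elite[-1]                                 # elitism: keep the best
    scores = np.array([fitness(p, subject, reference) for p in P])
    return P[scores.argmax()]

# Toy "edge points": the subject is a rotated and shifted copy of the reference.
ref = rng.uniform(-5, 5, (40, 2))
subj = transform(ref, [-0.3, 1.0, -1.5, 2.0])
best = ga_register(subj, ref)   # parameters mapping subject back onto reference
```

The multi-resolution part of the thesis would wrap such a search, running it first on sub-sampled or wavelet-decomposed images and refining the parameters at each finer level.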

    Coarse-graining the dynamics of network evolution: the rise and fall of a networked society

    We explore a systematic approach to studying the dynamics of evolving networks at a coarse-grained, system level. We emphasize the importance of finding good observables (network properties) in terms of which coarse-grained models can be developed. We illustrate our approach through a particular social network model, the "rise and fall" of a networked society [1]: we implement our low-dimensional description computationally using the equation-free approach and show how it can be used to (a) accelerate simulations and (b) extract system-level stability/bifurcation information from the detailed dynamic model. We discuss other system-level tasks that can be enabled through such a computer-assisted coarse-graining approach.
    Comment: 18 pages, 11 figures
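    The equation-free idea, estimating a coarse time derivative from short bursts of the detailed model and then taking a large projective step, can be illustrated on a toy problem. The microscopic simulator below is a stand-in (noisy relaxation, not the networked-society model), and all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def micro_burst(x0, steps=20, dt=0.01):
    """Stand-in fine-scale simulator: noisy relaxation dx = -x dt + noise."""
    x = x0
    for _ in range(steps):
        x += -x * dt + 0.001 * rng.normal()
    return x

# Coarse projective integration: a short burst of the microscopic model
# estimates the coarse derivative, which is then extrapolated over a
# much larger "projective" step (leap) that the fine model never runs.
x, t, dt, steps, leap = 1.0, 0.0, 0.01, 20, 0.3
for _ in range(10):
    x1 = micro_burst(x, steps, dt)
    slope = (x1 - x) / (steps * dt)   # estimated coarse time derivative
    x = x1 + leap * slope             # projective (extrapolation) step
    t += steps * dt + leap
print(t, x)  # x has decayed toward zero (the true solution is exp(-t))
```

Each cycle covers 0.5 time units while simulating the fine model for only 0.2, which is the "accelerate simulations" part; bifurcation analysis uses the same coarse timestepper inside a fixed-point solver.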

    Disentangling agglomeration and network externalities: a conceptual typology

    Agglomeration and network externalities are fuzzy concepts. When different meanings are (un)intentionally juxtaposed in analyses of the menagerie of agglomeration and network externalities, researchers may reach inaccurate conclusions about how the two interlock. Both externality types can be analytically combined, but only when one adopts a coherent approach to their conceptualization and operationalization, to which end we provide a combinatorial typology. We illustrate the typology by applying a state-of-the-art bipartite network projection detailing the presence of globalized producer services firms in cities in 2012. This leads to two one-mode graphs that can be validly interpreted as topological renderings of agglomeration and network externalities.
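    The bipartite projection step reduces to matrix products on a firm-city incidence matrix. A minimal sketch with a hypothetical toy matrix (the real data concerns globalized producer services firms in cities in 2012):

```python
import numpy as np

# Toy firm-city incidence matrix B: B[f, c] = 1 if firm f has an office
# in city c. Purely illustrative values.
B = np.array([
    [1, 1, 0, 0],   # firm 0 in cities 0, 1
    [1, 0, 1, 0],   # firm 1 in cities 0, 2
    [0, 1, 1, 1],   # firm 2 in cities 1, 2, 3
])

# The two one-mode projections of the bipartite graph:
city_graph = B.T @ B   # city x city: firms co-located in both cities
firm_graph = B @ B.T   # firm x firm: cities shared by both firms
np.fill_diagonal(city_graph, 0)   # drop self-loops
np.fill_diagonal(firm_graph, 0)
print(city_graph)
```

The city-by-city graph is the rendering read in terms of network externalities; the firm-by-firm (co-location) graph is the agglomeration side.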

    Coarse-Grained Kinetic Computations for Rare Events: Application to Micelle Formation

    We discuss a coarse-grained approach to the computation of rare events in the context of grand canonical Monte Carlo (GCMC) simulations of the self-assembly of surfactant molecules into micelles. The basic assumption is that the {\it computational} system dynamics can be decomposed into two parts -- fast (noise) and slow (reaction coordinate) dynamics -- so that the system can be described by an effective, coarse-grained Fokker-Planck (FP) equation. While such an assumption may be valid in many circumstances, an explicit form of the FP equation is not always available. In our computations we bypass the analytic derivation of such an effective FP equation. The effective free energy gradient and the state-dependent magnitude of the random noise, which are necessary to formulate the effective Fokker-Planck equation, are obtained from ensembles of short bursts of microscopic simulations {\it with judiciously chosen initial conditions}. The reaction coordinate in our micelle formation problem is taken to be the size of a cluster of surfactant molecules. We test the validity of the effective FP description in this system and reconstruct a coarse-grained free energy surface in good agreement with full-scale GCMC simulations. We also show that, for very small clusters, the cluster size ceases to be a good reaction coordinate for a one-dimensional effective description. We discuss possible ways to improve the current model and to take higher-dimensional coarse-grained dynamics into account.
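    The central estimation step, obtaining the drift and noise magnitude of an effective FP equation from ensembles of short microscopic bursts, can be sketched on a toy problem. The Ornstein-Uhlenbeck stand-in below replaces the GCMC cluster dynamics; all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def micro_step(x, dt=1e-3):
    """Stand-in microscopic simulator: Ornstein-Uhlenbeck dynamics
    dx = -x dt + 0.5 dW (instead of the GCMC cluster-size dynamics)."""
    return x + -x * dt + 0.5 * np.sqrt(dt) * rng.normal(size=np.shape(x))

def drift_diffusion(x0, bursts=20000, dt=1e-3):
    """Estimate local drift and diffusion at x0 from an ensemble of short
    bursts all initialized at x0 (the judiciously chosen initial conditions)."""
    dx = micro_step(np.full(bursts, x0), dt) - x0
    drift = dx.mean() / dt                    # effective free-energy gradient
    diffusion = (dx * dx).mean() / (2 * dt)   # state-dependent noise magnitude
    return drift, diffusion

drift, diffusion = drift_diffusion(1.0)
print(drift, diffusion)  # roughly -1.0 and 0.125 for this toy model
```

Repeating this over a grid of reaction-coordinate values and integrating the drift yields the coarse-grained free energy surface without ever deriving the FP equation analytically.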

    Sliding Mode Control of Two-Level Quantum Systems

    This paper proposes a robust control method based on sliding mode design for two-level quantum systems with bounded uncertainties. An eigenstate of the two-level quantum system is identified as a sliding mode. The objective is to design a control law that steers the system's state into the sliding mode domain and then maintains it in that domain when bounded uncertainties exist in the system Hamiltonian. We propose a controller design method using the Lyapunov methodology and periodic projective measurements. In particular, we give conditions for designing such a control law that guarantee the desired robustness in the presence of the uncertainties. The sliding mode control method has potential applications to quantum information processing with uncertainties.
    Comment: 29 pages, 4 figures, accepted by Automatica
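    The Lyapunov part of such a design can be illustrated on a noise-free two-level system: a standard Lyapunov control law driving the state toward an eigenstate of H0. This is not the paper's full sliding mode scheme (no bounded uncertainties, no periodic projective measurements); all constants are illustrative:

```python
import numpy as np

# Two-level Hamiltonians: free part H0 (sigma_z / 2) and control part H1
# (sigma_x). The target eigenstate |0> plays the role of the sliding mode.
H0 = np.array([[0.5, 0.0], [0.0, -0.5]], dtype=complex)
H1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
target = np.array([1.0, 0.0], dtype=complex)

def lyapunov_control(psi, K=2.0):
    """u = K * Im(<target|psi>^* <target|H1|psi>) makes the Lyapunov
    function V = 1 - |<target|psi>|^2 non-increasing, since the target
    is an eigenstate of H0."""
    c = np.vdot(target, psi)
    return K * np.imag(np.conj(c) * (target.conj() @ H1 @ psi))

psi = np.array([0.6, 0.8], dtype=complex)    # initial state, mostly |1>
dt = 0.001
for _ in range(20000):
    u = lyapunov_control(psi)
    psi = psi - 1j * dt * ((H0 + u * H1) @ psi)   # explicit Euler step
    psi /= np.linalg.norm(psi)                    # renormalize
fidelity = abs(np.vdot(target, psi)) ** 2
print(fidelity)  # close to 1
```

The sliding mode design adds the measurement and robustness layer on top: periodic projective measurements return the state to the sliding mode domain when Hamiltonian uncertainties push it out.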

    Geometric uncertainty models for correspondence problems in digital image processing

    Many recent advances in technology rely heavily on the correct interpretation of an enormous amount of visual information. All available sources of visual data (e.g. cameras in surveillance networks, smartphones, game consoles) must be adequately processed to retrieve the most interesting user information. Computer vision and image processing techniques are therefore gaining significant interest, and will continue to do so in the near future. Most commonly applied image processing algorithms require a reliable solution for correspondence problems. The solution involves, first, the localization of corresponding points (visualizing the same 3D point in the observed scene) in the different images of distinct sources, and second, the computation of consistent geometric transformations relating correspondences on scene objects. This PhD thesis presents a theoretical framework for solving correspondence problems with geometric features (such as points and straight lines) representing rigid objects in image sequences of complex scenes with static and dynamic cameras. The research focuses on localization uncertainty due to errors in feature detection and measurement, and its effect on each step in the solution of a correspondence problem. Whereas most other recent methods apply statistical models for spatial localization uncertainty, this work considers a novel geometric approach. Localization uncertainty is modeled as a convex polygonal region in the image space. This model can be efficiently propagated throughout the correspondence finding procedure. It allows for an easy extension toward transformation uncertainty models, and for confidence measures to verify the reliability of the outcome of the correspondence framework. Our procedure aims at finding reliable consistent transformations in sets of few and ill-localized features, possibly containing a large fraction of false candidate correspondences.
The evaluation of the proposed procedure on practical correspondence problems shows that correct consistent correspondence sets are returned in over 95% of the experiments for small sets of 10-40 features contaminated with up to 400% false positives and 40% false negatives. The presented techniques prove beneficial in typical image processing applications, such as image registration and rigid object tracking.
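    Because affine maps preserve convexity, a convex polygonal uncertainty region propagates exactly through an affine transformation by transforming its vertices. A minimal sketch with a hypothetical polygon and transform:

```python
import numpy as np

def propagate_polygon(vertices, A, t):
    """Propagate a convex polygonal uncertainty region through the affine
    map x -> A x + t. Affine maps preserve convexity, so transforming the
    vertices transforms the whole region exactly."""
    return vertices @ A.T + t

# Hypothetical uncertainty polygon around a detected feature point (here a
# unit square), pushed through a rotation-plus-shear and a translation.
poly = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
A = np.array([[0.9, 0.2], [-0.1, 1.1]])
t = np.array([5.0, -3.0])
print(propagate_polygon(poly, A, t))
```

This exactness is what makes the polygonal model cheap to carry through the correspondence-finding procedure, in contrast to statistical models that require approximating transformed distributions.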

    Geometric noise reduction for multivariate time series

    We propose an algorithm for the reduction of observational noise in chaotic multivariate time series. The algorithm is based on a maximum likelihood criterion, and its goal is to reduce the mean distance of the points of the cleaned time series to the attractor. We give evidence of the convergence of the empirical measure associated with the cleaned time series to the underlying invariant measure, implying that the long-run behavior of the true dynamics can be predicted.
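    A common geometric way to pull noisy points back toward an attractor is local projection onto the dominant directions of each point's neighbourhood. The sketch below is in that spirit only, not the paper's maximum-likelihood algorithm; the function name and parameters are illustrative:

```python
import numpy as np

def local_projection_clean(X, k=20, q=1):
    """Replace each point of the multivariate series X (N x d) by its
    projection onto the top-q principal directions of its k nearest
    neighbours, which approximate the local attractor geometry."""
    Xc = X.copy()
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = X[np.argsort(d)[:k]]
        mu = nbrs.mean(axis=0)
        _, _, Vt = np.linalg.svd(nbrs - mu, full_matrices=False)
        P = Vt[:q].T @ Vt[:q]            # projector onto the local tangent
        Xc[i] = mu + P @ (X[i] - mu)
    return Xc

# Toy test: points on a circle (a 1-D "attractor" in 2-D) plus noise.
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 500)
clean = np.column_stack([np.cos(theta), np.sin(theta)])
noisy = clean + 0.05 * rng.normal(size=clean.shape)
cleaned = local_projection_clean(noisy, k=20, q=1)
```

After cleaning, the points lie measurably closer to the unit circle than the noisy data, which is the mean-distance-to-attractor criterion the abstract describes.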