
    Two-parameter nonsmooth grazing bifurcations of limit cycles: classification and open problems

    This paper proposes a strategy for the classification of codimension-two grazing bifurcations of limit cycles in piecewise-smooth systems of ordinary differential equations. Such nonsmooth transitions (C-bifurcations) occur when the cycle interacts with a discontinuity boundary of phase space in a non-generic way. Several such codimension-one events have recently been identified, causing, for example, period-adding or the sudden onset of chaos. Here, the focus is on codimension-two grazings that are local in the sense that the dynamics can be fully described by an appropriate Poincaré map from a neighbourhood of the grazing point (or points) of the critical cycle to itself. It is proposed that codimension-two grazing bifurcations can be divided into three distinct types: either the grazing point is degenerate, the grazing cycle is itself degenerate (e.g. non-hyperbolic), or two grazing events occur simultaneously. A careful distinction is drawn between their occurrence in systems with discontinuous states, with discontinuous vector fields, or with a discontinuity in some derivative of the vector field. Examples of each kind of bifurcation are presented, mostly derived from mechanical applications. For each example, where possible, principal bifurcation curves characteristic of the codimension-two scenario are presented and general features of the dynamics discussed. Many avenues for future research are opened.
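
    For orientation (standard background on grazing, not a result of this paper), the codimension-one grazing of a limit cycle in an impacting system is commonly reduced to a one-dimensional square-root normal form for the local Poincaré map, with generic constants $a$, $b > 0$ and unfolding parameter $\mu$:
    $$
    x_{n+1} =
    \begin{cases}
    a\,x_n + \mu, & x_n \le 0,\\
    -b\,\sqrt{x_n} + \mu, & x_n > 0.
    \end{cases}
    $$
    The codimension-two scenarios classified in the paper correspond to degeneracies of this local picture, for instance a non-hyperbolic grazing cycle or two simultaneous grazing points.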

    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent advances in the development of basic GSP tools, including methods for sampling, filtering, and graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. Comment: To appear in Proceedings of the IEEE
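
    As a concrete illustration of one of the basic GSP tools mentioned above, the sketch below applies a spectral low-pass filter to a signal on a small graph. The graph, the signal, and the filter response h are made-up examples, not taken from the paper.

```python
# Illustrative sketch: spectral low-pass filtering of a graph signal.
import numpy as np

# Adjacency matrix of a small undirected graph (5 nodes); an arbitrary example.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A             # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)                 # graph frequencies and graph Fourier basis

x = np.array([1.0, 0.2, 0.9, -0.5, 3.0])   # a graph signal (one value per node)
x_hat = U.T @ x                            # graph Fourier transform of the signal

h = 1.0 / (1.0 + 2.0 * lam)                # a simple low-pass frequency response
y = U @ (h * x_hat)                        # filter in the spectral domain and invert

print(y)                                   # smoothed version of x over the graph
```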

    The solution path of the generalized lasso

    We present a path algorithm for the generalized lasso problem. This problem penalizes the $\ell_1$ norm of a matrix D times the coefficient vector, and has a wide range of applications, dictated by the choice of D. Our algorithm is based on solving the dual of the generalized lasso, which greatly facilitates computation of the path. For $D = I$ (the usual lasso), we draw a connection between our approach and the well-known LARS algorithm. For an arbitrary D, we derive an unbiased estimate of the degrees of freedom of the generalized lasso fit. This estimate turns out to be quite intuitive in many applications. Comment: Published at http://dx.doi.org/10.1214/11-AOS878 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
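
    For reference, in the standard formulation the generalized lasso estimate solves
    $$
    \hat{\beta} = \operatorname*{argmin}_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\,\|y - X\beta\|_2^2 + \lambda\,\|D\beta\|_1,
    $$
    and in the signal-approximation case $X = I$ the dual problem that drives the path algorithm is
    $$
    \hat{u} = \operatorname*{argmin}_{u} \; \tfrac{1}{2}\,\|y - D^{\mathsf{T}} u\|_2^2 \quad \text{subject to} \quad \|u\|_\infty \le \lambda,
    $$
    with the primal solution recovered as $\hat{\beta} = y - D^{\mathsf{T}} \hat{u}$.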

    Theory and algorithms for swept manifold intersections

    Recent developments in such fields as computer aided geometric design, geometric modeling, and computational topology have generated a spate of interest in geometric objects called swept volumes. Besides their great applicability in various practical areas, the mere geometry and topology of these entities make them a perfect testbed for novel approaches aimed at analyzing and representing geometric objects. The structure of swept volumes reveals that it is also important to focus on a slightly simpler, although very similar, type of object - swept manifolds. In particular, effective computability of swept manifold intersections is of major concern. The main goal of this dissertation is to conduct a study of swept manifolds and, based on the findings, develop methods for computing swept surface intersections. The twofold nature of this goal prompted a division of the work into two distinct parts. First, a theoretical analysis of swept manifolds is performed, providing better insight into their topological structure and unveiling several important properties. In the course of the investigation, several subclasses of swept manifolds are introduced; in particular, attention is focused on regular and critical swept manifolds. Because of their high applicability, additional effort is put into the analysis of two-dimensional swept manifolds - swept surfaces. Some of the valuable properties exhibited by such surfaces are generalized to higher dimensions by introducing yet another class of swept manifolds - recursive swept manifolds. In the second part of this work, algorithms for finding swept surface intersections are developed. The need for such algorithms arises from the specific structure of swept surfaces, which precludes direct employment of existing intersection methods. The new algorithms are designed by utilizing the underlying ideas of existing intersection techniques and making the necessary technical modifications. Such modifications are achieved by employing properties of swept surfaces obtained in the course of the theoretical study. The intersection problem is also considered from a somewhat different perspective: a novel, homology-based approach to the local characterization of intersections of submanifolds and s-subvarieties of a Euclidean space is presented. It provides a method for distinguishing between transverse and tangential intersection points and determining, in some cases, whether an intersection point belongs to a boundary. Finally, several possible applications of the obtained results are described, including virtual sculpting and modeling of heterogeneous materials

    The Topology ToolKit

    This system paper presents the Topology ToolKit (TTK), a software platform designed for topological data analysis in scientific visualization. TTK provides a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users thanks to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping, or through direct, dependence-free C++ to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting. This algorithm guarantees combinatorial consistency across the topological abstractions supported by TTK and, importantly, a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure that supports time-efficient and generic traversals, self-adjusts its memory usage on demand for input simplicial meshes, and implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture that guarantees memory-efficient and direct access to TTK features, while still offering researchers powerful and easy bindings and extensions. TTK is open source (BSD license) and its code, online documentation and video tutorials are available on TTK's website
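
    As an illustration of one of the topological abstractions listed above, the sketch below (plain Python, not TTK's API) computes the 0-dimensional persistence pairs of a toy 1D scalar field with a union-find sweep; TTK provides the analogous computation, robustly and at scale, for general scalar data.

```python
# Illustration only (not TTK code): 0-dimensional persistence pairs of a 1D
# scalar field, computed by sweeping the samples in increasing order and
# merging sublevel-set components with a union-find forest.
import numpy as np

def persistence_pairs_1d(f):
    """Return (birth, death) pairs of sublevel-set components of a 1D array f."""
    order = np.argsort(f)                    # process samples from lowest to highest
    parent = {}                              # union-find forest over processed indices
    birth = {}                               # component representative -> birth value
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, f[i]
        for j in (i - 1, i + 1):             # look at already-processed neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:                 # two components meet: the younger one dies
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    if birth[young] < f[i]:  # skip zero-persistence (regular) pairs
                        pairs.append((float(birth[young]), float(f[i])))
                    parent[young] = old
    pairs.append((float(f[order[0]]), float("inf")))  # the global minimum never dies
    return pairs

print(persistence_pairs_1d(np.array([3.0, 1.0, 4.0, 0.5, 2.0, 2.5, 0.0, 5.0])))
# -> [(0.5, 2.5), (1.0, 4.0), (0.0, inf)]
```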

    Master of Science

    Analysis and visualization of flow is an important part of many scientific endeavors. Computation of streamlines is fundamental to many of these analysis and visualization tasks. A streamline is the path a massless particle traces under the instantaneous velocities of a given vector field. Flow data are often stored as a sampled vector field over a mesh. We propose a new representation of flow defined by such a vector field. Given a triangulation and a vector field defined over its vertices, we represent flow in the form of its transversal behavior over the edges of the triangulation. A streamline is represented as a set of discrete jumps over these edges; any information about the actual path taken through the interior of the triangles is discarded. We eliminate the need to compute actual paths of streamlines through the interior of each triangle while maintaining the aggregate behavior of flow within each of them. We discretize each edge uniformly into a fixed number of bins and use this discretization to form a combinatorial representation of flow in the form of a directed graph whose nodes are the set of all bins and whose edges represent the discrete jumps between these bins. This representation is a combinatorial structure that provides robustness and consistency in expressing flow features such as critical points, streamlines, separatrices, and closed streamlines, which are otherwise hard to compute consistently
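
    A minimal sketch of this construction is given below, under simplifying assumptions that are not part of the thesis: the vector field is treated as constant over each triangle (the average of its vertex vectors) and each bin is represented by its midpoint. Nodes of the resulting directed graph are (edge, bin) pairs, and graph edges record the discrete jump a streamline makes from a bin on one triangle edge to a bin on another.

```python
# Hypothetical sketch of a bin-to-bin flow graph over a triangulation.
import numpy as np
from collections import defaultdict

N_BINS = 4  # number of bins per triangle edge

def bin_midpoint(p, q, k):
    """Midpoint of bin k on the segment from p to q."""
    t = (k + 0.5) / N_BINS
    return (1 - t) * p + t * q

def exit_bin(tri_pts, tri_verts, start, v):
    """Advect start along the constant direction v; return the (edge, bin) it exits through."""
    best = None
    for a, b in [(0, 1), (1, 2), (2, 0)]:
        p, q = tri_pts[a], tri_pts[b]
        M = np.column_stack([v, p - q])          # solve start + s*v = p + t*(q - p)
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        s, t = np.linalg.solve(M, p - start)
        if s > 1e-9 and -1e-9 <= t <= 1 + 1e-9 and (best is None or s < best[0]):
            va, vb = tri_verts[a], tri_verts[b]
            tc = t if va < vb else 1.0 - t       # parameter along the canonically ordered edge
            k = min(int(np.clip(tc, 0, 1) * N_BINS), N_BINS - 1)
            best = (s, tuple(sorted((va, vb))), k)
    return None if best is None else (best[1], best[2])

def build_flow_graph(points, triangles, vectors):
    """Directed graph over (edge, bin) nodes encoding the transversal behaviour of the flow."""
    graph = defaultdict(set)
    for tri in triangles:
        tri_pts = points[list(tri)]
        v = vectors[list(tri)].mean(axis=0)      # constant field within this triangle
        for a, b in [(0, 1), (1, 2), (2, 0)]:
            edge = tuple(sorted((tri[a], tri[b])))
            for k in range(N_BINS):
                start = bin_midpoint(points[edge[0]], points[edge[1]], k)
                target = exit_bin(tri_pts, tri, start, v)
                if target is not None:
                    graph[(edge, k)].add(target)
    return graph

# Tiny example: a unit square split into two triangles, with a uniform rightward field.
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangles = [(0, 1, 2), (0, 2, 3)]
vectors = np.array([[1.0, 0.0]] * 4)
for node, successors in sorted(build_flow_graph(points, triangles, vectors).items()):
    print(node, "->", successors)
```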

    Cluster, Classify, Regress: A General Method For Learning Discontinuous Functions

    This paper presents a method for solving the supervised learning problem in which the output is highly nonlinear and discontinuous. It is proposed to solve this problem in three stages: (i) cluster the pairs of input-output data points, resulting in a label for each point; (ii) classify the data, where the corresponding label is the output; and finally (iii) perform one separate regression for each class, where the training data corresponds to the subset of the original input-output pairs which have that label according to the classifier. It has not yet been proposed to combine these three fundamental building blocks of machine learning in this simple and powerful fashion. This can be viewed as a form of deep learning, where any of the intermediate layers can itself be deep. The utility and robustness of the methodology are illustrated on some toy problems, including one example problem arising from simulation of plasma fusion in a tokamak. Comment: 12 files, 6 figures
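
    A minimal sketch of the three stages on a toy discontinuous function is given below; the specific model choices (k-means, a random-forest classifier, per-cluster linear regression) and the toy data are placeholders, since the method admits any clustering, classification, and regression algorithms at each stage.

```python
# Cluster-classify-regress on a toy discontinuous function (illustrative choices only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

# Toy target: two linear branches with a jump at x = 0.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.where(X[:, 0] < 0, 2 * X[:, 0] - 3, -X[:, 0] + 5) + 0.05 * rng.standard_normal(500)

# (i) cluster the joint input-output pairs, giving each point a label
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.column_stack([X, y]))

# (ii) classify: learn to predict the cluster label from the input alone
clf = RandomForestClassifier(random_state=0).fit(X, labels)

# (iii) regress: fit one model per cluster on that cluster's input-output pairs
regs = {c: LinearRegression().fit(X[labels == c], y[labels == c]) for c in np.unique(labels)}

def predict(X_new):
    c = clf.predict(X_new)                     # route each query to a class
    out = np.empty(len(X_new))
    for label, reg in regs.items():
        mask = c == label
        if mask.any():
            out[mask] = reg.predict(X_new[mask])
    return out

print(predict(np.array([[-0.5], [0.5]])))      # approximately [-4.0, 4.5]
```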