    A systematic analysis of equivalence in multistage networks

    Many approaches to switching in optoelectronic and optical networks decompose the switching function across multiple stages or hops. This paper addresses the problem of determining whether two multistage or multihop networks are functionally equivalent. Various ad hoc methods have been used in the past to establish such equivalences. A systematic method for determining equivalence is presented, based on properties of the link permutations used to interconnect the stages of the network. This method is useful in laying out multistage networks, in determining optimal channel assignments for multihop networks, and in establishing the routing required in such networks. A purely graphical variant of the method, requiring no mathematics or calculations, is also described.
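
    The abstract does not spell out the criterion, but the bookkeeping such an analysis rests on can be sketched: model each interstage wiring as a permutation of link indices and compose the stages to obtain the end-to-end wiring. The Python sketch below uses the shuffle and bit-reversal wirings as standard examples; the function names are illustrative, not the paper's. Identical composites mean two cascades of pure wirings are interchangeable; equivalence of full networks, with switching elements between the stages, is the subtler question the paper's method addresses.

```python
import numpy as np

def shuffle(n_bits):
    """Perfect-shuffle wiring: rotate the bits of each link address left by one."""
    size = 1 << n_bits
    return np.array([((i << 1) & (size - 1)) | (i >> (n_bits - 1))
                     for i in range(size)])

def bit_reversal(n_bits):
    """Bit-reversal wiring: reverse the bits of each link address."""
    return np.array([int(format(i, f"0{n_bits}b")[::-1], 2)
                     for i in range(1 << n_bits)])

def compose(*stages):
    """Apply wirings left to right; input i exits the cascade at compose(...)[i]."""
    out = np.arange(len(stages[0]))
    for p in stages:
        out = p[out]
    return out

# Three perfect shuffles on 3-bit addresses rotate each address all the way
# around, so the cascade wires every input straight through (identity).
assert np.array_equal(compose(shuffle(3), shuffle(3), shuffle(3)), np.arange(8))
```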

    Data-driven discovery of coordinates and governing equations

    The discovery of governing equations from scientific data has the potential to transform data-rich fields that lack well-characterized quantitative descriptions. Advances in sparse regression are currently enabling the tractable identification of both the structure and parameters of a nonlinear dynamical system from data. The resulting models have the fewest terms necessary to describe the dynamics, balancing model complexity with descriptive ability, and thus promoting interpretability and generalizability. This provides an algorithmic approach to Occam's razor for model discovery. However, this approach fundamentally relies on an effective coordinate system in which the dynamics have a simple representation. In this work, we design a custom autoencoder to discover a coordinate transformation into a reduced space where the dynamics may be sparsely represented. Thus, we simultaneously learn the governing equations and the associated coordinate system. We demonstrate this approach on several example high-dimensional dynamical systems with low-dimensional behavior. The resulting modeling framework combines the strengths of deep neural networks for flexible representation and sparse identification of nonlinear dynamics (SINDy) for parsimonious models. It is the first method of its kind to place the discovery of coordinates and models on an equal footing.
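
    As a rough illustration of the sparse-regression half of this framework (SINDy alone, without the autoencoder and learned coordinates that are the paper's contribution), the sketch below fits a sparse coefficient matrix Xi so that X_dot ≈ Theta(X) Xi via sequentially thresholded least squares. The degree-2 polynomial library, threshold value, and test system are illustrative choices, not the paper's.

```python
import numpy as np

def library(X):
    """Candidate-function library Theta(X): polynomials up to degree 2."""
    n, d = X.shape
    cols = [np.ones((n, 1)), X]
    for i in range(d):
        for j in range(i, d):
            cols.append((X[:, i] * X[:, j])[:, None])
    return np.hstack(cols)

def stlsq(Theta, X_dot, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: sparse Xi with Theta @ Xi ~ X_dot."""
    Xi = np.linalg.lstsq(Theta, X_dot, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0                          # prune small coefficients
        for k in range(X_dot.shape[1]):          # refit the surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], X_dot[:, k],
                                             rcond=None)[0]
    return Xi

# Recover dx/dt = -2x, dy/dt = x - y from sampled states and derivatives.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
X_dot = X @ np.array([[-2.0, 1.0], [0.0, -1.0]])
print(stlsq(library(X), X_dot))  # nonzero entries only in the x and y rows
```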

    Optimized Sparse Matrix Operations for Reverse Mode Automatic Differentiation

    Sparse matrix representations are ubiquitous in computational science and machine learning, yielding significant reductions in compute time, compared to dense representations, for problems with local connectivity. The adoption of sparse representations in leading ML frameworks such as PyTorch is incomplete, however, with support for automatic differentiation and GPU acceleration missing. In this work, we present a CSR-based sparse matrix wrapper for PyTorch with CUDA acceleration for basic matrix operations, as well as automatic differentiability. We also present several applications of the resulting sparse kernels to optimization problems, demonstrating their ease of use and measuring performance against their dense counterparts.
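
    The abstract gives no implementation details, but the reverse-mode rules such a wrapper needs are compact: for y = A x with A stored in CSR form, the gradient with respect to the stored value at (i, j) is grad_y[i] * x[j], and the gradient with respect to x is Aᵀ grad_y. The loop-based sketch below encodes these rules in a torch.autograd.Function; it is illustrative only (the paper's kernels run on CUDA, and the class name here is hypothetical).

```python
import torch

class CSRMatVec(torch.autograd.Function):
    """y = A @ x for a CSR matrix A = (values, col_idx, row_ptr),
    differentiable with respect to both the stored values and x."""

    @staticmethod
    def forward(ctx, values, col_idx, row_ptr, x, n_rows):
        y = torch.zeros(n_rows, dtype=x.dtype, device=x.device)
        for i in range(n_rows):
            s, e = row_ptr[i].item(), row_ptr[i + 1].item()
            y[i] = (values[s:e] * x[col_idx[s:e]]).sum()
        ctx.save_for_backward(values, col_idx, row_ptr, x)
        ctx.n_rows = n_rows
        return y

    @staticmethod
    def backward(ctx, grad_y):
        values, col_idx, row_ptr, x = ctx.saved_tensors
        grad_vals = torch.zeros_like(values)
        grad_x = torch.zeros_like(x)
        for i in range(ctx.n_rows):
            s, e = row_ptr[i].item(), row_ptr[i + 1].item()
            # d/d values: gather x at each entry's column, scaled by grad_y[i].
            grad_vals[s:e] = grad_y[i] * x[col_idx[s:e]]
            # d/d x: accumulate A^T @ grad_y, column by column.
            grad_x.index_add_(0, col_idx[s:e], grad_y[i] * values[s:e])
        return grad_vals, None, None, grad_x, None

# Toy 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form.
vals = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
cols = torch.tensor([0, 2, 1])
rptr = torch.tensor([0, 2, 3])
x = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
y = CSRMatVec.apply(vals, cols, rptr, x, 2)
y.sum().backward()
print(vals.grad, x.grad)  # x gathered per entry, and A^T @ ones
```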