803 research outputs found

    Modelling the perception and composition of Western musical harmony.

    PhD Thesis
    Harmony is a fundamental structuring principle in Western music, determining how simultaneously occurring musical notes combine to form chords, and how successions of chords combine to form chord progressions. Harmony is interesting to psychologists because it unites many core features of auditory perception and cognition, such as pitch perception, auditory scene analysis, and statistical learning. A current challenge is to formalise our psychological understanding of harmony through computational modelling. Here we detail computational studies of three core dimensions of harmony: consonance, harmonic expectation, and voice leading. These studies develop and evaluate computational models of the psychoacoustic and cognitive processes involved in harmony perception, and quantitatively model how these processes contribute to music composition. Through these studies we examine long-standing issues in music psychology, such as the relative contributions of roughness and harmonicity to consonance perception, the roles of low-level psychoacoustic and high-level cognitive processes in harmony perception, and the probabilistic nature of harmonic expectation. We also develop cognitively informed computational models that are capable of both analysing existing music and generating new music, with potential applications in computational creativity, music informatics, and music psychology. This thesis is accompanied by a collection of open-source software packages that implement the models developed and evaluated here, which we hope will support future research into the psychological foundations of musical harmony.
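    The abstract does not spell out its models, but a standard starting point for the roughness side of consonance is the Plomp-Levelt dissonance curve. The sketch below uses Sethares' (1993) parameterization of that curve; the tone spectra, amplitude weighting, and constants are illustrative assumptions, not the thesis's actual models.

    ```python
    import numpy as np

    def pl_dissonance(f1, f2, a1=1.0, a2=1.0):
        """Plomp-Levelt dissonance of one pair of partials,
        via Sethares' (1993) parameterization of the curve."""
        b1, b2 = 3.5, 5.75                  # curve-shape constants
        x_star, s1, s2 = 0.24, 0.0207, 18.96
        s = x_star / (s1 * min(f1, f2) + s2)  # critical-bandwidth scaling
        d = abs(f2 - f1)
        return a1 * a2 * (np.exp(-b1 * s * d) - np.exp(-b2 * s * d))

    def chord_roughness(freqs, amps):
        """Total roughness: sum over all pairs of partials."""
        total = 0.0
        for i in range(len(freqs)):
            for j in range(i + 1, len(freqs)):
                total += pl_dissonance(freqs[i], freqs[j], amps[i], amps[j])
        return total

    def partials(f0, n=6):
        """Harmonic complex tone with 1/k amplitude rolloff (an assumption)."""
        return [f0 * k for k in range(1, n + 1)], [1.0 / k for k in range(1, n + 1)]

    # A perfect fifth (3:2) should come out less rough than a tritone.
    for name, ratio in [("fifth", 3 / 2), ("tritone", 2 ** 0.5)]:
        f_a, a_a = partials(261.63)            # C4
        f_b, a_b = partials(261.63 * ratio)
        print(name, round(chord_roughness(f_a + f_b, a_a + a_b), 3))
    ```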

    Fast randomized iteration: diffusion Monte Carlo through the lens of numerical linear algebra

    We review the basic outline of the highly successful diffusion Monte Carlo (DMC) technique, commonly used in contexts ranging from electronic structure calculations to rare event simulation and data assimilation, and propose a new class of randomized iterative algorithms based on similar principles to address a variety of common tasks in numerical linear algebra. From the point of view of numerical linear algebra, the main novelty of the Fast Randomized Iteration schemes described in this article is that they have either linear or constant cost per iteration (and in total, under appropriate conditions) and are rather versatile: we will show how they apply to the solution of linear systems, eigenvalue problems, and matrix exponentiation, in dimensions far beyond the present limits of numerical linear algebra. While traditional iterative methods in numerical linear algebra were created in part to deal with instances where a matrix (of size $\mathcal{O}(n^2)$) is too big to store, the algorithms that we propose are effective even in instances where the solution vector itself (of size $\mathcal{O}(n)$) may be too big to store or manipulate. In fact, our work is motivated by recent DMC-based quantum Monte Carlo schemes that have been applied to matrices as large as $10^{108} \times 10^{108}$. We provide basic convergence results, discuss the dependence of these results on the dimension of the system, and demonstrate dramatic cost savings on a range of test problems.
    Comment: 44 pages, 7 figures
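    The article's schemes are more elaborate, but the core idea, replacing the iterate of a classical method with an unbiased sparse random compression so that only a bounded number of entries is ever stored, can be sketched in a few lines. The compression rule and the toy eigenvalue problem below are illustrative choices, not the article's exact estimators.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def compress(v, m):
        """Unbiased stochastic compression to ~m nonzero entries: entry i
        survives with probability p_i = min(1, m|v_i|/||v||_1) and is
        reweighted so that E[compress(v)] = v."""
        norm = np.abs(v).sum()
        p = np.minimum(1.0, m * np.abs(v) / norm)
        keep = rng.random(v.size) < p
        out = np.zeros_like(v)
        out[keep] = v[keep] / p[keep]
        return out

    def fri_power_iteration(M, m, iters=2000, burn=100):
        """Dominant-eigenvalue estimate with a sparsified iterate; only the
        ~m columns of M matching the nonzeros of v are ever needed."""
        n = M.shape[0]
        v = compress(np.ones(n) / n, m)
        est, count = 0.0, 0
        for k in range(iters):
            w = M @ v
            lam = np.abs(w).sum() / np.abs(v).sum()  # one-step growth factor
            v = compress(w / np.abs(w).sum(), m)
            if k >= burn:                            # average after burn-in
                count += 1
                est += (lam - est) / count
        return est

    n = 500
    M = rng.random((n, n)) / n + np.eye(n)   # nonnegative test matrix
    print("FRI estimate:", fri_power_iteration(M, m=50))
    print("exact       :", np.linalg.eigvals(M).real.max())
    ```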

    New Classes of Binary Random Sequences for Cryptography

    Motivated by the vision for 5G wireless communications, which yields new security prerequisites and challenges, we propose a catalog of three new classes of pseudorandom sequence generators. This dissertation starts with a review of the requirements of 5G wireless networking systems and the most recent developments in wireless security services applied to 5G, such as private-key generation, key protection, and flexible authentication. The dissertation then proposes new complexity-theoretic, number-theoretic approaches to generating lightweight pseudorandom sequences, which protect private information using spread-spectrum techniques. For this new class of pseudorandom sequences, we obtain a generalization. Authentication issues between communicating parties in the basic model of Piggy Bank cryptography are considered, and a flexible authentication scheme using a certified authority is proposed.
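    The dissertation's three new generator classes are not reproduced here; as a point of reference for the complexity-theoretic, number-theoretic genre it builds on, below is a minimal sketch of the classic Blum-Blum-Shub generator, whose security rests on the hardness of factoring. The primes and seed are toy values for illustration only.

    ```python
    # Blum-Blum-Shub: x_{k+1} = x_k^2 mod N with N = p*q and
    # p ≡ q ≡ 3 (mod 4); output one parity bit per state update.
    def bbs_bits(seed, p, q, n_bits):
        assert p % 4 == 3 and q % 4 == 3, "BBS needs Blum primes"
        N = p * q
        x = seed % N
        bits = []
        for _ in range(n_bits):
            x = (x * x) % N
            bits.append(x & 1)   # least-significant bit of the state
        return bits

    # Toy parameters; real use requires large random Blum primes.
    print("".join(map(str, bbs_bits(seed=2020, p=499, q=547, n_bits=32))))
    ```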

    Random Matrix Theories in Quantum Physics: Common Concepts

    We review the development of random-matrix theory (RMT) during the last decade. We emphasize both the theoretical aspects and the application of the theory to a number of fields. These comprise chaotic and disordered systems, the localization problem, many-body quantum systems, the Calogero-Sutherland model, chiral symmetry breaking in QCD, and quantum gravity in two dimensions. The review is preceded by a brief historical survey of the developments of RMT and of localization theory since their inception. We emphasize the concepts common to the above-mentioned fields as well as the great diversity of RMT. In view of the universality of RMT, we suggest that the current development signals the emergence of a new "statistical mechanics": stochasticity and general symmetry requirements lead to universal laws not based on dynamical principles.
    Comment: 178 pages, RevTeX, 45 figures, submitted to Physics Reports
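    The universality discussed in this review is easy to observe numerically: nearest-neighbour level spacings of random symmetric matrices follow the Wigner surmise, with characteristic level repulsion at small spacings. A minimal sketch, sampling from the Gaussian Orthogonal Ensemble (an illustrative choice) and using a crude unfolding near the band centre:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def goe_spacings(n=200, trials=500):
        """Nearest-neighbour spacings near the band centre of GOE matrices,
        normalized per matrix to unit mean spacing (a crude unfolding)."""
        s = []
        for _ in range(trials):
            a = rng.standard_normal((n, n))
            h = (a + a.T) / 2                    # real symmetric: GOE
            ev = np.linalg.eigvalsh(h)
            mid = ev[n // 2 - 10: n // 2 + 10]   # band centre: density ~ constant
            d = np.diff(mid)
            s.extend(d / d.mean())
        return np.asarray(s)

    s = goe_spacings()
    # Wigner surmise for the GOE: P(s) = (pi/2) s exp(-pi s^2 / 4)
    hist, edges = np.histogram(s, bins=30, range=(0.0, 3.0), density=True)
    mids = (edges[:-1] + edges[1:]) / 2
    surmise = (np.pi / 2) * mids * np.exp(-np.pi * mids ** 2 / 4)
    for m, h, w in zip(mids[::5], hist[::5], surmise[::5]):
        print(f"s = {m:4.2f}   empirical = {h:5.3f}   surmise = {w:5.3f}")
    ```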

    Part I:


    Ultrasound imaging using coded signals


    Multiscale Methods for Random Composite Materials

    Simulation of material behaviour is a vital tool not only for accelerating product development and increasing design efficiency but also for advancing our fundamental understanding of materials. While homogeneous, isotropic materials are often simple to simulate, advanced, anisotropic materials pose a more sizeable challenge. In simulating entire composite components, such as a 25m aircraft wing made by stacking several 0.25mm thick plies, finite element models typically exceed millions or even a billion unknowns. This problem is exacerbated by the inclusion of sub-millimetre manufacturing defects for two reasons. Firstly, a finer resolution is required, which makes the problem larger. Secondly, defects introduce randomness. Traditionally, this randomness or uncertainty has been quantified heuristically, since commercial codes are largely unsuccessful in solving problems of this size. This thesis develops a rigorous uncertainty quantification (UQ) framework enabled by a state-of-the-art finite element package, dune-composites, also developed here, designed for but not limited to composite applications. A key feature of this open-source package is a robust, parallel and scalable preconditioner, GenEO, that guarantees constant iteration counts independent of problem size. It boasts near-perfect scaling properties, in both a strong and a weak sense, on over 15,000 cores. It is numerically verified by solving industrially motivated problems containing upwards of 200 million unknowns. Equipped with the capability of solving expensive models, a novel stochastic framework is developed to quantify variability in part performance arising from localized out-of-plane defects. Theoretical part strength is determined for independent samples drawn from a distribution inferred from B-scans of wrinkles. Supported by the literature, the results indicate a strong dependence between maximum misalignment angle and strength knockdown, based on which an engineering model is presented to allow rapid estimation of residual strength, bypassing expensive simulations. The engineering model itself is built from a large set of simulations of residual strength, each of which is computed using the following two-step approach. First, a novel parametric representation of wrinkles is developed, where the spread of parameters defines the wrinkle distribution. Second, expensive forward models are solved only for independent wrinkles using dune-composites. Besides scalability, the other key feature of dune-composites, the GenEO coarse space, doubles as an excellent multiscale basis, which is exploited to build high-quality reduced-order models that are orders of magnitude smaller. This is important because it enables multiple coarse solves for the cost of one fine solve. In an MCMC framework, where many solves are wasted in arriving at the next independent sample, this is a sought-after quality, because it greatly increases the effective sample size for a fixed computational budget, thus providing a route to high-fidelity UQ. This thesis exploits both the new solvers and the multiscale methods developed here to design an efficient Bayesian framework that carries out previously intractable (large-scale) simulations calibrated by experimental data. These new capabilities provide the basis for future work on modelling random heterogeneous materials, while also offering scope for building virtual test programs, including nonlinear analyses, all of which can be implemented within a probabilistic setting.
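    The coarse-for-fine trade described above is what delayed-acceptance MCMC exploits: a cheap reduced-order solve screens each proposal before the expensive solve is run, while a second accept/reject stage keeps the chain exactly targeting the fine-model posterior. The sketch below uses toy one-parameter stand-ins for the fine and coarse forward models (not dune-composites calls) to show the mechanics.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy misfits standing in for expensive fine solves and cheap
    # reduced-order (coarse) solves of a wrinkle parameter theta.
    def fine_model(theta):   return 0.5 * ((theta - 1.0) / 0.3) ** 2
    def coarse_model(theta): return 0.5 * ((theta - 0.9) / 0.35) ** 2  # biased surrogate

    def delayed_acceptance_mh(n_steps, step=0.2):
        """Two-stage Metropolis: proposals must pass a coarse-model
        accept/reject before the fine model is evaluated."""
        theta = 0.0
        lp_fine, lp_coarse = -fine_model(theta), -coarse_model(theta)
        chain, fine_solves = [], 0
        for _ in range(n_steps):
            prop = theta + step * rng.standard_normal()
            lp_c = -coarse_model(prop)
            # Stage 1: cheap screening with the coarse model.
            if np.log(rng.random()) < lp_c - lp_coarse:
                fine_solves += 1
                lp_f = -fine_model(prop)
                # Stage 2: correction so the chain targets the fine posterior.
                if np.log(rng.random()) < (lp_f - lp_fine) - (lp_c - lp_coarse):
                    theta, lp_fine, lp_coarse = prop, lp_f, lp_c
            chain.append(theta)
        return np.asarray(chain), fine_solves

    chain, n_fine = delayed_acceptance_mh(20000)
    print(f"posterior mean ~ {chain[5000:].mean():.3f}, "
          f"fine solves: {n_fine} / 20000 proposals")
    ```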

    Volatility and correlation: Modeling and forecasting using Support Vector Machines

    Several realized volatility and correlation estimators have been introduced. The estimators, which are defined on high-frequency data, converge to the true volatility and correlation faster than their counterparts, even under market microstructure noise. A strategy for multivariate volatility estimation is also introduced: it combines Support Vector Machines with wavelet-based multiresolution analysis, and achieves higher estimation performance than single-model estimation.
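    The abstract gives no implementation details; the sketch below illustrates the general pipeline it describes, realized volatility computed from high-frequency returns, a wavelet multiresolution decomposition used to denoise the series, and a Support Vector Regression forecast, on synthetic data. All hyperparameters and the data-generating process are assumptions for illustration.

    ```python
    import numpy as np
    import pywt                      # PyWavelets
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)

    # Simulate 250 days of 5-minute returns with slowly varying volatility.
    days, intraday = 250, 78
    true_vol = 0.01 * np.exp(0.5 * np.sin(np.linspace(0, 6 * np.pi, days)))
    returns = rng.standard_normal((days, intraday)) * true_vol[:, None] / np.sqrt(intraday)

    # Daily realized volatility: sqrt of the sum of squared intraday returns.
    rv = np.sqrt((returns ** 2).sum(axis=1))

    # Multiresolution denoising: zero the finest detail level, reconstruct.
    coeffs = pywt.wavedec(rv, "db4", level=3)
    coeffs[-1] = np.zeros_like(coeffs[-1])
    rv_smooth = pywt.waverec(coeffs, "db4")[:days]

    # One-step-ahead SVR forecast on lagged smoothed RV.
    lags = 5
    X = np.column_stack([rv_smooth[i: days - lags + i] for i in range(lags)])
    y = rv[lags:]
    split = 200
    model = SVR(kernel="rbf", C=10.0, epsilon=1e-4).fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print(f"out-of-sample MAE: {np.abs(pred - y[split:]).mean():.5f} "
          f"(mean RV {rv.mean():.5f})")
    ```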