
    Identification of Cover Songs Using Information Theoretic Measures of Similarity

    Get PDF
    13 pages, 5 figures, 4 tables. v3: Accepted version.

    Online discussion compensates for suboptimal timing of supportive information presentation in a digitally supported learning environment

    Get PDF
    This study used a sequential set-up to investigate the consecutive effects of timing of supportive information presentation (information before vs. information during the learning task clusters) in interactive digital learning materials (IDLMs) and type of collaboration (personal discussion vs. online discussion) in computer-supported collaborative learning (CSCL) on student knowledge construction. Students (N = 87) were first randomly assigned to the two information presentation conditions to work individually on a case-based assignment in IDLM. Students who received information during learning task clusters tended to show better results on knowledge construction than those who received information only before each cluster. The students within the two separate information presentation conditions were then randomly assigned to pairs to discuss the outcomes of their assignments under either the personal discussion or online discussion condition in CSCL. When supportive information had been presented before each learning task cluster, online discussion led to better results than personal discussion. When supportive information had been presented during the learning task clusters, however, the online and personal discussion conditions had no differential effect on knowledge construction. Online discussion in CSCL appeared to compensate for suboptimal timing of presentation of supportive information before the learning task clusters in IDLM.

    Sequential Complexity as a Descriptor for Musical Similarity

    Get PDF
    We propose string compressibility as a descriptor of temporal structure in audio, for the purpose of determining musical similarity. Our descriptors are based on computing track-wise compression rates of quantised audio features, using multiple temporal resolutions and quantisation granularities. To verify that our descriptors capture musically relevant information, we incorporate our descriptors into similarity rating prediction and song year prediction tasks. We base our evaluation on a dataset of 15500 track excerpts of Western popular music, for which we obtain 7800 web-sourced pairwise similarity ratings. To assess the agreement among similarity ratings, we perform an evaluation under controlled conditions, obtaining a rank correlation of 0.33 between intersected sets of ratings. Combined with bag-of-features descriptors, we obtain performance gains of 31.1% and 10.9% for similarity rating prediction and song year prediction. For both tasks, analysis of selected descriptors reveals that representing features at multiple time scales benefits prediction accuracy. Comment: 13 pages, 9 figures, 8 tables. Accepted version.
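
    A minimal sketch of the core idea described above, computing a track-wise compression rate for a quantised, downsampled feature sequence with zlib; the function name, the 16-level quantisation, and the toy signals are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch (not the authors' code): a track-wise compression-rate
# descriptor for one quantised, downsampled feature sequence, using zlib.
# Quantisation granularity (n_levels) and temporal resolution (hop) are assumptions.
import zlib
import numpy as np

def compression_rate(features: np.ndarray, n_levels: int = 16, hop: int = 1) -> float:
    """Return compressed size / original size of the quantised sequence
    (lower values indicate more temporal structure / redundancy)."""
    seq = np.asarray(features, dtype=float)[::hop]        # coarser temporal resolution
    edges = np.linspace(seq.min(), seq.max(), n_levels + 1)[1:-1]
    symbols = np.digitize(seq, edges).astype(np.uint8)    # quantise to n_levels symbols
    raw = symbols.tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

# A structured signal compresses far better than noise.
t = np.linspace(0, 10, 5000)
print(compression_rate(np.sin(t)))             # small: highly compressible
print(compression_rate(np.random.rand(5000)))  # close to 1: little structure
```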

    Quantum Algorithm Implementations for Beginners

    Full text link
    As quantum computers become available to the general public, the need has arisen to train a cohort of quantum programmers, many of whom have been developing classical computer programs for most of their careers. While currently available quantum computers have fewer than 100 qubits, quantum computing hardware is widely expected to grow in terms of qubit count, quality, and connectivity. This review aims to explain the principles of quantum programming, which are quite different from classical programming, with straightforward algebra that makes understanding the underlying fascinating quantum mechanical principles optional. We give an introduction to quantum computing algorithms and their implementation on real quantum hardware. We survey 20 different quantum algorithms, attempting to describe each in a succinct and self-contained fashion. We show how these algorithms can be implemented on IBM's quantum computer, and in each case, we discuss the results of the implementation with respect to differences between the simulator and the actual hardware runs. This article introduces computer scientists, physicists, and engineers to quantum algorithms and provides a blueprint for their implementations.
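
    To illustrate the "straightforward algebra" the review refers to, here is a self-contained NumPy sketch of the state-vector arithmetic behind a two-qubit Bell-state circuit (Hadamard then CNOT); the example circuit and qubit ordering are my own illustration, not code from the paper.

```python
# Minimal, illustrative state-vector sketch (not code from the paper):
# the linear algebra behind a two-qubit Bell-state circuit, a Hadamard on
# qubit 0 followed by a CNOT, written with plain NumPy instead of a quantum SDK.
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0 (leftmost bit here),
                 [0, 1, 0, 0],                 # target  = qubit 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I) @ state                  # H on qubit 0: (|00> + |10>)/sqrt(2)
state = CNOT @ state                           # entangle:     (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2                     # Born-rule measurement probabilities
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"|{basis}>: {p:.2f}")               # ~0.50 for |00> and |11>, 0 otherwise
```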

    Network constraints on learnability of probabilistic motor sequences

    Full text link
    Human learners are adept at grasping the complex relationships underlying incoming sequential input. In the present work, we formalize complex relationships as graph structures derived from temporal associations in motor sequences. Next, we explore the extent to which learners are sensitive to key variations in the topological properties inherent to those graph structures. Participants performed a probabilistic motor sequence task in which the order of button presses was determined by the traversal of graphs with modular, lattice-like, or random organization. Graph nodes each represented a unique button press and edges represented a transition between button presses. Results indicate that learning, indexed here by participants' response times, was strongly mediated by the graph's meso-scale organization, with modular graphs being associated with shorter response times than random and lattice graphs. Moreover, variations in a node's number of connections (degree) and a node's role in mediating long-distance communication (betweenness centrality) impacted graph learning, even after accounting for level of practice on that node. These results demonstrate that the graph architecture underlying temporal sequences of stimuli fundamentally constrains learning, and moreover that tools from network science provide a valuable framework for assessing how learners encode complex, temporally structured information. Comment: 29 pages, 4 figures.
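
    A small sketch of how the node-level properties named in the abstract (degree and betweenness centrality) can be computed on a modular graph with networkx; the ring-of-cliques graph and its sizes are illustrative stand-ins, not the study's stimuli.

```python
# Illustrative sketch (not the study's stimuli): a small modular graph and
# the two node-level properties the abstract highlights, degree and
# betweenness centrality, computed with networkx.
import networkx as nx

# Three 5-node cliques ("modules") joined into a ring by single bridge edges.
G = nx.ring_of_cliques(3, 5)

degree = dict(G.degree())                    # a node's number of connections
betweenness = nx.betweenness_centrality(G)   # a node's role in long-distance paths

# Bridge nodes stand out far more on betweenness than on degree, which is why
# the two measures are analysed as separate predictors of learning.
for node in sorted(G.nodes()):
    print(f"node {node:2d}  degree={degree[node]}  betweenness={betweenness[node]:.3f}")
```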

    Bits from Biology for Computational Intelligence

    Get PDF
    Computational intelligence is broadly defined as biologically-inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely, redundantly, or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
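
    A minimal sketch of the basic quantities such an analysis starts from, plug-in entropy and mutual information estimated from discretised data; the pair-coding of the joint variable and the toy stimulus/response example are assumptions for illustration.

```python
# Minimal sketch (plug-in estimators on discrete/binned data; all names and
# the toy stimulus/response pair are illustrative assumptions): entropy and
# mutual information, two of the basic quantities the article builds on.
import numpy as np

def entropy(labels: np.ndarray) -> float:
    """Plug-in Shannon entropy (in bits) of a 1-D array of discrete symbols."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x: np.ndarray, y: np.ndarray) -> float:
    """I(X;Y) = H(X) + H(Y) - H(X,Y); the joint symbol is a unique pair code."""
    joint = x.astype(np.int64) * (int(y.max()) + 1) + y
    return entropy(x) + entropy(y) - entropy(joint)

# Toy example: a binary "response" that copies a binary "stimulus" 90% of the time.
rng = np.random.default_rng(0)
stimulus = rng.integers(0, 2, size=10_000)
flip = rng.random(10_000) < 0.1
response = np.where(flip, 1 - stimulus, stimulus)

print(f"H(stimulus)           = {entropy(stimulus):.3f} bits")                        # ~1.00
print(f"I(stimulus; response) = {mutual_information(stimulus, response):.3f} bits")   # ~0.53
```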