
    Scalable Emulation of Sign-Problem-Free Hamiltonians with Room Temperature p-bits

    The growing field of quantum computing is based on the concept of a q-bit, a delicate superposition of 0 and 1 that requires cryogenic temperatures for its physical realization along with challenging coherent coupling techniques for entanglement. By contrast, a probabilistic bit, or p-bit, is a robust classical entity that fluctuates between 0 and 1 and can be implemented at room temperature using present-day technology. Here, we show that a probabilistic coprocessor built out of room-temperature p-bits can be used to accelerate simulations of a special class of quantum many-body systems that are sign-problem-free or stoquastic, leveraging the well-known Suzuki-Trotter decomposition that maps a d-dimensional quantum many-body Hamiltonian to a (d+1)-dimensional classical Hamiltonian. This mapping allows an efficient emulation of a quantum system by classical computers and is commonly used in software implementations of Quantum Monte Carlo (QMC) algorithms. By contrast, we show that a compact, embedded MTJ-based coprocessor can serve as a highly efficient hardware accelerator for such QMC algorithms, providing several orders of magnitude improvement in speed compared to optimized CPU implementations. Using realistic device-level SPICE simulations, we demonstrate that the correct quantum correlations can be obtained using a classical p-circuit built with existing technology and operating at room temperature. The proposed coprocessor can serve as a tool to study stoquastic quantum many-body systems, overcoming challenges associated with physical quantum annealers. Comment: Fixed minor typos and expanded Appendix.
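    The p-bit described above behaves like a binary stochastic neuron: it fluctuates between 0 and 1 with a probability set by its input. A minimal sketch of such an update rule (the function name and sigmoid form are illustrative assumptions, not the paper's actual device model):

    ```python
    import math
    import random

    def p_bit(input_i, rng=random):
        """Binary stochastic neuron: outputs 1 with probability
        sigmoid(input_i), otherwise 0, so the bit fluctuates between
        0 and 1 under the control of its input."""
        prob_one = 1.0 / (1.0 + math.exp(-input_i))
        return 1 if rng.random() < prob_one else 0

    # With zero input the p-bit fluctuates 50/50; a strongly positive
    # input pins its output (almost always) to 1.
    ```

    In a p-circuit, `input_i` would be constructed from the outputs of other p-bits, which is what produces the correlations emulating the classical image of the quantum Hamiltonian.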

    Segmentation process and spectral characteristics in the determination of musical genres

    Over the past few years there has been a tendency to store audio tracks for later use on CD-DVDs, HDD-SSDs and on the internet, which makes it challenging to classify the information either online or offline. For this purpose, the audio tracks must be tagged. Tags are short texts based on the semantic information of the sound [1]. Music analysis can thus be done in several ways [2], since music is identified by its genre, artist, instruments and structure, through a tagging system that can be manual or automatic. Manual tagging allows the visualization of the behavior of an audio track either in the time domain or in the frequency domain, as in the spectrogram, making it possible to classify songs without listening to them. However, this process is very time-consuming and labor-intensive, and can even cause health problems [3]: "the volume, sound sensitivity, time and cost required for a manual labeling process is generally prohibitive." Three fundamental steps are required to carry out automatic labelling: pre-processing, feature extraction and classification [4]. The present study developed an algorithm for automatic classification of music genres using a segmentation process employing spectral characteristics such as centroid (SC), flatness (SF) and spread (SS), as well as a temporal spectral characteristic.
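    The three spectral features named above have standard definitions over a magnitude spectrum. A rough plain-Python sketch for a single analysis frame (a real system would compute these per segment of the track; the function name and interface are assumptions for illustration):

    ```python
    import math

    def spectral_features(freqs, mags):
        """Spectral centroid (SC), spread (SS) and flatness (SF)
        of one magnitude-spectrum frame."""
        total = sum(mags)
        # Centroid: magnitude-weighted mean frequency ("center of mass").
        sc = sum(f * m for f, m in zip(freqs, mags)) / total
        # Spread: magnitude-weighted standard deviation around the centroid.
        ss = math.sqrt(sum((f - sc) ** 2 * m for f, m in zip(freqs, mags)) / total)
        # Flatness: geometric mean over arithmetic mean
        # (1.0 = noise-like flat spectrum, near 0 = tonal peaks).
        geo = math.exp(sum(math.log(m) for m in mags) / len(mags))
        sf = geo / (total / len(mags))
        return sc, ss, sf
    ```

    A perfectly flat spectrum yields a flatness of 1.0, while a spectrum dominated by a few tonal peaks drives it toward 0, which is what makes these features useful for discriminating genres.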

    Multivariate Stochastic Approximation to Tune Neural Network Hyperparameters for Critical Infrastructure Communication Device Identification

    E-government includes Wireless Personal Area Network (WPAN)-enabled internet-to-government pathways. Of interest herein is Z-Wave, an insecure, low-power/cost WPAN technology increasingly used in critical infrastructure. Radio Frequency (RF) Fingerprinting can augment WPAN security by a biometric-like process that computes statistical features from signal responses to 1) develop an authorized device library, 2) develop classifier models and 3) vet claimed identities. For classification, the neural network-based Generalized Relevance Learning Vector Quantization-Improved (GRLVQI) classifier is employed. GRLVQI has shown high fidelity in classifying Z-Wave RF Fingerprints; however, GRLVQI has multiple hyperparameters. Prior work optimized GRLVQI via a full factorial experimental design. Herein, optimizing GRLVQI via stochastic approximation, which operates by iteratively searching for optimality, is of interest as an unconstrained optimization approach that avoids the limitations of full factorial experimental designs. The results show an improvement in GRLVQI operation and accuracy. The methodology further generalizes to other problems and algorithms.
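    A common form of multivariate stochastic approximation is SPSA, which estimates a gradient from just two noisy loss evaluations per step regardless of how many hyperparameters are tuned. A generic sketch (the gain constants and the quadratic test function are illustrative assumptions, not the paper's setup):

    ```python
    import random

    def spsa_minimize(loss, theta, iters=200, a=0.1, c=0.1, seed=0):
        """Simultaneous Perturbation Stochastic Approximation (SPSA):
        iteratively moves theta downhill using a two-sided random
        perturbation of ALL coordinates at once."""
        rng = random.Random(seed)
        theta = list(theta)
        for k in range(1, iters + 1):
            ak = a / k ** 0.602          # standard SPSA gain decay exponents
            ck = c / k ** 0.101
            delta = [rng.choice((-1, 1)) for _ in theta]  # Rademacher perturbation
            plus = loss([t + ck * d for t, d in zip(theta, delta)])
            minus = loss([t - ck * d for t, d in zip(theta, delta)])
            # Gradient estimate from only two loss evaluations.
            ghat = [(plus - minus) / (2 * ck * d) for d in delta]
            theta = [t - ak * g for t, g in zip(theta, ghat)]
        return theta
    ```

    Because the cost per iteration does not grow with the number of hyperparameters, this avoids the combinatorial blow-up of a full factorial design.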

    Visual Analytics for the Exploratory Analysis and Labeling of Cultural Data

    Cultural data can come in various forms and modalities, such as text traditions, artworks, music, crafted objects, or even as intangible heritage such as biographies of people, performing arts, cultural customs and rites. The assignment of metadata to such cultural heritage objects is an important task that people working in galleries, libraries, archives, and museums (GLAM) do on a daily basis. These rich metadata collections are used to categorize, structure, and study collections, but can also be used to apply computational methods. Such computational methods are the focus of Computational and Digital Humanities projects and research. For the longest time, the digital humanities community has focused on textual corpora, including text mining and other natural language processing techniques, although some disciplines of the humanities, such as art history and archaeology, have a long history of using visualizations. In recent years, the digital humanities community has started to shift its focus to include other modalities, such as audio-visual data. In turn, methods in machine learning and computer vision have been proposed for the specificities of such corpora. Over the last decade, the visualization community has engaged in several collaborations with the digital humanities, often with a focus on exploratory or comparative analysis of the data at hand. This includes both methods and systems that support classical Close Reading of the material and Distant Reading methods that give an overview of larger collections, as well as methods in between, such as Meso Reading. Furthermore, a wider application of machine learning methods can be observed on cultural heritage collections. However, they are rarely applied together with visualizations to allow for further perspectives on the collections in a visual analytics or human-in-the-loop setting. Visual analytics can help in the decision-making process by guiding domain experts through the collection of interest.
However, state-of-the-art supervised machine learning methods are often not applicable to the collection of interest due to missing ground truth. One form of ground truth is class labels, e.g., of entities depicted in an image collection, assigned to the individual images. Labeling all objects in a collection is an arduous task when performed manually, because cultural heritage collections contain a wide variety of different objects with plenty of details. A problem that arises with collections curated in different institutions is that a specific standard is not always followed, so the vocabularies used can drift apart from one another, making it difficult to combine the data from these institutions for large-scale analysis. This thesis presents a series of projects that combine machine learning methods with interactive visualizations for the exploratory analysis and labeling of cultural data. First, we define cultural data with regard to heritage and contemporary data, then we look at the state of the art of existing visualization, computer vision, and visual analytics methods and projects focusing on cultural data collections. After this, we present the problems addressed in this thesis and their solutions, starting with a series of visualizations to explore different facets of rap lyrics and rap artists with a focus on text reuse. Next, we engage in a more complex case of text reuse, the collation of medieval vernacular text editions. For this, a human-in-the-loop process is presented that applies word embeddings and interactive visualizations to perform textual alignments on under-resourced languages, supported by labeling of the relations between lines and the relations between words. We then switch the focus from textual data to another modality of cultural data by presenting a Virtual Museum that combines interactive visualizations and computer vision in order to explore a collection of artworks.
With the lessons learned from the previous projects, we engage in the labeling and analysis of medieval illuminated manuscripts, thereby combining some of the machine learning methods and visualizations that were used for textual data with computer vision methods. Finally, we reflect on the interdisciplinary projects and the lessons learned, before discussing existing challenges when working with cultural heritage data from the computer science perspective, to outline potential research directions for machine learning and visual analytics of cultural heritage data.

    Analysis of Music Genre Clustering Algorithms

    Classification and clustering of music genres has become an increasingly prevalent focus in recent years, prompting a push for research into relevant algorithms. The most successful algorithms have typically applied Naive Bayes or k-Nearest Neighbors, or used neural networks to perform classification. This thesis investigates unsupervised clustering algorithms such as K-Means and hierarchical clustering, and establishes their usefulness in comparison to, or in conjunction with, established methods.
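    As a baseline for the unsupervised route, Lloyd's K-Means algorithm fits in a few lines of plain Python. A sketch, assuming each track has already been reduced to a numeric feature vector (the interface is hypothetical, not the thesis's implementation):

    ```python
    import random

    def k_means(points, k, iters=50, seed=0):
        """Lloyd's algorithm: alternate between assigning each feature
        vector to its nearest centroid and recomputing the centroids."""
        rng = random.Random(seed)
        centroids = rng.sample(points, k)          # initialize from the data
        clusters = [[] for _ in range(k)]
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k), key=lambda c: sum(
                    (a - b) ** 2 for a, b in zip(p, centroids[c])))
                clusters[i].append(p)
            # Move each centroid to the mean of its cluster
            # (keep the old centroid if the cluster went empty).
            centroids = [[sum(dim) / len(c) for dim in zip(*c)] if c
                         else centroids[i] for i, c in enumerate(clusters)]
        return centroids, clusters
    ```

    With genre-discriminative features (such as the spectral descriptors common in this literature), well-separated genres tend to fall into distinct clusters without any labels.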

    An O(N) Algorithm for Stokes and Laplace Interactions of Particles

    A method for computing Laplace and Stokes interactions among N spherical particles arbitrarily placed in a unit cell of a periodic array is described. The method is based on an algorithm by Greengard and Rokhlin [J. Comput. Phys. 73, 325 (1987)] for rapidly summing the Laplace interactions among particles by organizing the particles into a number of different groups of varying sizes. The far field induced by each group of particles is expressed by a multipole expansion technique as an equivalent field with its singularities at the center of the group. The resulting computational effort increases only linearly with N. The method is applied to a number of problems in suspension mechanics with the goal of assessing the efficiency and the potential usefulness of the method in studying the dynamics of large systems. It is shown that reasonably accurate results for the interaction forces are obtained in most cases even with relatively low-order multipole expansions.
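    The core idea, replacing a far-away group's individual fields with an equivalent expansion at the group center, can be illustrated at lowest (monopole) order for the 1/r Laplace kernel. A sketch assuming unit source strengths; the actual method carries the expansion to higher order and organizes groups hierarchically:

    ```python
    import math

    def direct_potential(sources, target):
        """Direct pairwise sum of 1/r Laplace kernels: O(N) per target."""
        return sum(1.0 / math.dist(target, s) for s in sources)

    def monopole_potential(sources, target):
        """Lowest-order multipole idea: a far-away group of unit sources
        is replaced by one equivalent source at the group center."""
        n = len(sources)
        center = tuple(sum(s[axis] for s in sources) / n for axis in range(3))
        return n / math.dist(target, center)
    ```

    For targets far from the group, the one-term approximation already matches the direct sum closely; accuracy degrades near the group, which is why such methods only use expansions for well-separated groups.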

    Tuning Hyperparameters for DNA-based Discrimination of Wireless Devices

    The Internet of Things (IoT) and Industrial IoT (IIoT) are enabled by Wireless Personal Area Network (WPAN) devices. However, these devices increase vulnerability concerns of the IIoT and the resultant Critical Infrastructure (CI) risks. Secure IIoT is enabled by both pre-attack security and post-attack forensic analysis. Radio Frequency (RF) Fingerprinting enables both pre- and post-attack security by providing serial-number-level identification of devices through fingerprint characterization of their emissions. For classification and verification, research has shown high performance by employing the neural network-based Generalized Relevance Learning Vector Quantization-Improved (GRLVQI) classifier. However, GRLVQI has numerous hyperparameters and tuning requires AI expertise, so some researchers have abandoned GRLVQI for notionally simpler, but less accurate, methods. Herein, we develop a fool-proof approach for tuning AI algorithms. For demonstration, Z-Wave, an insecure low-power/cost WPAN technology, and the GRLVQI classifier are considered. Results show significant increases in accuracy (5% for classification, 50% for verification) over baseline methods.

    Autonomous Probabilistic Coprocessing with Petaflips per Second

    In this paper we present a concrete design for a probabilistic (p-) computer based on a network of p-bits, robust classical entities fluctuating between -1 and +1, with probabilities that are controlled through an input constructed from the outputs of other p-bits. The architecture of this probabilistic computer is similar to a stochastic neural network with the p-bit playing the role of a binary stochastic neuron, but with one key difference: there is no sequencer used to enforce an ordering of p-bit updates, as is typically required. Instead, we explore sequencerless designs where all p-bits are allowed to flip autonomously and demonstrate that such designs can allow ultrafast operation unconstrained by available clock speeds without compromising the solution's fidelity. Based on experimental results from a hardware benchmark of the autonomous design and benchmarked device models, we project that a nanomagnetic implementation can scale to achieve petaflips per second with millions of neurons. A key contribution of this paper is the focus on a hardware metric, flips per second, as a problem- and substrate-independent figure of merit for an emerging class of hardware annealers known as Ising Machines. Much like the shrinking feature sizes of transistors that have continually driven Moore's Law, we believe that flips per second can be continually improved in later technology generations of a wide class of probabilistic, domain-specific hardware. Comment: 13 pages, 8 figures, 1 table.
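    The flip dynamics can be illustrated in software with a toy two-p-bit ferromagnetic Ising pair, where the next bit to flip is chosen at random with no enforced update order (the Gibbs-style update rule, coupling matrix, and step count are illustrative assumptions; real hardware flips all p-bits in parallel):

    ```python
    import math
    import random

    def autonomous_ising(J, steps=20000, beta=1.0, seed=1):
        """Serial emulation of sequencerless p-bit updates: each step a
        randomly chosen p-bit flips using input I_i = beta * sum_j J[i][j]*m[j].
        Returns the time-averaged correlation <m0 * m1>."""
        rng = random.Random(seed)
        n = len(J)
        m = [rng.choice((-1, 1)) for _ in range(n)]
        corr = 0.0
        for _ in range(steps):
            i = rng.randrange(n)                 # no sequencer: random order
            I = beta * sum(J[i][j] * m[j] for j in range(n))
            m[i] = 1 if rng.random() < (1 + math.tanh(I)) / 2 else -1
            corr += m[0] * m[1]
        return corr / steps
    ```

    For a ferromagnetically coupled pair (J01 = J10 = 1) at beta = 1, the sampled correlation settles near the exact equilibrium value tanh(1) ≈ 0.76, and each pass through the loop is one "flip attempt", the event that the flips-per-second metric counts.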