
    Benchmark Test of CP-PACS for Lattice QCD

    The CP-PACS is a massively parallel computer dedicated to calculations in computational physics; it will be in operation in the spring of 1996 at the Center for Computational Physics, University of Tsukuba. In this article, we describe the architecture of the CP-PACS and report estimates of its performance for typical lattice QCD calculations. Comment: 12 pages (5 figures), PostScript file, talk presented at "QCD on Massively Parallel Computers" (Yamagata, Japan, March 16-18, 1995).
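
    To make the scale of such a benchmark concrete, the sketch below is a back-of-envelope cost model for a Dslash-dominated lattice QCD solver; the per-site flop count, lattice size, and sustained rate are assumed placeholder values, not figures from the CP-PACS paper.

```python
# Back-of-envelope cost model in the spirit of the benchmark above
# (illustrative only): the per-site flop count and machine numbers are
# assumptions, not figures from the CP-PACS paper.

def dslash_flops(lattice_dims, flops_per_site=1320):
    """Floating-point operations for one Wilson-Dslash application.

    flops_per_site ~ 1320 is a commonly quoted value for the Wilson
    operator; treat it as a placeholder.
    """
    sites = 1
    for n in lattice_dims:
        sites *= n
    return sites * flops_per_site


def solver_seconds(lattice_dims, iterations, sustained_gflops):
    """Rough wall-clock time for `iterations` Dslash-dominated solver steps."""
    total_flops = dslash_flops(lattice_dims) * iterations
    return total_flops / (sustained_gflops * 1e9)


if __name__ == "__main__":
    dims = (32, 32, 32, 64)  # hypothetical lattice size
    t = solver_seconds(dims, iterations=1000, sustained_gflops=100.0)
    print(f"~{t:.1f} s per 1000 Dslash applications at 100 sustained GFLOP/s")
```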

    FPGA-Based Adaptive Digital Beamforming Using Machine Learning for MIMO Systems

    In modern Multiple-Input and Multiple-Output (MIMO) systems, such as cellular and Wi-Fi technology, an array of antenna elements is used to spatially steer RF signals, with the goal of shaping the overall antenna gain pattern to achieve a higher signal-to-interference-plus-noise ratio (SINR). Digital Beamforming (DBF) achieves this steering by applying weighted coefficients to the antenna elements, similar to digital filtering, which adjust the phase and gain of the received or transmitted signals. Since real-world MIMO systems often operate in dynamic environments, adaptive beamforming techniques have been used to overcome variable challenges to system SINR, such as dispersive channels or inter-device interference, by applying statistically based algorithms that calculate weights adaptively. However, arrays with large element counts, and thus high degrees of freedom (DOF), face many challenges in practical application of these adaptive algorithms. The statistical matrix methods can be computationally prohibitive, or must rely on non-optimal simplifications, to provide adaptive weights in time for an application given a system's computational capability; for instance, MIMO communication devices with strict size, weight and power (SWaP) constraints often have processing limitations due to their use of low-power processors or Field-Programmable Gate Arrays (FPGAs). This thesis addresses these adaptive MIMO challenges in a twofold approach. First, it shows that advances in Machine Learning (ML) and Deep Neural Networks (DNNs) can be applied directly to the computationally complex problem of calculating optimal adaptive beamforming weights via a custom Convolutional Neural Network (CNN). Second, the derived adaptive beamforming CNN is shown to map efficiently to programmable-logic FPGA resources that can update adaptive coefficients in real time. This machine learning implementation is contrasted against the current state-of-the-art FPGA architecture for adaptive beamforming, which uses traditional Recursive Least Squares (RLS) computation, and is shown to provide adaptive beamforming weights faster and with fewer FPGA logic resources. The reduction in both processing latency and FPGA fabric utilization enables SWaP-constrained MIMO processors to perform adaptive beamforming for higher channel-count systems than is currently possible with traditional computation methods.
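
    As a point of reference for the statistical baseline discussed above, the following minimal sketch computes adaptive weights with the classic sample-matrix-inversion (MVDR) formulation and a single recursive least squares (RLS) update step; it does not reproduce the thesis' CNN, and the array geometry, steering angle, diagonal loading, and forgetting factor are illustrative assumptions.

```python
import numpy as np

# Minimal adaptive beamforming sketch (not the thesis CNN): MVDR weights via
# sample-matrix inversion, plus one standard complex RLS update, i.e. the kind
# of statistical baseline the thesis compares against. Array geometry,
# steering angle, loading factor and forgetting factor are illustrative.

def steering_vector(n_elem, theta, d=0.5):
    """Uniform linear array steering vector; element spacing d in wavelengths."""
    k = np.arange(n_elem)
    return np.exp(-2j * np.pi * d * k * np.sin(theta))

def mvdr_weights(snapshots, steer):
    """MVDR (Capon) weights from an (n_elem, n_snapshots) snapshot matrix."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]         # sample covariance
    R += 1e-3 * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])  # diagonal loading
    Ri_a = np.linalg.solve(R, steer)
    return Ri_a / (steer.conj() @ Ri_a)

def rls_step(w, P, x, d, lam=0.99):
    """One recursive-least-squares update toward a desired sample d."""
    Px = P @ x
    g = Px / (lam + x.conj() @ Px)       # gain vector
    e = d - w.conj() @ x                 # a priori error
    w = w + g * np.conj(e)
    P = (P - np.outer(g, x.conj() @ P)) / lam
    return w, P

if __name__ == "__main__":
    n, snaps = 8, 256
    a = steering_vector(n, np.deg2rad(20))
    rng = np.random.default_rng(0)
    x = (a[:, None] * np.exp(2j * np.pi * 0.1 * np.arange(snaps))
         + 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))))
    w = mvdr_weights(x, a)
    print("distortionless response toward look direction:", abs(w.conj() @ a))
    w, P = rls_step(w, np.eye(n, dtype=complex) * 10.0, x[:, 0], d=1.0)  # one RLS refinement
```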

    An All-Sky Search for Continuous Gravitational Waves.

    Gravitational radiation is a direct consequence of general relativity, yet to this day only indirect confirmation of its existence has been obtained. LIGO is an experiment that aims to detect gravitational waves by comparing the lengths of the two arms of Michelson interferometers. It has recently reached design sensitivity, and a number of searches for intermittent and continuous signals are underway. In particular, isolated rotating neutron stars are expected to emit gravitational radiation of nearly constant frequency and amplitude. Because the gravitational wave signal amplitudes are thought to be extremely weak, long time integrations must be carried out to detect a signal. The search is complicated by the motion of the Earth (daily rotation and orbital motion), which induces substantial modulations of detected frequency and amplitude that are highly dependent on source location. Large amounts of acquired data make this search computationally difficult. We describe our search for such signals using an algorithm called PowerFlux and provide upper limits. In particular, at 153 Hz we obtain a sensitivity of 4.25E-25 for circularly polarized sources in the polar sky region. At a signal frequency of 1100 Hz we are sensitive to neutron stars of ellipticity ~1E-6 at distances up to 500 pc. We also describe a prototype for a high-sampling-rate wavefront camera that can be used to monitor laser beam characteristics at radio frequencies. Ph.D. Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64817/1/volodya_1.pd
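
    The sketch below illustrates the Doppler modulation described above, f(t) = f0 (1 + v(t)·n̂ / c), using a deliberately crude circular-orbit model of the Earth's motion; the detector latitude and source direction are illustrative assumptions, and this is not the PowerFlux pipeline.

```python
import numpy as np

# Illustration of the frequency modulation described above: the detected
# frequency of a continuous-wave source is Doppler-shifted by the detector
# velocity projected on the source direction, f(t) = f0 * (1 + v(t).n / c).
# The crude circular-orbit Earth model, detector latitude and source direction
# are illustrative; this is not the PowerFlux pipeline.

C = 299792458.0        # speed of light, m/s
V_ORBIT = 29.78e3      # Earth's mean orbital speed, m/s
V_ROT_EQ = 465.1       # Earth's equatorial rotation speed, m/s

def detected_frequency(f0, t, n_hat, latitude_rad):
    """Doppler-modulated frequency at time t (seconds) for source direction n_hat."""
    year = 365.25 * 86400.0
    day = 86164.1                       # sidereal day, s
    v_orb = V_ORBIT * np.array([np.cos(2 * np.pi * t / year),
                                np.sin(2 * np.pi * t / year), 0.0])
    v_rot = V_ROT_EQ * np.cos(latitude_rad) * np.array([-np.sin(2 * np.pi * t / day),
                                                        np.cos(2 * np.pi * t / day), 0.0])
    return f0 * (1.0 + np.dot(v_orb + v_rot, n_hat) / C)

if __name__ == "__main__":
    n_hat = np.array([1.0, 0.0, 0.0])           # arbitrary source direction
    times = np.linspace(0.0, 30 * 86400.0, 5)   # sparse samples over 30 days
    f = np.array([detected_frequency(1100.0, t, n_hat, np.deg2rad(46.5)) for t in times])
    print(f - 1100.0)   # frequency offsets of order 0.1 Hz at a 1100 Hz signal
```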

    A novel computational framework for fast, distributed computing and knowledge integration for microarray gene expression data analysis

    The healthcare burden and suffering due to life-threatening diseases such as cancer would be significantly reduced by the design and refinement of computational methods for interpreting the micro-molecular data collected by bioinformaticians. Rapid technological advancements in the field of microarray analysis, an important component in the design of in-silico molecular medicine methods, have generated enormous amounts of such data, a trend that has been increasing exponentially over the last few years. However, the analysis and handling of these data has become one of the major bottlenecks in the utilization of the technology. The rate of collection of these data has far surpassed our ability to analyze them for novel, non-trivial, and important knowledge. High-performance computing platforms, and algorithms that utilize their embedded computing capacity, have emerged as a leading technology that can handle such data-intensive knowledge discovery applications. In this dissertation, we present a novel framework to achieve fast, robust, and accurate (biologically significant) multi-class classification of gene expression data using distributed knowledge discovery and integration computational routines, specifically for cancer genomics applications. The research presents a unique computational paradigm for the rapid, accurate, and efficient selection of relevant marker genes, while providing parametric controls to ensure flexibility of its application. The proposed paradigm consists of the following key computational steps: (a) preprocess and normalize the gene expression data; (b) discretize the data for knowledge mining; (c) partition the data using two proposed methods: partitioning with overlapped windows and adaptive selection; (d) perform knowledge discovery on the partitioned data spaces for association rule discovery; (e) integrate association rules from partitioned data and knowledge spaces on distributed processor nodes using a novel knowledge integration algorithm; and (f) perform post-analysis and functional elucidation of the discovered gene rule sets. The framework is implemented on a shared-memory multiprocessor supercomputing environment, and several experimental results are presented to evaluate the algorithms. We conclude with a functional interpretation of the computational discovery routines for enhanced biological and physiological discovery from cancer genomics datasets, while suggesting some directions for future research.
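
    As an illustration of steps (b) and (c) above, the sketch below discretizes a synthetic expression matrix and partitions the gene dimension with overlapped windows so that each partition could be mined independently on a separate node; the bin count, window size, and overlap are hypothetical parameters rather than the framework's settings.

```python
import numpy as np

# Sketch of steps (b)-(c) above: discretize a gene-expression matrix and split
# the gene dimension into overlapped windows so that each partition can be
# mined for association rules on a separate node. The window size, overlap and
# number of discretization levels are hypothetical parameters, not the
# framework's actual settings.

def discretize(expr, n_bins=3):
    """Equal-frequency binning of each gene's expression values into n_bins levels."""
    ranks = np.argsort(np.argsort(expr, axis=0), axis=0)
    return (ranks * n_bins // expr.shape[0]).astype(np.int8)

def overlapped_windows(n_genes, window=100, overlap=20):
    """Yield (start, stop) gene-index ranges that cover all genes with overlap."""
    step = window - overlap
    start = 0
    while start < n_genes:
        yield start, min(start + window, n_genes)
        if start + window >= n_genes:
            break
        start += step

if __name__ == "__main__":
    expr = np.random.rand(60, 500)       # 60 samples x 500 genes (synthetic)
    levels = discretize(expr)
    parts = [levels[:, s:e] for s, e in overlapped_windows(expr.shape[1])]
    print(len(parts), "partitions; first partition shape:", parts[0].shape)
```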

    Computer Models for Musical Instrument Identification

    PhD thesis. A particular aspect of the perception of sound is concerned with what is commonly termed texture or timbre. From a perceptual perspective, timbre is what allows us to distinguish sounds that have similar pitch and loudness. Indeed, most people are able to discern a piano tone from a violin tone or to distinguish different voices or singers. This thesis deals with timbre modelling. Specifically, the formant theory of timbre is the main theme throughout. This theory states that acoustic musical instrument sounds can be characterised by their formant structures. Following this principle, the central point of our approach is to propose a computer implementation for building musical instrument identification and classification systems. Although the main thrust of this thesis is to propose a coherent and unified approach to the musical instrument identification problem, it is oriented towards the development of algorithms that can be used in Music Information Retrieval (MIR) frameworks. Drawing on research in speech processing, a complete supervised system taking into account both physical and perceptual aspects of timbre is described. The approach is composed of three distinct processing layers. First, parametric models that allow us to represent signals through mid-level physical and perceptual representations are considered. Next, the use of Line Spectrum Frequencies as spectral envelope and formant descriptors is emphasised. Finally, the use of generative and discriminative techniques for building instrument and database models is investigated. Our system is evaluated under realistic recording conditions using databases of isolated notes and melodic phrases.
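
    Since Line Spectrum Frequencies are central to the spectral-envelope description above, the sketch below shows the textbook conversion from LPC coefficients to LSFs; the LPC order, the synthetic test tone, and the use of librosa are choices made here for illustration, not the thesis implementation.

```python
import numpy as np
import librosa

# Textbook construction of Line Spectrum Frequencies (LSFs) from LPC
# coefficients, the spectral-envelope/formant descriptors discussed above.
# The LPC order and the synthetic test tone are placeholders; this is not the
# thesis implementation.

def lpc_to_lsf(a):
    """Convert an LPC polynomial a = [1, a1, ..., ap] to LSFs in radians."""
    a = np.asarray(a, dtype=float)
    # Symmetric (P) and antisymmetric (Q) polynomials of order p + 1
    p_poly = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    q_poly = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    roots = np.concatenate([np.roots(p_poly), np.roots(q_poly)])
    angles = np.angle(roots)
    # Keep angles strictly inside (0, pi); the trivial roots at 0 and pi drop out
    return np.sort(angles[(angles > 1e-6) & (angles < np.pi - 1e-6)])

if __name__ == "__main__":
    sr = 22050
    y = librosa.tone(440.0, sr=sr, duration=0.5) + 0.01 * np.random.randn(int(0.5 * sr))
    a = librosa.lpc(y, order=12)                   # 12th-order all-pole envelope fit
    lsf_hz = lpc_to_lsf(a) * sr / (2 * np.pi)      # radians -> Hz
    print("Line Spectrum Frequencies (Hz):", np.round(lsf_hz, 1))
```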

    A Partially Randomized Approach to Trajectory Planning and Optimization for Mobile Robots with Flat Dynamics

    Motion planning problems are characterized by huge search spaces and complex obstacle structures with no concise mathematical expression. The fixed-wing airplane application considered in this thesis adds differential constraints and point-wise bounds, i.e., an infinite number of equality and inequality constraints. An optimal trajectory planning approach is presented, based on the randomized Rapidly-exploring Random Trees framework (RRT*). The local planner relies on differential flatness of the equations of motion to obtain tree branch candidates that automatically satisfy the differential constraints. Flat output trajectories, in this case equivalent to the airplane's flight path, are designed using Bézier curves. Segment feasibility in terms of point-wise inequality constraints is tested by an indicator integral, which is evaluated alongside the segment cost functional. Although the RRT* guarantees optimality in the limit of infinite planning time, it is argued by intuition and experimentation that convergence is not approached at a practically useful rate. Therefore, the randomized planner is augmented by a deterministic variational optimization technique. To this end, the optimal planning task is formulated as a semi-infinite optimization problem, using the intermediate result of the RRT(*) as an initial guess. The proposed optimization algorithm follows the feasible flavor of the primal-dual interior point paradigm. Discretization of functional (infinite) constraints is deferred to the linear subproblems, where it is realized implicitly by numeric quadrature. An inherent numerical ill-conditioning of the method is circumvented by a reduction-like approach, which tracks active constraint locations by introducing new problem variables. Obstacle avoidance is achieved by extending the line search procedure and dynamically adding obstacle-awareness constraints to the problem formulation. Experimental evaluation confirms that the hybrid approach is practically feasible and does indeed outperform RRT*'s built-in optimization mechanism, but the computational burden is still significant.

    Motion planning problems are typically characterized by large search spaces, whose complete exploration is impractical, and by unstructured obstacles for which a closed-form mathematical description rarely exists. The fixed-wing aircraft application considered in this work additionally introduces differential constraints and bounded system quantities. The presented approach to optimal trajectory planning is based on the Rapidly-exploring Random Trees algorithm (RRT*), which makes the search-space complexity manageable through randomization. The specific contribution is a realization of the local planner that generates the branches of the search tree. It requires a flat motion model, so that differential constraints are satisfied automatically. The trajectories of the flat output, which in the considered example correspond to the flight path, are designed using Bézier curves. Compliance with the inequality constraints is verified by an indicator integral, which can be computed alongside the cost functional with little additional effort. Although the RRT* algorithm converges (in the probabilistic sense) to an optimal solution, its convergence rate is too slow to be useful in practice. It is therefore natural to support the planner with a gradient-based local optimization method with better convergence properties. To this end, the planner's current intermediate solution is used as an initial guess for a compatible semi-infinite optimization problem. The proposed optimization algorithm extends the widely used primal-dual interior point method to semi-infinite problems. An explicit discretization of the functional inequality constraints is not required, since it occurs implicitly through numerical integral evaluation within the linear subproblems. Because the method is not well defined at locations of active constraints, a variant of the reduction approach is additionally employed, in which the vector of optimization variables is extended by the (finite) set of active indices. Furthermore, collision avoidance was integrated, which intervenes in the line search step and dynamically extends the problem formulation with constraints for the local consideration of obstacles. Experimental evaluation confirms that the results of the hybrid approach combining RRT(*) and the numerical optimization method are superior to classical RRT*-based trajectory optimization. The required computational effort is considerable, but practically manageable under realistic conditions.
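
    The sketch below illustrates the local-planner idea from the abstract: a branch candidate is a Bézier curve in the flat output, and point-wise bounds are checked by an indicator integral evaluated alongside the segment cost; the arc-length cost and the speed bound are hypothetical stand-ins for the functionals used in the thesis.

```python
import numpy as np

# Sketch of the local-planner idea above: a tree-branch candidate is a Bezier
# curve in the flat output (here a 2-D path), and point-wise bounds are checked
# by integrating a violation indicator alongside the segment cost. The cost
# (arc length) and the speed bound are hypothetical stand-ins for the thesis'
# actual functionals.

def bezier(ctrl, t):
    """Evaluate a Bezier curve via de Casteljau at parameter values t in [0, 1]."""
    pts = np.asarray(ctrl, dtype=float)                 # (n+1, dim) control points
    t = np.atleast_1d(t)[:, None, None]                 # (m, 1, 1)
    layers = np.broadcast_to(pts, (t.shape[0],) + pts.shape).copy()
    while layers.shape[1] > 1:
        layers = (1.0 - t) * layers[:, :-1, :] + t * layers[:, 1:, :]
    return layers[:, 0, :]                              # (m, dim)

def segment_metrics(ctrl, v_max=3.0, n=200):
    """Return (cost, feasible) for one branch candidate.

    Cost is the arc length of the flat-output curve; feasibility approximates
    the indicator integral of the speed bound |dp/dt| <= v_max.
    """
    t = np.linspace(0.0, 1.0, n)
    p = bezier(ctrl, t)
    speed = np.linalg.norm(np.gradient(p, t, axis=0), axis=1)
    cost = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))   # arc length
    violation = np.mean(speed > v_max)                          # discrete indicator integral
    return cost, violation == 0.0

if __name__ == "__main__":
    ctrl = [(0.0, 0.0), (0.3, 0.8), (0.7, 0.9), (1.0, 0.2)]     # example control points
    cost, ok = segment_metrics(ctrl)
    print(f"segment cost {cost:.3f}, feasible: {ok}")
```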

    Hierarchical feature extraction from spatiotemporal data for cyber-physical system analytics

    With the advent of ubiquitous sensing, robust communication and advanced computation, data-driven modeling is becoming increasingly popular for many engineering problems. Eliminating the difficulties of physics-based modeling and avoiding simplifying assumptions and ad hoc empirical models are significant among the many advantages of data-driven approaches, especially for large-scale complex systems. While classical statistics and signal processing algorithms have been widely used by the engineering community, advanced machine learning techniques have not been sufficiently explored in this regard. This study summarizes various categories of machine learning tools that have been applied, or may be candidates, for addressing engineering problems. While there is an increasing number of machine learning algorithms, the main steps involved in applying such techniques consist of data collection and pre-processing, feature extraction, model training, and inference for decision-making. To support decision-making processes in many applications, hierarchical feature extraction is key. Among various feature extraction principles, recent studies emphasize hierarchical approaches that extract salient features at multiple abstraction levels from data. In this context, the focus of the dissertation is on developing hierarchical feature extraction algorithms within the framework of machine learning in order to solve challenging cyber-physical problems in various domains, such as electromechanical systems and agricultural systems. Furthermore, the feature extraction techniques are described for the spatial, temporal and spatiotemporal data types collected from the systems. The wide applicability of such features in solving selected real-life domain problems is demonstrated throughout this study.
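
    The sketch below shows what a two-level hierarchical feature extractor for multichannel time-series data might look like, extracting low-level window statistics and then aggregating them at a coarser level; the window sizes and statistics are assumptions made for illustration, not the dissertation's algorithms.

```python
import numpy as np

# A minimal two-level hierarchical feature extractor for multichannel
# time-series data, in the spirit of the multi-abstraction-level feature
# extraction described above. The window sizes and the choice of statistics
# are hypothetical, not the dissertation's algorithms.

def split_windows(x, size):
    """Split a (channels, T) series into non-overlapping windows: (n, channels, size)."""
    c, t = x.shape
    n = t // size
    return x[:, :n * size].reshape(c, n, size).transpose(1, 0, 2)

def level1_features(x, size=64):
    """Low-level features: per-window mean and standard deviation per channel."""
    w = split_windows(x, size)                       # (n_windows, channels, size)
    return np.concatenate([w.mean(axis=2), w.std(axis=2)], axis=1)

def level2_features(f1, group=8):
    """Higher-level features: statistics over groups of consecutive windows."""
    n = f1.shape[0] // group
    g = f1[:n * group].reshape(n, group, -1)
    return np.concatenate([g.mean(axis=1), g.max(axis=1)], axis=1)

if __name__ == "__main__":
    x = np.random.randn(3, 4096)     # 3 synthetic sensor channels
    f1 = level1_features(x)          # (64, 6)  low-level, per-window features
    f2 = level2_features(f1)         # (8, 12)  higher-level, aggregated features
    print(f1.shape, f2.shape)
```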