
    Interferometric Methods

    Future radio telescopes promise great advances in resolution and sensitivity. These include the Square Kilometre Array (SKA), a two-array instrument sited in South Africa and Australia, and the next-generation Very Large Array (ngVLA), which is being designed for construction in North America. These arrays promise exceptional advances in sensitivity, angular resolution, and survey speed; the SKA and ngVLA are both specified to reach sensitivities at the $\mu$Jy level. The SKA-Low instrument will consist of a huge number of dipole antennas in Australia, pushing the bounds of current FX correlator technology, which exhibits $\mathcal{O}(n^2)$ scaling, where $n$ is the number of antennas. The design proposals for these instruments include a dense core of antennas, necessitating advances in imaging methods for such very dense cores compared with more traditionally sparse instruments. Another ambitious experiment is the Hydrogen Epoch of Reionisation Array (HERA) in South Africa, which aims to make the first direct detection of the Epoch of Reionisation through the redshifted H I signal, which is a factor of $10^{5}$ smaller than the thermal-like noise. In this thesis, these problems are tackled by re-examining the underlying principles of interferometry. The first working example of a direct imaging correlator is presented, which allows images to be formed directly from the voltages of each antenna in a dense array, without the expensive cross-correlation operation that is typically required. A detailed discussion is given of how standard steps in interferometric imaging, including calibration, differ in this new scheme. Additionally, the first wide-field direct imaging correlator is presented, which allows the problem of non-coplanarity to be dealt with for both sparse and dense arrays very efficiently on modern GPU compute hardware. These are, to the best of the author's knowledge, the only working implementations of a direct imaging correlator for generic arrays with no restrictions on the geometry of the array or the homogeneity of its constituent receiver elements. These new approaches have been published in the scientific literature, as discussed in the Declaration. Moving on from this, the closure phase bispectrum is presented as a way of uncovering the cosmological Epoch of Reionisation signal from the H I line, using the HERA telescope, which consists of a dense core of parabolic antennas in a highly redundant layout. A data reduction and processing pipeline for the HERA telescope is constructed and presented for use with the bispectrum, and initial results towards a cosmological limit are reported. The HERA telescope relies on redundancy in its antenna elements for its calibration and measurement strategy. The bispectrum, with its unique mathematical properties, in combination with forward modelling, is shown to be a potent tool for probing departures from the assumed redundancy. It is shown through this method that HERA suffers significant direction-dependent non-redundancies in the dataset used for our analysis, which are extremely difficult to calibrate out. Finally, the problem of wide-field imaging in next-generation arrays is tackled through the development and implementation of a new scheme of wide-field imaging. This uses a new method of parallelising the problem of wide-field imaging and is intended for use with the very large datasets that will be produced by upcoming instruments. Two schemes are introduced: $w$-towers and Improved $w$-towers. The latter generalises the former in combination with advances in optimal convolution theory for the radio astronomy "gridding" problem. The theory behind this approach is explored, and a high-performance implementation is presented for $w$-towers and for Improved $w$-stacking within Improved $w$-towers. ARM Ltd iCase Sponsorship.
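
    As a concrete illustration of the closure quantity the HERA analysis is built around, the following sketch computes the bispectrum and closure phase of a single antenna triad from complex visibilities. It is a minimal, hypothetical example (the visibilities and per-antenna gains are simulated placeholders), not the thesis pipeline; it only shows why antenna-based gain phases cancel around a closed triangle.

```python
import numpy as np

def triad_bispectrum(v_ab, v_bc, v_ca):
    """Bispectrum of a closed antenna triad: the product of the three
    complex visibilities around the loop a -> b -> c -> a."""
    return v_ab * v_bc * v_ca

def closure_phase(v_ab, v_bc, v_ca):
    """Closure phase: the argument of the bispectrum."""
    return np.angle(triad_bispectrum(v_ab, v_bc, v_ca))

# Simulated check: corrupt the true visibilities with per-antenna gain
# phases; the closure phase is unchanged because the phases cancel in pairs.
rng = np.random.default_rng(0)
true_vis = np.exp(1j * rng.uniform(0, 2 * np.pi, 3))      # V_ab, V_bc, V_ca
g = np.exp(1j * rng.uniform(0, 2 * np.pi, 3))             # g_a, g_b, g_c
corrupted = np.array([
    g[0] * np.conj(g[1]) * true_vis[0],                   # g_a g_b* V_ab
    g[1] * np.conj(g[2]) * true_vis[1],                   # g_b g_c* V_bc
    g[2] * np.conj(g[0]) * true_vis[2],                   # g_c g_a* V_ca
])
wrapped_diff = np.angle(np.exp(1j * (closure_phase(*true_vis)
                                     - closure_phase(*corrupted))))
assert abs(wrapped_diff) < 1e-12
```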

    Machine Learning-Based Data and Model Driven Bayesian Uncertainty Quantification of Inverse Problems for Suspended Non-structural System

    Inverse problems involve extracting the internal structure of a physical system from noisy measurement data. In many fields, Bayesian inference is used to address the ill-conditioned nature of the inverse problem by incorporating prior information through an initial distribution. In the nonparametric Bayesian framework, surrogate models such as Gaussian processes or deep neural networks are used as flexible and effective probabilistic modeling tools to overcome the curse of dimensionality and reduce computational costs. In practical systems and computer models, uncertainties can be addressed through parameter calibration, sensitivity analysis, and uncertainty quantification, leading to improved reliability and robustness of decision and control strategies based on simulation or prediction results. However, preventing overfitting in the surrogate model and incorporating reasonable prior knowledge of the embedded physics and models remains a challenge. Suspended Nonstructural Systems (SNS) pose a significant challenge in the inverse problem, and research on their seismic performance and mechanical models, particularly on the inverse problem and uncertainty quantification, is still lacking. To address this, the author conducts full-scale shaking-table dynamic experiments, monotonic and cyclic tests, and simulations of different types of SNS to investigate their mechanical behavior. To quantify the uncertainty of the inverse problem, the author proposes a new framework that adopts machine learning-based, data- and model-driven stochastic Gaussian process model calibration, quantifying the uncertainty via a new black-box variational inference that accounts for a geometric complexity measure, the Minimum Description Length (MDL), through Bayesian inference. The framework is validated on the SNS and yields optimal generalizability and computational scalability.
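
    To make the role of the surrogate concrete, the sketch below implements a minimal Gaussian process regression posterior with a squared-exponential kernel, the kind of probabilistic emulator the abstract refers to. The kernel, its hyperparameters, and the toy simulator are hypothetical placeholders; the thesis's MDL-regularised, black-box variational calibration is not reproduced here.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_sd=0.05):
    """Exact Gaussian process posterior mean and pointwise standard deviation."""
    K = rbf_kernel(x_train, x_train) + noise_sd**2 * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)                        # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Hypothetical usage: emulate an expensive simulator from a handful of runs,
# then use the cheap surrogate (and its uncertainty) inside calibration loops.
simulator = lambda x: np.sin(6 * x) * x              # placeholder "physics" model
x_train = np.linspace(0.0, 1.0, 8)
y_train = simulator(x_train) + 0.05 * np.random.default_rng(0).standard_normal(8)
x_test = np.linspace(0.0, 1.0, 100)
mean, sd = gp_posterior(x_train, y_train, x_test)
```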

    Machine Learning Techniques as Applied to Discrete and Combinatorial Structures

    Machine learning techniques have been used on a wide array of input types: images, sound waves, text, and so forth. By articulating these input types to the almighty machine, all sorts of amazing problems have been solved for many practical purposes. Nevertheless, there are some input types which don’t lend themselves nicely to the standard set of machine learning tools we have. Moreover, there are some provably difficult problems which are abysmally hard to solve within a reasonable time frame. This thesis addresses several of these difficult problems, framing them such that we can then attempt to marry the allegedly powerful utility of existing machine learning techniques to the practical solvability of said problems.

    SpiNNaker - A Spiking Neural Network Architecture

    20 years in conception and 15 in construction, the SpiNNaker project has delivered the world’s largest neuromorphic computing platform incorporating over a million ARM mobile phone processors and capable of modelling spiking neural networks of the scale of a mouse brain in biological real time. This machine, hosted at the University of Manchester in the UK, is freely available under the auspices of the EU Flagship Human Brain Project. This book tells the story of the origins of the machine, its development and its deployment, and the immense software development effort that has gone into making it openly available and accessible to researchers and students the world over. It also presents exemplar applications from ‘Talk’, a SpiNNaker-controlled robotic exhibit at the Manchester Art Gallery as part of ‘The Imitation Game’, a set of works commissioned in 2016 in honour of Alan Turing, through to a way to solve hard computing problems using stochastic neural networks. The book concludes with a look to the future, and the SpiNNaker-2 machine which is yet to come

    Computational imaging and automated identification for aqueous environments

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2011. Sampling the vast volumes of the ocean requires tools capable of observing from a distance while retaining the detail necessary for biology and ecology, a task ideal for optical methods. Algorithms that work with existing SeaBED AUV imagery are developed, including habitat classification with bag-of-words models and multi-stage boosting for rockfish detection. Methods for extracting images of fish from videos of longline operations are demonstrated. A prototype digital holographic imaging device is designed and tested for quantitative in situ microscale imaging. Theory to support the device is developed, including particle noise and the effects of motion. A Wigner-domain model provides optimal settings and optical limits for spherical and planar holographic references. Algorithms to extract the information from real-world digital holograms are created. Focus metrics are discussed, including a novel focus detector using local Zernike moments. Two methods for estimating lateral positions of objects in holograms without reconstruction are presented, by extending a summation kernel to spherical references and by using a local frequency signature from a Riesz transform. A new metric for quickly estimating object depths without reconstruction is proposed and tested. An example application, quantifying oil droplet size distributions in an underwater plume, demonstrates the efficacy of the prototype and algorithms. Funding was provided by NOAA Grant #5710002014, NOAA NMFS Grant #NA17RJ1223, NSF Grant #OCE-0925284, and NOAA Grant #NA10OAR417008
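
    The depth-refocusing step at the heart of digital holographic processing can be sketched as follows: propagate the recorded hologram to a candidate depth with the angular spectrum method and score the sharpness of the result with a focus metric. The gradient-energy (Tenengrad) metric below is a generic stand-in for the thesis's local-Zernike-moment detector, and the hologram, wavelength, and pixel pitch are placeholder values.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a sampled complex field a distance z (metres) using the
    angular spectrum method: FFT, multiply by the free-space transfer
    function, inverse FFT. Evanescent components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.clip(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0, None)
    H = np.exp(1j * (2 * np.pi / wavelength) * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def tenengrad_focus(intensity):
    """Gradient-energy focus score: larger means sharper."""
    gy, gx = np.gradient(intensity)
    return np.mean(gx**2 + gy**2)

# Hypothetical usage: sweep candidate depths and keep the sharpest plane.
hologram = np.random.default_rng(0).random((256, 256))   # placeholder recorded hologram
depths = np.linspace(5e-3, 50e-3, 20)                     # 5 mm to 50 mm
scores = []
for z in depths:
    field = angular_spectrum_propagate(hologram, z, wavelength=532e-9, dx=3.45e-6)
    scores.append(tenengrad_focus(np.abs(field) ** 2))
best_z = depths[int(np.argmax(scores))]
```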

    Generalized database index structures on massively parallel processor architectures

    Height-balanced search trees are ubiquitous in database management systems as well as in other applications that require efficient access methods in order to identify entries in large data volumes. They can be configured with various strategies for structuring the search space for a given data set and for pruning it when different kinds of search queries are answered. In order to facilitate the development of application-specific tree variants, index frameworks such as GiST exist that provide a reusable library of commonly shared tree management functionality. By specializing internal data organization strategies, the framework can be customized to create an index that is efficient for an application's data access characteristics. Because the majority of the framework's code can be reused, development and testing efforts are significantly lower compared to an implementation from scratch. However, none of the existing frameworks supports the execution of index operations on massively parallel processor architectures, such as GPUs. Enabling the use of such processors for generalized index frameworks is the goal of this thesis. By compiling state-of-the-art techniques from a wide range of CPU- and GPU-optimized indexes, a GiST extension is developed that abstracts the physical execution aspect of generic, tree-based search queries. Tree traversals are broken down into vectorized processing primitives that can be scheduled to one of the available (co-)processors for execution. Further, a CPU-based implementation is provided, as well as a new GPU-based algorithm that, unlike prior art in this area, does not require the index to be fully stored inside a GPU's main memory buffer. The applicability of the extended framework is assessed for image rendering engines and, based on microbenchmarks, the parallelized algorithm performance is compared for different CPU and GPU generations. It will be shown that cases exist where the GPU clearly outperforms the CPU and vice versa. In order to leverage the strengths of each processor type, an adaptive scheduler is presented that can be calibrated to schedule index operations to the best-fitting device in a hybrid system. With the help of a tree traversal simulation, different scheduling strategies are evaluated and it will be shown that the adaptive scheduler can be used to make near-optimal decisions.
    Search trees are ubiquitous in database systems and in other applications that need an efficient way to find entries in large datasets that satisfy given search criteria. They can be configured with different strategies for structuring the search space and for excluding from processing those regions that are irrelevant to a given search result. The development of application-specific indexes is supported by frameworks such as GiST. However, none of the frameworks existing today supports the use of massively parallel processor architectures such as GPUs. Making such processors usable for generic index frameworks is the goal of this work. To that end, techniques from a wide range of CPU- and GPU-optimised indexes are analysed and used to develop a GiST extension that abstracts the computations required for searching in search trees. Traversal operations are mapped onto vectorised primitives that can be implemented on parallel processors. The use of this extension is demonstrated with a CPU algorithm, and a new GPU-based algorithm is presented that, in contrast to previous approaches, supports dynamically reloading index data into the GPU's main memory. The practicality of the extended framework is examined using applications from computer graphics, and the performance of the algorithms is analysed with a benchmark on different CPU and GPU models, showing under which conditions the parallel GPU-based execution is faster than the CPU-based variant, and vice versa. To exploit the strengths of both processor types in a hybrid system, a scheduler is developed that, after a calibration phase, can choose the most suitable processor for a given operation. With the help of a simulator for tree traversals, a variety of scheduling strategies are compared, showing that the scheduler's decisions deviate only marginally from the optimum and that, depending on the simulated load, the achievable throughput for the parallel execution of multiple search operations can be increased by an order of magnitude or more through hybrid scheduling.
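
    The idea of mapping tree traversals onto vectorised processing primitives can be illustrated with a small sketch: a breadth-first range query over an R-tree-like structure in which all bounding boxes on the current frontier are tested against the query rectangle in a single data-parallel operation, the kind of kernel that maps naturally onto a GPU. The node layout and helper names are hypothetical and far simpler than the GiST extension described above.

```python
import numpy as np

def overlaps(boxes, query):
    """Vectorised rectangle-overlap test: boxes is an (n, 4) array of
    [xmin, ymin, xmax, ymax]; query is a single rectangle of the same form."""
    return ((boxes[:, 0] <= query[2]) & (boxes[:, 2] >= query[0]) &
            (boxes[:, 1] <= query[3]) & (boxes[:, 3] >= query[1]))

def range_query(nodes, root, query):
    """Breadth-first range query. nodes maps node_id -> {"boxes", "children",
    "is_leaf"}; children hold child node ids for internal nodes and data
    record ids for leaf nodes."""
    frontier, results = [root], []
    while frontier:
        boxes = np.vstack([nodes[n]["boxes"] for n in frontier])
        children = np.concatenate([nodes[n]["children"] for n in frontier])
        leaf = np.concatenate([np.full(len(nodes[n]["children"]), nodes[n]["is_leaf"])
                               for n in frontier])
        hit = overlaps(boxes, query)             # one data-parallel test per level
        results.extend(children[hit & leaf].tolist())
        frontier = children[hit & ~leaf].tolist()
    return results

# Tiny hypothetical two-level tree: a root with two leaf children.
nodes = {
    0: {"boxes": np.array([[0, 0, 5, 5], [4, 4, 10, 10]]),
        "children": np.array([1, 2]), "is_leaf": False},
    1: {"boxes": np.array([[1, 1, 2, 2], [3, 3, 4, 4]]),
        "children": np.array([101, 102]), "is_leaf": True},
    2: {"boxes": np.array([[6, 6, 7, 7]]),
        "children": np.array([103]), "is_leaf": True},
}
print(range_query(nodes, 0, np.array([0, 0, 3, 3])))     # -> [101, 102]
```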

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through data summarization and clustering, to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and how to enhance scalability on diverse computing architectures, ranging from embedded systems to large computing clusters.
    • 

    corecore