Adaptive Real Time Imaging Synthesis Telescopes
The digital revolution is transforming astronomy from a data-starved to a
data-submerged science. Instruments such as the Atacama Large Millimeter Array
(ALMA), the Large Synoptic Survey Telescope (LSST), and the Square Kilometer
Array (SKA) will measure their accumulated data in petabytes. The capacity to
produce enormous volumes of data must be matched with the computing power to
process that data and produce meaningful results. In addition to handling huge
data rates, we need adaptive calibration and beamforming to handle atmospheric
fluctuations and radio frequency interference, and to provide a user
environment which makes the full power of large telescope arrays accessible to
both expert and non-expert users. Delayed calibration and analysis limit the
science which can be done. To make the best use of both telescope and human
resources we must reduce the burden of data reduction.
Our instrumentation comprises a flexible correlator, beamformer, and imager
whose digital signal processing is closely coupled with a computing cluster.
This instrumentation will be highly accessible to scientists, engineers, and
students for research and development of real-time processing algorithms, and
will tap into the pool of talented and innovative students and visiting
scientists from engineering, computing, and astronomy backgrounds.
Adaptive real-time imaging will transform radio astronomy by providing
real-time feedback to observers. The data are calibrated in near-real time
against a model of the sky brightness distribution, and the derived
calibration parameters are fed back into the imagers and beamformers. The
imaged regions are used to update and improve the a priori model, which becomes
the final calibrated image by the time the observations are complete.
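As a rough illustration of this feedback loop, a minimal sketch follows (all function names are hypothetical; the abstract does not specify the actual correlator or imager interfaces):

    def adaptive_imaging_loop(visibility_stream, sky_model, solve_gains, grid_and_image):
        # Sketch of the adaptive calibration/imaging cycle.
        #   visibility_stream : iterable yielding blocks of correlated visibilities
        #   sky_model         : current a priori model of the sky brightness
        #   solve_gains       : callable(vis_block, model) -> antenna gain solutions
        #   grid_and_image    : callable(vis_block, gains) -> partial image of a region
        for vis_block in visibility_stream:
            # Calibrate against the current sky model in near-real time.
            gains = solve_gains(vis_block, sky_model)
            # Feed the derived parameters back into the imager (and beamformers).
            partial_image = grid_and_image(vis_block, gains)
            # Use the newly imaged region to refine the a priori model, so that it
            # converges to the final calibrated image as the observation proceeds.
            sky_model = update_model(sky_model, partial_image)
        return sky_model

    def update_model(model, partial_image, weight=0.1):
        # Placeholder running-average update; a real system would use a proper
        # deconvolution or model-fitting step here.
        return (1 - weight) * model + weight * partial_image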
Reducing adaptive optics latency using many-core processors
Atmospheric turbulence reduces the achievable resolution of ground-based optical
telescopes. Adaptive optics systems attempt to mitigate the impact of this turbulence
and are required to update their corrections quickly and deterministically (i.e. in real time).
The technological challenges faced by the future extremely large telescopes
(ELTs) and their associated instruments are considerable. A simple extrapolation of
current systems to the ELT scale is not sufficient.
My thesis work consisted of the identification and examination of new many-core
technologies for accelerating the adaptive optics real-time control loop. I investigated
the Mellanox TILE-Gx36 and the Intel Xeon Phi (5110p). The TILE-Gx36 with
4x10 GbE ports and 36 processing cores is a good candidate for fast computation of
the wavefront sensor images. The Intel Xeon Phi with 60 processing cores and high
memory bandwidth is particularly well suited for the acceleration of the wavefront
reconstruction.
Through extensive testing I have shown that the TILE-Gx can provide the performance
required for the wavefront processing units of the ELT first light instruments.
The Intel Xeon Phi (Knights Corner), while providing good overall performance, does
not have the required determinism. We believe that the next generation of Xeon Phi
(Knights Landing) will provide the necessary determinism and increased performance.
In this thesis, we show that by using currently available novel many-core processors
it is possible to reach the performance required for ELT instruments.
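In most adaptive optics real-time controllers the wavefront-reconstruction step is dominated by a large matrix-vector multiply, which is why the Xeon Phi's memory bandwidth matters; a minimal NumPy sketch of that step follows (array sizes are illustrative only, not the actual ELT instrument dimensions):

    import numpy as np

    n_slopes = 9600       # example: wavefront-sensor slope measurements per frame
    n_actuators = 5000    # example: deformable-mirror actuators

    # Control (reconstruction) matrix, precomputed offline from the interaction
    # matrix and kept in single precision to maximise memory throughput.
    R = np.random.rand(n_actuators, n_slopes).astype(np.float32)

    def reconstruct(slopes, commands, gain=0.5):
        # One real-time iteration: slope vector -> updated actuator commands.
        delta = R @ slopes                 # the latency-critical matrix-vector product
        return commands - gain * delta     # simple integrator on the commands

    slopes = np.random.rand(n_slopes).astype(np.float32)
    commands = reconstruct(slopes, np.zeros(n_actuators, dtype=np.float32))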
Accelerated CTIS Using the Cell Processor
The Computed Tomography Imaging Spectrometer (CTIS) is a device capable of simultaneously acquiring imagery from multiple bands of the electromagnetic spectrum. Due to the method of data collection from this system, a processing-intensive reconstruction phase is required to resolve the image output. This paper evaluates a parallelized implementation of the Vose-Horton CTIS reconstruction algorithm using the Cell processor. In addition to demonstrating the feasibility of a mixed precision implementation, it is shown that use of the parallel processing capabilities of the Cell may provide a significant reduction in reconstruction time.
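To illustrate the mixed-precision idea in general terms (this is a generic iterative-refinement sketch, not the Vose-Horton algorithm itself; the matrix A and measurements b stand in for the CTIS system matrix and the detector data):

    import numpy as np

    def mixed_precision_solve(A, b, inner_iters=50, outer_iters=5):
        # Inner correction steps run in float32 (fast on SIMD hardware such as the
        # Cell's SPEs); residuals are accumulated in float64 to recover accuracy.
        A32 = A.astype(np.float32)
        step = np.float32(1.0 / np.linalg.norm(A32, ord=2) ** 2)  # Landweber step size
        x = np.zeros(A.shape[1], dtype=np.float64)
        for _ in range(outer_iters):
            r32 = (b - A @ x).astype(np.float32)   # double-precision residual
            d = np.zeros(A.shape[1], dtype=np.float32)
            for _ in range(inner_iters):           # cheap single-precision corrections
                d += step * (A32.T @ (r32 - A32 @ d))
            x += d.astype(np.float64)              # accumulate in double precision
        return x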
Sky Surveys
Sky surveys represent a fundamental data basis for astronomy. We use them to
map in a systematic way the universe and its constituents, and to discover new
types of objects or phenomena. We review the subject, with an emphasis on the
wide-field imaging surveys, placing them in a broader scientific and historical
context. Surveys are the largest data generators in astronomy, propelled by the
advances in information and computation technology, and have transformed the
ways in which astronomy is done. We describe the variety and the general
properties of surveys, the ways in which they may be quantified and compared,
and offer some figures of merit that can be used to compare their scientific
discovery potential. Surveys enable a very wide range of science; that is
perhaps their key unifying characteristic. As new domains of the observable
parameter space open up thanks to the advances in technology, surveys are often
the initial step in their exploration. Science can be done with the survey data
alone, with a combination of different surveys, or with a targeted follow-up of
potentially interesting selected sources. Surveys can be used to generate
large, statistical samples of objects that can be studied as populations, or as
tracers of larger structures. They can also be used to discover or generate
samples of rare or unusual objects, and may lead to discoveries of some
previously unknown types. We discuss a general framework of parameter spaces
that can be used for an assessment and comparison of different surveys, and the
strategies for their scientific exploration. As we move into the Petascale
regime, an effective processing and scientific exploitation of such large data
sets and data streams poses many challenges, some of which may be addressed in
the framework of Virtual Observatory and Astroinformatics, with a broader
application of data mining and knowledge discovery technologies.
Comment: An invited chapter, to appear in Astronomical Techniques, Software,
and Data (ed. H. Bond), Vol. 2 of Planets, Stars, and Stellar Systems (ser.
ed. T. Oswalt), Springer Verlag, in press (2012). 62 pages, incl. 2 tables
and 3 figures.
Parallelisation of greedy algorithms for compressive sensing reconstruction
Compressive Sensing (CS) is a technique which allows a signal to be compressed at the same
time as it is captured. The process of capturing and simultaneously compressing the signal is
represented as linear sampling, which can encompass a variety of physical processes or signal
processing. Instead of explicitly identifying redundancies in the source signal, CS relies on the
property of sparsity in order to reconstruct the compressed signal. While linear sampling is
much less burdensome than conventional compression, this saving is more than offset by the high
computational cost of reconstructing a signal that has been captured using CS. Even when
using some of the fastest reconstruction techniques, known as greedy pursuits, reconstruction
of large problems can pose a significant burden, consuming a great deal of memory as well as
compute time.
Parallel computing is the foundation of the field of High Performance Computing (HPC).
Modern supercomputers are generally composed of large clusters of standard servers, with a
dedicated low-latency high-bandwidth interconnect network. On such a cluster, an appropriately
written program can harness vast quantities of memory and computational power. However, in
order to exploit a parallel compute resource, an algorithm usually has to be redesigned from
the ground up. In this thesis I describe the development of parallel variants of two algorithms
commonly used in CS reconstruction, Matching Pursuit (MP) and Orthogonal Matching Pursuit
(OMP), resulting in the new distributed compute algorithms DistMP and DistOMP. I present
the results from experiments showing how DistMP and DistOMP can utilise a compute cluster
to solve CS problems much more quickly than a single computer could alone. Speed-up of as
much as a factor of 76 is observed with DistMP when utilising 210 workers across 14 servers,
compared to a single worker. Finally, I demonstrate how DistOMP can solve a problem with a
429 GB equivalent sampling matrix in as little as 62 minutes using a 16-node compute cluster.
Funded by an ICASE award from the Engineering and Physical Sciences Research Council, with sponsorship provided by Thales Research and Technology.
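For reference, the textbook serial form of Matching Pursuit is sketched below; in a distributed variant such as DistMP, the dictionary columns, and hence this correlation step, can be partitioned across cluster workers:

    import numpy as np

    def matching_pursuit(Phi, y, n_iters=100, tol=1e-6):
        # Phi : (m, n) sampling/dictionary matrix;  y : (m,) compressed measurements.
        # Returns a sparse coefficient vector x with y approximately equal to Phi @ x.
        x = np.zeros(Phi.shape[1])
        residual = y.astype(float).copy()
        for _ in range(n_iters):
            # Correlate the residual with every column (atom); this is the step a
            # distributed implementation would split across workers.
            correlations = Phi.T @ residual
            k = np.argmax(np.abs(correlations))
            coeff = correlations[k] / np.dot(Phi[:, k], Phi[:, k])
            x[k] += coeff
            residual -= coeff * Phi[:, k]
            if np.linalg.norm(residual) < tol:
                break
        return x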
NASA SBIR abstracts of 1991 phase 1 projects
The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes are included that provide additional information about the SBIR program and permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.
ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and
recommendations derived from the ASCR/HEP Exascale Requirements Review meeting
held in June 2015. The main conclusions are as follows. 1) Larger, more
capable computing and data facilities are needed to support HEP science goals
in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of
the demand at the 2025 timescale is at least two orders of magnitude greater --
and in some cases more -- than what is currently available. 2) The growth rate of data
produced by simulations is overwhelming the current ability, of both facilities
and researchers, to store and analyze it. Additional resources and new
techniques for data analysis are urgently needed. 3) Data rates and volumes
from HEP experimental facilities are also straining the ability to store and
analyze large and complex data volumes. Appropriately configured
leadership-class facilities can play a transformational role in enabling
scientific discovery from these datasets. 4) A close integration of HPC
simulation and data analysis will aid greatly in interpreting results from HEP
experiments. Such an integration will minimize data movement and facilitate
interdependent workflows. 5) Long-range planning between HEP and ASCR will be
required to meet HEP's research needs. To best use ASCR HPC resources the
experimental HEP program needs a) an established long-term plan for access to
ASCR computational and data resources, b) an ability to map workflows onto HPC
resources, c) the ability for ASCR facilities to accommodate workflows run by
collaborations that can have thousands of individual members, d) to transition
codes to the next-generation HPC platforms that will be available at ASCR
facilities, e) to build up and train a workforce capable of developing and
using simulations and analysis to support HEP scientific research on
next-generation systems.
Comment: 77 pages, 13 figures; draft report, subject to further revision.
Deep learning-based vessel detection from very high and medium resolution optical satellite images as component of maritime surveillance systems
This thesis presents an end-to-end multiclass vessel detection method for optical satellite images. The proposed workflow covers the complete processing chain and comprises three consecutive functions: i) rapid image enhancement techniques to improve the image properties, ii) fusion with automatic identification system (AIS) data, and iii) a detection algorithm based on convolutional neural networks (CNN). The algorithms presented are implemented in the form of independent software processors and integrated into an automated processing chain as part of the Earth Observation Maritime Surveillance System (EO-MARISS).
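A schematic of the three-stage chain, with hypothetical function names (the actual EO-MARISS processors are not described in this abstract):

    def detect_vessels(scene, ais_records, enhance, fuse_ais, cnn_detect):
        # i) rapid enhancement of the optical satellite scene
        enhanced = enhance(scene)
        # ii) fusion with AIS reports (e.g. to associate known, cooperative vessels)
        fused = fuse_ais(enhanced, ais_records)
        # iii) CNN-based multiclass vessel detection
        return cnn_detect(fused)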