Running a distributed virtual observatory: US Virtual Astronomical Observatory operations
Operation of the US Virtual Astronomical Observatory shares some issues with
modern physical observatories, e.g., intimidating data volumes and rapid
technological change, and must also address unique concerns like the lack of
direct control of the underlying and scattered data resources, and the
distributed nature of the observatory itself. In this paper we discuss how the
VAO has addressed these challenges to provide the astronomical community with a
coherent set of science-enabling tools and services. The distributed nature of
our virtual observatory, with data and personnel spanning geographic,
institutional, and regime boundaries, is simultaneously a major operational
headache and the primary science motivation for the VAO. Most astronomy today
uses data from many resources, and facilitating the matching of heterogeneous
datasets is a fundamental reason for the virtual observatory. Key aspects of our
approach include continuous monitoring and validation of VAO and VO services
and the datasets provided by the community, monitoring of user requests to
optimize access, caching for large datasets, and providing distributed storage
services that allow users to collect results near large data repositories. Some
elements are now fully implemented, while others are planned for subsequent
years. The distributed nature of the VAO requires careful attention to what can
be a straightforward operation at a conventional observatory, e.g., the
organization of the web site or the collection and combined analysis of logs.
Many of these strategies use and extend protocols developed by the
international virtual observatory community.

Comment: 7 pages with 2 figures included within PDF
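The continuous monitoring of VAO and VO services described above could, in its simplest form, be a periodic HTTP health check. The sketch below is illustrative only: the endpoint URLs are hypothetical placeholders, and the pass/fail criterion (HTTP 200 within a timeout) is an assumption, not the VAO's actual validation logic.

```python
import urllib.request

# Hypothetical endpoints; real VAO/VO service URLs are not given in the text.
SERVICES = [
    "https://example.org/vo/cone-search?RA=180&DEC=0&SR=0.1",
    "https://example.org/vo/siap?POS=180,0&SIZE=0.2",
]

def check_service(url, timeout=10):
    """Return True if the service answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, refused connection, timeout, HTTP error
        return False

def validate_all(urls=SERVICES):
    """Map each monitored service URL to its current up/down status."""
    return {url: check_service(url) for url in urls}
```

A real validator would also check that the response parses as a valid VOTable, not just that the endpoint answers; that part is omitted here.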
Investigation on the automatic geo-referencing of archaeological UAV photographs by correlation with pre-existing ortho-photos
We present a method for the automatic geo-referencing of archaeological photographs captured aboard unmanned aerial vehicles (UAVs), termed UPs. We do so with the help of pre-existing ortho-photo maps (OPMs) and digital surface models (DSMs). Typically, these pre-existing data sets are based on data that were captured at a widely different point in time. This renders the detection (and hence the matching) of homologous feature points in the UPs and OPMs infeasible, mainly due to temporal variations of vegetation and illumination. Facing this difficulty, we opt for the normalized cross correlation coefficient of perspectively transformed image patches as the measure of image similarity. Applying a threshold to this measure, we detect candidates for homologous image points, resulting in a distinctive, but computationally intensive method. In order to lower computation times, we reduce the dimensionality and extents of the search space by making use of a priori knowledge of the data sets. By assigning terrain heights interpolated in the DSM to the image points found in the OPM, we generate control points. We introduce the respective observations into a bundle block, from which gross errors, i.e. false matches, are eliminated during its robust adjustment. A test of our approach on a UAV image data set demonstrates its potential and raises hope that large image archives can be processed successfully.
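The similarity measure at the heart of this method, the normalized cross correlation coefficient with a candidate threshold, can be sketched as follows. The threshold value is an assumed placeholder (the abstract does not state one), and the perspective transformation of the patches is taken as already applied.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation coefficient of two equal-sized patches.

    Invariant to affine brightness changes, which is why it tolerates the
    illumination differences between UPs and OPMs better than raw SSD.
    """
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:          # one patch is constant: correlation undefined
        return 0.0
    return float((a * b).sum() / denom)

THRESHOLD = 0.7  # assumed value; the paper does not state its threshold

def is_candidate(patch_up, patch_opm, threshold=THRESHOLD):
    """Keep a point pair as a homologous-point candidate if NCC passes."""
    return ncc(patch_up, patch_opm) >= threshold
```

Candidates that pass this test would then feed the bundle block adjustment, whose robust estimator removes the remaining false matches.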
Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts
This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.
Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions
Depth estimation is a fundamental problem for light field photography
applications. Numerous methods have been proposed in recent years, which either
focus on crafting cost terms for more robust matching, or on analyzing the
geometry of scene structures embedded in the epipolar-plane images. Significant
improvements have been made in terms of overall depth estimation error;
however, current state-of-the-art methods still show limitations in handling
intricate occluding structures and complex scenes with multiple occlusions. To
address these challenging issues, we propose a very effective depth estimation
framework which focuses on regularizing the initial label confidence map and
edge strength weights. Specifically, we first detect partially occluded
boundary regions (POBR) via superpixel-based regularization. A series of
shrinkage/reinforcement operations is then applied on the label confidence map
and edge strength weights over the POBR. We show that after weight
manipulations, even a low-complexity weighted least squares model can produce
much better depth estimation than state-of-the-art methods in terms of average
disparity error rate, occlusion boundary precision-recall rate, and the
preservation of intricate visual features.
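The "low-complexity weighted least squares model" mentioned above can be illustrated on a 1-D signal. This is a generic WLS refinement sketch under assumed definitions (a confidence-weighted data term plus an edge-weighted smoothness term with parameter lam), not the authors' implementation.

```python
import numpy as np

def wls_refine(disparity, confidence, edge_weight, lam=1.0):
    """Refine a 1-D disparity signal u by minimizing

        sum_i c_i (u_i - d_i)^2  +  lam * sum_i e_i (u_{i+1} - u_i)^2

    where c_i is the label confidence and e_i the edge strength weight
    between samples i and i+1 (len(edge_weight) == len(disparity) - 1).
    Low-confidence labels are pulled toward their neighbours, except
    across strong depth edges, where e_i should be small.
    """
    n = len(disparity)
    A = np.diag(confidence.astype(float))      # data term
    for i in range(n - 1):                     # smoothness term (tridiagonal)
        w = lam * edge_weight[i]
        A[i, i] += w
        A[i + 1, i + 1] += w
        A[i, i + 1] -= w
        A[i + 1, i] -= w
    b = confidence * disparity
    return np.linalg.solve(A, b)
```

Zeroing the confidence of a sample (as a shrinkage operation over the POBR might) lets the smoothness term interpolate it from its neighbours, which is the intuition behind manipulating the weights before solving.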
Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research
This paper describes service portability for a private cloud deployment, including a detailed case study about Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies, details of which include functionalities, technical implementation, architecture and user support. Experiments for data services (backup automation, data recovery and data migration) are performed, and the results confirm that backup automation completes swiftly and is reliable for data-intensive research. The data recovery result confirms that execution time is proportional to the quantity of recovered data, but the failure rate increases exponentially. The data migration result confirms that execution time is proportional to the disk volume of migrated data, but again the failure rate increases exponentially. In addition, benefits of CCAF are illustrated using several bioinformatics examples such as tumour modelling, brain imaging, insulin molecules and simulations for medical training. Our Cloud Storage solution described here offers cost reduction, time-saving and user-friendliness.
Small Data Archives and Libraries
Preservation is important for documenting original observations, and existing data are an important resource which can be re-used. Observatories should set up electronic data archives and formulate archiving policies. VO (Virtual Observatory) compliance is desirable; even if this is not possible, at least some VO ideas should be applied. Data archives should be visible and their data kept on-line. Metadata should be plentiful, and as standard as possible, just like file formats. Literature and data should be cross-linked. Libraries can play an important role in this process. In this paper, we discuss data archiving for small projects and observatories. We review the questions of digitization, cost factors, manpower, organizational structure and more
A photometricity and extinction monitor at the Apache Point Observatory
An unsupervised software "robot" that automatically and robustly reduces
and analyzes CCD observations of photometric standard stars is described. The
robot measures extinction coefficients and other photometric parameters in real
time and, more carefully, on the next day. It also reduces and analyzes data
from an all-sky camera to detect clouds; photometric data taken
during cloudy periods are automatically rejected. The robot reports its
findings back to observers and data analysts via the World-Wide Web. It can be
used to assess photometricity, and to build data on site conditions. The
robot's automated and uniform site monitoring represents a minimum standard for
any observing site with queue scheduling, a public data archive, or likely
participation in any future National Virtual Observatory.

Comment: accepted for publication in A
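The extinction coefficients the robot measures presumably follow the standard airmass relation m_inst - m_std = zp + k * X. A minimal least-squares fit of that relation is sketched below; this is a generic textbook fit, not the robot's actual pipeline, and the symbols (zero point zp, extinction coefficient k, airmass X) are the conventional ones, not taken from the paper.

```python
import numpy as np

def fit_extinction(airmass, delta_mag):
    """Fit the standard atmospheric extinction relation

        m_inst - m_std = zp + k * X

    by linear least squares, where X is airmass, zp the photometric
    zero point, and k the extinction coefficient in mag/airmass.
    Returns (zp, k).
    """
    A = np.column_stack([np.ones_like(airmass), airmass])
    (zp, k), *_ = np.linalg.lstsq(A, delta_mag, rcond=None)
    return zp, k
```

A real-time monitor can rerun such a fit as each standard-star frame arrives and flag nights where k drifts or the residuals blow up (e.g., clouds).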
VXA: A Virtual Architecture for Durable Compressed Archives
Data compression algorithms change frequently, and obsolete decoders do not
always run on new hardware and operating systems, threatening the long-term
usability of content archived using those algorithms. Re-encoding content into
new formats is cumbersome, and highly undesirable when lossy compression is
involved. Processor architectures, in contrast, have remained comparatively
stable over recent decades. VXA, an archival storage system designed around
this observation, archives executable decoders along with the encoded content
it stores. VXA decoders run in a specialized virtual machine that implements an
OS-independent execution environment based on the standard x86 architecture.
The VXA virtual machine strictly limits access to host system services, making
decoders safe to run even if an archive contains malicious code. VXA's adoption
of a "native" processor architecture instead of type-safe language technology
allows reuse of existing "hand-optimized" decoders in C and assembly language,
and permits decoders access to performance-enhancing architecture features such
as vector processing instructions. The performance cost of VXA's virtualization
is typically less than 15% compared with the same decoders running natively.
The storage cost of archived decoders, typically 30-130KB each, can be
amortized across many archived files sharing the same compression method.

Comment: 14 pages, 7 figures, 2 tables
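The amortization argument in the last sentence can be made concrete: a decoder blob is stored once per compression method and shared by every file encoded with that method. The toy model below illustrates the bookkeeping only; the class and field names are invented for this sketch and do not reflect the real VXA archive format.

```python
class VxaLikeArchive:
    """Toy model of an archive that stores one executable decoder per
    compression method and shares it across all files using that method.
    Illustrative only; not the real VXA on-disk layout.
    """

    def __init__(self):
        self.decoders = {}   # method name -> decoder blob (stored once)
        self.files = []      # (filename, method, encoded payload)

    def add(self, name, method, payload, decoder_blob):
        if method not in self.decoders:      # first file with this method
            self.decoders[method] = decoder_blob
        self.files.append((name, method, payload))

    def decoder_overhead(self):
        """Total decoder bytes stored, regardless of how many files share them."""
        return sum(len(blob) for blob in self.decoders.values())
```

With a 100 KB decoder shared by a thousand files, the per-file overhead is on the order of a hundred bytes, which is why the 30-130 KB decoder cost quoted above is negligible for large archives.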