Predicting Long-Range Traversability from Short-Range Stereo-Derived Geometry
This program uses close-range 3D terrain analysis to produce training data sufficient to estimate the traversability of terrain beyond 3D sensing range, based only on its appearance in imagery. This approach is called learning from stereo (LFS). In effect, the software transfers knowledge from middle distances, where 3D geometry provides training cues, into the far field, where only appearance is available. This is a viable approach because the same obstacle classes, and sometimes the same obstacles, are typically present in both the mid-field and the far field. Learning thus extends the effective look-ahead distance of the sensors.
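A minimal sketch of the learning-from-stereo idea: an appearance-only classifier is trained on near-field patches whose labels come from stereo-derived geometry, then applied to far-field patches where only appearance is available. The color-histogram features, random-forest classifier, and function names below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of learning from stereo (LFS): geometry-derived labels supervise an
# appearance-only classifier that is then applied beyond 3D sensing range.
# Assumptions: patches are RGB arrays (H x W x 3, uint8); labels are
# 0 = obstacle, 1 = traversable, derived from stereo geometry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def appearance_features(patch, bins=8):
    """Concatenated per-channel color histograms as a simple appearance descriptor."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def train_lfs_classifier(near_patches, stereo_labels):
    """Fit an appearance classifier using geometry-derived labels as supervision."""
    X = np.array([appearance_features(p) for p in near_patches])
    y = np.array(stereo_labels)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

def predict_far_field(clf, far_patches):
    """Estimate traversability probability for far-field patches from appearance alone."""
    X = np.array([appearance_features(p) for p in far_patches])
    return clf.predict_proba(X)[:, 1]
```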
Centralized Alert-Processing and Asset Planning for Sensorwebs
A software program provides a Sensorweb architecture for alert processing, event detection, asset allocation and planning, and visualization. It automatically tasks and re-tasks various types of assets, such as satellites and robotic vehicles, in response to alerts (fire, weather) extracted from various data sources, including low-level Webcam data. JPL has adapted considerable Sensorweb infrastructure that had previously been applied to NASA Earth Science applications. This NASA Earth Science Sensorweb has been in operational use since 2003 and has demonstrated the reliability of the Sensorweb technologies for robust event detection and autonomous response using space and ground assets. Unique features of the software include flexibility to accommodate a range of detection and tasking methods, including those that require aggregation of data over spatial and temporal ranges; generality of the response structure to represent and implement a range of response campaigns; and the ability to respond rapidly.
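As a rough illustration of the alert-to-tasking flow described above, the sketch below matches sufficiently confident alerts to capable assets; the Alert/Asset types, field names, and confidence threshold are hypothetical, not the actual Sensorweb interfaces.

```python
# Hypothetical sketch of alert processing followed by asset tasking.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Alert:
    kind: str          # e.g. "fire", "weather"
    lat: float
    lon: float
    confidence: float  # 0..1, from the detection source (webcam, satellite, ...)

@dataclass
class Asset:
    name: str                              # e.g. a satellite or robotic vehicle
    can_observe: Callable[[Alert], bool]   # capability / visibility check

def plan_response(alerts: List[Alert], assets: List[Asset], min_conf: float = 0.7):
    """Task every capable asset against each sufficiently confident alert."""
    tasking = []
    for alert in alerts:
        if alert.confidence < min_conf:
            continue
        for asset in assets:
            if asset.can_observe(alert):
                tasking.append((asset.name, alert.kind, (alert.lat, alert.lon)))
    return tasking
```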
App Store at OpenNEX: A Gateway to Help Find Apps over Big Data on the Cloud
In this ongoing work, we report our efforts to build an App Store at OpenNEX, which aims to provide a data analytics software search engine to help researchers find reusable software components and facilitate the development of their own algorithms.
Mapping tropical Pacific sea level: Data assimilation via a reduced state space Kalman filter
The well-known fact that tropical sea level can be usefully simulated by linear wind driven models recommends it as a realistic test problem for data assimilation schemes. Here we report on an assimilation of monthly data for the period 1975-1992 from 34 tropical Pacific tide gauges into such a model using a Kalman filter. We present an approach to the Kalman filter that uses a reduced state space representation for the required error covariance matrices. This reduction makes the calculation highly feasible. We argue that a more complete representation will be of no value in typical oceanographic practice, that in principle it is unlikely to be helpful, and that it may even be harmful if the data coverage is sparse, the usual case in oceanography. This is in part a consequence of ignorance of the correct error statistics for the data and model, but only in part. The reduced state space is obtained from a truncated set of multivariate empirical orthogonal functions (EOFs) derived from a long model run without assimilation. The reduced state space filter is compared with a full grid point Kalman filter using the same dynamical model for the period 1979-1985, assimilating eight tide gauge stations and using an additional seven for verification [Miller et al., 1995]. Results are not inferior to the full grid point filter, even when the reduced filter retains only nine EOFs. Five sets of reduced space filter assimilations are run with all tide gauge data for the period 1975-1992. In each set a different number of EOFs is retained: 5, 9, 17, 32, and 93, accounting for 60, 70, 80, 90, and 99% of the model variance, respectively. Each set consists of 34 runs, in each of which one station is withheld for verification. Comparing each set to the nonassimilation run, the average rms error at the withheld stations decreases by more than 1 cm. The improvement is generally larger for the stations at lowest latitudes. Increasing the number of EOFs increases agreement with data at locations where data are assimilated; the added structures allow better fits locally. In contrast, results at withheld stations are almost insensitive to the number of EOFs retained. We also compare the Kalman filter theoretical error estimates with the actual errors of the assimilations. Features agree on average, but not in detail, a reminder of the fact that the quality of theoretical estimates is limited by the quality of error models they assume. We briefly discuss the implications of our work for future studies, including the application of the method to full ocean general circulation models and coupled models.
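The core idea, running the Kalman filter on EOF amplitudes rather than on the full grid, can be sketched as follows. Only the structure (EOFs from a free model run, forecast and analysis carried out in the reduced space) follows the description above; the dimensions, noise covariances, and function names are illustrative assumptions.

```python
# Minimal sketch of a reduced state space Kalman filter update.
import numpy as np

def leading_eofs(free_run, k):
    """EOF patterns from a free model run: rows = time, cols = grid points."""
    anomalies = free_run - free_run.mean(axis=0)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt[:k]                      # shape (k, n_grid)

def reduced_kalman_step(a, P, A_r, Q_r, y_obs, H_full, E, R):
    """One forecast/analysis cycle on EOF amplitudes `a` with covariance `P`.

    A_r    : reduced-space transition matrix (k x k)
    Q_r, R : model- and observation-error covariances (assumed known)
    H_full : observation operator on the full grid (m x n_grid)
    E      : EOF patterns (k x n_grid); the reduced operator is H_full @ E.T
    """
    H_r = H_full @ E.T                 # observe the reduced state
    # Forecast step
    a_f = A_r @ a
    P_f = A_r @ P @ A_r.T + Q_r
    # Analysis step: standard Kalman gain, but in the reduced space
    S = H_r @ P_f @ H_r.T + R
    K = P_f @ H_r.T @ np.linalg.inv(S)
    a_a = a_f + K @ (y_obs - H_r @ a_f)
    P_a = (np.eye(len(a)) - K @ H_r) @ P_f
    return a_a, P_a
```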
Autonomous Exploration for Gathering Increased Science
The Autonomous Exploration for Gathering Increased Science System (AEGIS) provides automated targeting for remote-sensing instruments on the Mars Exploration Rover (MER) mission, which at the time of this reporting has had two rovers exploring the surface of Mars. Currently, targets for rover remote-sensing instruments must be selected manually, based on imagery already downlinked to the operations team on the ground. AEGIS enables the rover flight software to analyze imagery onboard in order to autonomously select and sequence targeted remote-sensing observations in an opportunistic fashion. In particular, this technology will be used to automatically acquire sub-framed, high-resolution, targeted images taken with the MER panoramic cameras. The software provides: 1) automatic detection of terrain features in rover camera images, 2) feature extraction for detected terrain targets, 3) prioritization of terrain targets based on a scientist-specified target feature set, and 4) automated re-targeting of rover remote-sensing instruments at the highest-priority target.
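The four steps listed above can be illustrated with a toy pipeline; the detector, feature set, and scoring rule below are hypothetical stand-ins, not the AEGIS flight algorithms.

```python
# Illustrative detection -> feature extraction -> prioritization pipeline.
import numpy as np

def detect_targets(image, threshold=0.5):
    """Stand-in detector: return (row, col) seeds of unusually bright pixels."""
    mask = image > image.mean() + threshold * image.std()
    return list(zip(*np.nonzero(mask)))

def extract_features(image, target, window=5):
    """Simple per-target features: local mean intensity and local contrast."""
    r, c = target
    patch = image[max(r - window, 0):r + window, max(c - window, 0):c + window]
    return np.array([patch.mean(), patch.std()])

def prioritize(image, targets, scientist_profile):
    """Rank targets by similarity to a scientist-specified feature vector."""
    scores = [-np.linalg.norm(extract_features(image, t) - scientist_profile)
              for t in targets]
    order = np.argsort(scores)[::-1]
    return [targets[i] for i in order]   # best target first; re-target at [0]
```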
A machine learning classifier for fast radio burst detection at the VLBA
Time-domain radio astronomy observing campaigns frequently generate large volumes of data. Our goal is to develop automated methods that can identify events of interest buried within the larger data stream. The V-FASTR fast transient system was designed to detect rare fast radio bursts within data collected by the Very Long Baseline Array. The resulting event candidates constitute a significant burden in terms of subsequent human reviewing time. We have trained and deployed a machine learning classifier that marks each candidate detection as a pulse from a known pulsar, an artifact due to radio frequency interference, or a potential new discovery. The classifier maintains high reliability by restricting its predictions to those with at least 90% confidence. We have also implemented several efficiency and usability improvements to the V-FASTR web-based candidate review system. Overall, we found that time spent reviewing decreased and the fraction of interesting candidates increased. The classifier now classifies (and therefore filters) 80%–90% of the candidates, with an accuracy greater than 98%, leaving only the 10%–20% most promising candidates to be reviewed by humans.
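The confidence-gated triage described above might look roughly like the following sketch: a classifier auto-labels a candidate only when its predicted probability clears the 90% threshold, leaving everything else for human review. The model choice and feature handling are assumptions, not the V-FASTR code.

```python
# Sketch of confidence-thresholded candidate triage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def triage(clf, X_candidates, threshold=0.90):
    """Return an auto-label per candidate, or None if the classifier is unsure."""
    proba = clf.predict_proba(X_candidates)
    decisions = []
    for row in proba:
        best = int(np.argmax(row))
        decisions.append(clf.classes_[best] if row[best] >= threshold else None)
    return decisions

# Usage (hypothetical): fit on human-reviewed candidates labeled
# "pulsar", "rfi", or "discovery_candidate", then triage new detections.
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# auto_labels = triage(clf, X_new)   # None entries go to human reviewers
```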
Multiclass Reduced-Set Support Vector Machines
There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our method builds on Burges' approach of constructing each reduced-set vector as the pre-image of a vector in kernel space, but extends it by optimally recomputing the SVM weights and bias using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
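One pre-image step of such a reduced-set construction, using an RBF kernel and a differential-evolution search as proposed above, could be sketched as follows; selecting the full reduced set and re-optimizing the multiclass weights and biases are omitted, and the variable names are illustrative.

```python
# Sketch of finding a single reduced-set pre-image with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

def rbf(X, z, gamma):
    return np.exp(-gamma * np.sum((X - z) ** 2, axis=1))

def find_preimage(support_vectors, alphas, gamma, bounds):
    """Find z minimizing ||sum_i alpha_i phi(x_i) - beta phi(z)||^2 over z and beta.

    For an RBF kernel (k(z, z) = 1) the optimum over beta is attained at
    beta = sum_i alpha_i k(x_i, z), so minimizing over z is equivalent to
    maximizing (sum_i alpha_i k(x_i, z))^2; we minimize its negative.
    """
    def objective(z):
        return -np.square(alphas @ rbf(support_vectors, z, gamma))

    result = differential_evolution(objective, bounds, seed=0, tol=1e-8)
    z = result.x
    beta = alphas @ rbf(support_vectors, z, gamma)   # optimal coefficient for z
    return z, beta

# Usage (hypothetical): bounds = [(lo, hi)] * n_features spanning the data range.
```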
Interannual variability of accumulated snow in the Columbia basin, British Columbia
Snow water equivalent anomalies (SWEA) measured around April 1 by stations in the Columbia basin area in British Columbia, Canada, were studied for their interannual variability during the period 1950–1999, particularly in relation to El Niño/La Niña events and to high and low Pacific–North American (PNA) atmospheric circulation patterns. Composites of the SWEA showed that SWEA were negative during El Niño years, positive during La Niña years, negative during high PNA years, and positive during low PNA years. High PNA appeared to have the most impact on the SWEA, followed by La Niña, El Niño, and low PNA. In the Columbia basin area, La Niña effects (relative to El Niño effects) on SWEA decrease northward and eastward but strengthen with elevation. Composites of the Pacific sea surface temperature anomalies (SSTA) during the 10 lowest SWEA years revealed weak signals, with El Niño warm SSTA present only during spring and early summer in the preceding year and the SSTA pattern consistent with a high PNA present by fall and winter. In contrast, composites of the SSTA during the 10 highest SWEA years showed strong La Niña cool SSTA starting around May in the preceding year and lasting into winter.
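The composite analysis underlying these results amounts to averaging station anomalies over the years falling in each category (El Niño, La Niña, high PNA, low PNA); a minimal sketch, with placeholder column names and year lists, is shown below.

```python
# Minimal sketch of compositing SWEA by event category.
import pandas as pd

def composite_swea(swea: pd.DataFrame, event_years: dict) -> pd.DataFrame:
    """swea: rows indexed by year, one column per station.
    event_years: e.g. {"el_nino": [...], "la_nina": [...], "high_pna": [...]}.
    Returns the mean anomaly per station for each event category."""
    return pd.DataFrame({
        name: swea.loc[swea.index.intersection(years)].mean()
        for name, years in event_years.items()
    })
```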
On-board Analysis of Uncalibrated Data
Analyzing data on-board a spacecraft as it is collected enables several advanced spacecraft capabilities, such as prioritizing observations to make the best use of limited bandwidth and reacting to dynamic events as they happen. In this paper, we describe how we addressed the unique challenges associated with on-board mining of data as it is collected: uncalibrated data, noisy observations, and severe limitations on computational and memory resources. The goal of this effort, which falls into the emerging application area of spacecraft-based data mining, was to study three specific science phenomena on Mars. Following previous work that used a linear support vector machine (SVM) on-board the Earth Observing 1 (EO-1) spacecraft, we developed three data mining techniques for use on-board the Mars Odyssey spacecraft. These methods range from simple thresholding to state-of-the-art reduced-set SVM technology. We tested these algorithms on archived data in a flight software testbed. We also describe a significant, serendipitous science discovery of this data mining effort: the confirmation of a water ice annulus around the north polar cap of Mars. We conclude with a discussion of lessons learned in developing algorithms for use on-board a spacecraft.
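The simplest of the three techniques mentioned, thresholding applied directly to uncalibrated counts, might look like the sketch below; the band-ratio formulation, band indices, and threshold are assumptions, not the flight parameters.

```python
# Sketch of band-ratio thresholding on uncalibrated instrument counts,
# kept deliberately simple for tight on-board memory and CPU budgets.
import numpy as np

def threshold_detector(raw_cube, band_a, band_b, ratio_threshold):
    """Flag pixels whose raw-count ratio between two bands exceeds a threshold.

    raw_cube : uncalibrated counts, shape (bands, rows, cols)
    Returns a boolean detection mask.
    """
    a = raw_cube[band_a].astype(np.float32)
    b = raw_cube[band_b].astype(np.float32)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(b > 0, a / b, 0.0)
    return ratio > ratio_threshold
```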
Spacebased Estimation of Moisture Transport in Marine Atmosphere Using Support Vector Regression
An improved algorithm is developed based on support vector regression (SVR) to estimate horizontal water vapor transport integrated through the depth of the atmosphere (Θ) over the global ocean from observations of the surface wind-stress vector by QuikSCAT, cloud-drift wind vectors derived from the Multi-angle Imaging SpectroRadiometer (MISR) and geostationary satellites, and precipitable water from the Special Sensor Microwave/Imager (SSM/I). The statistical relation is established between the input parameters (the surface wind stress, the 850 mb wind, the precipitable water, time, and location) and the target data (Θ calculated from rawinsondes and from numerical weather prediction reanalysis). The results are validated against independent daily rawinsonde observations, monthly mean reanalysis data, and regional water balance. This study clearly demonstrates the improvement of Θ derived from satellite data using SVR over previous data sets based on linear regression and neural networks. The SVR methodology reduces both mean bias and standard deviation compared with rawinsonde observations. It agrees better with observations from synoptic to seasonal time scales and compares more favorably with the reanalysis data on seasonal variations. Only the SVR result can achieve the water balance over South America. The rationale for the advantage of the SVR method and the impact of adding the upper-level wind are also discussed.
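A minimal sketch of the regression setup described above, mapping satellite-derived predictors (surface wind stress, 850 mb wind, precipitable water, time, location) to the integrated moisture transport Θ with support vector regression; the feature layout and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Sketch of fitting an SVR from satellite predictors to integrated moisture transport.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_moisture_transport_svr(X, theta):
    """X: (n_samples, n_features) predictors; theta: integrated moisture transport
    computed from rawinsondes/reanalysis, used as the regression target."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, theta)
    return model

# Usage: theta_hat = fit_moisture_transport_svr(X_train, theta_train).predict(X_new)
```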