A spatial data handling system for retrieval of images by unrestricted regions of user interest
The Intelligent Data Management (IDM) project at NASA/Goddard Space Flight Center has prototyped an Intelligent Information Fusion System (IIFS), which automatically ingests metadata from remote sensor observations into a large catalog which is directly queryable by end-users. The greatest challenge in the implementation of this catalog was supporting spatially-driven searches, where the user has a possible complex region of interest and wishes to recover those images that overlap all or simply a part of that region. A spatial data management system is described, which is capable of storing and retrieving records of image data regardless of their source. This system was designed and implemented as part of the IIFS catalog. A new data structure, called a hypercylinder, is central to the design. The hypercylinder is specifically tailored for data distributed over the surface of a sphere, such as satellite observations of the Earth or space. Operations on the hypercylinder are regulated by two expert systems. The first governs the ingest of new metadata records, and maintains the efficiency of the data structure as it grows. The second translates, plans, and executes users' spatial queries, performing incremental optimization as partial query results are returned
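The hypercylinder structure itself is not detailed in this abstract. As a rough illustration of the underlying idea of partitioning image footprints on a sphere so that a region query only scans nearby records, a minimal latitude-band index might look like the following sketch (the class name, band width, and lat/lon bounding-box representation are illustrative assumptions, not part of the IIFS design):

```python
import math
from collections import defaultdict

class LatBandIndex:
    """Toy spatial index: bucket image footprints (lat/lon boxes) into
    fixed latitude bands so a region query scans only relevant bands.
    Illustrative only; the IIFS hypercylinder is far more sophisticated."""

    def __init__(self, band_deg=10):
        self.band_deg = band_deg
        self.bands = defaultdict(list)  # band index -> list of (name, box)

    def _band_range(self, lat_min, lat_max):
        # Map a latitude span [-90, 90] onto integer band indices.
        lo = int(math.floor((lat_min + 90) / self.band_deg))
        hi = int(math.floor((lat_max + 90) / self.band_deg))
        return range(lo, hi + 1)

    def insert(self, name, box):
        lat_min, lat_max, lon_min, lon_max = box
        for b in self._band_range(lat_min, lat_max):
            self.bands[b].append((name, box))

    def query(self, box):
        """Return names of all stored footprints overlapping `box`."""
        lat_min, lat_max, lon_min, lon_max = box
        hits = set()
        for b in self._band_range(lat_min, lat_max):
            for name, (a0, a1, o0, o1) in self.bands[b]:
                if a0 <= lat_max and a1 >= lat_min \
                        and o0 <= lon_max and o1 >= lon_min:
                    hits.add(name)
        return hits
```

A real system would also handle longitude wrap-around at the antimeridian and non-rectangular regions of interest, which this sketch omits.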
Down the Rabbit Hole: Robust Proximity Search and Density Estimation in Sublinear Space
For a set of n points in R^d, and parameters k and ε, we present
a data structure that answers (1+ε, k)-ANN queries in logarithmic time.
Surprisingly, the space used by the data structure is Õ(n/k); that
is, the space used is sublinear in the input size if k is sufficiently large.
Our approach provides a novel way to summarize geometric data, such that
meaningful proximity queries on the data can be carried out using this sketch.
Using this, we provide a sublinear space data-structure that can estimate the
density of a point set under various measures, including: (i) the sum of
distances of the k closest points to the query point, and (ii) the sum of
squared distances of the k closest points to the query point.
Our approach generalizes to other distance based estimation of densities of
similar flavor. We also study the problem of approximating some of these
quantities when using sampling. In particular, we show that a sample of size
Õ(n/k) is sufficient, in some restricted cases, to estimate the above
quantities. Remarkably, the sample size has only linear dependency on the
dimension.
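As a point of reference for what the sublinear sketch approximates, the two density measures above can be computed exactly by brute force. The toy function below (name and signature assumed, not from the paper) makes the quantities concrete:

```python
import math

def knn_density(points, q, k):
    """Exact brute-force versions of the two density measures the
    abstract's sublinear sketch approximates: the sum of distances and
    the sum of squared distances from q to its k closest points.
    O(n log n) here, purely as a reference implementation."""
    # Squared distance from q to every point, sorted ascending.
    d2 = sorted(sum((p - x) ** 2 for p, x in zip(pt, q)) for pt in points)
    nearest = d2[:k]
    return sum(math.sqrt(v) for v in nearest), sum(nearest)
```

The point of the paper is that both quantities can be approximated from a data structure (or sample) of size roughly n/k, rather than by scanning all n points as this reference does.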
A Deep Learning Approach for the Computation of Curvature in the Level-Set Method
We propose a deep learning strategy to estimate the mean curvature of
two-dimensional implicit interfaces in the level-set method. Our approach is
based on fitting feed-forward neural networks to synthetic data sets
constructed from circular interfaces immersed in uniform grids of various
resolutions. These multilayer perceptrons process the level-set values from
mesh points next to the free boundary and output the dimensionless curvature at
their closest locations on the interface. Accuracy analyses involving irregular
interfaces, both in uniform and adaptive grids, show that our models are
competitive with traditional numerical schemes in standard error norms. In
particular, our neural networks approximate curvature with comparable precision
in coarse resolutions, when the interface features steep curvature regions, and
when the number of iterations to reinitialize the level-set function is small.
Although the conventional numerical approach is more robust than our framework,
our results have unveiled the potential of machine learning for dealing with
computational tasks where the level-set method is known to experience
difficulties. We also establish that an application-dependent map of local
resolutions to neural models can be devised to estimate mean curvature more
effectively than a universal neural network. (Comment: Submitted to SIAM Journal on Scientific Computing.)
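For context on the comparison, the traditional numerical baseline computes the curvature of the zero level set from finite differences of the level-set function. A minimal sketch, assuming a smooth callable phi and a hypothetical stencil spacing h (not values from the paper), is:

```python
import math

def curvature(phi, x, y, h=1e-3):
    """Second-order finite-difference estimate of the curvature of the
    zero level set of phi at (x, y): the classical baseline that neural
    models are compared against.  phi is any callable phi(x, y); the
    stencil spacing h is an assumed illustrative value."""
    # First and second partial derivatives via central differences.
    px = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
    py = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    pxx = (phi(x + h, y) - 2 * phi(x, y) + phi(x - h, y)) / h ** 2
    pyy = (phi(x, y + h) - 2 * phi(x, y) + phi(x, y - h)) / h ** 2
    pxy = (phi(x + h, y + h) - phi(x + h, y - h)
           - phi(x - h, y + h) + phi(x - h, y - h)) / (4 * h ** 2)
    # kappa = div(grad(phi) / |grad(phi)|), expanded in 2-D.
    denom = (px ** 2 + py ** 2) ** 1.5
    return (pxx * py ** 2 - 2 * px * py * pxy + pyy * px ** 2) / denom
```

For the signed distance function of a circle of radius r, phi(x, y) = sqrt(x^2 + y^2) - r, this returns approximately 1/r at any point on the interface, which is the dimensionless quantity the paper's networks are trained to predict.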
Algorithms and Data Structures for Automated Change Detection and Classification of Sidescan Sonar Imagery
During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. MIW analysts at the Naval Oceanographic Office are currently using ACDC to reduce the time required to perform change detection. The introductory chapter gives background information on change detection and ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3 and 48.4 times faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and the results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features.
A comparison between the CAS and a great-circle distance algorithm shows that the CAS performs geospatial searching 1.75 times faster on large data sets. Finally, the concluding chapter of this dissertation gives important details on how the completed ACDC system will function, and discusses the author's future research to develop additional algorithms and data structures for ACDC.
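The GB data structure itself is specific to the dissertation, but the flavor of repeatable, order-independent clustering over a geospatial bitmap can be sketched as follows (the grid cell size and 8-connectivity are illustrative assumptions, not the author's design):

```python
from collections import deque

def bitmap_cluster(points, cell=1.0):
    """Sketch of order-independent clustering in the spirit of a
    geospatial bitmap: snap points into grid cells (the 'bitmap'),
    then group occupied cells that touch (8-connectivity) with a
    flood fill.  Because clustering operates on the set of occupied
    cells, cluster membership cannot depend on point arrival order."""
    cells = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        cells.setdefault(key, []).append(p)
    seen, clusters = set(), []
    for start in cells:
        if start in seen:
            continue
        group, queue = [], deque([start])
        seen.add(start)
        while queue:  # flood-fill one connected component of set cells
            cx, cy = queue.popleft()
            group.extend(cells[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in cells and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(group)
    return clusters
```

Feeding the points in any order yields the same partition into clusters, which is the property the dissertation's tests verify.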
Geo-Adaptive Deep Spatio-Temporal predictive modeling for human mobility
Deep learning approaches to spatio-temporal prediction problems such as
crowd-flow prediction assume the data to be a fixed, regularly shaped tensor
and face challenges in handling irregular, sparse data. This poses
limitations in use-case scenarios such as predicting visit counts of
individuals for a given spatial area at a particular temporal resolution using
a raster/image representation of the geographical region, since the
movement patterns of an individual can be largely restricted and localized to a
certain part of the raster. Additionally, current deep-learning approaches to
this problem do not account for the geographical awareness of a region
while modelling the spatio-temporal movement patterns of an individual. To
address these limitations, there is a need for a modeling approach that can
handle sparse, irregular data while incorporating geo-awareness into the
model. In this paper, we use a quadtree as the data structure for
representing the image and introduce a novel geo-aware deep learning layer,
GA-ConvLSTM, which performs the convolution operation through a quadtree-based
geo-aware module to capture spatial dependencies while retaining the
recurrent mechanism that accounts for temporal dependencies. We present this
approach in the context of predicting the spatial behaviors of an
individual (e.g., frequent visits to specific locations) through a
deep-learning-based predictive model, GADST-Predict. Experimental results on
two GPS-based trace datasets show that the proposed method is effective in
handling visit frequencies across different use-cases with considerably high
accuracy.
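The GA-ConvLSTM layer itself is not reproduced here, but the quadtree representation it builds on is standard and can be sketched. The capacity threshold, class name, and API below are illustrative assumptions:

```python
class Quadtree:
    """Minimal region quadtree for sparse 2-D points: a sketch of the
    kind of structure used to represent irregular, localized rasters.
    A node splits into four quadrants once it holds more than `cap`
    points, so dense areas get finer resolution than empty ones."""

    def __init__(self, x0, y0, x1, y1, cap=4):
        self.box = (x0, y0, x1, y1)
        self.cap = cap
        self.points = []
        self.children = None  # None for a leaf, else 4 sub-quadtrees

    def insert(self, p):
        x0, y0, x1, y1 = self.box
        if not (x0 <= p[0] < x1 and y0 <= p[1] < y1):
            return False  # point lies outside this node's region
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.cap:
                self._split()
            return True
        return any(c.insert(p) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.box
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [Quadtree(x0, y0, mx, my, self.cap),
                         Quadtree(mx, y0, x1, my, self.cap),
                         Quadtree(x0, my, mx, y1, self.cap),
                         Quadtree(mx, my, x1, y1, self.cap)]
        for q in self.points:
            any(c.insert(q) for c in self.children)
        self.points = []

    def depth(self):
        if self.children is None:
            return 1
        return 1 + max(c.depth() for c in self.children)
```

Because only occupied quadrants subdivide, a trajectory confined to one corner of the region costs far less storage than a dense raster of the whole area, which is the sparsity argument the paper makes.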
A network tomography approach for traffic monitoring in smart cities
Various urban planning and management activities required by a Smart City are feasible because of traffic monitoring. The thesis therefore proposes a network-tomography-based approach that can be applied to road networks to achieve cost-efficient, flexible, and scalable monitor deployment. Owing to the algebraic formulation of network tomography, the selection of monitoring intersections can be solved using matrices, with rows representing paths between two intersections and columns representing links in the road network. Because the goal of the algorithm is to provide a cost-efficient, minimum-error, high-coverage monitor set, the problem can be translated into an optimization problem over a matroid, which can be solved efficiently by a greedy algorithm. The approach is also capable of handling noisy measurements and measurement-to-path matching. On a downtown San Francisco, CA topology, the approach achieves low error and 90% coverage with only 20% of nodes selected as monitors.
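The matroid in question is the linear matroid over the rows of the path-link routing matrix: a greedy algorithm keeps a path only if its link-incidence vector is linearly independent of those already kept, i.e. only if it adds new identifiability. A minimal sketch (ignoring the cost and coverage terms of the thesis's actual objective; function name assumed) is:

```python
from fractions import Fraction

def greedy_independent_paths(rows):
    """Greedy selection over a linear matroid: scan candidate path rows
    (link-incidence vectors) and keep each one that is linearly
    independent of the rows kept so far, using exact rational
    Gaussian elimination.  A simplified stand-in for the thesis's
    cost/coverage-aware greedy monitor selection."""
    basis = []    # kept rows, reduced so each has a distinct pivot
    chosen = []   # indices of the selected paths
    for idx, row in enumerate(rows):
        v = [Fraction(x) for x in row]
        for b in basis:
            # Eliminate v's component along b's pivot column.
            pivot = next(i for i, x in enumerate(b) if x != 0)
            if v[pivot] != 0:
                f = v[pivot] / b[pivot]
                v = [a - f * c for a, c in zip(v, b)]
        if any(x != 0 for x in v):
            basis.append(v)
            chosen.append(idx)
    return chosen
```

Matroid structure is what guarantees that this greedy scan is optimal for rank-style objectives: any maximal independent set of path rows has the same size, so no look-ahead is needed.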
A summary of image segmentation techniques
Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher-level vision tasks. There is no unified theory of image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized into a number of different groups, including local vs. global, parallel vs. sequential, contextual vs. noncontextual, and interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview.
We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation
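As a concrete instance of the region-based family described above, here is a minimal region-growing sketch; the seed, tolerance, and 4-connectivity are illustrative choices rather than a specific scheme from the survey:

```python
from collections import deque

def region_grow(image, seed, tol=10):
    """Simplest region-based segmentation: grow a region from a seed
    pixel, absorbing 4-connected neighbors whose gray level is within
    `tol` of the seed's.  `image` is a 2-D list of gray levels; returns
    the set of (row, col) pixels in the grown region."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(image[nr][nc] - base) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

Pixel-based schemes, by contrast, would threshold each pixel's gray level independently, and edge-based schemes would first locate the discontinuities between the region and its surroundings.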