Metric Learning for Generalizing Spatial Relations to New Objects
Human-centered environments are rich with a wide variety of spatial relations
between everyday objects. For autonomous robots to operate effectively in such
environments, they should be able to reason about these relations and
generalize them to objects with different shapes and sizes. For example, having
learned to place a toy inside a basket, a robot should be able to generalize
this concept using a spoon and a cup. This requires a robot to have the
flexibility to learn arbitrary relations in a lifelong manner, making it
challenging for an expert to pre-program it with sufficient knowledge to do so
beforehand. In this paper, we address the problem of learning spatial relations
by introducing a novel method from the perspective of distance metric learning.
Our approach enables a robot to reason about the similarity between pairwise
spatial relations, thereby enabling it to use its previous knowledge when
presented with a new relation to imitate. We show how this makes it possible to
learn arbitrary spatial relations from non-expert users using a small number of
examples and in an interactive manner. Our extensive evaluation with real-world
data demonstrates the effectiveness of our method in reasoning about a
continuous spectrum of spatial relations and generalizing them to new objects.

Comment: Accepted at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. The new Freiburg Spatial Relations Dataset and a demo video of our approach running on the PR-2 robot are available at our project website: http://spatialrelations.cs.uni-freiburg.d
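The distance-metric view the abstract describes can be illustrated with a toy sketch. The relation features and the identity transform below are hypothetical stand-ins, not the paper's actual representation or learned metric:

```python
import numpy as np

def relation_distance(f_a, f_b, L):
    """Distance between two pairwise spatial relations, each summarized by a
    feature vector, under a linear map L: d(a, b) = ||L(f_a - f_b)||."""
    diff = L @ (np.asarray(f_a, dtype=float) - np.asarray(f_b, dtype=float))
    return float(np.linalg.norm(diff))

# Hypothetical 3-D relation features: [vertical offset, lateral offset, containment]
inside  = [0.0, 0.0, 0.9]   # e.g., a toy inside a basket
on_top  = [0.5, 0.0, 0.1]   # e.g., a cup on a table
next_to = [0.0, 0.6, 0.0]   # e.g., a spoon beside a cup

L = np.eye(3)  # stand-in for the transform the method would learn from examples
# Under this metric, "inside" is judged closer to "on top" than to "next to",
# which is the kind of similarity reasoning that lets prior relations be reused.
print(relation_distance(inside, on_top, L) < relation_distance(inside, next_to, L))
```

Learning `L` from user demonstrations, rather than fixing it by hand, is what would let such a system generalize a relation to objects of new shapes and sizes.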
SE-KGE: A Location-Aware Knowledge Graph Embedding Model for Geographic Question Answering and Spatial Semantic Lifting
Learning knowledge graph (KG) embeddings is an emerging technique for a
variety of downstream tasks such as summarization, link prediction, information
retrieval, and question answering. However, most existing KG embedding models
neglect space and, therefore, do not perform well when applied to (geo)spatial
data and tasks. Those models that do consider space rely primarily on
some notion of distance. These models suffer from higher computational
complexity during training while still losing information beyond the relative
distance between entities. In this work, we propose a location-aware KG
embedding model called SE-KGE. It directly encodes spatial information such as
point coordinates or bounding boxes of geographic entities into the KG
embedding space. The resulting model is capable of handling different types of
spatial reasoning. We also construct a geographic knowledge graph as well as a
set of geographic query-answer pairs called DBGeo to evaluate the performance
of SE-KGE in comparison to multiple baselines. Evaluation results show that
SE-KGE outperforms these baselines on the DBGeo dataset for the geographic
logic query answering task. This demonstrates the effectiveness of our
spatially-explicit model and the importance of considering the scale of
different geographic entities. Finally, we introduce a novel downstream task
called spatial semantic lifting which links an arbitrary location in the study
area to entities in the KG via some relations. Evaluation on DBGeo shows that
our model outperforms the baseline by a substantial margin.

Comment: Accepted to Transactions in GI
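The core idea of directly encoding point coordinates into an embedding space can be sketched with a generic multi-scale sinusoidal position encoding. This is an illustrative stand-in only; SE-KGE's actual location encoder differs in its details:

```python
import numpy as np

def encode_location(xy, num_scales=4, min_lambda=1.0, max_lambda=1000.0):
    """Map a 2-D point to a fixed-length embedding using sines and cosines
    at geometrically spaced wavelengths, so both local and large-scale
    position information is preserved (illustrative sketch)."""
    x, y = xy
    g = (max_lambda / min_lambda) ** (1.0 / max(num_scales - 1, 1))
    feats = []
    for s in range(num_scales):
        lam = min_lambda * g ** s          # wavelength at scale s
        for v in (x, y):
            feats.append(np.sin(2 * np.pi * v / lam))
            feats.append(np.cos(2 * np.pi * v / lam))
    return np.array(feats)                 # length = num_scales * 4

enc = encode_location((10.0, 20.0))
print(enc.shape)  # → (16,)
```

Such an encoder lets a KG embedding model consume raw coordinates (or box corners) directly, rather than only pairwise distances between entities.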
Efficient Analysis in Multimedia Databases
The rapid progress of digital technology has led to a situation
where computers have become ubiquitous tools. Now we can find them
in almost every environment, be it industrial or even private. With
ever-increasing performance, computers have assumed more and more vital
tasks in engineering, climate and environmental research, medicine,
and the content industry. Previously, these tasks could only be
accomplished by spending enormous amounts of time and money. With
digital sensor devices such as earth observation satellites,
genome sequencers, and video cameras, the amount and complexity of
data with a spatial or temporal relation has grown enormously. This
has led to new challenges for data analysis and requires the use
of modern multimedia databases.
This thesis aims at developing efficient techniques for the analysis
of complex multimedia objects such as CAD data, time series and
videos. It is assumed that the data is modeled by commonly used
representations. For example, CAD data is represented as a set of
voxels, while audio and video data are represented as multi-represented,
multi-dimensional time series.
The main part of this thesis focuses on finding efficient methods
for collision queries of complex spatial objects. One way to speed
up such queries is to employ a cost-based decomposition,
which uses interval groups to approximate a spatial object. For
example, this technique can be used for the Digital Mock-Up (DMU)
process, which helps engineers to ensure short product cycles. This
thesis defines and discusses a new similarity measure for time
series called threshold-similarity. Two time series are
considered similar if they expose a similar behavior regarding the
transgression of a given threshold value. Another part of the thesis
is concerned with the efficient calculation of reverse
k-nearest neighbor (RkNN) queries in general metric spaces
using conservative and progressive approximations. The aim of such
RkNN queries is to determine the impact of single objects on the
whole database. Finally, the thesis deals with video
retrieval and hierarchical genre classification of music
using multiple representations. The practical relevance of the
discussed genre classification approach is highlighted with a
prototype tool that helps the user to organize large music
collections.
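The threshold-similarity idea described above can be sketched as follows. This is a toy version: it extracts the intervals during which a series exceeds a threshold and scores disagreement per time step, whereas the thesis defines its measure over the intervals themselves:

```python
def exceed_intervals(series, tau):
    """Half-open index intervals [start, end) during which series exceeds tau."""
    intervals, start = [], None
    for i, v in enumerate(series):
        if v > tau and start is None:
            start = i                       # threshold crossed upward
        elif v <= tau and start is not None:
            intervals.append((start, i))    # threshold crossed downward
            start = None
    if start is not None:
        intervals.append((start, len(series)))
    return intervals

def threshold_distance(s1, s2, tau):
    """Toy threshold-dissimilarity: fraction of time steps on which the two
    series disagree about being above tau (0 = identical transgression behavior)."""
    assert len(s1) == len(s2)
    return sum((a > tau) != (b > tau) for a, b in zip(s1, s2)) / len(s1)

print(exceed_intervals([0, 2, 3, 0, 4], 1))          # → [(1, 3), (4, 5)]
print(threshold_distance([0, 2, 3, 0], [0, 2, 0, 0], 1))  # → 0.25
```

Note that two series with very different absolute values can still be threshold-similar, as long as they cross the threshold at similar times.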
Both the efficiency and the effectiveness of the presented
techniques are thoroughly analyzed. The benefits over traditional
approaches are shown by evaluating the new methods on real-world
test datasets.
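The RkNN queries discussed in the thesis admit a simple quadratic-time baseline, sketched here for 1-D points. The point of the thesis's conservative and progressive approximations is precisely to avoid this exhaustive scan:

```python
def rknn(db, q, k, dist=lambda a, b: abs(a - b)):
    """Naive reverse k-nearest-neighbor query: return every object in db that
    has the query object q among its k nearest neighbors (q's 'impact set')."""
    result = []
    for i, o in enumerate(db):
        d_q = dist(o, q)
        # count database objects (other than o itself) closer to o than q is
        closer = sum(1 for j, p in enumerate(db) if j != i and dist(o, p) < d_q)
        if closer < k:          # q is among o's k nearest neighbors
            result.append(o)
    return result

points = [1.0, 2.0, 3.0, 10.0]
print(rknn(points, 2.5, k=1))   # → [2.0, 3.0]
```

The dual nature of the query is visible here: the result says which objects are affected by q, not which objects are near q, and it works in any metric space because only `dist` is assumed.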
A query processing system for very large spatial databases using a new map algebra
In this thesis we introduce a query processing approach for spatial databases and explain the main concepts we defined and developed: a spatial algebra and a graph-based approach used in the optimizer. The spatial algebra was defined to express queries and transformation rules during the different steps of query optimization. To cover a vast variety of potential applications, we tried to define the algebra as completely as possible. The algebra views spatial data as maps of spatial objects. The algebraic operators act on maps and result in new maps. Aggregate functions can act on maps and objects and produce objects or basic values (characters, numbers, etc.). The optimizer receives the query as an algebraic expression and produces one efficient QEP (Query Evaluation Plan) through two main consecutive stages: QEG (Query Evaluation Graph) generation and QEP generation. In QEG generation we construct a graph equivalent of the algebraic expression and then apply graph transformation rules to produce one efficient QEG. In QEP generation we receive the efficient QEG, perform predicate ordering and approximation, and then generate the efficient QEP. The QEP is a set of consecutive phases that must be executed in the specified order. Each phase consists of one or more primitive operations, and all primitive operations within the same phase can be executed in parallel. We implemented the optimizer, a random spatial query generator, and a simulated spatial database. The query generator produces random queries for the purpose of testing the optimizer. The simulated spatial database is a set of functions that simulate primitive spatial operations; each returns the cost of the corresponding primitive operation according to its input parameters. 
We fed the randomly generated queries to the optimizer, obtained the generated QEPs, and passed them to the spatial database simulator. We used the experimental results to discuss the optimizer's characteristics and performance. The optimizer was designed for databases with a very large number of spatial objects; nevertheless, most of the concepts we used can be applied to all spatial information systems.
Geographica: A Benchmark for Geospatial RDF Stores
Geospatial extensions of SPARQL like GeoSPARQL and stSPARQL have recently
been defined and corresponding geospatial RDF stores have been implemented.
However, there is no widely used benchmark for evaluating geospatial RDF stores
which takes into account recent advances to the state of the art in this area.
In this paper, we develop a benchmark, called Geographica, which uses both
real-world and synthetic data to test the offered functionality and the
performance of some prominent geospatial RDF stores.