Investigating the use of semantic technologies in spatial mapping applications
Semantic Web technologies are well suited to building context-aware information retrieval applications. However, the geospatial aspect of context awareness presents unique challenges: semantically modelling geographical references for efficient handling of spatial queries, reconciling heterogeneity at the semantic and geo-representation levels, maintaining quality of service and communication scalability, and efficiently rendering the results of spatial queries. In this paper, we describe the modelling decisions taken to address these challenges, analysing our implementation of an intelligent planning and recommendation tool that provides location-aware advice for a specific application domain. The paper contributes a methodology for integrating heterogeneous geo-referenced data into semantic knowledge bases, and proposes mechanisms for efficient spatial interrogation of the knowledge base and for optimising the rendering of dynamically retrieved, context-relevant information on a web frontend.
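The abstract does not spell out the query interface, but the spatial-query challenge it describes is commonly addressed with GeoSPARQL. A minimal sketch, assuming a GeoSPARQL-capable SPARQL endpoint: the endpoint URL and the ex:PointOfInterest class are hypothetical, while geof:distance is part of the GeoSPARQL standard.

```python
# Minimal sketch: querying a GeoSPARQL-enabled knowledge base for points of
# interest near a user's location. ENDPOINT and ex:PointOfInterest are
# hypothetical; geof:distance is the standard GeoSPARQL distance function.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/sparql"  # hypothetical endpoint

query = """
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX uom:  <http://www.opengis.net/def/uom/OGC/1.0/>
PREFIX ex:   <http://example.org/ontology#>

SELECT ?poi ?wkt WHERE {
  ?poi a ex:PointOfInterest ;
       geo:hasGeometry/geo:asWKT ?wkt .
  # Keep only features within 2 km of the user's position.
  FILTER (geof:distance(?wkt,
          "POINT(-1.2577 51.7520)"^^geo:wktLiteral,
          uom:metre) < 2000)
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["poi"]["value"], row["wkt"]["value"])
```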
Hillview: A trillion-cell spreadsheet for big data
Hillview is a distributed spreadsheet for browsing very large datasets that
cannot be handled by a single machine. As a spreadsheet, Hillview provides a
high degree of interactivity that permits data analysts to explore information
quickly along many dimensions while switching visualizations on a whim. To
provide the required responsiveness, Hillview introduces visualization
sketches, or vizketches, as a simple idea to produce compact data
visualizations. Vizketches combine algorithmic techniques for data
summarization with computer graphics principles for efficient rendering. While
simple, vizketches are effective at scaling the spreadsheet by parallelizing
computation, reducing communication, providing progressive visualizations, and
offering precise accuracy guarantees. Using Hillview running on eight servers,
we can navigate and visualize datasets of tens of billions of rows and
trillions of cells, far beyond the published capabilities of competing
systems.
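The vizketch mechanism itself is internal to Hillview; as a rough illustration of the underlying idea (a small, mergeable summary computed independently on each partition and combined before rendering), here is a minimal Python sketch. The class, the fixed-bin histogram, and the partitioning scheme are illustrative assumptions, not Hillview's actual API.

```python
# Minimal sketch of the vizketch idea: compute a compact, mergeable summary
# (here, a fixed-bin histogram) per data partition in parallel, then merge
# the partial results; the merged bins are what the frontend would render.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass, field

@dataclass
class HistogramSketch:
    lo: float
    hi: float
    bins: list = field(default_factory=list)
    n_bins: int = 64

    def __post_init__(self):
        if not self.bins:
            self.bins = [0] * self.n_bins

    def add(self, x: float) -> None:
        # Map the value to a bucket, clamping out-of-range values.
        i = int((x - self.lo) / (self.hi - self.lo) * self.n_bins)
        self.bins[min(max(i, 0), self.n_bins - 1)] += 1

    def merge(self, other: "HistogramSketch") -> "HistogramSketch":
        # Element-wise addition, so partitions combine in any order.
        merged = HistogramSketch(self.lo, self.hi)
        merged.bins = [a + b for a, b in zip(self.bins, other.bins)]
        return merged

def sketch_partition(values):
    s = HistogramSketch(0.0, 100.0)
    for v in values:
        s.add(v)
    return s

if __name__ == "__main__":
    partitions = [[i % 100 for i in range(100_000)] for _ in range(8)]
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(sketch_partition, partitions))
    result = partials[0]
    for p in partials[1:]:
        result = result.merge(p)
    print(result.bins[:8])  # render these counts as bar heights
```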
Massively-Parallel Break Detection for Satellite Data
The field of remote sensing now faces huge amounts of data.
While this offers a variety of exciting research opportunities, it also yields
significant challenges regarding both computation time and space requirements.
In practice, the sheer data volumes render existing approaches too slow for
processing and analyzing all the available data. This work aims at accelerating
BFAST, one of the state-of-the-art methods for break detection given satellite
image time series. In particular, we propose a massively-parallel
implementation for BFAST that can effectively make use of modern parallel
compute devices such as GPUs. Our experimental evaluation shows that the
proposed GPU implementation is up to four orders of magnitude faster than the
existing publicly available implementation and up to ten times faster than a
corresponding multi-threaded CPU execution. The dramatic decrease in running
time renders the analysis of significantly larger datasets possible in seconds
or minutes instead of hours or days. We demonstrate the practical benefits of
our implementations on both artificial and real datasets.
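BFAST itself combines season-trend decomposition with structural-break tests; the sketch below shows only the data-parallel pattern that makes a GPU port effective, namely fitting one regression per pixel in a single batched operation. It is a drastic simplification for illustration, not the paper's algorithm; replacing numpy with cupy would run the same code on a GPU.

```python
# Simplified, data-parallel break detection: fit a linear trend to a
# "history" window for every pixel at once, then flag pixels whose later
# observations deviate strongly from the fitted trend. Not actual BFAST.
import numpy as np

def detect_breaks(series: np.ndarray, split: int, k: float = 3.0) -> np.ndarray:
    """series: (n_pixels, n_times) array; split: end of the history window."""
    t = np.arange(split, dtype=series.dtype)
    X = np.stack([np.ones_like(t), t], axis=1)           # (split, 2) design
    # Batched least squares: one (intercept, slope) pair per pixel.
    coef, *_ = np.linalg.lstsq(X, series[:, :split].T, rcond=None)
    resid = series[:, :split] - (X @ coef).T
    sigma = resid.std(axis=1, keepdims=True) + 1e-12

    # Extrapolate into the monitoring window and test standardized deviations.
    t_new = np.arange(split, series.shape[1], dtype=series.dtype)
    X_new = np.stack([np.ones_like(t_new), t_new], axis=1)
    dev = (series[:, split:] - (X_new @ coef).T) / sigma
    return np.abs(dev).max(axis=1) > k                   # True = break suspected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 200)).astype(np.float32)
    data[:100, 150:] += 5.0                              # inject breaks
    print(detect_breaks(data, split=100).sum())          # ~100 flagged pixels
```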
Semantic Pose using Deep Networks Trained on Synthetic RGB-D
In this work we address the problem of indoor scene understanding from RGB-D
images. Specifically, we propose to find instances of common furniture classes,
their spatial extent, and their pose with respect to generalized class models.
To accomplish this, we use a deep, wide, multi-output convolutional neural
network (CNN) that predicts class, pose, and location of possible objects
simultaneously. To overcome the lack of large annotated RGB-D training sets
(especially those with pose), we use an on-the-fly rendering pipeline that
generates realistic cluttered room scenes in parallel to training. We then
perform transfer learning on the relatively small amount of publicly available
annotated RGB-D data, and find that our model is able to successfully annotate
even highly challenging real scenes. Importantly, our trained network is able
to understand noisy and sparse observations of highly cluttered scenes with a
remarkable degree of accuracy, inferring class and pose from a very limited set
of cues. Additionally, our neural network is only moderately deep and computes
class, pose and position in tandem, so the overall run-time is significantly
faster than existing methods, estimating all output parameters in parallel
on a GPU in seconds.
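The shared-trunk, multi-head structure the abstract describes can be sketched in a few lines of PyTorch. Layer sizes, the pose parameterization (a single yaw angle as sin/cos), and the loss weighting below are illustrative assumptions, not the paper's actual network.

```python
# Minimal sketch of a multi-output CNN: a shared convolutional trunk with
# separate heads for object class, pose, and location, trained jointly so
# all outputs come from one forward pass.
import torch
import torch.nn as nn

class MultiOutputCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),  # 4-channel RGB-D
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(64, n_classes)   # object class logits
        self.pose_head = nn.Linear(64, 2)          # (sin, cos) of yaw
        self.loc_head = nn.Linear(64, 3)           # (x, y, z) offset

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.pose_head(h), self.loc_head(h)

model = MultiOutputCNN(n_classes=10)
x = torch.randn(8, 4, 128, 128)                    # batch of RGB-D crops
cls, pose, loc = model(x)

# Joint loss: all three heads are optimized together in one backward pass.
loss = (nn.functional.cross_entropy(cls, torch.randint(0, 10, (8,)))
        + nn.functional.mse_loss(pose, torch.randn(8, 2))
        + nn.functional.mse_loss(loc, torch.randn(8, 3)))
loss.backward()
```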