The Footprint Database and Web Services of the Herschel Space Observatory
Data from the Herschel Space Observatory is freely available to the public
but no uniformly processed catalogue of the observations has been published so
far. To date, the Herschel Science Archive does not contain the exact sky
coverage (footprint) of individual observations and supports search for
measurements based on bounding circles only. Drawing on previous experience in
implementing footprint databases, we built the Herschel Footprint Database and
Web Services for the Herschel Space Observatory to provide efficient search
capabilities for typical astronomical queries. The database was designed with
the following main goals in mind: (a) provide a unified data model for
meta-data of all instruments and observational modes, (b) quickly find
observations covering a selected object and its neighbourhood, (c) quickly find
every observation in a larger area of the sky, (d) allow for finding solar
system objects crossing observation fields. As a first step, we developed a
unified data model of observations of all three Herschel instruments for all
pointing and instrument modes. Then, using telescope pointing information and
observational meta-data, we compiled a database of footprints. As opposed to
methods using pixellation of the sphere, we represent sky coverage in an exact
geometric form allowing for precise area calculations. For easier handling of
Herschel observation footprints with rather complex shapes, two algorithms were
implemented to reduce the outline. Furthermore, a new visualisation tool to
plot footprints with various spherical projections was developed. Indexing of
the footprints using Hierarchical Triangular Mesh makes it possible to quickly
find observations based on sky coverage, time and meta-data. The database is
accessible via a web site (http://herschel.vo.elte.hu) and also as a set of
REST web service functions. (Accepted for publication in Experimental Astronomy.)
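The Hierarchical Triangular Mesh index mentioned above subdivides the sphere recursively into spherical triangles ("trixels"), so nearby points share trixel-name prefixes and sky-coverage searches reduce to prefix scans. A minimal sketch of trixel naming, assuming the standard HTM conventions (octahedron root faces, midpoint subdivision); the function and variable names here are illustrative, not the Footprint Database's actual code:

```python
import math

# Octahedron vertices of the level-0 mesh (standard HTM convention).
V = {
    "v0": (0.0, 0.0, 1.0), "v1": (1.0, 0.0, 0.0),
    "v2": (0.0, 1.0, 0.0), "v3": (-1.0, 0.0, 0.0),
    "v4": (0.0, -1.0, 0.0), "v5": (0.0, 0.0, -1.0),
}
FACES = [  # the 8 root trixels, each a counterclockwise vertex triple
    ("S0", V["v1"], V["v5"], V["v2"]), ("S1", V["v2"], V["v5"], V["v3"]),
    ("S2", V["v3"], V["v5"], V["v4"]), ("S3", V["v4"], V["v5"], V["v1"]),
    ("N0", V["v1"], V["v0"], V["v4"]), ("N1", V["v4"], V["v0"], V["v3"]),
    ("N2", V["v3"], V["v0"], V["v2"]), ("N3", V["v2"], V["v0"], V["v1"]),
]

def _cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _mid(a, b):
    # midpoint of two unit vectors, projected back onto the sphere
    m = ((a[0]+b[0])/2, (a[1]+b[1])/2, (a[2]+b[2])/2)
    n = math.sqrt(_dot(m, m))
    return (m[0]/n, m[1]/n, m[2]/n)

def _inside(p, v0, v1, v2):
    # p lies in the spherical triangle if it is on the inner side
    # of all three great-circle edges
    return (_dot(_cross(v0, v1), p) >= 0 and
            _dot(_cross(v1, v2), p) >= 0 and
            _dot(_cross(v2, v0), p) >= 0)

def htm_name(ra_deg, dec_deg, depth=8):
    """Return the name of the depth-level trixel containing (ra, dec)."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    p = (math.cos(dec)*math.cos(ra), math.cos(dec)*math.sin(ra), math.sin(dec))
    for name, v0, v1, v2 in FACES:
        if _inside(p, v0, v1, v2):
            break
    for _ in range(depth):
        w0, w1, w2 = _mid(v1, v2), _mid(v0, v2), _mid(v0, v1)
        for child, tri in (("0", (v0, w2, w1)), ("1", (v1, w0, w2)),
                           ("2", (v2, w1, w0)), ("3", (w0, w1, w2))):
            if _inside(p, *tri):
                name += child
                v0, v1, v2 = tri
                break
    return name
```

Because a trixel's name is a prefix of all its descendants' names, "find every observation in this region" becomes a range query over the name strings, which a conventional database index handles efficiently.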
Multiverse: Mobility pattern understanding improves localization accuracy
This paper presents the design and implementation of Multiverse, a practical indoor localization system that can be deployed on top of existing WiFi infrastructure. Although existing WiFi-based positioning techniques achieve acceptable accuracy, we find that they are not practical for use in buildings because they require installing sophisticated access point (AP) hardware, or a special application on client devices, to supply the system with extra information. Multiverse achieves sub-room-precision estimates while using only the received signal strength indication (RSSI) readings already available in most of today's buildings through their installed APs, together with the assumption that most users walk at normal speed. This level of simplicity would promote ubiquitous indoor localization in the era of smartphones.
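RSSI-based positioning of the kind Multiverse builds on typically converts signal strength to range with a log-distance path-loss model and then solves for the position that best fits the ranges. A minimal sketch under that model; the calibration constants and the gradient-descent solver here are illustrative assumptions, not Multiverse's actual pipeline:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=3.0):
    """Log-distance path-loss model: distance (m) from an RSSI reading.
    tx_power_dbm is the RSSI expected at 1 m; both constants are
    illustrative calibration values, not measured ones."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(aps, readings, steps=2000, lr=0.01):
    """Least-squares position fit from AP coordinates and RSSI readings,
    via plain gradient descent on the squared range residuals."""
    dists = [rssi_to_distance(r) for r in readings]
    # start from the centroid of the APs
    x = sum(p[0] for p in aps) / len(aps)
    y = sum(p[1] for p in aps) / len(aps)
    for _ in range(steps):
        gx = gy = 0.0
        for (ax, ay), d in zip(aps, dists):
            r = math.hypot(x - ax, y - ay)
            if r == 0.0:
                continue  # gradient undefined exactly at an AP
            # d/dx of (r - d)^2 = 2 (r - d) (x - ax) / r
            gx += 2 * (r - d) * (x - ax) / r
            gy += 2 * (r - d) * (y - ay) / r
        x -= lr * gx
        y -= lr * gy
    return x, y
```

In practice the ranges are noisy, which is exactly why mobility-pattern constraints such as Multiverse's walking-speed assumption help: they rule out position sequences that a pedestrian could not produce.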
A procedure for developing an acceptance test for airborne bathymetric lidar data application to NOAA charts in shallow waters
National Oceanic and Atmospheric Administration (NOAA) hydrographic data is typically acquired using sonar systems, with a small percentage acquired via airborne lidar bathymetry for near-shore areas. This study investigated an integrated approach for meeting NOAA's hydrographic survey requirements for near-shore areas of NOAA charts, using the existing topographic-bathymetric lidar data from USACE's National Coastal Mapping Program (NCMP). Because these existing NCMP bathymetric lidar datasets were not collected to NOAA hydrographic surveying standards, it is unclear if, and under what circumstances, they might aid in meeting certain hydrographic surveying requirements. The NCMP's bathymetric lidar data are evaluated through a comparison to NOAA's Office of Coast Survey hydrographic data derived from acoustic surveys. As a result, it is possible to assess whether NCMP's bathymetry can be used to fill in the data gap shoreward of the navigable area limit line (0 to 4 meters) and whether there is potential for applying NCMP's bathymetric lidar data to near-shore areas deeper than 10 meters. Based on the study results, recommendations will be provided to NOAA for the site conditions where this data will provide the most benefit. Additionally, this analysis may allow the development of future operating procedures and workflows using other topographic-bathymetric lidar datasets to help update near-shore areas of the NOAA charts.
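The comparison described above comes down to differencing lidar and sonar depths at common grid nodes and checking the differences against an allowable vertical-uncertainty budget. A sketch of that check, assuming an IHO S-44-style uncertainty envelope sqrt(a² + (b·d)²); the coefficients below resemble Order 1a values but are illustrative here, as are the toy grids:

```python
import math

def tvu(depth, a=0.5, b=0.013):
    """Maximum allowable total vertical uncertainty (m) at a given depth,
    following the IHO S-44 form sqrt(a^2 + (b*d)^2). The a/b values are
    illustrative, Order-1a-like constants; use the order required by the
    survey specification in practice."""
    return math.sqrt(a * a + (b * depth) ** 2)

def compare_surveys(lidar, sonar):
    """Depth differences at common grid nodes, plus the fraction of
    nodes whose difference falls inside the allowable envelope.
    Both inputs map (row, col) grid nodes to depths in metres."""
    common = lidar.keys() & sonar.keys()
    diffs = {k: lidar[k] - sonar[k] for k in common}
    ok = sum(1 for k, d in diffs.items() if abs(d) <= tvu(sonar[k]))
    return diffs, ok / len(diffs)
```

A real acceptance test would also need to account for tidal reduction, datum differences, and the uncertainty of the reference survey itself; this sketch only captures the node-by-node differencing step.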
Gesture-Based Input for Drawing Schematics on a Mobile Device
We present a system for drawing metro map style schematics using a gesture-based interface. This work brings together techniques in gesture recognition on touch-sensitive devices with research in schematic layout of networks. The software allows users to create and edit schematic networks, and provides an automated layout method for improving the appearance of the schematic. A case study using the metro map metaphor to visualize social networks and web site structure is described.
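Template-based gesture recognisers for touch interfaces typically begin by resampling each drawn stroke to a fixed number of equidistant points, so that drawing speed does not affect matching. The abstract does not specify which recogniser is used; this sketch follows the common $1-recogniser preprocessing convention and is illustrative only:

```python
import math

def resample(stroke, n=64):
    """Resample a drawn stroke (list of (x, y) points) to n points spaced
    evenly along its path length -- the usual first step in template
    matching, before rotation/scale normalisation and comparison."""
    length = sum(math.dist(a, b) for a, b in zip(stroke, stroke[1:]))
    step = length / (n - 1)
    out = [stroke[0]]
    acc = 0.0
    pts = list(stroke)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            # interpolate the new sample on the current segment
            t = (step - acc) / d
            q = (pts[i-1][0] + t * (pts[i][0] - pts[i-1][0]),
                 pts[i-1][1] + t * (pts[i][1] - pts[i-1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    if len(out) < n:           # guard against float round-off at the end
        out.append(pts[-1])
    return out[:n]
```

After resampling, a stroke can be compared against templates (line, curve, delete-scribble, and so on) by average point-to-point distance, which is how such an interface distinguishes editing gestures from drawing gestures.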
Investigating the effectiveness of an efficient label placement method using eye movement data
This paper focuses on improving the efficiency and effectiveness of dynamic and interactive maps in relation to the user. A label placement method with improved algorithmic efficiency is presented. Since this algorithm influences the actual placement of the name labels on the map, we tested whether the more efficient algorithm also creates more effective maps: how well is the information processed by the user? We tested 30 participants while they were working on a dynamic and interactive map display. Their task was to locate geographical names on each of the presented maps. Their eye movements were registered together with the time at which a given label was found. The gathered data reveal no difference in the users' response times, nor in the number and duration of fixations, between the two map designs. The results of this study show that the efficiency of label placement algorithms can be improved without disturbing the user's cognitive map. Consequently, we created a more efficient map without affecting its effectiveness for the user.
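A common family of efficient point-label placement methods works greedily over a small set of candidate positions per feature, keeping the first candidate that collides with nothing placed so far. The paper's actual algorithm is not reproduced here; this is an illustrative sketch of the general technique, with a hypothetical fixed label size and candidate order:

```python
def place_labels(points, w, h):
    """Greedy point-feature labelling: for each point, try the four
    standard candidate positions (label rectangle anchored at each
    corner of the point) and keep the first that overlaps no label
    placed so far. Returns {point: lower-left corner of its label};
    points whose candidates all clash are left unlabelled."""
    def overlaps(a, b):
        ax, ay = a
        bx, by = b
        # axis-aligned rectangle overlap test for two w-by-h labels
        return ax < bx + w and bx < ax + w and ay < by + h and by < ay + h

    placed = {}
    for (px, py) in points:
        candidates = [(px, py), (px - w, py), (px, py - h), (px - w, py - h)]
        for c in candidates:
            if all(not overlaps(c, q) for q in placed.values()):
                placed[(px, py)] = c
                break
    return placed
```

Greedy placement runs in near-linear time with a spatial index for the overlap test, which is what makes it attractive for the dynamic, interactive maps the study examines; the open question the eye-tracking data answers is whether the resulting placements remain as readable as those of slower optimising methods.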
Mobile 2D and 3D Spatial Query Techniques for the Geospatial Web
The increasing availability of abundant geographically referenced information in the Geospatial Web provides a variety of opportunities for developing value-added LBS applications. However, large data volumes of the Geospatial Web and small mobile device displays impose a data visualization problem, as the amount of searchable information overwhelms the display when too many query results are returned. Excessive returned results clutter the mobile display, making it harder for users to prioritize information, and cause confusion and usability problems. Mobile Spatial Interaction (MSI) research into this "information overload" problem is ongoing, where map personalization and other semantics-based filtering mechanisms are essential to de-clutter and adapt the exploration of the real world to the processing/display limitations of mobile devices. In this thesis, we propose that another way to filter this information is to intelligently refine the search space. 3DQ (3-Dimensional Query) is our novel MSI prototype for information discovery on today's location- and orientation-aware smartphones within 3D Geospatial Web environments. Our application incorporates human interactions (interpreted from embedded sensors) in the geospatial query process by determining the shape of the user's actual visibility space as a query "window" in a spatial database, e.g. an Isovist in 2D and a Threat Dome in 3D. This effectively applies hidden query removal (HQR) functionality in 360° 3D that takes into account both the horizontal and vertical dimensions when calculating the 3D search space, significantly reducing display clutter and information overload on mobile devices. The effect is a more accurate and expected search result for mobile LBS applications, returning information only on those objects visible within a user's 3D field-of-view.
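In 2D, the hidden query removal idea reduces to a visibility test: a candidate result is returned only if the line of sight from the user to the object crosses no occluding edge. A minimal sketch with straight wall segments; the full isovist construction is more involved, and the names here are illustrative:

```python
def _ccw(a, b, c):
    # signed area orientation of the triangle a-b-c
    return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly cross each other."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible_results(observer, candidates, walls):
    """Keep only candidate points whose line of sight from the observer
    crosses no wall segment -- 2D hidden query removal. walls is a list
    of ((x1, y1), (x2, y2)) occluding edges."""
    out = []
    for c in candidates:
        blocked = any(segments_intersect(observer, c, w[0], w[1])
                      for w in walls)
        if not blocked:
            out.append(c)
    return out
```

The 3D "Threat Dome" case adds the vertical dimension, occluding the query volume with building faces rather than plan-view edges, but the principle is the same: the search space is the visibility region, not a bounding circle.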
Navigation domain representation for interactive multiview imaging
Enabling users to interactively navigate through different viewpoints of a
static scene is a new interesting functionality in 3D streaming systems. While
it opens exciting perspectives towards rich multimedia applications, it
requires the design of novel representations and coding techniques in order to
solve the new challenges imposed by interactive navigation. Interactivity
clearly brings new design constraints: the encoder is unaware of the exact
decoding process, while the decoder has to reconstruct information from
incomplete subsets of data since the server can generally not transmit images
for all possible viewpoints due to resource constrains. In this paper, we
propose a novel multiview data representation that permits to satisfy bandwidth
and storage constraints in an interactive multiview streaming system. In
particular, we partition the multiview navigation domain into segments, each of
which is described by a reference image and some auxiliary information. The
auxiliary information enables the client to recreate any viewpoint in the
navigation segment via view synthesis. The decoder is then able to navigate
freely in the segment without further data request to the server; it requests
additional data only when it moves to a different segment. We discuss the
benefits of this novel representation in interactive navigation systems and
further propose a method to optimize the partitioning of the navigation domain
into independent segments, under bandwidth and storage constraints.
Experimental results confirm the potential of the proposed representation;
namely, our system leads to similar compression performance as classical
inter-view coding, while it provides the high level of flexibility that is
required for interactive streaming. Hence, our new framework represents a
promising solution for 3D data representation in novel interactive multimedia
services
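The partitioning step can be illustrated in one dimension: if viewpoints lie on a line, each segment costs one reference image plus auxiliary information that grows with the segment's extent, and the optimal segmentation follows from a classic dynamic programme. This is a toy sketch under that simplified, assumed cost model; the paper's actual optimization over bandwidth and storage constraints is richer:

```python
def partition_viewpoints(n, ref_cost, aux_cost):
    """Split viewpoints 0..n-1 into contiguous segments at minimum cost.
    Toy cost model: each segment stores one reference image (ref_cost)
    plus auxiliary data charged at aux_cost per step between a view and
    the segment's first view. Solved by dynamic programming over the
    start of the last segment. Returns (total_cost, segment bounds)."""
    INF = float("inf")
    best = [0.0] + [INF] * n   # best[i]: optimal cost for views 0..i-1
    cut = [0] * (n + 1)        # cut[i]: start of the last segment
    for i in range(1, n + 1):
        for j in range(i):     # last segment covers views j..i-1
            length = i - j
            seg = ref_cost + aux_cost * length * (length - 1) / 2
            if best[j] + seg < best[i]:
                best[i] = best[j] + seg
                cut[i] = j
    # walk the cut table backwards to recover segment boundaries
    bounds, i = [], n
    while i > 0:
        bounds.append((cut[i], i - 1))
        i = cut[i]
    return best[n], bounds[::-1]
```

The trade-off the abstract describes is visible in the model: cheap reference images favour many small segments (frequent server requests, little auxiliary data), while expensive ones favour few large segments (free navigation within a segment, more auxiliary data per segment).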
- âŠ