3,025 research outputs found

    Amorphous Placement and Retrieval of Sensory Data in Sparse Mobile Ad-Hoc Networks

    Abstract—Personal communication devices are increasingly being equipped with sensors that are able to passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio using the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the whole network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors for answers. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query answering ratio at a 25% to 35% savings in consumed power over directed placement. National Science Foundation (CNS Cybertrust 0524477, CNS NeTS 0520166, CNS ITR 0205294, EIA RI 0202067).
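
    The abstract describes amorphous placement as a protocol: bounded local caches, an informed shuffle of cached samples whenever two nodes meet, and a query path that tries the local cache before asking one-hop neighbors. The Python sketch below illustrates that idea under stated assumptions; the cache size, eviction rule, shuffle heuristic, and all names (Node, shuffle_on_encounter, answer_query) are illustrative and are not the paper's implementation.

        # Minimal sketch of the amorphous-placement idea; all parameters are assumptions.
        import random

        CACHE_SIZE = 32          # assumed per-node cache capacity (number of samples)

        class Node:
            def __init__(self, node_id):
                self.node_id = node_id
                self.cache = {}  # region id -> most recently seen sample for that region

            def sense(self, region, sample):
                """Passively store a locally collected sample, evicting at random if full."""
                if region not in self.cache and len(self.cache) >= CACHE_SIZE:
                    self.cache.pop(random.choice(list(self.cache)))
                self.cache[region] = sample

        def shuffle_on_encounter(a, b):
            """Informed shuffle on an encounter: each node hands over samples the other
            is missing, so cached data diffuses toward uniform field coverage."""
            for src, dst in ((a, b), (b, a)):
                for region, sample in list(src.cache.items()):
                    if region not in dst.cache and len(dst.cache) < CACHE_SIZE:
                        dst.cache[region] = sample

        def answer_query(node, region, neighbors):
            """Local cache first; if that fails, ask direct (one-hop) neighbors only,
            avoiding the multi-hop overhead of directed placement."""
            if region in node.cache:
                return node.cache[region]
            for n in neighbors:
                if region in n.cache:
                    return n.cache[region]
            return None      # the query fails rather than paying a multi-hop cost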

    Strategies for Handling Spatial Uncertainty due to Discretization

    Geographic information systems (GISs) allow users to analyze geographic phenomena within areas of interest, leading to an understanding of their relationships and thus providing a helpful tool for decision-making. Neglecting the inherent uncertainties in spatial representations may result in undesired misinterpretations. There are several sources of uncertainty contributing to the quality of spatial data within a GIS: imperfections (e.g., inaccuracy and imprecision) and effects of discretization. An example of discretization in the thematic domain is the chosen number of classes used to represent a spatial phenomenon (e.g., air temperature). In order to improve the utility of a GIS, the inclusion of a formal data quality model is essential. A data quality model stores, specifies, and handles the data required to provide uncertainty information for GIS applications. This dissertation develops a data quality model that associates sources of uncertainty with units of information (e.g., measurement and coverage) in a GIS. The data quality model provides a basis for constructing metrics that deal with different sources of uncertainty and for supporting tools for propagation and cross-propagation. Two specific metrics are developed that focus on two sources of uncertainty: inaccuracy and discretization. The first metric, called detectability, identifies the minimal resolvable object size within a sampled field of a continuous variable and is calculated as a spatially varying variable. The second metric, called reliability, investigates the effects of discretization: it estimates the variation of the underlying random variable and determines the reliability of a representation, again as a spatially varying variable. This metric is then used to assess the relative influence of the number of sample points versus the degree of variation on the reliability of a representation. The results of this investigation show that the variation influences the reliability of a representation more than the number of sample points.
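
    The reliability metric is described as a spatially varying estimate of how much the underlying variable varies relative to the chosen discretization. The sketch below shows one plausible, purely illustrative way to compute such a per-cell indicator: a moving-window standard deviation compared against the thematic class width. The window size, class width, and the specific formula are assumptions, not the dissertation's definitions.

        # Illustrative only: a per-cell "reliability" indicator for a discretized field.
        import numpy as np

        def local_variation(field, window=3):
            """Moving-window standard deviation as a simple estimate of local variation."""
            h, w = field.shape
            pad = window // 2
            padded = np.pad(field, pad, mode="edge")
            out = np.empty(field.shape, dtype=float)
            for i in range(h):
                for j in range(w):
                    out[i, j] = padded[i:i + window, j:j + window].std()
            return out

        def reliability(field, class_width, window=3):
            """High where local variation is small relative to the width of the
            thematic classes used for discretization (assumed formula)."""
            sigma = local_variation(field, window)
            return class_width / (class_width + sigma)

        # Example: an air-temperature field discretized into 2-degree classes.
        temps = np.random.default_rng(0).normal(15.0, 3.0, size=(20, 20))
        rel = reliability(temps, class_width=2.0)   # values near 1 are more reliable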

    10381 Summary and Abstracts Collection -- Robust Query Processing

    Dagstuhl seminar 10381 on robust query processing (held 19.09.10 - 24.09.10) brought together a diverse set of researchers and practitioners with a broad range of expertise for the purpose of fostering discussion and collaboration regarding causes, opportunities, and solutions for achieving robust query processing. The seminar strove to build a unified view across the loosely coupled system components responsible for the various stages of database query processing. Participants were chosen for their experience with database query processing and, where possible, their prior work in academic research or in product development towards robustness in database query processing. In order to pave the way to motivate, measure, and protect future advances in robust query processing, seminar 10381 focused on developing tests for measuring the robustness of query processing. In these proceedings, we first review the seminar topics, goals, and results, and then present abstracts or notes from some of the seminar break-out sessions. We also include, as an appendix, the robust query processing reading list that was collected and distributed to participants before the seminar began, as well as summaries of a few of those papers contributed by some participants.

    Adaptive work placement for query processing on heterogeneous computing resources

    The hardware landscape is currently changing from homogeneous multi-core systems towards heterogeneous systems with many different computing units, each with its own characteristics. This trend is a great opportunity for database systems to increase overall performance if the heterogeneous resources can be utilized efficiently. To achieve this, the main challenge is to place the right work on the right computing unit. Current approaches to this placement for query processing assume that data cardinalities of intermediate results can be correctly estimated. However, this assumption does not hold for complex queries. To overcome this problem, we propose an adaptive placement approach that is independent of cardinality estimation of intermediate results. Our approach is incorporated in a novel adaptive placement sequence. Additionally, we implement our approach as an extensible virtualization layer to demonstrate its broad applicability with multiple database systems. In our evaluation, we clearly show that our approach significantly improves OLAP query processing on heterogeneous hardware, while being adaptive enough to react to changing cardinalities of intermediate query results.
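
    The abstract argues for deciding operator placement at run time, once the true size of each intermediate result is known, instead of trusting optimizer cardinality estimates. The sketch below illustrates that idea under stated assumptions; the threshold, the CPU/GPU rule, and the toy operators are hypothetical and do not reproduce the paper's adaptive placement sequence or virtualization layer.

        # Minimal sketch of runtime-adaptive work placement for a query pipeline.
        from time import perf_counter

        GPU_THRESHOLD = 100_000   # assumed: offload to the GPU only above this cardinality

        def choose_device(observed_cardinality):
            """Decide per operator from the observed input size, not an optimizer estimate."""
            return "gpu" if observed_cardinality >= GPU_THRESHOLD else "cpu"

        def run_pipeline(operators, rows):
            """Re-decide the placement for each operator once the true size of its
            input (the previous intermediate result) is known."""
            intermediate = rows
            for op in operators:
                device = choose_device(len(intermediate))
                start = perf_counter()
                intermediate = op(intermediate, device)   # dispatch to the chosen device
                print(f"{op.__name__}: {device}, {len(intermediate)} rows, "
                      f"{perf_counter() - start:.4f}s")
            return intermediate

        # Hypothetical operators; a real system would dispatch to CPU/GPU kernels.
        def filter_op(rows, device):
            return [r for r in rows if r % 7 == 0]

        def dedup_op(rows, device):
            return list(set(rows))

        result = run_pipeline([filter_op, dedup_op], list(range(500_000)))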

    Location-Dependent Query Processing Under Soft Real-Time Constraints
