54 research outputs found

    A semantic data dictionary method for database schema integration in CIESIN

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear part of this mission includes providing a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air-quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences, but this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a "join" on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system will be overcoming heterogeneity, which falls into two broad categories. "Database system" heterogeneity involves differences in data models and packages. "Data semantic" heterogeneity involves differences in terminology between disciplines, which translate into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/87323/2/139_1.pd
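    The cross-disciplinary "join" envisioned above can be sketched in a few lines, assuming a data dictionary has already mapped both schemas onto shared keys. All data values, column names, and key choices below are hypothetical illustrations, not CIESIN data.

```python
# Illustrative sketch (not CIESIN's system): joining a hypothetical
# emphysema data set with a hypothetical air-quality data set, once a
# data dictionary has mapped both schemas onto shared keys (county, year).

emphysema = [  # hypothetical medical data set
    {"county": "Wayne", "year": 1993, "emphysema_cases": 412},
    {"county": "Oakland", "year": 1993, "emphysema_cases": 158},
]

air_quality = [  # hypothetical environmental data set
    {"county": "Wayne", "year": 1993, "mean_pm10_ugm3": 41.0},
    {"county": "Macomb", "year": 1993, "mean_pm10_ugm3": 27.5},
]

def semantic_join(left, right, keys):
    """Inner-join two row lists on the dictionary-mediated shared keys."""
    index = {tuple(r[k] for k in keys): r for r in right}
    return [
        {**l, **index[tuple(l[k] for k in keys)]}
        for l in left
        if tuple(l[k] for k in keys) in index
    ]

# Only Wayne/1993 appears in both data sets, so one merged row results.
joined = semantic_join(emphysema, air_quality, ("county", "year"))
```

    The hard part, of course, is not the join itself but the dictionary-driven schema mapping that makes the key columns comparable in the first place.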

    Efficient key establishment for group-based wireless sensor deployments

    Establishing pairwise keys for each pair of neighboring sensors is the first concern in securing communication in sensor networks. This task is challenging because resources are limited. Several random key predistribution schemes have been proposed, but they are appropriate only when sensors are uniformly distributed with high density. These schemes also suffer from a dramatic degradation of security when the number of compromised sensors exceeds a threshold. In this paper, we present a group-based key predistribution scheme, GKE, which enables any pair of neighboring sensors to establish a unique pairwise key, regardless of sensor density or distribution. Since pairwise keys are unique, security in GKE degrades gracefully as the number of compromised nodes increases. In addition, GKE is very efficient since it requires only localized communication to establish pairwise keys, thus significantly reducing the communication overhead. Our security analysis and performance evaluation illustrate the superiority of GKE in terms of resilience, connectivity, communication overhead and memory requirement. Categories and Subject Descriptors C.2 [Computer-Communication Networks]: security and protection
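    The key property claimed above, that each unordered pair of nodes shares a key no other pair knows, can be illustrated with a keyed hash. This is a deliberately simplified sketch: GKE itself derives keys from predistributed group material via localized communication, not from a single online master secret as assumed here.

```python
import hashlib
import hmac

def pairwise_key(group_secret: bytes, id_a: int, id_b: int) -> bytes:
    """Derive a key unique to the unordered pair (id_a, id_b).

    Simplified illustration only: GKE uses group-based predistribution,
    not a shared master secret, precisely to limit compromise damage.
    """
    lo, hi = sorted((id_a, id_b))          # order-independent pair encoding
    msg = lo.to_bytes(4, "big") + hi.to_bytes(4, "big")
    return hmac.new(group_secret, msg, hashlib.sha256).digest()

secret = b"hypothetical-group-secret"
k_ab = pairwise_key(secret, 7, 12)   # node 7's view of the link to 12
k_ba = pairwise_key(secret, 12, 7)   # node 12's view of the same link
k_ac = pairwise_key(secret, 7, 13)   # a different link gets a different key
```

    Because every link key is distinct, capturing one node exposes only that node's own links, which is the graceful-degradation property the abstract describes.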

    Secure synthesis and activation of protocol translation agents

    Protocol heterogeneity is pervasive and is a major obstacle to effective integration of services in large systems. However, standardization is not a complete answer. Standardized protocols must be general to prevent a proliferation of standards, and can therefore become complex and inefficient. Specialized protocols can be simple and efficient, since they can ignore situations that are precluded by application characteristics. One solution is to maintain agents for translating between protocols. However, n protocol types would require O(n^2) agents, since an agent must exist for each source-destination pair. A better solution is to create agents as needed. This paper examines the issues in the creation and management of protocol translation agents. We focus on the design of Nestor, an environment for synthesizing and managing RPC protocol translation agents. We provide rationale for the translation mechanism and the synthesis environment, with specific emphasis on the security issues arising in Nestor. Nestor has been implemented and manages heterogeneous RPC agents generated using the Cicero protocol construction language and the URPC toolkit.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/49229/2/ds7402.pd
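    The "create agents as needed" idea can be sketched as a lazy factory: only translator pairs actually requested get synthesized, so a deployment that uses k protocol pairs builds k agents rather than all O(n^2). The class and protocol names below are hypothetical stand-ins, not Nestor's API.

```python
# Sketch (not Nestor's actual interface): synthesize protocol translation
# agents on demand and cache them, instead of building every
# (source, destination) pair up front.

class AgentFactory:
    def __init__(self):
        self._agents = {}     # (src, dst) -> translation function
        self.synthesized = 0  # how many agents were actually built

    def get_agent(self, src: str, dst: str):
        key = (src, dst)
        if key not in self._agents:          # build lazily, once per pair
            self._agents[key] = self._synthesize(src, dst)
            self.synthesized += 1
        return self._agents[key]

    def _synthesize(self, src: str, dst: str):
        # Stand-in for real stub generation from protocol descriptions.
        def translate(message: dict) -> dict:
            return {**message, "protocol": dst, "translated_from": src}
        return translate

factory = AgentFactory()
agent = factory.get_agent("SunRPC", "Courier")
reply = agent({"op": "stat", "protocol": "SunRPC"})
factory.get_agent("SunRPC", "Courier")  # cache hit: no second synthesis
```

    In a real system the `_synthesize` step is the expensive part (compiling stubs from protocol specifications), which is exactly why caching and on-demand creation pay off.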

    Stochastic Route Planning for Electric Vehicles

    Electric Vehicle routing is often modeled as a generalization of the energy-constrained shortest path problem, taking travel times and energy consumptions on road network edges to be deterministic. In practice, however, energy consumption and travel times are stochastic distributions, typically estimated from real-world data. Consequently, real-world routing algorithms can make only probabilistic feasibility guarantees. Current stochastic route planning methods either fail to ensure that routes are energy-feasible, or if they do, have not been shown to scale well to large graphs. Our work bridges this gap by finding routes that maximize on-time arrival probability, and the set of non-dominated routes, under two criteria for stochastic route feasibility: ?-feasibility and p-feasibility. Our ?-feasibility criterion ensures energy-feasibility in expectation, using expected energy values along network edges. Our p-feasibility criterion accounts for the actual distribution along edges, and keeps the stranding probability along the route below a user-specified threshold p. We generalize the charging function propagation algorithm to accept stochastic edge weights to find routes that maximize the probability of on-time arrival, while maintaining ?- or p-feasibility. We also extend multi-criteria Contraction Hierarchies to accept stochastic edge weights and offer heuristics to speed up queries. Our experiments on a real-world road network instance of the Los Angeles area show that our methods answer stochastic queries in reasonable time, that the two criteria produce similar routes for longer deadlines, but that ?-feasibility queries can be much faster than p-feasibility queries.
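    A p-feasibility check can be illustrated under a simplifying assumption of independent, normally distributed per-edge consumptions, so the route total is itself normal and the stranding probability has a closed form. The paper works with general empirical distributions; the route data and battery capacity below are hypothetical.

```python
# Illustrative p-feasibility check, assuming independent Normal per-edge
# energy consumptions (a modeling simplification; the paper's method
# handles general distributions estimated from real-world data).

from statistics import NormalDist

def stranding_probability(edges, battery_kwh):
    """P(total consumption exceeds the battery) for a candidate route.

    edges: list of (mean_kwh, variance) per edge; sums of independent
    normals are normal, so the route total is Normal(sum mu, sum var).
    """
    mu = sum(m for m, _ in edges)
    var = sum(v for _, v in edges)
    return 1.0 - NormalDist(mu, var ** 0.5).cdf(battery_kwh)

# Hypothetical 3-edge route: (mean kWh, variance) per edge.
route = [(3.0, 0.25), (5.0, 0.5), (4.0, 0.25)]  # total: mu = 12, var = 1

p = 0.05                       # user-specified stranding threshold
prob = stranding_probability(route, battery_kwh=14.0)
feasible = prob <= p           # p-feasible iff stranding prob stays below p
expected_feasible = sum(m for m, _ in route) <= 14.0  # expectation criterion
```

    Here the route is feasible in expectation (12 kWh expected vs. 14 kWh capacity) and also p-feasible, since the stranding probability is about 2.3%, below the 5% threshold. The two criteria can disagree on tighter budgets, which is the distinction the abstract draws.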

    OM: A Tunable Framework for Optimizing Continuous Queries over Data Streams

    Abstract. Continuous query (CQ) is an important class of queries in Data Stream Management Systems. While much work has been done on algorithms for processing CQs, less attention has been paid to the issue of optimizing such queries. In this paper, we first argue that parameters such as output rate and main memory utilization are more important cost objectives for CQ performance than disk I/O. We propose a novel framework, called OM, to optimize the memory utilization and output rate of CQs. Our technique monitors input stream and query characteristics, and switches plans only at certain boundary conditions. Our approach is tunable to application requirements and enables a user to make the query performance versus optimization overhead trade-off. Experimental analysis shows that our approach has high fidelity in predicting correct plans and is promising in terms of minimizing query scheduling overhead.
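    The "switch plans only at boundary conditions" idea can be sketched as a monitor that re-selects a plan only when the observed input rate crosses a configured boundary, so small fluctuations never trigger re-optimization. The plan names, rate boundary, and interface below are hypothetical, not OM's actual design.

```python
# Sketch (hypothetical interface, not OM's): pick between two query plans
# based on a monitored input rate, switching only when the rate crosses a
# boundary, which bounds the plan-switching overhead.

class PlanSelector:
    def __init__(self, boundary_tuples_per_s: float):
        self.boundary = boundary_tuples_per_s
        self.plan = "memory-optimized"   # default plan for low input rates
        self.switches = 0                # how often we actually re-planned

    def observe(self, input_rate: float) -> str:
        wanted = ("rate-optimized" if input_rate > self.boundary
                  else "memory-optimized")
        if wanted != self.plan:          # switch only at boundary crossings
            self.plan = wanted
            self.switches += 1
        return self.plan

sel = PlanSelector(boundary_tuples_per_s=1000.0)
# Rate rises past the boundary once, then falls back: exactly 2 switches.
plans = [sel.observe(r) for r in (200, 800, 1500, 1600, 900)]
```

    Tuning the boundary (or adding hysteresis around it) is one way to expose the performance-versus-overhead trade-off the abstract mentions.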

    Designing an agent synthesis system for cross-RPC communication

    Abstract. Remote procedure call (RPC) is the most popular paradigm used today to build distributed systems and applications. As a consequence, the term "RPC" has grown to include a range of vastly different protocols above the transport layer. A resulting problem is that programs often use different RPC protocols, cannot be interconnected directly, and building a solution for each case in a large heterogeneous environment is prohibitively expensive. In this paper, we describe the design of a system that can synthesize programs (RPC agents) to accommodate RPC heterogeneities. Because of its synthesis capability, our system also facilitates the design and implementation of new RPC protocols through rapid prototyping. We have built a prototype system to validate the design and to estimate the agent development costs and cross-RPC performance. Our evaluation shows that our synthesis approach provides a more general solution tha