
    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information but also machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled new kinds of applications and businesses in which data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in a different Web format: some providers choose XML, others RDF, and still others JSON or OWL for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented by reusing traditional database technology, if desirable. Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying for many RDF graphs as well. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient, data access in a “Web of Data”.
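    The interval-labeling approach mentioned in the abstract can be illustrated with a minimal sketch. What follows is the generic pre/post-order labeling idea on a tree, not Xcerpt's actual evaluation engine (the function names and dict-based tree encoding are assumptions): one depth-first pass assigns each node an interval such that structural tests like "is a an ancestor of b?" become constant-time interval-containment checks.

```python
# Minimal sketch of interval labeling on a tree: each node gets a
# [start, end] interval from a depth-first traversal, so ancestor
# tests reduce to interval containment. Illustrative only -- not
# Xcerpt's actual evaluation engine.

def label(tree, node, counter=None, labels=None):
    """Assign DFS intervals; tree maps node -> list of children."""
    if counter is None:
        counter, labels = [0], {}
    start = counter[0]
    counter[0] += 1
    for child in tree.get(node, []):
        label(tree, child, counter, labels)
    labels[node] = (start, counter[0])
    counter[0] += 1
    return labels

def is_ancestor(labels, a, b):
    """a is an ancestor of b iff a's interval strictly contains b's."""
    (s1, e1), (s2, e2) = labels[a], labels[b]
    return s1 < s2 and e2 < e1

# Example: a tiny XML-like tree.
tree = {"html": ["head", "body"], "body": ["p1", "p2"]}
labels = label(tree, "html")
assert is_ancestor(labels, "html", "p2")
assert not is_ancestor(labels, "p1", "p2")
```

    Because the labeling is produced in a single linear traversal, structural queries over tree-shaped data need no further graph exploration, which matches the linear-time-and-space claim above.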

    Oort: User-Centric Cloud Storage with Global Queries

    In principle, the web should provide the perfect stage for user-generated content, allowing users to share their data seamlessly with other users across services and applications. In practice, the web fragments a user's data over many sites, each exposing only limited APIs for sharing. This paper describes Oort, a new cloud storage system that organizes data primarily by user rather than by application or web site. Oort allows users to choose which web software to use with their data and which other users to share it with, while giving applications powerful tools to query that data. Users rent space from providers that cooperate to provide a global, federated, general-purpose storage system. To support large-scale, multi-user applications such as Twitter and e-mail, Oort provides global queries that find and combine data from relevant users across all providers. Oort makes global query execution efficient by recognizing and merging similar queries issued by many users' application instances, largely eliminating the per-user factor in the global complexity of queries. Our evaluation predicts that an Oort implementation could handle traffic similar to that seen by Twitter using a hundred cooperating Oort servers, and that applications with other sharing patterns, like e-mail, can also be executed efficiently.
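    The query-merging idea can be sketched as follows (hypothetical names and data layout; Oort's real interface is not shown in the abstract): when many users run the same query template, e.g. "latest posts from the users I follow", differing only in their follow sets, a provider can index the shared data once and answer every instance of the template from that index, rather than scanning all data once per user.

```python
from collections import defaultdict

def merged_timelines(posts, follow_sets):
    """Answer many per-user timeline queries in one pass.

    posts: list of (author, post); follow_sets: user -> set of authors.
    A sketch of merging similar queries, not Oort's actual API.
    """
    # Group posts by author once, for all users' queries at the same time.
    by_author = defaultdict(list)
    for author, post in posts:
        by_author[author].append(post)

    # Fan results out per subscriber from the shared index.
    return {
        user: [p for author in sorted(follows) for p in by_author[author]]
        for user, follows in follow_sets.items()
    }

posts = [("alice", "p1"), ("bob", "p2"), ("alice", "p3")]
print(merged_timelines(posts, {"u1": {"alice"}, "u2": {"alice", "bob"}}))
# {'u1': ['p1', 'p3'], 'u2': ['p1', 'p3', 'p2']}
```

    The cost is one scan of the posts plus the size of the output, which is how merging removes the per-user factor from the global complexity of queries.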

    Offloading content routing cost from routers

    The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in those particular events. It is considered a good approach for implementing Internet-wide distributed systems, as it provides full decoupling of the communicating parties in time, space, and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows subscribers to express their interests very precisely. To implement a content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network; it is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes at a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually data structures based on partially ordered sets (posets). In this work, we present an algorithm that aims to improve scalability in content-based networks by reducing the workload of content-based routers, offloading some of their content routing cost to clients. We also provide experimental results on the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking, and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications.
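    The poset structure of such routing tables rests on a covering relation between subscriptions. A minimal sketch of that relation for conjunctive attribute-range filters follows (an illustration of the general idea, not the algorithm presented in this work; the filter encoding is an assumption): a router only needs to forward a subscription upstream if no already-forwarded subscription covers it.

```python
# Illustrative covering test for content-based subscriptions expressed
# as conjunctions of attribute ranges: f1 covers f2 if every event
# matching f2 also matches f1. Routing tables order subscriptions into
# a poset by this relation, so only maximal (uncovered) filters need
# to be forwarded upstream. A sketch, not full poset maintenance.

def covers(f1, f2):
    """f1, f2: dict attribute -> (low, high) range constraints."""
    for attr, (lo1, hi1) in f1.items():
        if attr not in f2:
            return False          # f2 is unconstrained here, f1 is not
        lo2, hi2 = f2[attr]
        if lo2 < lo1 or hi2 > hi1:
            return False          # f2's range escapes f1's range
    return True

broad = {"price": (0, 100)}
narrow = {"price": (10, 20), "symbol": (1, 1)}
assert covers(broad, narrow) and not covers(narrow, broad)
```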

    Parallelization of XPath/XML-based filtering algorithms using GPUs

    Master's dissertation in Informatics Engineering. This dissertation studies the feasibility of using GPUs for parallel processing applied to the notification-filtering algorithms of a publish/subscribe system. To this end, the work compares experimental results between the sequential version (on CPUs) and the parallel version of a filtering algorithm chosen as a reference. The analysis sought to provide evidence on whether the potential gains from exploiting GPUs are enough to compensate for the greater complexity of the process.
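    The data-parallel idea being evaluated can be sketched in a few lines (a stand-in illustration: NumPy vectorization plays the role of a GPU kernel here, and the range-filter subscription model is an assumption, since the dissertation targets XPath/XML filters): one notification is tested against many subscriptions simultaneously instead of in a sequential loop.

```python
import numpy as np

# Sketch of the data-parallel idea behind GPU notification filtering:
# evaluate one event against thousands of range subscriptions at once.
# NumPy vectorization stands in for a GPU kernel; this is not the
# dissertation's actual algorithm.

rng = np.random.default_rng(0)
n_subs = 100_000
lows = rng.uniform(0, 50, n_subs)           # per-subscription lower bounds
highs = lows + rng.uniform(0, 50, n_subs)   # per-subscription upper bounds

event_value = 42.0
matches = (lows <= event_value) & (event_value <= highs)  # one parallel pass
print(int(matches.sum()), "subscriptions match")
```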

    A Knowledge-based Approach for Creating Detailed Landscape Representations by Fusing GIS Data Collections with Associated Uncertainty

    Geographic Information Systems (GIS) data for a region comes in different types and is collected from different sources, such as aerial digitized color imagery, elevation data consisting of terrain heights at different points in the region, and feature data consisting of geometric information and properties about entities above or below the ground in the region. Merging GIS data and understanding the real-world information present explicitly or implicitly in that data is a challenging task. It is often done manually by domain experts because of their superior ability to recognize patterns and to combine, reason about, and relate information efficiently. When a detailed digital representation of a region is to be created, domain experts are required to make best-guess decisions about each object. For example, a human would create representations of entities by looking at the data layers collectively, noting even elements that are not visible, like a covered overpass or an underwater tunnel of a certain width and length. Such detailed representations are needed by processes like visualization and 3D modeling in applications used by the military, simulation, earth-science, and gaming communities. Many of these applications increasingly use digitally synthesized visuals and require detailed digital 3D representations to be generated quickly after the necessary initial data is acquired. Our main thesis, and a significant research contribution of this work, is that the task of creating detailed representations can be automated to a very large extent using a methodology that first fuses all available GIS data sources into knowledge base (KB) assertions (instances) representing real-world objects, using a subprocess called GIS2KB. Then, using reasoning, implicit information is inferred to define detailed 3D entity representations via a geometry definition engine called KB2Scene. Semantic Web technology is used as the semantic inferencing system and is extended with a data extraction framework. This framework enables the extraction of implicit property information using data- and image-analysis techniques; it supports the extraction of spatial-relationship values and the attribution of uncertainties to inferred details. Uncertainty is recorded per property and used under Zadeh fuzzy semantics to compute a resulting uncertainty for inferred assertional axioms. This is achieved by another major contribution of our research: a unique extension of the KB ABox Realization service using KB explanation services. Previous semantics-based research in this domain has concentrated more on improving the represented detail through the addition of artifacts like lights, signage, and crosswalks. Previous attempts to handle uncertainty in assertions use a modified reasoner expressivity and calculus. Our work differs in that separating formal knowledge from data processing allows the fusion of heterogeneous data sources that share the same context. Imprecision is modeled through uncertainty on assertions without defining a new expressivity, as long as KB explanation services are available for the expressivity used. We also believe that, in our use case, this simplifies the uncertainty calculations. The uncertainties are then available for user decisions at output.
We show that the process of creating 3D visuals from GIS data sources can be made more automated, modular, and verifiable, and that the knowledge base instances are available for other applications to use as part of a common knowledge base. We define our method’s components, discuss its advantages and limitations, and show sample results for the transportation domain.
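    The Zadeh-style uncertainty propagation described above can be sketched simply (hypothetical assertions and certainty values; in the real system the supporting axioms come from KB explanation services): under fuzzy conjunction, the certainty of an inferred axiom is the minimum of the certainties of the asserted axioms in its explanation.

```python
# Sketch of combining per-assertion uncertainty under Zadeh fuzzy
# semantics: the certainty of an inferred axiom is the minimum of the
# certainties of the asserted axioms in its explanation (fuzzy
# conjunction). Hypothetical data; not the authors' KB code.

certainty = {
    "Road(r1)": 0.9,           # from feature data
    "crosses(r1, w1)": 0.7,    # inferred from imagery, less certain
    "Water(w1)": 0.95,
}

def inferred_certainty(explanation):
    """explanation: asserted axioms supporting an inferred axiom."""
    return min(certainty[a] for a in explanation)

# e.g. a bridge entity inferred from a road crossing a water body:
print(inferred_certainty(["Road(r1)", "crosses(r1, w1)", "Water(w1)"]))  # 0.7
```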

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions, presented together with 2 invited papers, were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    9th International Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI 2021)

    Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at classification and knowledge discovery that can be used for many purposes in Artificial Intelligence (AI). The objective of the ninth edition of the FCA4AI workshop (see http://www.fca4ai.hse.ru/) is to investigate several issues, such as: how can FCA support various AI activities (knowledge discovery, knowledge engineering, machine learning, data mining, information retrieval, recommendation, ...); how can FCA be extended to help AI researchers solve new and complex problems in their domains; and how can FCA play a role in current trends in AI such as explainable AI and fairness of algorithms in decision making. The workshop was co-located with IJCAI 2021, Montréal, Canada, on August 28, 2021.
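    The core machinery of FCA that the workshop builds on fits in a few lines. The sketch below (an illustrative toy context, not taken from any workshop paper) shows the two derivation operators between object sets and attribute sets; a formal concept is a pair (extent, intent) in which each side derives the other.

```python
# Minimal sketch of FCA's derivation operators over a binary context
# (object -> set of attributes). Illustrative example data.

context = {
    "lion": {"predator", "mammal"},
    "eagle": {"predator", "flies"},
    "sparrow": {"flies"},
}

def intent(objects):
    """Attributes shared by all given objects."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def extent(attributes):
    """Objects having all given attributes."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# Closing {"eagle"} yields the formal concept it generates:
objs = extent(intent({"eagle"}))
print(objs, intent(objs))  # {'eagle'} {'predator', 'flies'}
```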

    Perception based on stereoscopic vision, path planning, and navigation strategies for autonomous robotic exploration

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 13 May 2015. This thesis addresses the development of an autonomous navigation strategy based on computer vision for autonomous robotic exploration of planetary surfaces. A series of subsystems, modules, and specific software components were developed for this research, since most existing tools in this domain are the property of national space agencies and are not accessible to the scientific community. A modular, multi-layer software architecture with several hierarchical levels was designed to host the set of algorithms that implement the autonomous navigation strategy and to guarantee the software's portability, reuse, and hardware independence. The work also includes the design of a development environment to support the creation of navigation strategies. It is partially based on open-source tools available to any researcher or institution, with the necessary adaptations and extensions, and includes 3D simulation capabilities, models of robotic vehicles, sensors, and operational environments emulating planetary surfaces such as Mars, for the functional-level analysis and validation of the navigation strategies developed. This environment also offers debugging and monitoring capabilities.
    This thesis consists of two main parts. The first addresses the design and development of a rover's high-level autonomy capabilities, focusing on autonomous navigation and supported by the simulation and monitoring capabilities of the aforementioned environment. A set of field experiments was carried out with a real robot and real hardware, detailing the results, the processing times of the algorithms, and the behavior and performance of the system in general. As a result, the perception system was identified as a crucial component of the navigation strategy and, therefore, as the main focus for potential optimizations and improvements. Consequently, the second part of this work tackles the problems of stereo image matching and 3D reconstruction of unstructured natural environments. A series of matching algorithms, image processes, and filters were analyzed. It is generally assumed that corresponding points in the two images of a stereo pair have the same intensity; however, this assumption turns out to be frequently false, even though both images are acquired with a vision system composed of two identical cameras. Consequently, an expert system is proposed for the automatic correction of intensities in stereo image pairs and for 3D reconstruction of the environment, based on image processes not previously applied in the field of stereo vision: homomorphic filtering and histogram matching, designed to correct intensities in a coordinated manner, adjusting one image as a function of the other. The results were further optimized through a clustering process based on the principle of spatial continuity that eliminates false positives and erroneous matches. The effects of applying these filters in stages before and after the matching process were studied, and their efficiency was verified favorably. Their application yielded a larger number of valid matches than were obtained without them, achieving significant improvements in the disparity maps and, therefore, in the overall perception and 3D reconstruction processes.
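    Of the two intensity-correction processes named above, histogram matching is the easier to sketch. The following minimal example (illustrative only, not the thesis's expert system; the function name and NumPy-based approach are assumptions) remaps the intensities of one image of a stereo pair so that their distribution matches the other image, i.e. the coordinated adjustment of one image as a function of the other.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their distribution matches reference.

    Classic histogram matching via empirical CDFs; a generic sketch of
    the technique, not the thesis's actual implementation.
    """
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical cumulative distributions of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source intensity, pick the reference intensity at the
    # same cumulative probability.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    lookup = dict(zip(src_values, mapped))
    return np.vectorize(lookup.get)(source)

# e.g. left = match_histogram(left, right) before computing disparities.
```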

    An emotion-based agent architecture
