
    Multi-Paradigm Reasoning for Access to Heterogeneous GIS

    Accessing and querying geographical data in a uniform way has become easier in recent years. Emerging standards like WFS are turning the web into a place enabled for geospatial web services. Mediation architectures like VirGIS overcome syntactic and semantic heterogeneity between several distributed sources. On mobile devices, however, this kind of solution is not suitable, owing to limitations in bandwidth, computation power, and available storage space. The aim of this paper is to present a solution that provides powerful reasoning mechanisms accessible from mobile applications and involving data from several heterogeneous sources. By adapting content to time and location, mobile web information systems can not only increase the value and suitability of the service itself but also substantially reduce the amount of data delivered to users. Because many problems pertain to infrastructures and transportation in general, and to wayfinding in particular, one cornerstone of the architecture is higher-level reasoning on graph networks with the Multi-Paradigm Location Language MPLL. A mediation architecture is used as a “graph provider” in order to transfer the load of computation to the best-suited component, since graph construction and transformation, for example, are heavy on resources. Reasoning in general can be conducted either near the “source” or near the end user, depending on the specific use case. The concepts underlying the proposal described in this paper are illustrated by a typical, concrete scenario for web applications.
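    As a rough illustration of the division of labour the abstract describes, the sketch below has a mediator act as a “graph provider” that hands a pre-trimmed edge list to a thin mobile client, which then performs the wayfinding locally. This is not MPLL itself; the payload format, node names, and costs are invented for the example.

```python
# Minimal sketch (not the MPLL implementation): a mediator acts as a
# "graph provider" and a thin mobile client does the wayfinding locally.
import heapq

def shortest_path(edges, source, target):
    """Dijkstra over an edge list [(u, v, cost), ...] -- the kind of
    pre-built, pre-trimmed graph a mediator could hand to a client."""
    adj = {}
    for u, v, cost in edges:
        adj.setdefault(u, []).append((v, cost))
        adj.setdefault(v, []).append((u, cost))
    dist, prev = {source: 0.0}, {}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in adj.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]

# Toy street network delivered by the (hypothetical) graph provider:
edges = [("A", "B", 2.0), ("B", "C", 1.5), ("A", "C", 5.0)]
print(shortest_path(edges, "A", "C"))  # ['A', 'B', 'C']
```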

    Fourteenth Biennial Status Report: March 2017 – February 2019


    Model Checking Spatial Logics for Closure Spaces

    Spatial aspects of computation are becoming increasingly relevant in Computer Science, especially in the field of collective adaptive systems and when dealing with systems distributed in physical space. Traditional formal verification techniques are well suited to analysing the temporal evolution of programs; however, properties of space are typically not taken into account explicitly. We present a topology-based approach to the formal verification of spatial properties that depend upon physical space. We define an appropriate logic, stemming from the tradition of topological interpretations of modal logics dating back to logicians such as Tarski, in which modalities describe neighbourhood. We lift the topological definitions to the more general setting of closure spaces, which also encompasses discrete, graph-based structures. We extend the framework with a spatial surrounded operator, a propagation operator, and some collective operators; the latter are interpreted over arbitrary sets of points instead of individual points in space. We define efficient model checking procedures, both for the individual and the collective spatial fragments of the logic, and provide a proof-of-concept tool.
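    To make the closure-space setting concrete, here is a minimal sketch of the graph case (the identifiers are mine, and this is not the proof-of-concept tool): the closure of a set adds its immediate neighbours, the neighbourhood modality holds on the closure of a formula’s extension, and the surrounded operator is checked by backward reachability, assuming a symmetric adjacency relation.

```python
# Toy model checker for the graph ("quasi-discrete closure space") fragment.
def closure(points, adj):
    """Closure of a set in the graph-induced closure space:
    the set itself plus all of its immediate neighbours."""
    return set(points) | {v for p in points for v in adj.get(p, ())}

def near(phi, adj):
    """Neighbourhood modality: 'near phi' holds exactly on the
    closure of the set of points satisfying phi."""
    return closure(phi, adj)

def surrounded(phi, psi, adj, nodes):
    """phi 'surrounded by' psi: phi-points from which every path that
    leaves phi-union-psi must first cross psi. Computed by flooding the
    'escape' region backwards through non-psi points (adjacency assumed
    symmetric here)."""
    escape = set(nodes) - set(phi) - set(psi)
    reach, frontier = set(escape), set(escape)
    while frontier:
        frontier = {v for f in frontier for v in adj.get(f, ())
                    if v not in psi and v not in reach}
        reach |= frontier
    return set(phi) - reach

# 1D chain 0-1-2-3-4; phi = {2}, psi = {1, 3}: point 2 is walled in by psi.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
nodes = range(5)
print(near({2}, adj))                       # {1, 2, 3}
print(surrounded({2}, {1, 3}, adj, nodes))  # {2}
```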

    On Formal Methods for Collective Adaptive System Engineering. Scalable Approximated, Spatial Analysis Techniques. Extended Abstract

    In this extended abstract, a view on the role of Formal Methods in System Engineering is briefly presented. Two examples of useful analysis techniques based on solid mathematical theories are then discussed, as well as the software tools that have been built to support them. The first technique is Scalable Approximated Population DTMC Model-checking; the second is Spatial Model-checking for Closure Spaces. Both techniques have been developed in the context of the EU-funded project QUANTICOL.

    Comment: In Proceedings FORECAST 2016, arXiv:1607.0200
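    As a hedged illustration of the first technique’s core idea, the sketch below iterates a mean-field (occupancy-measure) approximation of a population DTMC instead of exploring its global state space. The SIR-style local states and rates are invented for the example; this is not the QUANTICOL tool chain.

```python
# Mean-field sketch: track the occupancy measure m (fraction of agents
# per local state) and iterate a one-step probability rule K(m) that may
# itself depend on m, rather than the exponentially large global DTMC.
def step(m, beta=0.3, gamma=0.1):
    s, i, r = m
    k_si = beta * i          # infection probability depends on fraction i
    return (s * (1 - k_si),
            s * k_si + i * (1 - gamma),
            i * gamma + r)

m = (0.99, 0.01, 0.0)        # initial occupancy measure (S, I, R)
for t in range(100):
    m = step(m)
print(m)                     # approximate long-run fractions of S / I / R
```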

    Spatial reasoning to determine stream network from LANDSAT imagery

    In LANDSAT imagery, spectral and spatial information can be used to detect the drainage network as well as a relative elevation model in mountainous terrain. To do this, the mixed material-reflectance and topographic-modulation information in the original LANDSAT imagery must first be separated. From the material reflectance information, large visible rivers can be detected. From the topographic modulation information, ridges and valleys can be detected and assigned relative elevations. A complete elevation model can be generated by interpolating values for non-ridge and non-valley pixels. The small streams not detectable from material reflectance information can then be located in the valleys, with flow direction known from the elevation model. Finally, the flow directions of large visible rivers can be inferred by solving a consistent labeling problem based on a set of spatial reasoning constraints.
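    The step of deriving flow direction from the elevation model can be pictured with a toy steepest-descent pass over a relative elevation grid; the D8-style rule is my choice for the illustration, and the grid values are invented.

```python
# Toy steepest-descent (D8-style) flow-direction pass over a relative
# elevation grid; not the paper's method, just an illustration.
import numpy as np

def flow_directions(elev):
    """For each cell, return the (dr, dc) offset of its lowest
    8-neighbour, or (0, 0) where no neighbour is lower (a pit/outlet)."""
    rows, cols = elev.shape
    dirs = np.zeros((rows, cols, 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(rows):
        for c in range(cols):
            best, best_off = elev[r, c], (0, 0)
            for dr, dc in offsets:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and elev[nr, nc] < best:
                    best, best_off = elev[nr, nc], (dr, dc)
            dirs[r, c] = best_off
    return dirs

elev = np.array([[5, 4, 3],
                 [4, 2, 2],
                 [3, 2, 1]])
print(flow_directions(elev)[0, 0])  # [1 1]: the ridge cell drains toward the valley
```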

    Representing SimModel in the Web Ontology Language

    Many building energy performance (BEP) simulation tools, such as EnergyPlus and DOE-2, use custom schema definitions (IDD and BDL, respectively) as opposed to standardised schema definitions (defined in XSD, EXPRESS, and so forth). A Simulation Domain Model (SimModel) was therefore proposed earlier as a new interoperable XML-based data model for the building simulation domain. Its ontology aims at moving away from tool-specific, non-standard nomenclature by implementing an industry-validated terminology aligned with the Industry Foundation Classes (IFC). In this paper, we document our ongoing efforts to make building simulation data more interoperable with other building data. In order to better integrate SimModel information with other building information, we have aimed at representing this information in the Resource Description Framework (RDF). A conversion service has been built that is able to parse the SimModel ontology in the form of XSD schemas and output a SimModel ontology in OWL. In this article, we document this effort and give an indication of what the resulting SimModel ontology in OWL can be used for.
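    A minimal sketch of the XSD-to-OWL direction described above might look as follows. This is not the actual conversion service: the namespace URI is a placeholder, the embedded schema fragment is invented, and only two mappings are shown (named complexType to owl:Class, xs:extension to rdfs:subClassOf).

```python
# Sketch: turn named XSD complex types into OWL classes with rdflib.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

XS = "{http://www.w3.org/2001/XMLSchema}"
SIM = Namespace("http://example.org/simmodel#")   # placeholder namespace

xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="SimBuilding"/>
  <xs:complexType name="SimZone">
    <xs:complexContent>
      <xs:extension base="SimBuilding"/>
    </xs:complexContent>
  </xs:complexType>
</xs:schema>"""

g = Graph()
for ct in ET.fromstring(xsd).iter(f"{XS}complexType"):
    cls = SIM[ct.get("name")]
    g.add((cls, RDF.type, OWL.Class))             # complexType -> owl:Class
    for ext in ct.iter(f"{XS}extension"):         # xs:extension -> subclass
        g.add((cls, RDFS.subClassOf, SIM[ext.get("base")]))

print(g.serialize(format="turtle"))
```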

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    “Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent.” (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings a great deal of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Graph Convolutional Neural Networks for Web-Scale Recommender Systems

    Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure and node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions, and we design a novel training strategy that relies on harder-and-harder training examples to improve the robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes representing pins and boards, and 18 billion edges. According to offline metrics, user studies and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.

    Comment: KDD 201
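    To make the random-walk convolution idea concrete, here is a toy numpy-only sketch (not Pinterest’s implementation): short random walks rank an importance-weighted neighbourhood, whose features are pooled and combined with the node’s own. All sizes and hyperparameters are invented for the example.

```python
# Toy random-walk convolution: walks pick important neighbours, whose
# features are aggregated and combined with the node's own features.
import numpy as np

rng = np.random.default_rng(0)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = rng.normal(size=(4, 8))            # node feature matrix
W = rng.normal(size=(16, 16)) * 0.1        # dense layer for concat(self, agg)

def walk_neighbourhood(node, n_walks=50, length=2, top=2):
    """Visit counts from short random walks give an importance-ranked
    neighbourhood, instead of using all graph neighbours."""
    counts = {}
    for _ in range(n_walks):
        cur = node
        for _ in range(length):
            cur = rng.choice(adj[cur])
            if cur != node:
                counts[cur] = counts.get(cur, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)[:top]
    weights = np.array([counts[n] for n in ranked], dtype=float)
    return ranked, weights / weights.sum()

def convolve(node):
    """One importance-pooling convolution: weighted neighbour average,
    concatenated with the node's features, through a dense layer."""
    nbrs, w = walk_neighbourhood(node)
    agg = (feats[nbrs] * w[:, None]).sum(axis=0)
    h = np.maximum(W @ np.concatenate([feats[node], agg]), 0)  # ReLU
    return h / np.linalg.norm(h)

print(convolve(0).shape)  # (16,)
```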

    Human-agent collectives

    We live in a world where a host of computer systems, distributed throughout our physical and information environments, are increasingly implicated in our everyday actions. Computer technologies impact all aspects of our lives, and our relationship with the digital has fundamentally altered as computers have moved out of the workplace and away from the desktop. Networked computers, tablets, phones and personal devices are now commonplace, as are an increasingly diverse set of digital devices built into the world around us. Data and information are generated at unprecedented speeds and volumes from an increasingly diverse range of sources, and are then combined in unforeseen ways, limited only by human imagination. People’s activities and collaborations are becoming ever more dependent upon and intertwined with this ubiquitous information substrate. As these trends continue apace, it is becoming apparent that many endeavours involve the symbiotic interleaving of humans and computers. Moreover, the emergence of these close-knit partnerships is inducing profound change. Rather than issuing instructions to passive machines that wait until they are asked before doing anything, we will work in tandem with highly inter-connected computational components that act autonomously and intelligently (aka agents). As a consequence, greater attention needs to be given to the balance of control between people and machines. In many situations, humans will be in charge and agents will predominantly act in a supporting role. In other cases, however, the agents will be in control and humans will play the supporting role. We term this emerging class of systems human-agent collectives (HACs) to reflect the close partnership and the flexible social interactions between the humans and the computers. As well as exhibiting increased autonomy, such systems will be inherently open and social. This means the participants will need to continually and flexibly establish and manage a range of social relationships. Thus, depending on the task at hand, different constellations of people, resources, and information will need to come together, operate in a coordinated fashion, and then disband. The openness and presence of many distinct stakeholders means participation will be motivated by a broad range of incentives rather than by diktat. This article outlines the key research challenges involved in developing a comprehensive understanding of HACs. To illuminate this agenda, a nascent application in the domain of disaster response is presented.
