Viewpoints on emergent semantics
Authors include: Philippe Cudré-Mauroux and Karl Aberer (editors), Alia I. Abdelmoty, Tiziana Catarci, Ernesto Damiani, Arantxa Illaramendi, Robert Meersman, Erich J. Neuhold, Christine Parent, Kai-Uwe Sattler, Monica Scannapieco, Stefano Spaccapietra, Peter Spyns, and Guy De Tré

We introduce a novel view on how to deal with the problems of semantic interoperability in distributed systems. This view is based on the concept of emergent semantics, which sees both the representation of semantics and the discovery of the proper interpretation of symbols as the result of a self-organizing process performed by distributed agents that exchange symbols and whose utilities depend on the proper interpretation of those symbols. This is a complex-systems perspective on the problem of dealing with semantics. We highlight some of the distinctive features of our vision and point out preliminary examples of its application.
Functional inferences over heterogeneous data
Inference enables an agent to create new knowledge from old or discover implicit
relationships between concepts in a knowledge base (KB), provided that appropriate
techniques are employed to deal with ambiguous, incomplete and sometimes erroneous
data.
The ever-increasing volumes of KBs on the web, available for use by automated
systems, present an opportunity to leverage the available knowledge in order to improve
the inference process in automated query answering systems. This thesis focuses
on the FRANK (Functional Reasoning for Acquiring Novel Knowledge) framework
that responds to queries where no suitable answer is readily contained in any available
data source, using a variety of inference operations.
Most question answering and information retrieval systems assume that answers
to queries are stored in some form in the KB, thereby limiting the range of answers
they can find. We take an approach motivated by rich forms of inference using techniques,
such as regression, for prediction. For instance, FRANK can answer “what
country in Europe will have the largest population in 2021?" by decomposing Europe
geo-spatially, using regression on country population for past years and selecting the
country with the largest predicted value. Our technique, which we refer to as Rich
Inference, combines heuristics, logic and statistical methods to infer novel answers
to queries. It also determines what facts are needed for inference, searches for them,
and then integrates the diverse facts and their formalisms into a local query-specific
inference tree.
Our primary contribution in this thesis is the inference algorithm on which FRANK
works. This includes (1) the process of recursively decomposing queries in a way that
allows variables in the query to be instantiated by facts in KBs; (2) the use of aggregate
functions to perform arithmetic and statistical operations (e.g. prediction) to infer new
values from child nodes; and (3) the estimation and propagation of uncertainty values
into the returned answer based on errors introduced by noise in the KBs or errors
introduced by aggregate functions.
We also discuss many of the core concepts and modules that constitute FRANK.
We explain the internal “alist” representation of FRANK that gives it the required
flexibility to tackle different kinds of problems with minimal changes to its internal
representation. We discuss the grammar for a simple query language that allows users
to express queries in a formal way, such that we avoid the complexities of natural
language queries, a problem that falls outside the scope of this thesis. We evaluate the
framework with datasets from open sources.
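The decompose-regress-aggregate pattern described above can be sketched in a few lines. Everything below is illustrative: the knowledge base, the population figures, and the plain least-squares regression are invented assumptions, not FRANK's actual alist machinery.

```python
# Sketch of FRANK-style rich inference (hypothetical data, simplified).
# Query: which country will have the largest population in 2021?
# Steps: (1) decompose geospatially into child nodes (countries),
# (2) regress each child's past facts to predict the target year,
# (3) aggregate with max, carrying the child's uncertainty upward.

# Hypothetical KB: past populations (millions) per country.
kb = {
    "Germany": {2015: 81.7, 2017: 82.7, 2019: 83.1},
    "France":  {2015: 66.5, 2017: 66.9, 2019: 67.2},
    "Italy":   {2015: 60.7, 2017: 60.5, 2019: 59.8},
}

def linear_fit(points):
    """Least-squares line y = a*x + b, plus residual std as uncertainty."""
    n = len(points)
    xs, ys = zip(*points)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    resid = [y - (a * x + b) for x, y in points]
    sigma = (sum(r * r for r in resid) / n) ** 0.5
    return a, b, sigma

def predict(country, year):
    """Child node: regress past facts, return (predicted value, uncertainty)."""
    a, b, sigma = linear_fit(sorted(kb[country].items()))
    return a * year + b, sigma

def answer(year):
    """Root node: aggregate children with max, keep the winner's error."""
    children = [(c, *predict(c, year)) for c in kb]
    return max(children, key=lambda t: t[1])

country, value, err = answer(2021)
print(country, round(value, 1))
```

In this toy KB, Germany's fitted trend grows fastest, so the root's max aggregate selects it; a real inference tree would also combine source noise into `err` rather than just forwarding the regression residual.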
A Decision Support System For The Intelligence Satellite Analyst
The study developed a decision support system known as the Visual Analytic Cognitive Model (VACOM) to support the Intelligence Analyst (IA) in satellite information processing tasks within the Geospatial Intelligence (GEOINT) domain. As a visual analytics tool, VACOM contains image processing algorithms, a cognitive network of the IA's mental model, and a Bayesian belief model for satellite information processing. A cognitive analysis helped to identify eight knowledge levels in satellite information processing: spatial, prototypical, contextual, temporal, semantic, pragmatic, intentional, and inferential. A cognitive network was developed for each knowledge level, with data input from subjective questionnaires that probed the analysts' mental models. The VACOM interface was designed to give analysts a transparent view of the processes, including the visualization model, the signal processing model applied to the images, the geospatial data representation, and the cognitive network of expert beliefs. The interface allows the user to select a satellite image of interest, apply each of the image analysis methods for visualization, and compare 'ground-truth' information against VACOM's recommendations. It was designed to enhance analysts' perception, cognition, and comprehension in multiple and complex image analyses. A usability analysis of VACOM showed many advantages for human analysts: reduced cognitive workload through less information search, the ability to conduct interactive experiments on belief spaces and hypotheses, and guidance in selecting the best image processing algorithms to apply in a given image context.
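The Bayesian belief model mentioned above amounts to updating an analyst's prior over scene hypotheses when image evidence arrives. A minimal sketch, with entirely illustrative priors and likelihoods (not VACOM's actual belief network):

```python
# Bayes-rule update of an analyst belief given one piece of image
# evidence. All numbers are invented for illustration.

priors = {"airfield": 0.2, "port": 0.3, "urban": 0.5}
# P(evidence = "long straight strip detected" | hypothesis), assumed:
likelihood = {"airfield": 0.9, "port": 0.2, "urban": 0.1}

def posterior(priors, likelihood):
    """Normalised product of prior and likelihood per hypothesis."""
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

post = posterior(priors, likelihood)
print(post)  # belief in "airfield" rises sharply after the evidence
```

A full belief network would chain such updates across the eight knowledge levels; this shows only the single-evidence step.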
Integrated topological representation of multi-scale utility resource networks
PhD Thesis

The growth of urban areas and their resource consumption presents a significant global
challenge. Existing utility resource supply systems are unresponsive, unreliable and costly.
There is a need to improve the configuration and management of the infrastructure networks
that carry these resources from source to consumer and this is best performed through analysis
of multi-scale, integrated digital representations. However, the real-world networks are
represented across different datasets that are underpinned by different data standards, practices
and assumptions, and are thus challenging to integrate.
Existing integration methods focus predominantly on achieving maximum information
retention through complex schema mappings and the development of new data standards, and
there is strong emphasis on reconciling differences in geometries. However, network topology
is of greatest importance for the analysis of utility networks and simulation of utility resource
flows because it is a representation of functional connectivity, and the derivation of this
topology does not require the preservation of full information detail. The most pressing
challenge is asserting the connectivity between the datasets that each represent subnetworks of
the entire end-to-end network system.
This project presents an approach to integration that makes use of abstracted digital
representations of electricity and water networks to infer inter-dataset network connectivity,
exploring what can be achieved by exploiting commonalities between existing datasets and data
standards to overcome their otherwise inhibiting disparities. The developed methods rely on the
use of graph representations, heuristics and spatial inference, and the results are assessed using
surveying techniques and statistical analysis of uncertainties. An algorithm developed for water
networks was able to correctly infer a building connection that was absent from source datasets.
The thesis concludes that several of the key use cases for integrated topological representation
of utility networks are partially satisfied through the methods presented, but that some
differences in data standardisation and best practice in the GIS and BIM domains prevent full
automation. The common and unique identification of real-world objects, agreement on a
shared concept vocabulary for the built environment, more accurate positioning of distribution
assets, consistent use of (and improved best practice for) georeferencing of BIM models and a
standardised numerical expression of data uncertainties are identified as points of development.

Engineering and Physical Sciences Research Council
Ordnance Survey
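The inter-dataset connectivity inference described above can be illustrated with a spatial snapping heuristic: endpoints of one subnetwork are linked to nearby nodes of another when they fall within a tolerance. The networks, coordinates and threshold below are invented; the thesis's actual methods combine graph representations, heuristics and uncertainty analysis.

```python
# Illustrative sketch: inferring a building connection absent from the
# source datasets by snapping a stub node to the nearest water main
# node within an assumed tolerance.
import math

# Dataset A: water main nodes (id -> (x, y), metres) and edges.
a_nodes = {"m1": (0.0, 0.0), "m2": (10.0, 0.0)}
a_edges = [("m1", "m2")]
# Dataset B: a building connection stub near the main.
b_nodes = {"b1": (9.5, 0.4)}

TOLERANCE = 1.0  # metres; assumed snapping threshold

def infer_links(a_nodes, b_nodes, tol):
    """Connect each B node to the nearest A node within tol."""
    links = []
    for b, (bx, by) in b_nodes.items():
        best = min(a_nodes, key=lambda a: math.dist(a_nodes[a], (bx, by)))
        if math.dist(a_nodes[best], (bx, by)) <= tol:
            links.append((b, best))
    return links

print(infer_links(a_nodes, b_nodes, TOLERANCE))
```

Here `b1` snaps to `m2`, asserting a cross-dataset edge that neither source dataset records; a production method would also score the match and propagate positional uncertainty rather than apply a hard threshold.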
Ontology Alignment Architecture for Semantic Sensor Web Integration
Abstract: Sensor networks have become very popular for data acquisition and processing in multiple fields such as industry, medicine, home automation and environmental monitoring. Today, with the proliferation of small communication devices with sensors that collect environmental data, Semantic Web technologies are becoming closely related to sensor networks. The combination of Semantic Web technologies with sensor networks has been called the Semantic Sensor Web, and one of its main features is the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, to deal with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: terminological similarity, which takes into account the linguistic and semantic information in the context of entity names, and structural similarity, based on both the internal and relational structure of the concepts. The work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall.
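Combining a terminological and a structural similarity degree can be sketched with a simple ordered-weighted aggregation. The weights, threshold and entity names below are illustrative assumptions, not the paper's actual fuzzy operators.

```python
# Sketch: fuzzy-style aggregation of two similarity measures for one
# candidate ontology mapping. Weights and threshold are assumed.

def fuzzy_combine(term_sim, struct_sim):
    """OWA-style aggregation of two similarity degrees in [0, 1]:
    the larger degree gets the larger weight."""
    ordered = sorted([term_sim, struct_sim], reverse=True)
    weights = [0.6, 0.4]  # assumed: favour the stronger evidence
    return sum(w * s for w, s in zip(weights, ordered))

# Candidate mapping: "TemperatureSensor" vs "ThermalSensor" (invented)
score = fuzzy_combine(term_sim=0.8, struct_sim=0.5)
accepted = score >= 0.6  # assumed acceptance threshold
print(score, accepted)
```

An actual alignment system would compute `term_sim` from string and lexical resources and `struct_sim` from the concept neighbourhood, then tune the aggregation against OAEI reference alignments.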
Geographic Knowledge Graph Summarization
Geographic knowledge graphs play a significant role in the geospatial semantics paradigm, fulfilling the interoperability, accessibility and conceptualization demands of geographic information science. However, the immense quantity of accompanying information and the enormous diversity of geographic knowledge graphs pose many challenges that hinder the applicability and mass adoption of such useful structured knowledge. To tackle these challenges, this dissertation focuses on devising ways in which geographic knowledge graphs can be digested and summarized. Such a summarization task, on the one hand, lifts the burden of information overload for end users; on the other hand, it reduces data storage, speeds up queries, and helps eliminate noise. The main contribution of this dissertation is that it introduces the general concept of geospatial inductive bias and explains the different ways this idea can be used in the geographic knowledge graph summarization task. Decomposing the task into separate but related components, the dissertation builds upon three peer-reviewed articles which focus on the hierarchical place type structure, multimedia leaf nodes, and general relation and entity components, respectively. A spatial knowledge map interface that illustrates the effectiveness of summarizing geographic knowledge graphs is presented. Throughout the dissertation, top-down knowledge engineering and bottom-up knowledge learning methods are integrated. We hope this dissertation will promote awareness of this fascinating area and motivate researchers to investigate related questions.
Geographical places as a personalisation element: extracting profiles from human activities and services of visited places in mobility logs
Collecting personal mobility traces of individuals is now feasible on a large scale due to the popularity of position-aware mobile phones. Statistical analysis of GPS data streams collected with a mobile phone can reveal several interesting measures, such as the geographical places most frequently visited by an individual. Applying probabilistic models to such data sets can predict the next place to visit, and when. Several practical applications can utilise the results of such analysis. The current state of the art, however, is limited in terms of the qualitative analysis of personal mobility logs: without explicit user interaction, not much semantics can be inferred from a GPS log. This work proposes utilising the common human activities and services associated with certain place types to extract semantically rich profiles from personal mobility logs. The resulting profiles include spatial, temporal and generic thematic descriptions of a user. The work introduces several pre-processing methods for GPS data streams collected with personal mobile devices, which improve the quality of the place extraction process from GPS logs. The thesis also introduces a method for extracting place semantics from multiple data sources. A textual corpus of functional descriptions of human activities and services associated with certain geographic place types is analysed to identify the frequent linguistic patterns used to describe such terms. The patterns found are then matched against multiple textual data sources of place semantics to extract such terms for a collection of place types. The results were evaluated against an equivalent expert ontology, as well as against semantics collected from the general public. Finally, the work proposes a model for the resulting profiles, the necessary algorithms to build and utilise such profiles, and an encoding mark-up language.
A simulated mobile application was developed to demonstrate the usability of the resulting profiles and to support their evaluation.
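A standard pre-processing step of the kind described above is stay-point extraction: clustering consecutive GPS fixes that remain within a small radius for a minimum dwell time into a single visited place. The thresholds and log below are invented for illustration and are not the thesis's actual parameters.

```python
# Sketch: stay-point extraction from a GPS log of (lat, lon, unix_time)
# fixes. Thresholds (200 m radius, 5 min dwell) are assumed.
import math

def stay_points(log, dist_m=200.0, min_dwell_s=300.0):
    """Collapse runs of fixes within dist_m held >= min_dwell_s
    into centroid stay points."""
    points = []
    i = 0
    while i < len(log):
        j = i + 1
        while j < len(log) and _dist(log[i], log[j]) <= dist_m:
            j += 1
        if log[j - 1][2] - log[i][2] >= min_dwell_s:
            lats = [p[0] for p in log[i:j]]
            lons = [p[1] for p in log[i:j]]
            points.append((sum(lats) / len(lats), sum(lons) / len(lons)))
            i = j
        else:
            i += 1
    return points

def _dist(p, q):
    """Rough planar distance in metres, valid for small separations."""
    dy = (q[0] - p[0]) * 111_000
    dx = (q[1] - p[1]) * 111_000 * math.cos(math.radians(p[0]))
    return math.hypot(dx, dy)

# Invented log: ten minutes near one spot, then a jump elsewhere.
log = [(51.5000, -0.1000, 0), (51.5001, -0.1001, 300),
       (51.5001, -0.1000, 600), (51.6000, -0.1000, 900)]
print(stay_points(log))  # one stay point near (51.5, -0.1)
```

The extracted stay points would then be matched to place types (cafe, office, station, ...) whose activity descriptions supply the semantics for the user profile.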
Personalized City Tours - An Extension of the OGC OpenLocation Specification
A business trip to London last month, a day visit to Cologne next Saturday and a romantic weekend in Paris in autumn: this example exhibits one of the central characteristics of today's tourism. People in the western hemisphere take much pleasure in frequent, repeated short-term visits to cities. Every city visitor faces the general problems of where to go and what to see in the diverse microcosm of a metropolis. This thesis presents a framework for the generation of personalized city tours as an extension of the Open Location Specification of the Open Geospatial Consortium. It is founded on context-awareness and personalization, and proposes a combined approach to allow adaptation to the user. The framework draws on Time Geography and its algorithmic implementations to cope with the spatio-temporal constraints of a city tour. Travelling salesman problems, for which a heuristic approach is proposed, underlie the tour generation. To meet the requirements of today's distributed and heterogeneous computing environments, the tour framework comprises individual services that expose standards-compliant interfaces and allow for integration in service-oriented architectures.
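One common heuristic for the travelling-salesman core of such tour generation is nearest-neighbour construction: always walk to the closest unvisited sight. The distance matrix below is invented, and this is only one possible heuristic, not necessarily the one the thesis proposes.

```python
# Sketch: nearest-neighbour heuristic for ordering city sights.
# dist[i][j] = walking minutes between sights (invented, symmetric).

def nearest_neighbour_tour(dist, start=0):
    """Greedy tour: repeatedly visit the closest unvisited sight."""
    n = len(dist)
    tour, seen = [start], {start}
    while len(tour) < n:
        cur = tour[-1]
        nxt = min((c for c in range(n) if c not in seen),
                  key=lambda c: dist[cur][c])
        tour.append(nxt)
        seen.add(nxt)
    return tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(nearest_neighbour_tour(dist))  # [0, 1, 3, 2]
```

A Time Geography layer would then prune tours that violate spatio-temporal constraints such as opening hours or the visitor's available time budget.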