Knowledge formalization for vector data matching using belief theory
Nowadays, geographic vector data is produced by both public and private institutions, either following well-defined specifications or through crowdsourcing via Web 2.0 mapping portals. As a result, multiple representations of the same real-world objects exist without any links between these different representations. This becomes an issue when integration, updates, or multi-level analysis need to be performed, as well as for data quality assessment. In this paper, a multi-criteria data matching approach is proposed that allows the automatic definition of links between identical features. The originality of the approach is that the process is guided by an explicit representation and fusion of knowledge from various sources. Moreover, imperfection (imprecision, uncertainty, and incompleteness) is explicitly modeled in the process. Belief theory is used to represent and fuse knowledge from different sources, to model imperfection, and to make a decision. Experiments are reported on real data coming from different producers, at different scales, representing either relief (isolated points) or road networks (linear data).
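The fusion step of a belief-theoretic matcher can be sketched with Dempster's rule of combination, the standard fusion rule of belief theory. The two matching criteria, the mass values, and the two-hypothesis frame below are illustrative assumptions, not the configuration used in the paper.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets of hypotheses."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    k = 1.0 - conflict           # normalize by the non-conflicting mass
    return {s: w / k for s, w in combined.items()}

# Frame of discernment: the two features are the same object, or not.
THETA = frozenset({"match", "nonmatch"})

# Hypothetical evidence: a distance criterion strongly supports "match",
# a name-similarity criterion is less certain (mass on THETA = ignorance).
m_dist = {frozenset({"match"}): 0.7, THETA: 0.3}
m_name = {frozenset({"match"}): 0.4, frozenset({"nonmatch"}): 0.2, THETA: 0.4}

fused = combine(m_dist, m_name)
```

The mass left on the whole frame `THETA` is what lets the model represent incompleteness explicitly: ignorance is not forced onto either hypothesis, and the decision can be deferred when it dominates.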
Modeling of Phenomena and Dynamic Logic of Phenomena
Modeling of complex phenomena such as the mind presents tremendous computational complexity challenges. Modeling field theory (MFT) addresses these challenges in a non-traditional way. The main idea behind MFT is to match levels of uncertainty of the model (also, problem or theory) with levels of uncertainty of the evaluation criterion used to identify that model. When a model becomes more certain, the evaluation criterion is adjusted dynamically to match that change in the model. This process is called the Dynamic Logic of Phenomena (DLP) for model construction, and it mimics processes of the mind and natural evolution. This paper provides a formal description of DLP by specifying its syntax, semantics, and reasoning system. We also outline links between DLP and other logical approaches. Computational complexity issues that motivate this work are presented using an example of polynomial models.
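The vague-to-crisp dynamics can be illustrated on a polynomial example: keep a fuzziness parameter sigma for the evaluation criterion, fit the model under that criterion, and shrink sigma as the model becomes more certain. The weighting scheme, decay schedule, and data below are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

def dlp_polyfit(x, y, degree=2, sigma0=10.0, decay=0.5, iters=8):
    """Vague-to-crisp polynomial fitting: start with a very tolerant
    (high-sigma) criterion, then tighten it as the model sharpens."""
    sigma = sigma0
    coeffs = np.zeros(degree + 1)
    for _ in range(iters):
        residuals = y - np.polyval(coeffs, x)
        # Fuzzy association of data with the current model: points far
        # from the model (relative to sigma) get low weight.
        w = np.exp(-residuals**2 / (2 * sigma**2))
        coeffs = np.polyfit(x, y, degree, w=w)
        sigma = max(sigma * decay, 1e-3)  # criterion tracks model certainty
    return coeffs

x = np.linspace(-2.0, 2.0, 41)
y = 1.0 + 0.5 * x - 2.0 * x**2
y_noisy = y.copy()
y_noisy[::10] += 5.0  # a few gross outliers
c = dlp_polyfit(x, y_noisy)  # highest-degree coefficient first
```

Under the broad initial criterion all points participate; as sigma shrinks, the outliers are progressively excluded and the fit locks onto the underlying quadratic, avoiding the combinatorial search over data-to-model associations that a crisp criterion would require.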
Semantic Matchmaking as Non-Monotonic Reasoning: A Description Logic Approach
Matchmaking arises when supply and demand meet in an electronic marketplace, when agents search for a web service to perform some task, or when recruiting agencies match curricula and job profiles. In such open environments, the objective of a matchmaking process is to discover the best available offers for a given request. We address the problem of matchmaking from a knowledge representation perspective, with a formalization based on Description Logics. We devise Concept Abduction and Concept Contraction as non-monotonic inferences in Description Logics suitable for modeling matchmaking in a logical framework, and prove some related complexity results. We also present reasonable algorithms for semantic matchmaking based on the devised inferences, and prove that they obey some commonsense properties. Finally, we report on the implementation of the proposed matchmaking framework, which has been used both as a mediator in e-marketplaces and for semantic web services discovery.
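In a drastically simplified setting where concepts are conjunctions of (possibly negated) atomic concepts, the two inferences reduce to set operations: Contraction retracts the part of the request that clashes with the offer, and Abduction collects what the offer would still have to satisfy. The propositional simplification and all names below are ours for illustration; the paper defines both inferences for full Description Logics.

```python
# Atoms are (concept_name, sign) pairs; a concept is a set of atoms.

def contract(request, supply):
    """Concept Contraction (toy version): split the request into
    (give_up, keep), where give_up holds the atoms that clash with
    the supply and keep is the compatible remainder."""
    clash = {(name, not sign) for name, sign in supply}
    give_up = {atom for atom in request if atom in clash}
    keep = request - give_up
    return give_up, keep

def abduce(request, supply):
    """Concept Abduction (toy version): the atoms the supply must
    additionally be hypothesized to satisfy to cover the request."""
    return request - supply

# Hypothetical marketplace example: a demand and a conflicting offer.
demand = {("wifi", True), ("parking", True), ("smoking", True)}
offer  = {("wifi", True), ("smoking", False)}

give_up, keep = contract(demand, offer)  # conflicting part of the demand
missing = abduce(keep, offer)            # what the offer leaves unstated
```

Ranking offers by how little must be given up and abduced yields the "best available offers" ordering the abstract refers to.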
Introducing fuzzy trust for managing belief conflict over semantic web data
Interpreting Semantic Web data by different human experts can end up in scenarios where each expert arrives at different and conflicting ideas about what a concept means and how it relates to other concepts. Software agents that operate on the Semantic Web have to deal with similar scenarios, in which the interpretations of the Semantic Web data describing heterogeneous sources become contradictory. One such application area of the Semantic Web is ontology mapping, where different similarities have to be combined into a more reliable and coherent view, which can easily become unreliable if the conflicting beliefs in similarities are not managed effectively between the different agents. In this paper we propose a solution for managing this conflict by introducing trust between the mapping agents, based on the fuzzy voting model.
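One way such trust can be sketched: each mapping agent's similarity score is fuzzified into linguistic labels, and an agent's trust is the degree to which its dominant label agrees with the other agents' votes. The membership functions, label set, and agreement measure below are illustrative assumptions, not the paper's exact fuzzy voting model.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic labels over similarity scores in [0, 1].
LABELS = {
    "low":    lambda s: tri(s, -0.5, 0.0, 0.5),
    "medium": lambda s: tri(s, 0.0, 0.5, 1.0),
    "high":   lambda s: tri(s, 0.5, 1.0, 1.5),
}

def best_label(sim):
    """Dominant fuzzy label for one agent's similarity score."""
    return max(LABELS, key=lambda name: LABELS[name](sim))

def trust(agent_sims):
    """Per-agent trust = fraction of agents whose dominant label agrees."""
    votes = [best_label(s) for s in agent_sims]
    return [votes.count(v) / len(votes) for v in votes]

# Four agents score the same mapping candidate; one disagrees.
sims = [0.82, 0.78, 0.30, 0.85]
t = trust(sims)  # the dissenting agent receives low trust
```

Weighting each agent's similarity by its trust value before combination then down-weights the conflicting belief instead of letting it corrupt the merged view.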
Linear superposition as a core theorem of quantum empiricism
Clarifying the nature of the quantum state is at the root of the problems with insight into (counterintuitive) quantum postulates. We provide a direct, math-axiom-free, empirical derivation of this object as an element of a vector space. Establishing the linearity of this structure (quantum superposition) is based on a set-theoretic creation of ensemble formations and invokes the following three principia: quantum statics; the doctrine of a number in the physical theory; and the mathematization of matching the two observations with each other (quantum invariance). All of the constructs rest upon a formalization of the minimal experimental entity: the observed micro-event, a detector click. This is sufficient for producing the -numbers, the axioms of a linear vector space (superposition principle), statistical mixtures of states, eigenstates and their spectra, and the non-commutativity of observables. No use is required of the concept of time. As a result, the foundations of the theory are liberated to a significant extent from the issues associated with physical interpretations, philosophical exegeses, and mathematical reconstruction of the entire quantum edifice.
Comment: No figures. 64 pages; 68 pages (+4), overall substantial improvements; 70 pages (+2), further improvement.