Bounded Rationality and Heuristics in Humans and in Artificial Cognitive Systems
In this paper I will present an analysis of the impact that the notion of "bounded rationality",
introduced by Herbert Simon in his book "Administrative Behavior", has had on the
field of Artificial Intelligence (AI). In particular, by focusing on the field of Automated
Decision Making (ADM), I will show how the introduction of the cognitive dimension into
the study of choice by a rational (natural) agent indirectly determined, in the AI field, the
development of a line of research aimed at the realisation of artificial systems whose decisions
are based on the adoption of powerful shortcut strategies (known as heuristics) that produce
"satisficing" - i.e. good-enough rather than optimal - solutions to problems. I will show how the
"heuristic approach" to problem solving has allowed AI to face problems of combinatorial
complexity in real-life situations, and still represents an important strategy for the design
and implementation of intelligent systems.
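The satisficing strategy the abstract describes can be illustrated as a search that stops at the first solution meeting an aspiration level instead of exhausting a combinatorial space. A minimal sketch (the `satisfice` function, the aspiration threshold and the toy candidates are illustrative assumptions, not taken from the paper):

```python
def satisfice(candidates, score, aspiration):
    """Return the first candidate whose score meets the aspiration level.

    Unlike exhaustive optimisation, candidates are examined one by one
    and the search stops as soon as a 'good enough' solution is found,
    trading optimality for tractability -- the essence of satisficing.
    """
    best, best_score = None, float("-inf")
    for c in candidates:
        s = score(c)
        if s >= aspiration:        # good enough: stop searching here
            return c, s
        if s > best_score:         # otherwise remember the best so far
            best, best_score = c, s
    return best, best_score        # fallback if nothing satisfices

# Toy example: pick a bundle of items whose total value is "good enough";
# the third candidate is never even examined.
candidates = [[3, 2], [7, 9], [3, 7, 2]]
solution, value = satisfice(candidates, score=sum, aspiration=12)
```

Here `[7, 9]` (value 16) is returned immediately once it clears the aspiration level of 12, even though the search has not verified it is optimal.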
Semantic metrics
In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse and evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a variety of research disciplines, and enrich them with semantics based on standard Description Logic constructs. We argue that concept-based metrics can be aggregated to produce numeric distances at ontology level, and we speculate on the usability of our ideas through potential areas of application.
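The aggregation step the abstract argues for can be sketched with a deliberately simple concept-level metric. The sketch below uses a Jaccard distance over concept feature sets and a best-match average as the ontology-level aggregation; both choices, and the toy ontologies, are illustrative assumptions rather than the paper's actual metrics:

```python
def jaccard_distance(a, b):
    """Set-based distance between two concepts described by feature sets."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def ontology_distance(onto1, onto2):
    """Aggregate concept-level distances into an ontology-level distance:
    for each concept in onto1, take the distance to its closest match in
    onto2, then average (a simple best-match aggregation)."""
    if not onto1 or not onto2:
        return 1.0
    best = [min(jaccard_distance(f1, f2) for f2 in onto2.values())
            for f1 in onto1.values()]
    return sum(best) / len(best)

animals = {"Cat": {"mammal", "pet", "carnivore"},
           "Dog": {"mammal", "pet", "omnivore"}}
pets    = {"Dog": {"mammal", "pet", "omnivore"},
           "Parrot": {"bird", "pet"}}
d = ontology_distance(animals, pets)   # 0.25: Dog matches exactly, Cat partially
```

Note the aggregation is asymmetric (it iterates over `onto1`); a real metric would symmetrise it, which is one of the design questions such a framework has to settle.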
Non classical concept representation and reasoning in formal ontologies
Formal ontologies are nowadays widely considered a standard tool for knowledge
representation and reasoning in the Semantic Web. In this context, they are expected to
play an important role in helping automated processes to access information. Namely:
they are expected to provide a formal structure able to make explicit the relationships
between different concepts/terms, thus allowing intelligent agents to interpret the
semantics of web resources correctly, improving the performance of search
technologies.
Here we address a problem regarding Knowledge Representation in general,
and ontology-based representations in particular; namely: the fact that knowledge
modeling seems to be constrained between conflicting requirements, such as
compositionality on the one hand and the need to represent prototypical information on
the other. In particular, most common-sense concepts seem not to be captured by the
stringent semantics expressed by formalisms such as Description Logics
(the formalisms on which the ontology languages have been built). The aim
of this work is to analyse this problem and suggest a possible solution suitable for
formal ontologies and Semantic Web representations.
The questions guiding this research have been: is it possible to provide a formal
representational framework which, for the same concept, combines both the classical
modelling view (accounting for compositional information) and defeasible, prototypical
knowledge? Is it possible to propose a modelling architecture able to provide different
types of reasoning (e.g. classical deductive reasoning for the compositional component
and non-monotonic reasoning for the prototypical one)?
We suggest a possible answer to these questions by proposing a modelling framework able
to represent, within the Semantic Web languages, a multilevel representation of
conceptual information, integrating both classical and non-classical (typicality-based)
information. Within this framework we hypothesise, at least in principle, the
coexistence of multiple reasoning processes involving the different levels of
representation.
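The two-level idea in this abstract, strict compositional conditions checked monotonically plus defeasible typical properties that an exception can override, can be sketched in a few lines. The class below and the bird/penguin example are a classic illustration of defeasible typicality, assumed here for clarity; they are not the paper's actual formalism or encoding:

```python
class Concept:
    """Two-level concept: strict (compositional) conditions checked
    monotonically, plus typical (defeasible) properties that an
    individual may override without losing membership."""

    def __init__(self, name, strict, typical):
        self.name = name
        self.strict = set(strict)    # necessary conditions (classical level)
        self.typical = set(typical)  # default properties (non-classical level)

    def is_instance(self, props):
        # Classical, deductive step: all strict conditions must hold.
        return self.strict <= set(props)

    def presumed_properties(self, props, exceptions=()):
        # Non-monotonic step: assume typical properties unless overridden.
        props = set(props)
        if not self.is_instance(props):
            return None
        return props | (self.typical - set(exceptions))

bird = Concept("Bird", strict={"animal", "has_feathers"}, typical={"flies"})
tweety  = bird.presumed_properties({"animal", "has_feathers"})
penguin = bird.presumed_properties({"animal", "has_feathers"},
                                   exceptions={"flies"})
```

`tweety` is presumed to fly by default, while `penguin` defeats that default yet remains a bird, which is exactly the coexistence of classical and typicality-based reasoning the abstract hypothesises.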
The semantics of similarity in geographic information retrieval
Similarity measures have a long tradition in fields such as information retrieval, artificial intelligence and cognitive science. Within the last years these measures have been extended and reused to measure semantic similarity, i.e. for comparing meanings rather than syntactic differences. Various measures for spatial applications have been developed, but a solid foundation is still missing for answering what they measure, how they are best applied in information retrieval, which role contextual information plays, and how similarity values or rankings should be interpreted. It is therefore difficult to decide which measure should be used for a particular application, or to compare results from different similarity theories. Based on a review of existing similarity measures, we introduce a framework to specify the semantics of similarity. We discuss similarity-based information retrieval paradigms as well as their implementation in web-based user interfaces for geographic information retrieval to demonstrate the applicability of the framework. Finally, we formulate open challenges for similarity research.
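The role of context raised in this abstract can be made concrete with a small sketch: a weighted set-overlap similarity in which the retrieval context determines the salience of each shared feature, so the same pair of geographic concepts can rank differently under different contexts. The feature names, weights and the weighting scheme are invented for illustration and are not the paper's framework:

```python
def context_similarity(a, b, weights):
    """Context-sensitive similarity between two feature-set descriptions:
    a weighted Jaccard ratio where 'weights' encodes how salient each
    feature is in the current retrieval context (default weight 1.0)."""
    shared = set(a) & set(b)
    total = set(a) | set(b)
    if not total:
        return 0.0
    w = lambda f: weights.get(f, 1.0)
    return sum(w(f) for f in shared) / sum(w(f) for f in total)

lake = {"water_body", "standing_water", "natural"}
pond = {"water_body", "standing_water", "artificial"}

# In a navigation context, being a water body dominates; the origin of
# the feature (natural vs. artificial) matters little.
nav = {"water_body": 3.0}
sim = context_similarity(lake, pond, nav)   # 4/6, vs. 2/4 with no weighting
```

With the neutral context (`weights={}`) the similarity is 0.5; the navigation context raises it, which illustrates why similarity values are only interpretable relative to a specified context.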
Dwelling on ontology - semantic reasoning over topographic maps
The thesis builds upon the hypothesis that the spatial arrangement of topographic
features, such as buildings, roads and other land cover parcels, indicates how land is
used. The aim is to make this kind of high-level semantic information explicit within
topographic data. There is an increasing need to share and use data for a wider range of
purposes, and to make data more definitive, intelligent and accessible. Unfortunately,
we still encounter a gap between low-level data representations and high-level concepts
that typify human qualitative spatial reasoning. The thesis adopts an ontological
approach to bridge this gap and to derive functional information by using standard
reasoning mechanisms offered by logic-based knowledge representation formalisms. It
formulates a framework for the processes involved in interpreting land use information
from topographic maps. Land use is a high-level abstract concept, but it is also an
observable fact intimately tied to geography. By decomposing this relationship, the
thesis establishes a one-to-one mapping between high-level conceptualisations
derived from human knowledge and real-world entities represented in the data.
Based on a middle-out approach, it develops a conceptual model that incrementally
links different levels of detail, and thereby derives coarser, more meaningful
descriptions from more detailed ones. The thesis verifies its proposed ideas by
implementing an ontology describing the land use ‘residential area’ in the ontology
editor Protégé. By asserting knowledge about high-level concepts such as types of
dwellings, urban blocks and residential districts as well as individuals that link directly
to topographic features stored in the database, the reasoner successfully infers instances
of the defined classes. Despite current technological limitations, ontologies are a
promising way forward in how we handle and integrate geographic data,
especially with respect to how humans conceptualise geographic space.
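The middle-out derivation described in the thesis, inferring a coarser, more meaningful description from finer-grained topographic features, can be sketched as a simple classification rule. The feature types, the threshold and the class names below are illustrative assumptions; the thesis itself expresses such definitions in an ontology and lets a Description Logic reasoner infer the instances:

```python
def classify_block(features, threshold=0.5):
    """Toy middle-out inference: derive a coarser description of an
    urban block from its fine-grained topographic features. The block
    is classed as residential when the share of dwelling features
    exceeds the threshold (names and threshold are illustrative)."""
    if not features:
        return "UnclassifiedBlock"
    dwellings = sum(1 for f in features if f["type"] == "dwelling")
    share = dwellings / len(features)
    return "ResidentialBlock" if share > threshold else "MixedUseBlock"

block = [{"type": "dwelling"}, {"type": "dwelling"}, {"type": "shop"}]
label = classify_block(block)   # 2/3 dwellings -> "ResidentialBlock"
```

The ontological approach makes the same kind of rule declarative (a defined class with a necessary-and-sufficient condition) so that a standard reasoner, rather than hand-written code, performs the inference.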
Microtheories for SDI - Accounting for diversity of local conceptualisations at a global level
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
The categorization and conceptualization of geographic features is fundamental to cartography,
geographic information retrieval, routing applications, spatial decision support
and data sharing in general. However, there is no standard conceptualization of
the world. Humans conceptualize features based on numerous factors including cultural
background, knowledge, motivation and particularly space and time. Thus, geographic
features are prone to multiple, context-dependent conceptualizations reflecting local
conditions. This creates semantic heterogeneity and undermines interoperability. Standardization
of a shared definition is often employed to overcome semantic heterogeneity.
However, this approach loses important local diversity in feature conceptualizations and
may result in feature definitions which are too broad or too specific. This work proposes
the use of microtheories in Spatial Data Infrastructures, such as INSPIRE, to account
for diversity of local conceptualizations while maintaining interoperability at a global
level. It introduces a novel method of structuring microtheories based on space and
time, represented by administrative boundaries, to reflect variations in feature conceptualization.
A bottom-up approach, based on non-standard inference, is used to create
an appropriate global-level feature definition from the local definitions. Conceptualizations
of rivers, forests and estuaries throughout Europe are used to demonstrate how
the approach can improve the INSPIRE data model and ease its adoption by European
member states
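The bottom-up construction of a global-level definition from local ones can be sketched as keeping only what all local conceptualisations agree on, a rough approximation of a least common subsumer over the local definitions. The property names and country codes below are invented for illustration; the work itself uses non-standard Description Logic inference rather than plain set intersection:

```python
def global_definition(local_defs):
    """Bottom-up sketch: derive a global-level feature definition from
    local (microtheory-level) definitions by intersecting their property
    sets, so the global definition commits only to properties every
    local conceptualisation shares."""
    defs = iter(local_defs.values())
    common = set(next(defs))
    for d in defs:
        common &= set(d)
    return common

# Hypothetical local conceptualisations of 'forest', keyed by region.
forest_defs = {
    "DE": {"has_trees", "min_area", "managed"},
    "FI": {"has_trees", "min_area", "min_canopy_cover"},
    "PT": {"has_trees", "min_area"},
}
global_forest = global_definition(forest_defs)   # {"has_trees", "min_area"}
```

Local definitions stay intact in their microtheories, while queries at the global level use only the shared core, which is how diversity and interoperability can coexist.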
Techniques for organizational memory information systems
The KnowMore project aims at providing active support to humans working on knowledge-intensive tasks. To this end the knowledge available in the modeled business processes or their incarnations in specific workflows shall be used to improve information handling. We present a representation formalism for knowledge-intensive tasks and the specification of its object-oriented realization. An operational semantics is sketched by specifying the basic functionality of the Knowledge Agent which works on the knowledge intensive task representation.
The Knowledge Agent uses a meta-level description of all information sources available in the Organizational Memory. We discuss the main dimensions that such a description scheme must be designed along, namely information content, structure, and context. On top of relational database management systems, we basically realize deductive object-oriented modeling with a comfortable annotation facility. The concrete knowledge descriptions are obtained by configuring the generic formalism with ontologies which describe the required modeling dimensions.
To support the access to documents, data, and formal knowledge in an Organizational Memory, an integrated domain ontology and thesaurus is proposed which can be constructed semi-automatically by combining document-analysis and knowledge engineering methods. Thereby the costs for up-front knowledge engineering and the need to consult domain experts can be considerably reduced. We present an automatic thesaurus generation tool and show how it can be applied to build and enhance an integrated ontology/thesaurus. A first evaluation shows that the proposed method does indeed facilitate knowledge acquisition and maintenance of an organizational memory.
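The document-analysis half of such semi-automatic thesaurus construction typically starts from term co-occurrence statistics. The sketch below shows only that statistical core, proposing candidate related-term pairs for an engineer to review; the function, threshold and toy documents are assumptions for illustration, not the paper's actual tool:

```python
from collections import Counter
from itertools import combinations

def related_terms(documents, min_count=2):
    """Propose candidate related-term pairs from term co-occurrence
    within documents. A real thesaurus generator adds linguistic
    filtering and expert review on top of this statistical step."""
    pairs = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))
        for a, b in combinations(terms, 2):
            pairs[(a, b)] += 1
    return [pair for pair, count in pairs.items() if count >= min_count]

docs = ["ontology thesaurus integration",
        "thesaurus ontology maintenance",
        "knowledge acquisition"]
candidates = related_terms(docs)   # [("ontology", "thesaurus")]
```

Only pairs that recur across documents survive the threshold, which keeps the candidate list small enough for a domain expert to vet.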
Conceptual graph-based knowledge representation for supporting reasoning in African traditional medicine
Although African patients use both conventional (modern) and traditional healthcare simultaneously, it has been proven that 80% of people rely on African traditional medicine (ATM). ATM includes medical activities stemming from practices, customs and traditions which were integral to the distinctive African cultures. It is based mainly on the oral transfer of knowledge, with the risk of losing critical knowledge. Moreover, practices differ according to the regions and the availability of medicinal plants. Therefore, it is necessary to compile tacit, disseminated and complex knowledge from various Tradi-Practitioners (TP) in order to determine interesting patterns for treating a given disease. Knowledge engineering methods for traditional medicine are useful to suitably model complex information needs, formalize the knowledge of domain experts and highlight the effective practices for their integration into conventional medicine. The work described in this paper presents an approach which addresses two issues. First, it aims at proposing a formal representation model of ATM knowledge and practices to facilitate their sharing and reuse. Then, it aims at providing a visual reasoning mechanism for selecting the best available procedures and medicinal plants to treat diseases. The approach is based on the use of the Delphi method for capturing knowledge from various experts, which necessitates reaching a consensus. Conceptual graph formalism is used to model ATM knowledge with visual reasoning capabilities and processes. The nested conceptual graphs are used to visually express the semantic meaning of Computational Tree Logic (CTL) constructs that are useful for formal specification of temporal properties of ATM domain knowledge. Our approach presents the advantage of mitigating knowledge loss with conceptual development assistance to improve the quality of ATM care (medical diagnosis and therapeutics), but also patient safety (drug monitoring).
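A conceptual graph can be viewed, at its simplest, as typed concept nodes linked by relation nodes, and its core reasoning operation is projection: checking whether a query graph maps into a knowledge graph. The triple encoding, the `projects_into` check and the quinine example below are a heavily simplified sketch of that idea (a full implementation would also traverse the concept-type hierarchy and handle nested graphs), not the paper's CTL-annotated formalism:

```python
def projects_into(query, graph):
    """Simplified conceptual-graph projection: every relation triple of
    the query must appear in the target graph. Real CG projection also
    allows specialisation along the concept-type hierarchy."""
    return set(query) <= set(graph)

# Hypothetical ATM knowledge as (concept, relation, concept) triples.
treatment = [("Plant:Quinine", "treats", "Disease:Malaria"),
             ("Plant:Quinine", "preparedAs", "Form:Decoction")]

query = [("Plant:Quinine", "treats", "Disease:Malaria")]
match = projects_into(query, treatment)   # True: the pattern is entailed
```

Because both knowledge and queries are graphs, the same structure supports the visual presentation to Tradi-Practitioners and the formal reasoning over it.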