Exploring the Local Grammar of Evaluation: The Case of Adjectival Patterns in American and Italian Judicial Discourse
Based on a 2-million-word bilingual comparable corpus of American and Italian judgments, this paper tests the applicability of a local grammar to the study of evaluative phraseology in judicial discourse in English and Italian. In particular, the study compares the use of two patterns: the v-link + ADJ + that pattern / copula + ADJ + che and the v-link + ADJ + to-infinitive pattern / copula + ADJ + infinitive verb in the disciplinary genre of criminal judgments delivered by the US Supreme Court and the Italian Corte Suprema di Cassazione. It is argued that these two patterns are a viable and efficient diagnostic tool for retrieving instances of evaluative language, and that they provide an ideal starting point and a relevant unit of analysis for a cross-language study of evaluation in domain-restricted specialised discourse. Further, the findings shed light on important interactions among the major interactants involved in judicial discourse.
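As a rough illustration (not part of the original study, which works over a tagged 2-million-word corpus), the two diagnostic patterns can be approximated with simple surface matching over toy sentences; the link-verb list and examples here are hypothetical:

```python
import re

# Hypothetical link-verb list -- a real study would use POS-tagged data.
LINK_VERBS = r"(?:is|was|seems|appears|becomes)"
pat_that = re.compile(rf"\b{LINK_VERBS}\s+(\w+)\s+that\b", re.I)     # v-link + ADJ + that
pat_toinf = re.compile(rf"\b{LINK_VERBS}\s+(\w+)\s+to\s+\w+", re.I)  # v-link + ADJ + to-infinitive

def find_evaluative(sentences):
    """Return (pattern, adjective) hits for the two diagnostic patterns."""
    hits = []
    for s in sentences:
        if (m := pat_that.search(s)):
            hits.append(("ADJ+that", m.group(1).lower()))
        if (m := pat_toinf.search(s)):
            hits.append(("ADJ+to-inf", m.group(1).lower()))
    return hits

sample = [
    "It is clear that the statute applies here.",
    "It was reasonable to conclude otherwise.",
    "The judges heard arguments at length.",
]
print(find_evaluative(sample))  # [('ADJ+that', 'clear'), ('ADJ+to-inf', 'reasonable')]
```

The slot captured by the group is the candidate evaluative adjective; in the study this retrieval step is what makes the patterns an efficient diagnostic for evaluation.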
Formal nursing terminology systems: a means to an end
In response to the need to support diverse and complex information requirements, nursing has developed a number of different terminology systems. The two main kinds that have emerged are enumerative systems and combinatorial systems, although some systems have characteristics of both approaches. Differences in the structure and content of terminology systems, while useful at a local level, prevent effective wider communication, information sharing, integration of record systems, and comparison of the nursing elements of healthcare information at a more global level. Formal nursing terminology systems offer an alternative approach. This paper describes a number of recent initiatives and explains how these emerging approaches may help to augment existing nursing terminology systems and overcome their limitations through mediation. The development of formal nursing terminology systems is not an end in itself, and a great deal of work remains to be done before success can be claimed. This paper presents an overview of the key outstanding issues and provides recommendations for a way forward.
Classification Methodology for Architectures in Information Systems: A Statistical Converging Technique
Architectures are critical to the Information System (IS) domain because they represent fundamental structures and interactions of systems. Since analysing architecture similarities is challenging and time-consuming even in one domain, IS architecture classifications are paramount to understanding architectural complexity. However, classification approaches used in existing research commonly rely on manual interventions, and thus the reliability of architectural classification is hampered. We propose a novel methodology based on component modelling and the application of a statistical converging technique, which ensures reliable IS architectural classification and minimises subjective interventions. We demonstrate the methodology by classifying data warehouse architectures.
CBR and MBR techniques: review for an application in the emergencies domain
The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and the integration strategies of Case Based Reasoning and Model Based Reasoning that will be used in the design and development of the RIMSAT system.
RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to:
a. Provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions.
b. Enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location.
In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations.
This document is part of a deliverable for the RIMSAT project and, although it has been produced in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
Observing LOD: Its Knowledge Domains and the Varying Behavior of Ontologies Across Them
Linked Open Data (LOD) is the largest collaborative, distributed, and publicly accessible Knowledge Graph (KG) uniformly encoded in the Resource Description Framework (RDF) and formally represented according to the semantics of the Web Ontology Language (OWL). LOD provides researchers with a unique opportunity to study knowledge engineering as an empirical science: to observe existing modelling practices and possibly to understand how to improve knowledge engineering methodologies and knowledge representation formalisms. Following this perspective, several studies have analysed LOD to identify (mis-)use of OWL constructs or other modelling phenomena, e.g. class or property usage, their alignment, or the average depth of taxonomies. A question that remains open is whether there is a relation between observed modelling practices and knowledge domains (natural science, linguistics, etc.): do certain practices or phenomena change as the knowledge domain varies? Answering this question requires an assessment of the domains covered by LOD as well as a classification of its datasets. Existing approaches to classifying LOD datasets provide partial and unaligned views, posing additional challenges. In this paper, we introduce a classification of knowledge domains and a method, based on it, for classifying LOD datasets and ontologies. We classify a large portion of LOD and investigate whether a set of observed phenomena have a domain-specific character.
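A minimal sketch of what domain classification of a dataset could look like, assuming (hypothetically) that each knowledge domain is seeded with a small set of characteristic terms and a dataset is assigned to the domain its ontology vocabulary overlaps most; the seed lists and vocabulary below are invented for illustration:

```python
# Hypothetical domain keyword seeds -- a toy stand-in for a full
# classification of knowledge domains.
DOMAIN_SEEDS = {
    "life_sciences": {"gene", "protein", "cell", "species"},
    "linguistics":   {"lexeme", "morpheme", "synset", "corpus"},
    "geography":     {"city", "river", "latitude", "region"},
}

def classify_dataset(vocabulary):
    """Assign a dataset to the domain whose seed terms overlap most with
    the dataset's ontology vocabulary (ties broken alphabetically)."""
    scores = {d: len(seeds & vocabulary) for d, seeds in DOMAIN_SEEDS.items()}
    best = max(sorted(scores), key=lambda d: scores[d])
    return best, scores[best]

vocab = {"gene", "protein", "organism", "cell"}
print(classify_dataset(vocab))  # ('life_sciences', 3)
```

With per-dataset domain labels of this kind in hand, modelling phenomena (taxonomy depth, construct usage) can then be aggregated per domain and compared.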
Multi modal multi-semantic image retrieval
The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multi-semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images.
Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon the use of unstructured visual words and a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content, compared to a vector space model, by exploiting local conceptual structures and their relationships. The key contributions of this framework in using local features for image representation are: first, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes the term weights and spatial locations of keypoints into account; consequently, the semantic information is preserved. Second, a technique to detect the domain-specific 'non-informative visual words' which are ineffective at representing the content of visual data and degrade its categorisation ability. Third, a method to combine an ontology model with a visual word model to resolve synonym (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover semantically meaningful visual content descriptions and recognise specific events, e.g. sports events, depicted in images efficiently.
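The quantisation step at the heart of the BVW model can be sketched as follows; this is a generic bag-of-visual-words histogram, not the thesis's SLAC algorithm, and the descriptors and codebook are random stand-ins for real SIFT output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 128-D SIFT keypoint descriptors of one image.
descriptors = rng.normal(size=(200, 128))

# A toy codebook of 16 visual words; in the thesis the codebook comes
# from the SLAC clustering step, which is not reproduced here.
codebook = rng.normal(size=(16, 128))

def bovw_histogram(desc, words):
    """Quantise each descriptor to its nearest visual word and return
    the normalised word-frequency histogram for the image."""
    # Pairwise squared distances, shape (n_descriptors, n_words).
    d2 = ((desc[:, None, :] - words[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(words)).astype(float)
    return hist / hist.sum()

h = bovw_histogram(descriptors, codebook)
print(h.shape, round(float(h.sum()), 6))  # (16,) 1.0
```

Images represented by such histograms can then be compared or classified in the usual vector-space fashion; the ontology layer described above sits on top of this representation to disambiguate the words.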
Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhancing visual content interpretation is to use any textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation for the image, e.g. using XML, RDF, OWL or MPEG-7. Although text and image are distinct types of information representation and modality, there are some strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited to extract concepts from image captions. Next, an ontology-based knowledge model is deployed to resolve natural language ambiguities. To deal with the accompanying textual information, two methods to extract knowledge from it have been proposed. First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) in the metadata. The use of the ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisations.
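The LSI step mentioned above can be illustrated with a minimal sketch: build a term-caption matrix, truncate its SVD, and compare captions in the latent space. The captions are invented, and this plain LSI omits the thesis's ontology coupling:

```python
import numpy as np

# Toy captions: two sports-related, two ontology-related.
captions = [
    "player scores goal in match",
    "goal keeper saves match",
    "ontology concept hierarchy",
    "concept mapping with ontology",
]
vocab = sorted({w for c in captions for w in c.split()})
# Term-document count matrix (rows: terms, cols: captions).
A = np.array([[c.split().count(w) for c in captions] for w in vocab], float)

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each caption as a k-D latent vector

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two sports captions share latent structure even though they are
# compared here without any word overlap requirement.
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))
```

The point of the truncation is that captions with related vocabulary end up near each other in the latent space, which is what lets the combined framework tolerate incomplete or varying metadata.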
Domain oriented object reuse based on generic software architectures.
In this thesis, a new systematic approach is introduced for developing software systems from domain-oriented components. The approach, called Domain Oriented Object Reuse (DOOR), is based on domain analysis and Generic Software Architectures. The term 'Generic Software Architectures' denotes a new technique for building domain reference architectures using architecture schemas. The architecture schemas are used to model component behaviour and dependency; component dependencies describe components' behaviour in terms of their inter-relationships within the same domain scope. DOOR uses the architecture schemas as a mechanism for specifying design conceptions within the modelled domain. Such conceptions provide design decisions and solutions to domain-specific problems which may be applied in the development of new systems. Previous research in the area of domain analysis and component-oriented reuse has established the need for a systematic approach to component-oriented development which emphasises the presentation side of the solution in the technology. DOOR addresses the presentation issue by organising the domain knowledge into levels of abstraction known to DOOR as sub-domains. These levels are organised in a hierarchical taxonomy tree which contains, in addition to sub-domains, a collection of reusable assets associated with each level. The tree determines the scope of reuse for every domain asset and the boundaries of its application. Thus, DOOR also answers the questions of reuse scope and domain boundaries which have been raised by the reuse community. DOOR's reuse process combines development for reuse and development with reuse. With this process, which is supported by a set of integrated tools, a number of guidelines have been introduced to assist in modelling the domain assets and assessing their reusability.
The tools are also used for automatic assessment of the domain architecture and the design conceptions of its schemas. Furthermore, when a new system is synthesised, components are retrieved, with the assistance of the tools, according to the scope of reuse within which the system is developed. The retrieval procedure uses the component dependencies for tracing and retrieving the relevant components for the required abstraction.
Maintaining Structured Experiences for Robots via Human Demonstrations: An Architecture To Convey Long-Term Robot's Beliefs
This PhD thesis presents an architecture for structuring experiences, learned through demonstrations, in a robot's memory. To test our architecture, we consider a specific application where a robot learns how objects are spatially arranged in a tabletop scenario.
We use this application as a means to present a few software development guidelines for building architectures for similar scenarios, where a robot is able to interact with a user through qualitative shared knowledge stored in its memory. In particular, the thesis proposes a novel technique for deploying ontologies in a robotic architecture based on semantic interfaces. To better support those interfaces, it also presents general-purpose tools especially designed for an iterative development process, which is suitable for Human-Robot Interaction scenarios.
We considered ourselves to be at the beginning of the first iteration of the design process, and our objective was to build a flexible architecture through which to evaluate different heuristics during further development iterations.
Our architecture is based on a novel algorithm performing one-shot structured learning based on a logic formalism. We used a fuzzy ontology for dealing with uncertain environments, and we integrated the algorithm into the architecture through a specific semantic interface.
The algorithm is used for building experience graphs encoded in the robot's memory that can be used for recognising and associating situations after a knowledge bootstrapping phase. During this phase, a user is supposed to teach and supervise the beliefs of the robot through multimodal, non-physical interactions. We used the algorithm to implement a cognitive-like memory involving encoding, storing, retrieving, consolidating, and forgetting behaviours, and we showed that our flexible design pattern can be used for building architectures where contextualised memories are managed for different purposes, i.e. they contain representations of the same experience encoded with different semantics.
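The encode/retrieve/consolidate/forget cycle described above can be sketched as a minimal salience-scored store; class and method names here are invented for illustration and the thesis's fuzzy-ontology machinery is not reproduced:

```python
# A minimal sketch of a cognitive-like memory: experiences gain salience
# when retrieved (consolidation) and are dropped when decay pushes their
# score below a threshold (forgetting).
class ExperienceMemory:
    def __init__(self, forget_below=1.0):
        self.items = {}  # experience id -> [graph, salience score]
        self.forget_below = forget_below

    def encode(self, exp_id, graph):
        """Store a new experience graph with an initial salience score."""
        self.items[exp_id] = [graph, 1.0]

    def retrieve(self, exp_id):
        """Retrieving an experience consolidates it (its score grows)."""
        self.items[exp_id][1] += 1.0
        return self.items[exp_id][0]

    def forget(self, decay=0.6):
        """Decay all scores and drop experiences below the threshold."""
        for k in list(self.items):
            self.items[k][1] *= decay
            if self.items[k][1] < self.forget_below:
                del self.items[k]

mem = ExperienceMemory()
mem.encode("table_scene_1", {"cup": "left_of_book"})
mem.encode("table_scene_2", {"cup": "behind_book"})
mem.retrieve("table_scene_1")  # consolidated: score rises to 2.0
mem.forget()                   # scores become 1.2 and 0.6; scene 2 is forgotten
print(sorted(mem.items))       # ['table_scene_1']
```

Different "contextualised memories" in the sense above would correspond to several such stores holding the same experience graphs under different scoring semantics.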
The proposed architecture has the main purpose of generating and maintaining knowledge in memory, but it can be directly interfaced with perceiving and acting components if they provide, or require, symbolic knowledge. To show the type of data considered as inputs and outputs in our tests, this thesis also presents components to evaluate point clouds, engage in dialogues, perform late data fusion and simulate the search for a target position. Nevertheless, our design pattern is not meant to be coupled only with those components, which indeed have considerable room for improvement.