113 research outputs found
PatOMat - Versatile Framework for Pattern-Based Ontology Transformation
The purpose of the PatOMat transformation framework is to bridge between different modeling styles of web ontologies. We provide a formal model of pattern-based ontology transformation, explain its implementation in PatOMat, and demonstrate the framework's flexibility on diverse use cases.
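The core idea of pattern-based transformation, matching a source pattern in the ontology and rewriting it into a target pattern, can be illustrated with a deliberately simplified sketch. PatOMat itself operates on OWL axioms with a richer pattern language; the toy triple store, the `hasRole`/`subClassOf` rewrite, and all entity names below are invented for illustration only.

```python
# Toy sketch of pattern-based transformation (not PatOMat's actual API):
# triples are 3-tuples, and pattern terms starting with "?" are variables.

def match(pattern, triple, bindings):
    """Unify one pattern triple against a data triple; return bindings or None."""
    b = dict(bindings)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if b.get(p, t) != t:
                return None
            b[p] = t
        elif p != t:
            return None
    return b

def transform(triples, source_pattern, target_pattern):
    """Replace every triple matching source_pattern with the instantiated target_pattern."""
    out = set()
    for triple in triples:
        b = match(source_pattern, triple, {})
        if b is None:
            out.add(triple)
        else:
            out.add(tuple(b.get(t, t) for t in target_pattern))
    return out

# Example: convert a "hasRole" modeling style into direct subclassing.
onto = {("Surgeon", "hasRole", "MedicalRole"), ("Hospital", "a", "Organization")}
result = transform(onto, ("?c", "hasRole", "?r"), ("?c", "subClassOf", "?r"))
# result contains ("Surgeon", "subClassOf", "MedicalRole"); other triples are kept.
```

The separation of source pattern, transformation, and target pattern mirrors the structure of the formal model, even though real transformations operate on axioms rather than bare triples.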
Modelo para la evaluación de ontologías. Aplicación en Onto-Satcol
This paper analyzes the conceptual and theoretical framework for the evaluation of ontologies, in order to understand the procedures used in the evaluation of these systems and to establish new guidelines for evaluating the system employed by the ontological program, SATCOL, that specializes in Port and Coastal Engineering. This paper describes the characteristics of the Onto-SATCOL ontology and evaluates it by using several indicators (lexical, information retrieval, and syntactic structure). Through an experiment conducted by six experts aided by the tool, Protex, semantic and structural inconsistencies are identified, as are errors in the ontology's organization of knowledge.
Prototyping a decision support system based on semantic web technologies to aid consumers with food sensitivities in their assessment of product safety
The thesis depicts the information need people with food sensitivities experience in the shopping situation. Diversity within the group is illustrated through five personas. Existing measures aimed at helping affected individuals in their search for safe food – ranging from labeling requirements and practices to various forms of information systems – are exemplified. Shortcomings of existing solutions are discussed from an information science perspective. An alternative approach based on Semantic Web technologies and Linked Data is proposed, underpinning a decision support system. The Boolean interpretation of food safety is rejected in favor of a three-way division that accounts for the need for human assessment of uncertain cases. SPARQL and automatic inference, relying on a dataset holding facts about allergen occurrence in various ingredients, are used to automatically classify products as safe, uncertain, or unsafe for individual users. Proof of concept for the proposed approach is provided through a prototype web application. The automated classification is used to communicate the safety of each product, using familiar traffic light colors. Tailored decision support is provided on demand, emphasizing information that is likely to impact the user's assessment of uncertain products while limiting “information overload”. The development process behind the ontology and web application is discussed in detail, followed by a
discussion about how to establish the required data sets. (Master's thesis in Library and Information Science.)
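The three-way (traffic-light) classification described above can be sketched in plain code. The thesis uses SPARQL queries and OWL inference over a Linked Data set; the stand-in below mirrors only the decision logic, and the ingredient facts, function names, and data are invented for illustration.

```python
# Hedged sketch of the safe / uncertain / unsafe classification.
# Facts per ingredient: definite allergen content vs. possible ("may contain").
KNOWN_FACTS = {
    "whey": {"contains": {"milk"}},
    "lecithin": {"may_contain": {"soy"}},   # source often unspecified on labels
    "salt": {"contains": set()},
}

def classify(ingredients, user_allergens):
    """Return 'unsafe', 'uncertain', or 'safe' for one user profile."""
    verdict = "safe"
    for ing in ingredients:
        facts = KNOWN_FACTS.get(ing)
        if facts is None:
            verdict = "uncertain"       # no data: needs human assessment
            continue
        if facts.get("contains", set()) & user_allergens:
            return "unsafe"             # definite allergen hit: red light
        if facts.get("may_contain", set()) & user_allergens:
            verdict = "uncertain"       # possible hit: yellow light
    return verdict

print(classify(["salt", "whey"], {"milk"}))      # unsafe
print(classify(["salt", "lecithin"], {"soy"}))   # uncertain
print(classify(["salt"], {"milk"}))              # safe
```

The key design point carried over from the thesis is that missing or ambiguous data never silently maps to "safe": it produces the middle category, which is where the tailored decision support is focused.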
Design of a Controlled Language for Critical Infrastructures Protection
We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). The project originates from the need to coordinate and categorize communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications, and plain e-mail. We explore the application of traditional library science tools for the construction of controlled languages in order to achieve our goal. Our starting point is an analogous work done during the sixties in the field of nuclear science, known as the Euratom Thesaurus. (JRC.G.6 – Security Technology Assessment.)
Analysing Java Identifier Names
Identifier names are the principal means of recording and communicating ideas in source code and are a significant source of information for software developers and maintainers, and the tools that support their work. This research aims to increase understanding of identifier name content types - words, abbreviations, etc. - and phrasal structures - noun phrases, verb phrases, etc. - by improving techniques for the analysis of identifier names. The techniques and knowledge acquired can be applied to improve program comprehension tools that support internal code quality, concept location, traceability and model extraction. Previous detailed investigations of identifier names have focused on method names, and the content and structure of Java class and reference (field, parameter, and variable) names are less well understood.
I developed improved algorithms to tokenise names, and trained part-of-speech tagger models on identifier names to support the analysis of class and reference names in a corpus of 60 open source Java projects. I confirm that developers structure the majority of names according to identifier naming conventions, and use phrasal structures reported in the literature. I also show that developers use a wider variety of content types and phrasal structures than previously understood. Unusually structured class names are largely project-specific naming conventions, but could indicate design issues. Analysis of phrasal reference names showed that developers most often use the phrasal structures described in the literature and used to support the extraction of information from names, but also choose unexpected phrasal structures, and complex, multi-phrasal names.
Using Nominal - software I created to evaluate adherence to naming conventions - I found developers tend to follow naming conventions, but that adherence to published conventions varies between projects because developers also establish new conventions for the use of typography, content types and phrasal structure to support their work: particularly to distinguish the roles of Java field names.
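The tokenisation step that underlies this kind of analysis can be sketched briefly. The author's improved algorithms handle many more conventions than this; the regex-based splitter below is a minimal, assumed approach that separates underscore-delimited and camel-case words while keeping acronym runs together.

```python
import re

def tokenise(identifier):
    """Split a Java identifier into words on underscores and camel-case
    boundaries, keeping acronym runs such as 'XML' together."""
    tokens = []
    for part in identifier.split("_"):
        # Acronym run not followed by lowercase | Capitalised word | lowercase run | digits
        tokens += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]+|[a-z]+|\d+", part)
    return tokens

print(tokenise("parseXMLDocument"))   # ['parse', 'XML', 'Document']
print(tokenise("MAX_RETRY_count"))    # ['MAX', 'RETRY', 'count']
```

Tokenised words are what a part-of-speech tagger then labels, so that a name like `parseXMLDocument` can be recognised as a verb phrase and `maxRetryCount` as a noun phrase.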
Ontologies for Legal Relevance and Consumer Complaints. A Case Study in the Air Transport Passenger Domain
Applying relevant legal information to settle complaints and disputes is a common challenge for all legal practitioners and laymen. However, the analysis of the concept of relevance itself has thus far attracted only sporadic attention. This thesis bridges that gap by analysing the components of complaints and by defining relevant legal information, and it uses computational ontologies and design patterns to represent this relevant knowledge in an explicit and structured way. The work takes as its case study a real situation of consumer disputes in the Air Transport Passenger domain.
Two artifacts were built: the Relevant Legal Information in Consumer Disputes Ontology and its specialization, the Air Transport Passenger Incidents Ontology, aimed at modelling relevant legal information; and the Complaint Design Pattern, proposed to conceptualize complaints.
In order to demonstrate the ability of the ontologies to serve as a knowledge base for a computer program providing relevant legal information, a demonstration application was developed.
Developing a model and a language to identify and specify the integrity constraints in spatial datacubes
Data quality in spatial datacubes is important because these data serve as a basis for decision making in large organizations; poor data quality in these cubes can lead to poor decisions. Integrity constraints play a key role in improving the logical consistency of any database, one of the main components of data quality. Several spatial datacube models have been proposed in recent years, but none explicitly includes integrity constraints. As a result, spatial datacube integrity constraints are handled in an unsystematic, ad hoc way, which makes the process of verifying data consistency in spatial datacubes inefficient. This thesis provides a theoretical framework for identifying integrity constraints in spatial datacubes, together with a formal language for specifying them. To this end, we first propose a formal model of spatial datacubes that describes their different components. Based on this model, we then identify and categorize the different types of integrity constraints in spatial datacubes. Furthermore, since spatial datacubes typically contain both spatial and temporal data, we propose a classification of integrity constraints for databases dealing with space and time. Finally, we present a formal language for specifying spatial datacube integrity constraints. This language is based on a controlled natural language hybridized with pictograms. Several examples of spatial datacube integrity constraints are defined using this language.
Spatial datacube designers (analysts) can use the proposed framework to identify integrity constraints and specify them at the design stage of spatial datacubes. Moreover, the proposed formal language for specifying integrity constraints is close to the way end users express their integrity constraints. Using this language, end users can therefore verify and validate the integrity constraints defined by the analyst at the design stage.
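A constraint written in the thesis's controlled language ultimately has to be checked against the data. As a deliberately simplified illustration of one common spatial integrity constraint type, containment of a member's geometry within its parent's geometry, the sketch below uses axis-aligned bounding boxes; real spatial datacubes use full geometries and topological predicates, and all names and data here are invented.

```python
# Toy check of a containment constraint: "each city's extent must lie
# within its region's extent". Boxes are (xmin, ymin, xmax, ymax).

def within(inner, outer):
    """True if the inner box lies entirely inside the outer box."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def check_containment(members, parents, parent_of):
    """Return the members that violate the containment constraint."""
    return [m for m, box in members.items()
            if not within(box, parents[parent_of[m]])]

regions = {"QC": (0, 0, 100, 100)}
cities = {"Montreal": (10, 10, 20, 20), "Ghost": (150, 150, 160, 160)}
parent_of = {"Montreal": "QC", "Ghost": "QC"}
print(check_containment(cities, regions, parent_of))   # ['Ghost']
```

The point of the thesis's formal language is to let analysts and end users state such a constraint declaratively (with pictograms for the spatial predicate) at design time, rather than encoding it ad hoc in verification code like this.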
Trusted Artificial Intelligence in Manufacturing
The successful deployment of AI solutions in manufacturing environments hinges on their security, safety, and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shopfloor, many challenges must be addressed in complex, heterogeneous, dynamic, and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency, and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of these challenges, fifteen European organizations collaborate in the STAR project, a research initiative funded by the European Commission under its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book, co-authored by the STAR consortium members, provides a review of technologies, techniques, and systems for trusted, ethical, and secure AI in manufacturing.
The different chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities.