15 research outputs found

    Integration ontology for distributed database

    In this work we study the problem of designing an "Integration Model for Distributed Database Systems". In particular, we design the canonical model through the ontological handling of information. The ontology is designed so that a database can be described as a set of representative terms for its different components. In this ontology, the definitions use classes, relations, and functions, among other constructs, to describe database components, operations, and restrictions, as well as the integration process. The component databases can be Relational, Fuzzy, Intelligent, or Multimedia.
    1st International Workshop on Advanced Software Engineering: Expanding the Frontiers of Software Technology - Session 2: Software Modeling. Red de Universidades con Carreras en Informática (RedUNCI)
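As a rough illustration of the idea of describing a database as a set of representative ontology terms (classes, relations, functions), the sketch below models such terms in plain Python. All names are invented for illustration and do not come from the paper.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OntologyTerm:
    """One representative term describing a database component."""
    name: str
    kind: str  # "class", "relation", or "function"

@dataclass
class DatabaseOntology:
    """A database described as a set of representative ontology terms."""
    db_type: str                      # e.g. "Relational", "Fuzzy"
    terms: set = field(default_factory=set)

    def describe(self, name, kind):
        self.terms.add(OntologyTerm(name, kind))

# Hypothetical description of a relational component database:
rel = DatabaseOntology("Relational")
rel.describe("Table", "class")
rel.describe("foreign_key", "relation")
rel.describe("join", "function")
```

A canonical model along these lines would then be built by unifying the term sets of the different component databases.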

    Security Ontology for Adaptive Mapping of Security Standards

    Adopting security standards can improve the security level of an organization and provide it with additional benefits and possibilities. However, when more than one security standard is employed, the standards in use have to be mapped to one another in order to prevent redundant activities, suboptimal resource management, and unnecessary outlays. Employing a security ontology to map different standards can reduce the mapping complexity; however, the choice of security ontology is highly important, and there are no analyses of the suitability of security ontologies for adaptive standards mapping. In this paper we analyze existing security ontologies by comparing their general properties, their OntoMetric factors, and their ability to cover different security standards. As none of the analysed security ontologies was able to cover more than one third of the security standards, we propose a new security ontology, which increases coverage of security standards compared to the existing ontologies and has better branching and depth properties for ontology-visualization purposes. During this research we mapped 4 security standards (ISO 27001, PCI DSS, ISSA 5173 and NISTIR 7621) to the new security ontology; this ontology and the mapping data can therefore be used for adaptive mapping of any set of these security standards, optimizing the use of multiple security standards in an organization
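The core idea of ontology-based standards mapping can be sketched as follows: controls from several standards are mapped onto shared ontology concepts, so overlapping requirements surface as concepts covered by more than one standard. The control identifiers and concept names below are invented for illustration and are not the paper's actual mapping.

```python
from collections import defaultdict

# Hypothetical mapping of (standard, control) pairs to shared ontology concepts:
control_to_concept = {
    ("ISO 27001", "A.9.2"):   "AccessControl",
    ("PCI DSS", "Req 8"):     "AccessControl",
    ("ISO 27001", "A.12.3"):  "Backup",
    ("NISTIR 7621", "4.1"):   "Backup",
    ("PCI DSS", "Req 3"):     "DataProtection",
}

# Invert the mapping: which standards cover each concept?
coverage = defaultdict(set)
for (standard, control), concept in control_to_concept.items():
    coverage[concept].add(standard)

# Concepts addressed by more than one standard flag potentially
# redundant activities that a harmonized implementation can merge:
overlaps = {c: sorted(s) for c, s in coverage.items() if len(s) > 1}
```

A single implementation of each multiply-covered concept would then satisfy all the standards mapped to it.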

    Ontology Mapping: The State of the Art

    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies made publicly available and accessible on the Web steadily increases, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web; multiple ontologies need to be accessed from several applications. Mapping could provide a common layer through which several ontologies could be accessed and hence exchange information in semantically sound ways. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights into the pragmatics of ontology mapping and elaborate on a theoretical approach to defining ontology mapping
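As a minimal, non-authoritative illustration of what an ontology mapping produces, the sketch below aligns the concepts of two toy ontologies by simple lexical similarity. Real mapping systems use far richer structural and semantic evidence; the ontologies, threshold, and names here are invented.

```python
from difflib import SequenceMatcher

# Two toy ontologies, each just a list of concept names:
onto_a = ["Author", "Publication", "Conference"]
onto_b = ["Writer", "Paper", "ConferenceEvent"]

def similarity(x, y):
    """Crude lexical similarity in [0, 1] between two concept names."""
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

# Greedily pair each concept in onto_a with its most similar concept in
# onto_b, keeping only pairs above an arbitrary confidence threshold:
mapping = {}
for a in onto_a:
    best = max(onto_b, key=lambda b: similarity(a, b))
    if similarity(a, best) > 0.6:
        mapping[a] = best
```

The resulting `mapping` is the "common layer" the abstract refers to: an application written against `onto_a` can be redirected to the corresponding `onto_b` concepts. Purely lexical matching misses `Author`/`Writer`, which is exactly why the surveyed works add structural and semantic techniques.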

    Experiencias prácticas para el desarrollo de los sistemas educativos en la web semántica

    Semantic Web technologies have been applied in educational settings for different purposes in recent years, with the type of application being defined mainly by the way in which knowledge is represented and exploited. The basic technology for knowledge representation in Semantic Web settings is the ontology, which represents a common, shareable and reusable view of a particular application domain. Ontologies can support different activities in educational settings, such as organizing course contents, classifying learning objects or assessing learning levels; consequently, they can become a very useful tool from a pedagogical perspective. This paper focuses on two different experiences where Semantic Web technologies are used in educational settings, the difference between them lying in how knowledge is obtained and represented. On the one hand, the OeLE platform uses ontologies as a support for assessment processes; such ontologies have to be designed and implemented in semantic languages that OeLE can use. On the other hand, the ENSEMBLE project pursues the development of Semantic Web applications by creating specific knowledge representations drawn from user needs. This paper therefore offers an in-depth analysis of the role played by ontologies, showing how they can be used in different ways, drawing a comparison between model patterns, and examining the ways in which they can complement each other, as well as their practical implications

    Tecnologías semánticas para la evaluación en red: análisis de una experiencia con la herramienta OeLE

    The key role of Information and Communication Technology (ICT) in universities is beyond question. The fast development of ICT has brought with it the creation of new teaching-learning environments in higher education. This paper presents the results of a study that used software to conduct and assess short-answer online tests; the software also provided students with feedback on their performance. The study was carried out with a group of undergraduate Education students at the University of Murcia (Spain) enrolled in an online subject. The online assessment procedures introduced in this study went beyond the multiple-choice tests traditionally used in online assessment. The results not only showed the scope of online assessment, but also defined guidelines for evaluation in online courses and for configuring future assessment scenarios in virtual environments with these tools. This research was carried out within the "Semantic Environment for Personalized Learning" project (08756/PI/08) funded by the Fundación Séneca (Comunidad Autónoma de la Región de Murcia, Spain)

    MODELO INTELIGENTE PARA BASES DE DATOS DISTRIBUIDAS

    ABSTRACT
    In this work we address the problem of designing an "Intelligent Model for Distributed Database Systems". In particular, we set out to design the canonical model through the ontological handling of information. To do so, ontologies are designed that allow a database to be described as a set of representative terms for its different components. In these ontologies, the definitions associate classes, relations, and functions, among other things, of entities in the universe of discourse of databases, in order to describe the meaning of the databases, their components, their restrictions, and so on. The reason for using ontologies is that they define concepts and relations within a taxonomic frame whose conceptualization is represented in a formal, legible and usable way. Previous work [14] proposed a reference model and an architecture for database integration that raised the need to define an intelligent canonical model. Continuing that work, this article describes the ontological taxonomies that make up the reference model for database integration, and designs the Canonical Model using this ontological notion. In this way, the integration process between the different types of databases is defined; the component databases can be Relational, Object-Oriented, Fuzzy, Intelligent or Multimedia. Thus, the ontological schema describes the concepts, operations and restrictions of both the component databases and their integration process. This work also presents the axioms for each of the ontological schemas using first-order predicate logic.
    KEYWORDS: Ontological Schema; Canonical Data Model; Intelligent Distributed Databases; Database Integration

    Informacijos saugos reikalavimų harmonizavimo, analizės ir įvertinimo automatizavimas

    The growing use of Information Technology (IT) in the daily operations of enterprises requires an ever-increasing level of protection for an organization's assets and information against unauthorised access, data leakage and any other type of information-security breach. It therefore becomes vital to ensure the necessary level of protection. One of the best ways to achieve this goal is to implement the controls defined in information-security documents. A problem faced by many organizations is that they are often required to comply with multiple information-security documents and their requirements. Currently, the protection of an organization's assets and information rests on the knowledge, skills and experience of its information-security specialists. The lack of automated tools for harmonizing, analysing and visualizing multiple information-security documents and their requirements leads to situations where information security is implemented ineffectively, causing duplicated controls or increased implementation costs. An automated approach to the analysis, mapping and visualization of information-security documents would contribute to solving this issue. The dissertation consists of an introduction, three main chapters and general conclusions. The first chapter introduces existing information-security regulatory documents, current harmonization techniques, methods for evaluating the cost of implementing information security, and ways to analyse information-security requirements by applying graph-theory optimisation algorithms (vertex cover and graph isomorphism). The second chapter proposes ways to evaluate information-security implementation and its cost through a controls-based approach; the effectiveness of this method could be improved by automating the initial gathering of data from business-process diagrams.
    In the third chapter, adaptive mapping based on a security ontology is introduced for the harmonization of different security documents; this approach also allows visualization techniques to be applied to present the harmonization results. Graph-optimization algorithms (the vertex cover and graph isomorphism algorithms) were proposed for Minimum Security Baseline identification and for verifying the achieved results against controls implemented in small and medium-sized enterprises. It was concluded that the proposed methods provide sufficient data for the adjustment and verification of the security controls required by multiple information-security documents
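The abstract names vertex cover as one of the graph-optimisation algorithms applied. As a hedged sketch of the kind of algorithm involved (not the dissertation's actual procedure), below is the classic greedy 2-approximation for vertex cover on an edge list; the edges, which could encode "control A overlaps control B" relations, are invented.

```python
def greedy_vertex_cover(edges):
    """Classic 2-approximation: repeatedly take both endpoints of an
    edge that is not yet covered, until every edge is touched."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Hypothetical overlap graph between controls of different standards:
edges = [("ISO-A.9", "PCI-8"), ("ISO-A.12", "NIST-4"), ("PCI-8", "NIST-4")]
cover = greedy_vertex_cover(edges)
```

In a baseline-identification setting, a small cover of the overlap graph suggests a compact set of controls that touches every overlapping requirement.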

    An Automated Methodology For A Comprehensive Definition Of The Supply Chain Using Generic Ontological Components

    Today, worldwide business communities are in the era of Supply Chains. A Supply Chain is a collection of several independent enterprises that partner to achieve specific goals. These enterprises may plan, source, produce, deliver, or transport materials to satisfy an immediate or projected market demand, and may provide after-sales support, warranty services, and returns. Each enterprise in the Supply Chain has roles and elements: the roles include supplier, customer, or carrier, and the elements include functional units, processes, information, information resources, materials, objects, decisions, practices, and performance measures. Each enterprise individually manages these elements in addition to their flows, their interdependencies, and their complex interactions. Since a Supply Chain brings several enterprises together to complement each other in achieving a unified goal, the elements in each enterprise have to complement each other and be managed together as one unit to achieve that goal efficiently. Moreover, since a large number of elements must be defined and managed in a single enterprise, the number of elements to be defined and managed across the whole Supply Chain is massive. The supply chain community uses the Supply Chain Operations Reference model (SCOR model) to define supply chains; however, the SCOR methodology is limited, as it defines the supply chain only in terms of processes, performance metrics, and best practices. In fact, the supply chain community, SCOR users in particular, exerts massive effort to render an adequate supply chain definition that includes the elements not covered by the SCOR model.
    Also, the SCOR model is delivered to the user as a document, which puts a tremendous burden on the user and makes it difficult to share the definition within the enterprise or across the supply chain. This research is directed towards overcoming the limitations and shortcomings of the current supply-chain definition methodology. It proposes a methodology and a tool that enable an automated and comprehensive definition of the Supply Chain at any level of detail. The proposed methodology captures all the constituent parts of the Supply Chain at four different levels: the supply chain level, the enterprise level, the elements level, and the interaction level. At the supply chain level, the various enterprises that constitute the supply chain are defined. At the enterprise level, the enterprise elements are identified. At the enterprises' elements level, each element in the enterprise is explicitly defined. At the interaction level, the flows, interdependencies, and interactions that exist between and within the other three levels are identified and defined. The methodology utilized several modeling techniques to generate generic explicit views and models that represent the four levels. The developed views and models were transformed into a series of questions and answers, where the questions correspond to what a view provides and the answers are the knowledge captured and generated from the view. The questions and answers were integrated to render a generic multi-view of the supply chain. The methodology and the multi-view were implemented in an ontology-based tool. The ontology includes sets of generic supply chain ontological components that represent the supply chain elements, together with a set of automated procedures for defining a specific supply chain. A specific supply chain can be defined by re-using the generic components and customizing them to the supply chain's specifics.
    The ontology-based tool was developed to function in the supply chain's dynamic, information-intensive, geographically dispersed, and heterogeneous environment. To that end, the tool was developed to be generic, sharable, automated, customizable, extensible, and scalable
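The four definition levels described above (supply chain, enterprise, elements, interactions) can be pictured as a nested structure that a user customises from generic components. The sketch below is purely illustrative; the enterprise names, roles, and elements are invented, not the methodology's actual ontological components.

```python
# Hypothetical instance of the four-level definition:
supply_chain = {
    # Supply chain level: the enterprises that constitute the chain.
    "enterprises": {
        "AcmeSupplier": {
            "role": "supplier",                       # enterprise level
            "elements": {                             # elements level
                "processes": ["source"],
                "materials": ["steel"],
            },
        },
        "AcmeMaker": {
            "role": "producer",
            "elements": {"processes": ["make", "deliver"]},
        },
    },
    # Interaction level: flows between the levels above.
    "interactions": [("AcmeSupplier", "AcmeMaker", "material flow")],
}

def roles(chain):
    """One 'question' the multi-view answers: who plays which role?"""
    return {name: e["role"] for name, e in chain["enterprises"].items()}
```

Defining a specific supply chain then amounts to filling such generic slots, rather than writing a definition document from scratch.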

    Ontologies for automatic question generation

    Assessment is an important tool for formal learning, especially in higher education. At present, many universities use online assessment systems where questions are entered manually into a question-bank system; this kind of system requires the instructor's time and effort to construct questions manually. The main aim of this thesis is therefore to contribute to the investigation of new question generation strategies for short/long answer questions, in order to allow the development of automatic factual question generation from an ontology for educational assessment purposes. This research is guided by four research questions: (1) How well can an ontology be used for generating factual assessment questions? (2) How can questions be generated from a course ontology? (3) Are the ontological question generation strategies able to generate acceptable assessment questions? and (4) Is topic-based indexing able to improve the feasibility of AQGen? We first conduct ontology validation to evaluate the appropriateness of the concept representation using a competency-question approach. We used revision questions from the textbook to match keywords (in the revision questions) against concepts (in the ontology). The results show that only half of the ontology concepts matched the keywords. We investigated the unmatched concepts further, found some incorrect concept naming, and subsequently suggest a guideline for appropriate concept naming. At the same time, we introduce validation of the ontology using revision questions as competency questions to check for ontology completeness. Furthermore, we also propose 17 short/long answer question templates for 3 question categories, namely definition, concept completion and comparison. In the subsequent part of the thesis, we develop the AQGen tool and evaluate the generated questions. Two Computer Science subjects, namely OS and CNS, were chosen to evaluate the questions AQGen generated.
    We conducted a questionnaire survey of 17 domain experts to identify their agreement on the acceptability of the generated questions. The experts' agreement on the acceptability measure is favourable, and three of the four QG strategies proposed can generate acceptable questions. AQGen generated thousands of questions across the 3 question categories, so it was updated with a question-selection step to derive a feasible question set from the enormous number of questions generated. We suggest topic-based indexing, which asserts knowledge about topic chapters into the ontology representation, for question selection; topic indexing shows feasible results for filtering questions by topic. Finally, our results contribute to an understanding of how ontology elements can be represented for question generation and how questions can be generated automatically from an ontology for educational assessment
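Template-based question generation from ontology triples, in the spirit of the definition and concept-completion categories the thesis describes, can be sketched as below. The triples and templates are invented for illustration and are not the thesis's actual 17 templates.

```python
# Hypothetical course-ontology triples (subject, predicate, object):
triples = [
    ("Deadlock", "is_a", "concurrency hazard"),
    ("Mutex", "is_a", "synchronisation primitive"),
]

def definition_questions(triples):
    """Definition category: ask what a concept is."""
    return [f"What is a {subj}?" for subj, pred, _ in triples
            if pred == "is_a"]

def concept_completion_questions(triples):
    """Concept-completion category: blank out the superclass."""
    return [f"A {subj} is a kind of ____." for subj, pred, _ in triples
            if pred == "is_a"]

questions = definition_questions(triples) + concept_completion_questions(triples)
```

Topic-based indexing would then attach chapter labels to each triple so that a selection step can filter the generated questions by topic.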