
    Ontology evolution: a process-centric survey

    Ontology evolution aims to keep an ontology up to date with respect to changes in the domain it models or novel requirements of the information systems it enables. The recent industrial adoption of Semantic Web techniques, which rely on ontologies, has increased the importance of ontology evolution research. Typical approaches to ontology evolution are designed as multiple-stage processes combining techniques from a variety of fields (e.g., natural language processing and reasoning). However, the few existing surveys on this topic lack an in-depth analysis of the various stages of the ontology evolution process. This survey extends the literature by adopting a process-centric view of ontology evolution. Accordingly, we first provide an overall process model synthesized from an overview of the existing models in the literature. We then survey the major approaches to each of the steps in this process and conclude with the future challenges facing techniques that address each particular stage.
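The multi-stage process view described above can be illustrated with a minimal sketch. The stage names, data shapes and validation rule below are hypothetical simplifications for illustration; they are not the survey's actual process model.

```python
# Hypothetical three-stage ontology evolution pipeline:
# change detection -> change validation -> change application.

def detect_changes(ontology, domain_terms):
    # e.g., NLP over domain texts surfaces terms missing from the ontology
    known = set(ontology["classes"])
    return [t for t in domain_terms if t not in known]

def validate_changes(changes):
    # stand-in for a reasoning step that rejects inconsistent changes;
    # here we only keep syntactically valid class names (toy rule)
    return [c for c in changes if c.isidentifier()]

def apply_changes(ontology, changes):
    ontology["classes"].extend(changes)
    return ontology

def evolve(ontology, domain_terms):
    return apply_changes(ontology, validate_changes(detect_changes(ontology, domain_terms)))

onto = {"classes": ["Person", "Organization"]}
evolved = evolve(onto, ["Person", "Project", "3D-Scene"])
# "Project" is new and passes the toy validation; "3D-Scene" is rejected
```

The point of the sketch is the separation of concerns: each stage can be swapped for a different technique (statistical change discovery, DL reasoning, user-in-the-loop approval) without touching the others, which is exactly why a process-centric survey is useful.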

    TREE-D-SEEK: A Framework for Retrieving Three-Dimensional Scenes

    In this dissertation, a strategy and framework for retrieving 3D scenes is proposed. The strategy is to retrieve 3D scenes based on a unified approach for indexing content from disparate information sources and information levels. The TREE-D-SEEK framework implements the proposed strategy for retrieving 3D scenes and is capable of indexing content from a variety of corpora at distinct information levels. A semantic annotation model for indexing 3D scenes in the TREE-D-SEEK framework is also proposed. The semantic annotation model is based on an ontology for rapid prototyping of 3D virtual worlds. With ongoing improvements in computer hardware and 3D technology, the cost associated with the acquisition, production and deployment of 3D scenes is decreasing. As a consequence, there is a need for efficient 3D retrieval systems for the increasing number of 3D scenes in corpora. An efficient 3D retrieval system provides several benefits, such as enhanced sharing and reuse of 3D scenes and 3D content. Existing 3D retrieval systems are closed systems that provide search solutions based on a predefined set of indexing and matching algorithms; they cannot be customized for specific requirements, types of information source, or information levels. In this research, TREE-D-SEEK, an open, extensible framework for retrieving 3D scenes, is proposed. The TREE-D-SEEK framework is capable of retrieving 3D scenes based on indexing content ranging from low-level features to high-level semantic metadata. The TREE-D-SEEK framework is discussed from a software architecture perspective. The architecture is based on a common process flow derived from indexing disparate information sources. Several indexing and matching algorithms are implemented. Experiments are conducted to evaluate the usability and performance of the framework. Retrieval performance of the framework is evaluated using benchmarks and manually collected corpora.
A generic semantic annotation model is proposed for indexing a 3D scene. The primary objective of using the semantic annotation model in the TREE-D-SEEK framework is to improve retrieval relevance and to support richer queries within a 3D scene. The semantic annotation model is driven by an ontology derived from a 3D rapid prototyping framework. The TREE-D-SEEK framework supports query-by-example, keyword-based, and semantic-annotation-based query types for retrieving 3D scenes.
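The three query types named above can be pictured as dispatch over a common retrieval entry point. The data layout, field names and matching rules below are invented for illustration and are not TREE-D-SEEK's actual API.

```python
# Illustrative dispatch over keyword-based and annotation-based queries
# against a toy scene corpus. Query-by-example would plug in the same way
# with a similarity function over scene features.

def keyword_match(scene, terms):
    return any(t in scene["keywords"] for t in terms)

def annotation_match(scene, annotation):
    # a scene matches if every requested annotation key/value is present
    return all(scene["annotations"].get(k) == v for k, v in annotation.items())

def retrieve(corpus, query):
    if query["type"] == "keyword":
        return [s for s in corpus if keyword_match(s, query["terms"])]
    if query["type"] == "annotation":
        return [s for s in corpus if annotation_match(s, query["value"])]
    raise ValueError("unsupported query type")

corpus = [
    {"id": "s1", "keywords": ["kitchen", "table"], "annotations": {"room": "kitchen"}},
    {"id": "s2", "keywords": ["office", "desk"], "annotations": {"room": "office"}},
]
hits = retrieve(corpus, {"type": "annotation", "value": {"room": "office"}})
```

An open framework in the dissertation's sense would let new query types and matching algorithms register against `retrieve` rather than hardcoding the two branches shown here.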

    Adaptive hypertext and hypermedia : workshop : proceedings, 3rd, Sonthofen, Germany, July 14, 2001 and Aarhus, Denmark, August 15, 2001

    This paper presents two empirical usability studies based on techniques from Human-Computer Interaction (HCI) and software engineering, which were used to elicit requirements for the design of a hypertext generation system. Here we discuss the findings of these studies, which were used to motivate the choice of adaptivity techniques. The results showed dependencies between different ways of adapting the explanation content and the document length and formatting. Therefore, the system's architecture had to be modified to cope with this requirement. In addition, the system had to be made adaptable, as well as adaptive, in order to satisfy the elicited users' preferences.

    A theory and model for the evolution of software services

    Software services are subject to constant change and variation. To control service development, a service developer needs to know why a change was made, what its implications are, and whether the change is complete. Typically, service clients do not perceive the upgraded service immediately. As a consequence, service-based applications may fail on the service client side due to changes carried out during a provider service upgrade. In order to manage changes in a meaningful and effective manner, service clients must therefore be considered when service changes are introduced at the service provider's side. Otherwise, such changes will almost certainly result in severe application disruption. Eliminating spurious results and inconsistencies that may occur due to uncontrolled changes is therefore a necessary condition for the ability of services to evolve gracefully, ensure service stability, and handle variability in their behavior. Towards this goal, this work presents a model and a theoretical framework for the compatible evolution of services, based on well-founded theories and techniques from a number of disparate fields.
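The notion of provider-side changes that silently break clients can be made concrete with a compatibility check over interface descriptions. The rule set below is a deliberate simplification invented for illustration; the work's actual theory of compatible evolution is far richer.

```python
# Toy backward-compatibility check between two versions of a service
# interface, modeled as {operation_name: [required_parameter, ...]}.
# Toy rules: no operation may be removed, and the required parameters of
# an existing operation may not change; purely additive changes are fine.

def is_backward_compatible(old_iface, new_iface):
    for op, params in old_iface.items():
        if op not in new_iface:
            return False                      # removed operation breaks callers
        if set(new_iface[op]) != set(params):
            return False                      # changed required parameters
    return True

v1 = {"getOrder": ["orderId"], "listOrders": []}
v2_ok = {**v1, "cancelOrder": ["orderId"]}    # additive change only
v2_bad = {"getOrder": ["orderRef"], "listOrders": []}  # renamed parameter
```

A check like this, run before a provider service upgrade is rolled out, is one mechanical way to flag changes that require client-side coordination instead of failing applications after the fact.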

    Generic adaptation framework for unifying adaptive web-based systems

    The Generic Adaptation Framework (GAF) research project first and foremost creates a common formal framework for describing current and future adaptive hypermedia systems (AHS) and adaptive web-based systems in general. It provides a commonly agreed-upon taxonomy and a reference model that encompasses the most general architectures of the present and future, including conventional AHS and different types of personalization-enabling systems and applications, such as recommender systems (RS), personalized web search, Semantic Web-enabled applications used in personalized information delivery, adaptive e-learning applications, and many more. At the same time, GAF tries to bring together two (seemingly non-intersecting) views on adaptation: the classical pre-authored type, with conventional domain and overlay user models, and data-driven adaptation, which includes a set of data mining, machine learning and information retrieval tools. To bring these research fields together, we conducted a number of GAF compliance studies covering RS, AHS, and other applications combining adaptation, recommendation and search. We also performed a number of case studies of real systems to prove the point and to perform a detailed analysis and evaluation of the framework. Secondly, the project introduces a number of new ideas in the field of adaptive hypermedia, such as the Generic Adaptation Process (GAP), which aligns with a layered (data-oriented) architecture and serves as a reference adaptation process. This also helps in understanding the compliance features mentioned earlier. Besides that, GAF deals with important and novel aspects of adaptation-enabling and -leveraging technologies, such as provenance and versioning. The existence of such a reference basis should stimulate AHS research and enable researchers to demonstrate ideas for new adaptation methods much more quickly than if they had to start from scratch. GAF will thus help bootstrap any adaptive web-based system research, design, analysis and evaluation.

    Supporting personalised content management in smart health information portals

    Information portals are seen as an appropriate platform for personalised healthcare and wellbeing information provision. Efficient content management is a core capability of a successful smart health information portal (SHIP), and domain expertise is a vital input to content management when it comes to matching user profiles with the appropriate resources. The rate of generation of new health-related content far exceeds the number of items that can be manually examined by domain experts for relevance to a specific topic and audience. In this paper we investigate automated content discovery as a plausible solution to this shortcoming, one that capitalises on the existing database of expert-endorsed content as an implicit store of knowledge to guide such a solution. We propose a novel content discovery technique based on a text analytics approach that utilises an existing content repository to acquire new and relevant content. We also highlight the contribution of this technique towards the realisation of smart content management for SHIPs.
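The core idea of using an expert-endorsed repository as an implicit store of knowledge can be sketched with a bag-of-words similarity score: candidate content is ranked by how closely it resembles what experts have already endorsed. The paper's actual text-analytics pipeline is richer; everything below (the vectorisation, the scoring rule, the example texts) is an illustrative assumption.

```python
# Score candidate content against an expert-endorsed corpus using cosine
# similarity of term-count vectors (a stand-in for a fuller text-analytics
# pipeline with proper tokenisation, weighting, and topic models).
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_candidate(candidate, endorsed_docs):
    # best match against any endorsed document drives the score
    cv = vectorize(candidate)
    return max(cosine(cv, vectorize(d)) for d in endorsed_docs)

endorsed = ["diabetes diet and exercise advice", "managing blood sugar levels"]
relevant = score_candidate("exercise advice for diabetes patients", endorsed)
offtopic = score_candidate("stock market quarterly report", endorsed)
# the on-topic candidate scores strictly higher than the off-topic one
```

In a portal setting, candidates scoring above a tuned threshold would be queued for (much lighter) expert review rather than replacing it, which is consistent with treating the endorsed database as guidance rather than ground truth.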

    Sustainability of systems interoperability in dynamic business networks

    Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering. Collaborative networked environments emerged with the spread of the internet, helping to overcome past communication barriers and identifying interoperability as an essential property for supporting business development. When interoperability is achieved seamlessly, efficiency increases across the entire product life cycle. However, due to the different sources of knowledge, models and semantics, enterprise organisations experience difficulties exchanging critical information, even when they operate in the same business environments. To solve this issue, most of them try to attain interoperability by establishing peer-to-peer mappings with different business partners, or by using neutral data and product standards as the core for information sharing in optimized networks. In current industrial practice, the model mappings that regulate enterprise communications are defined only once, and most of them are hardcoded in the information systems. This solution has been effective and sufficient for static environments, where enterprise and product models are valid for decades. However, more and more enterprise systems are becoming dynamic, adapting and looking forward to meeting further requirements; a trend that is causing new interoperability disturbances and efficiency reduction in existing partnerships. Enterprise Interoperability (EI) is a well-established area of applied research that studies these problems and proposes novel approaches and solutions. This PhD work contributes to that research by considering enterprises as complex and adaptive systems, swayed by factors that make interoperability difficult to sustain over time. The analysis of complexity as a neighbouring scientific domain, in which features of interoperability can be identified and evaluated as a benchmark for developing a new foundation of EI, is proposed here.
This approach envisages drawing concepts from complexity science to analyse dynamic enterprise networks, and proposes a framework for sustaining systems interoperability that enables different organisations to evolve at their own pace, answering upcoming requirements while minimizing the negative impact these changes can have on their business environment.

    An integrative framework for cooperative production resources in smart manufacturing

    Under the push of the Industry 4.0 paradigm, modern manufacturing companies are dealing with a significant digital transition, with the aim of better addressing the challenges posed by the growing complexity of globalized businesses (Hermann, Pentek, & Otto, Design principles for industrie 4.0 scenarios, 2016). One basic principle of this paradigm is that products, machines, systems and businesses are always connected, creating an intelligent network along the entire factory's value chain. According to this vision, manufacturing resources are being transformed from monolithic entities into distributed components, which are loosely coupled and autonomous but nevertheless provided with the networking and connectivity capabilities enabled by the increasingly widespread Industrial Internet of Things technology. Under these conditions, they become capable of working together in a reliable and predictable manner, collaborating among themselves in a highly efficient way. Such a mechanism of synergistic collaboration is crucial for the correct evolution of any organization, ranging from a multi-cellular organism to a complex modern manufacturing system (Moghaddam & Nof, 2017). In the last scenario specifically, which is the field of our study, collaboration enables the involved resources to exchange relevant information about the evolution of their context. This information can in turn be processed to make decisions and trigger actions. In this way, connected resources can modify their structure and configuration in response to specific business or operational variations (Alexopoulos, Makris, Xanthakis, Sipsas, & Chryssolouris, 2016).
Such a model of “social” and context-aware resources can contribute to the realization of a highly flexible, robust and responsive manufacturing system, an objective particularly relevant in modern factories, as its inclusion among the priority research lines for the H2020 three-year period 2018-2020 demonstrates (EFFRA, 2016). Interesting examples of such resources are self-organized logistics that can react to unexpected changes occurring in production, or machines capable of predicting failures on the basis of contextual information and then autonomously triggering adjustment processes. This vision of collaborative and cooperative resources can be realized with the support of several studies in various fields, ranging from information and communication technologies to artificial intelligence. An updated state of the art highlights significant recent achievements that have been making these resources more intelligent and closer to user needs. However, we are still far from an overall implementation of the vision, which is hindered by three major issues. The first is the limited capability of a large part of the resources distributed within the shop floor to automatically interpret the exchanged information in a meaningful manner (semantic interoperability) (Atzori, Iera, & Morabito, 2010). This issue is mainly due to the high heterogeneity of the data model formats adopted by the different resources used within the shop floor (Modoni, Doukas, Terkaj, Sacco, & Mourtzis, 2016). Another open issue is the lack of efficient methods to fully virtualize the physical resources (Rosen, von Wichert, Lo, & Bettenhausen, 2015), since only by pairing a physical resource with a digital counterpart that abstracts the complexity of the real world is it possible to augment the communication and collaboration capabilities of the physical component.
The third issue is a side effect of the ongoing ICT evolutions affecting all manufacturing companies and consists in the continuous growth of the number of threats and vulnerabilities, which can jeopardize the cybersecurity of the overall manufacturing system (Wells, Camelio, Williams, & White, 2014). For this reason, aspects related to cyber-security should be considered at the early stages of the design of any ICT solution, in order to prevent potential threats and vulnerabilities. All three of the above-mentioned open issues have been addressed in this research work, with the aim of exploring and identifying a precise, secure and efficient model of collaboration among the production resources distributed within the shop floor. This document illustrates the main outcomes of the research, focusing mainly on the Virtual Integrative Manufacturing Framework for resources Interaction (VICKI), a potential reference architecture for a middleware application enabling semantic-based cooperation among manufacturing resources. Specifically, this framework provides a technological and service-oriented infrastructure offering an event-driven mechanism that dynamically propagates changing factors to the interested devices. The proposed system supports the coexistence and combination of physical components and their virtual counterparts in a network of interacting collaborative elements in constant connection, thus turning the manufacturing system into a cooperative Cyber-Physical Production System (CPPS) (Monostori, 2014). Within this network, the information coming from the production chain can be promptly and seamlessly shared, distributed and understood by any actor operating in such a context. In order to overcome the problem of the limited interoperability among the connected resources, the framework leverages a common data model based on Semantic Web technologies (SWT) (Berners-Lee, Hendler, & Lassila, 2001).
The model provides a shared understanding of the vocabulary adopted by the distributed resources during their knowledge exchange. In this way, the model allows heterogeneous data streams to be integrated into a coherent, semantically enriched scheme that represents the evolution of the factory objects, their context and their smart reactions to all kinds of situations. The semantic model is also machine-interpretable and re-usable. In addition to modeling, the virtualization of the overall manufacturing system is empowered by the adoption of agent-based modeling, which contributes to hiding and abstracting the complexity of the control functions of the cooperating entities, thus providing the foundations for achieving a flexible and reconfigurable system. Finally, in order to mitigate the risk of internal and external attacks against the proposed infrastructure, the potential of a strategy based on the analysis and assessment of the manufacturing system's cyber-security aspects, integrated into the context of the organization's business model, is explored. To test and validate the proposed framework, a set of demonstration scenarios has been identified, thought to represent different significant case studies of the factory's life cycle. To prove the correctness of the approach, an instance of the framework is validated within a real case study. Moreover, since for data-intensive systems such as manufacturing systems the quality-of-service (QoS) requirements in terms of latency, efficiency and scalability are stringent, these requirements are also evaluated in a real case study by means of a defined benchmark, showing the impact of the data storage, of the connected resources and of their requests.
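The event-driven mechanism described in this abstract, changes propagated only to the interested devices, is essentially a publish/subscribe pattern. The sketch below is an illustrative assumption, not VICKI's actual middleware: topic names, callbacks and the event payload are all invented for the example.

```python
# Minimal publish/subscribe bus: resources (or their digital counterparts)
# subscribe to semantic topics, and the middleware propagates each published
# change only to the subscribers of that topic.

class EventBus:
    def __init__(self):
        self._subs = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        for cb in self._subs.get(topic, []):
            cb(event)

bus = EventBus()
log = []
# a monitoring dashboard and a digital twin both follow machine status
bus.subscribe("machine/status", lambda e: log.append(("monitor", e)))
bus.subscribe("machine/status", lambda e: log.append(("twin", e)))
bus.publish("machine/status", {"id": "m1", "state": "failure-predicted"})
# both interested subscribers receive the event; other topics get nothing
```

In a semantic middleware, the topic strings would be replaced by ontology terms, letting subscribers express interest at the level of shared concepts rather than ad hoc channel names.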