70,259 research outputs found

    Semantic web technologies for video surveillance metadata

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules from different vendors to cope with increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (such as technical details of the video sequences, or analysis results about the monitored environment) is described using metadata standards. However, different modules typically use different standards, resulting in metadata interoperability problems. In this paper, we introduce the application of Semantic Web technologies to overcome such problems. We present a semantic, layered metadata model and integrate it within a video surveillance system. Beyond addressing the metadata interoperability problem, we show the advantages of Semantic Web technologies and their inherent rule support. A practical use case scenario is presented to illustrate the benefits of our novel approach.
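
    A minimal sketch of this idea in Python with rdflib, assuming a hypothetical ex: vocabulary (the Camera and AnalysisResult terms and properties below are illustrative, not the paper's actual model): once each module's metadata is lifted into RDF, a single query can span metadata that originated in different vendor modules.

        from rdflib import Graph, Namespace, Literal, RDF

        EX = Namespace("http://example.org/surveillance#")  # hypothetical vocabulary
        g = Graph()
        g.bind("ex", EX)

        # Metadata from two different modules, expressed as triples in one model
        g.add((EX.cam1, RDF.type, EX.Camera))
        g.add((EX.cam1, EX.resolution, Literal("1920x1080")))
        g.add((EX.event42, RDF.type, EX.AnalysisResult))
        g.add((EX.event42, EX.detectedBy, EX.cam1))
        g.add((EX.event42, EX.objectClass, Literal("person")))

        # One SPARQL query crosses the former module boundary
        q = """
            PREFIX ex: <http://example.org/surveillance#>
            SELECT ?event ?res WHERE {
                ?event ex:detectedBy ?cam .
                ?cam ex:resolution ?res .
            }
        """
        for event, res in g.query(q):
            print(event, res)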

    Framework for the interoperability of software engineering metamodels

    University of Technology, Sydney, Faculty of Engineering and Information Technology. A model, represented as a concrete artefact, is an abstraction of reality according to a certain conceptualization. A model can support communication and analysis about relevant aspects of the underlying domain. A model must be expressed in some language, and such languages are defined using metamodels. Many metamodels have been proposed and used in the software engineering literature. Some define modelling languages that are general in nature, but the modelling literature is dominated by domain-specific modelling languages and metamodels. Most of these metamodels have been developed independently of each other, and any shared concepts are only accidental. Widespread adoption of these metamodels is hindered by differences between their concepts. Using more than one modelling language during software development requires some sort of interoperability between the metamodels of those modelling languages. This interoperability is also required to allow mappings between models developed using different modelling languages. These metamodels are not static but continuously evolving, and this evolution has increased their size and complexity over time. The complexity increases further when more than one metamodel is used during software development. Interoperability of a pair of metamodels can reduce their joint size and complexity (elaborated in detail in Chapter 7). The need for interoperability between metamodels has also been raised by many research communities. In this thesis, we have developed a framework that can be used for metamodel interoperability. The framework compares metamodel elements based on their syntax, semantics, and structure. The semantics of metamodel elements are further investigated for linguistic and ontological semantics. Since terms such as interoperability, bridging, merging, and mapping have all been used, often loosely, with reference to metamodel compatibility, we define these terms under the generic term harmonization. Metamodels share some similarities with other domains, e.g. ontologies and schemas, so in this thesis we have also explored the techniques available in those domains that might be useful for metamodel interoperability. We have applied our framework to different metamodels and have shown how metamodels can be used in an interoperable fashion. The results achieved are analysed, and we have shown how interoperability of metamodels can reduce their size and joint complexity, hence making them easier to understand and use.
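
    A rough sketch of how a three-way comparison of metamodel elements could look; the fields and weights below are assumptions of this sketch, not the thesis's actual framework.

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class MetamodelElement:
            name: str                                      # syntax: the element's label
            synonyms: set = field(default_factory=set)     # linguistic semantics
            ontology_ref: Optional[str] = None             # ontological semantics
            relations: set = field(default_factory=set)    # structure: neighbouring elements

        def similarity(a: MetamodelElement, b: MetamodelElement) -> float:
            """Blend syntactic, semantic, and structural evidence into one score."""
            syntactic = 1.0 if a.name.lower() == b.name.lower() else 0.0
            same_concept = a.ontology_ref is not None and a.ontology_ref == b.ontology_ref
            semantic = 1.0 if same_concept or a.synonyms & b.synonyms else 0.0
            union = a.relations | b.relations
            structural = len(a.relations & b.relations) / len(union) if union else 0.0
            return 0.3 * syntactic + 0.4 * semantic + 0.3 * structural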

    Knowledge formalization in experience feedback processes: an ontology-based approach

    Because of the current trend towards integration and interoperability of industrial systems, their size and complexity continue to grow, making it more difficult to analyze, understand, and solve the problems that occur in organizations. Continuous improvement methodologies are powerful tools for understanding and solving problems, controlling the effects of changes, and finally capitalizing knowledge about changes and improvements. These tools require a suitable representation of the knowledge relating to the system concerned. Consequently, knowledge management (KM) is an increasingly important source of competitive advantage for organizations. In particular, the capitalization and sharing of knowledge resulting from experience feedback play an essential role in the continuous improvement of industrial activities. The contribution of this paper deals with semantic interoperability and relates to the structuring and formalization of an experience feedback (EF) process aiming at transforming information or understanding gained through experience into explicit knowledge. The reuse of such knowledge has proved to have a significant impact on achieving the missions of companies. However, the means of describing the knowledge objects of an experience generally remain informal. Based on an experience feedback process model and conceptual graphs, this paper takes domain ontology as a framework for the clarification of explicit knowledge and know-how, the aim of which is to obtain lessons-learned descriptions that are significant, correct, and applicable.
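
    A toy illustration of the role such an ontology can play, with an invented is-a hierarchy (the concept names are hypothetical, not the paper's domain ontology): typing each lesson-learned field against the ontology is what keeps the captured knowledge explicit and checkable rather than informal.

        # Hypothetical mini-ontology: concept -> parent concept (is-a hierarchy)
        ONTOLOGY = {
            "Defect": "Event",
            "MachineStoppage": "Defect",
            "CorrectiveAction": "Action",
            "PreventiveAction": "Action",
        }

        def is_a(concept: str, ancestor: str) -> bool:
            """Walk the is-a hierarchy to check ontological consistency."""
            while concept in ONTOLOGY:
                if concept == ancestor:
                    return True
                concept = ONTOLOGY[concept]
            return concept == ancestor

        # A lesson learned is recorded only if its fields are typed by the
        # ontology, which is what makes it reusable and applicable later.
        def record_lesson(problem: str, action: str) -> dict:
            assert is_a(problem, "Event"), f"{problem} is not a known event concept"
            assert is_a(action, "Action"), f"{action} is not a known action concept"
            return {"problem": problem, "action": action}

        lesson = record_lesson("MachineStoppage", "CorrectiveAction")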

    Design Considerations for Low Power Internet Protocols

    Over the past 10 years, low-power wireless networks have transitioned to supporting IPv6 connectivity through 6LoWPAN, a set of standards that specify how to aggressively compress IPv6 packets over low-power wireless links such as 802.15.4. We find that different low-power IPv6 stacks are unable to communicate using 6LoWPAN, and therefore IP, due to design tradeoffs between code size and energy efficiency. We argue that applying traditional protocol design principles to low-power networks is responsible for these failures, in part because receivers must accommodate a wide range of senders. Based on these findings, we propose three design principles for Internet protocols on low-power networks, centred on providing flexible tradeoffs between code size and energy efficiency. We apply these principles to 6LoWPAN and show that the resulting design provides developers a wide range of tradeoff points while allowing implementations with different choices to seamlessly communicate.
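
    A simplified sketch of the compression idea, not the actual 6LoWPAN IPHC bit layout: fields the receiver can reconstruct from link-layer context are elided, and common values are folded into a flags byte. A receiver must be prepared to decode every flag combination a sender may choose, which is exactly where the code-size/efficiency tension described above arises.

        # Elide IPv6 header fields the receiver can derive from context.
        LINK_LOCAL_PREFIX = bytes.fromhex("fe80000000000000")

        def compress(src: bytes, dst: bytes, hop_limit: int) -> bytes:
            flags = 0
            out = bytearray()
            if src.startswith(LINK_LOCAL_PREFIX):
                flags |= 0x01              # source derivable from link layer: elide
            else:
                out += src                 # otherwise carry it inline
            if dst.startswith(LINK_LOCAL_PREFIX):
                flags |= 0x02
            else:
                out += dst
            if hop_limit in (1, 64, 255):  # common values encoded in the flags byte
                flags |= {1: 0x04, 64: 0x08, 255: 0x0C}[hop_limit]
            else:
                out += bytes([hop_limit])
            return bytes([flags]) + bytes(out)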

    Sustainability of systems interoperability in dynamic business networks

    Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering. Collaborative networked environments emerged with the spread of the internet, contributing to overcoming past communication barriers and identifying interoperability as an essential property to support business development. When achieved seamlessly, efficiency is increased across the entire product life cycle. However, due to their different sources of knowledge, models, and semantics, enterprise organisations experience difficulties exchanging critical information, even when they operate in the same business environments. To solve this issue, most of them try to attain interoperability by establishing peer-to-peer mappings with different business partners, or by using neutral data and product standards as the core for information sharing in optimized networks. In current industrial practice, the model mappings that regulate enterprise communications are defined only once, and most of them are hardcoded in the information systems. This solution has been effective and sufficient for static environments, where enterprise and product models remain valid for decades. However, more and more enterprise systems are becoming dynamic, adapting to meet new requirements; a trend that is causing new interoperability disturbances and reducing the efficiency of existing partnerships. Enterprise Interoperability (EI) is a well-established area of applied research that studies these problems and proposes novel approaches and solutions. This PhD work contributes to that research by considering enterprises as complex adaptive systems, swayed by factors that make interoperability difficult to sustain over time. The analysis of complexity as a neighbouring scientific domain, in which features of interoperability can be identified and evaluated as a benchmark for developing a new foundation of EI, is proposed here. This approach envisages drawing concepts from complexity science to analyse dynamic enterprise networks, and proposes a framework for sustaining systems interoperability that enables different organisations to evolve at their own pace, answering upcoming requirements while minimizing the negative impact these changes can have on their business environment.
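
    One hypothetical, concrete instance of the problem and a sketch of monitoring for it (the field names are invented for illustration): a hardcoded peer-to-peer mapping silently breaks when a partner's model evolves, so any mechanism for sustaining interoperability first needs to detect the drift.

        # Our field names -> the partner's field names, fixed at integration time
        mapping = {"orderId": "order_number", "qty": "quantity"}

        def check_mapping(partner_schema: set) -> list:
            """Return our mapped fields that the partner's current schema no longer has."""
            return [ours for ours, theirs in mapping.items()
                    if theirs not in partner_schema]

        # Partner evolved: renamed "quantity" to "orderedQuantity"
        broken = check_mapping({"order_number", "orderedQuantity", "unit_price"})
        if broken:
            print("Re-negotiate mapping for:", broken)   # -> ['qty']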

    Study of Tools Interoperability

    Interoperability of tools usually refers to a combination of methods and techniques that address the problem of making a collection of tools work together. In this study we survey the different notions used in this context: interoperability, interaction, and integration. We point out the relations between these notions and how they map onto the interoperability problem. We narrow the problem area to tool development in academia. Tools developed in such an environment have a small basis for development, documentation, and maintenance. We scrutinise some of the problems and potential solutions related to tool interoperability in this environment. Moreover, we look at two tools developed in the Formal Methods and Tools group, and analyse the use of different integration techniques.
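
    As a small illustration of one integration technique in this space (the formats and field names are invented): a thin adapter that translates one tool's output into another tool's input lets two independently developed academic tools work together without modifying either.

        import json

        def tool_a_output() -> str:
            # Tool A emits its analysis results as JSON
            return json.dumps({"states": 12, "transitions": 30})

        def adapt_a_to_b(a_json: str) -> str:
            """Translate tool A's JSON into the line format tool B expects."""
            data = json.loads(a_json)
            return f"STATES {data['states']}\nTRANS {data['transitions']}\n"

        print(adapt_a_to_b(tool_a_output()))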

    Servicing the federation: the case for metadata harvesting

    The paper presents a comparative analysis of data harvesting and distributed computing as complementary models of service delivery within large-scale federated digital libraries. Informed by the requirements of flexibility and scalability of federated services, the analysis focuses on the identification and assessment of model invariants; in particular, it abstracts over application domains, services, and protocol implementations. The analytical evidence produced shows that the harvesting model offers stronger guarantees of satisfying the identified requirements. In addition, it suggests a first characterisation of services based on their suitability to either model, and thus indicates how they could be integrated in the context of a single federated digital library.
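
    The paper deliberately abstracts over protocol implementations; as one concrete instance of the harvesting model, a minimal OAI-PMH-style harvester (the endpoint URL below is hypothetical) pulls metadata records in batches and follows resumption tokens until the repository is exhausted.

        import urllib.request
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"
        BASE = "https://repository.example.org/oai"   # hypothetical endpoint

        def harvest(metadata_prefix: str = "oai_dc"):
            """Yield all records in batches, following resumption tokens."""
            url = f"{BASE}?verb=ListRecords&metadataPrefix={metadata_prefix}"
            while url:
                with urllib.request.urlopen(url) as resp:
                    tree = ET.parse(resp)
                for record in tree.iter(f"{OAI}record"):
                    yield record
                token = tree.find(f".//{OAI}resumptionToken")
                url = (f"{BASE}?verb=ListRecords&resumptionToken={token.text}"
                       if token is not None and token.text else None)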