
    The EAGLES/ISLE initiative for setting standards: the Computational Lexicon Working Group for Multilingual Lexicons

    ISLE (International Standards for Language Engineering), a transatlantic standards-oriented initiative under the Human Language Technology (HLT) programme, is a continuation of the long-standing EAGLES (Expert Advisory Group for Language Engineering Standards) initiative, carried out by European and American groups within the EU-US International Research Co-operation, supported by NSF and EC. The objective is to support international and national HLT R&D projects, and the HLT industry, by developing and promoting widely agreed and urgently demanded HLT standards and guidelines for infrastructural language resources, tools, and HLT products. ISLE targets the areas of multilingual computational lexicons (MCL), natural interaction and multimodality (NIMM), and evaluation. For MCL, ISLE is working to: extend EAGLES work on lexical semantics, necessary to establish inter-language links; design standards for multilingual lexicons; develop a prototype tool to implement lexicon guidelines; create EAGLES-conformant sample lexicons and tag corpora for validation purposes; and develop standardised evaluation procedures for lexicons. For NIMM, a rapidly innovating domain urgently requiring early standardisation, ISLE work is targeted at developing guidelines for: the creation of NIMM data resources; interpretative annotation of NIMM data, including spoken dialogue; and annotation of discourse phenomena. For evaluation, ISLE is working on: quality models for machine translation systems; and maintenance of previous guidelines, within an ISO-based framework. In this paper we concentrate on the Computational Lexicon Working Group, describing in detail the proposed guidelines for the "Multilingual ISLE Lexical Entry" (MILE). We highlight some methodological principles applied in previous EAGLES work and followed in defining MILE. We also provide a description of the EU SIMPLE semantic lexicons built on the basis of previous EAGLES recommendations. Their importance lies in the fact that these lexicons are now being enlarged to real-size lexicons within national projects in eight EU countries, thus building a very large infrastructural platform of harmonised lexicons in Europe. We also stress the relevance of standardised language resources for humanities applications. Numerous theories, approaches, and systems are taken into account in ISLE, as any recommendation for harmonisation must build on the major contemporary approaches. Results will be widely disseminated, after validation in collaboration with EU and US HLT R&D projects and industry. EAGLES work towards de facto standards has already allowed the field of Language Resources to establish broad consensus on key issues for some well-established areas, and will allow similar consensus to be achieved for other important areas through the ISLE project, thus providing a key opportunity for further consolidation and a basis for technological advance. Previous EAGLES results in many areas have in fact already become widely adopted de facto standards, and EAGLES itself is a well-known trademark and a point of reference for HLT projects.
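    To make the notion of a standardised multilingual lexical entry more concrete, the following is a minimal illustrative sketch of how an entry linking word senses across languages might be represented. It is an assumption for illustration only: the class and field names are hypothetical and do not reproduce the actual MILE data categories or the SIMPLE semantic types.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: field names are hypothetical and do not
# reproduce the actual MILE data categories.

@dataclass
class Sense:
    sense_id: str
    gloss: str
    semantic_type: str          # e.g. a SIMPLE-style semantic type label

@dataclass
class LexicalEntry:
    lemma: str
    language: str               # ISO 639-1 code
    pos: str                    # part of speech
    senses: List[Sense] = field(default_factory=list)

@dataclass
class InterlingualLink:
    """Links a source-language sense to a target-language sense."""
    source: Sense
    target: Sense
    relation: str = "translation_equivalent"

# Example: linking English "bank" (financial sense) to Italian "banca".
bank = LexicalEntry("bank", "en", "noun",
                    [Sense("bank_1", "financial institution", "Institution")])
banca = LexicalEntry("banca", "it", "noun",
                     [Sense("banca_1", "istituto finanziario", "Institution")])
link = InterlingualLink(bank.senses[0], banca.senses[0])
```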

    A meta-semantic language for smart component-adapters

    The issues confronting the software development community today are significantly different from the problems it faced only a decade ago. Advances in software development tools and technologies during the last two decades have greatly enhanced the ability to leverage large amounts of software for creating new applications through the reuse of software libraries and application frameworks. The problems facing organizations today are increasingly focused on systems integration and the creation of information flows. Software modeling based on the assembly of reusable components to support software development has not been successfully implemented on a wide scale. Several models for reusable software components have been suggested which primarily address the wiring-level connectivity problem. While this is considered necessary, it is not sufficient to support an automated process of component assembly. Two critical issues remain unresolved: (1) semantic modeling of components, and (2) a deployment process that supports automated assembly. The first issue can be addressed through domain-based standardization that would make it possible for independent developers to produce interoperable components based on a common vocabulary and understanding of the problem domain. This is important not only for providing a semantic basis for developing components but also for interoperability between systems. The second issue is important for two reasons: (a) it eliminates the need for developers to be involved in the final assembly of software components, and (b) it provides a basis for the development process to be potentially driven by the user. To resolve these two remaining issues, a late-binding mechanism between components based on meta-protocols is required. In this dissertation we address the above issues by proposing a generic framework for the development of software components and an interconnection language, COMPILE, for the specification of software systems from components. The computational model of the COMPILE language is based on late and dynamic binding of the components' control, data, and function properties. The use of asynchronous callbacks for method invocation allows control binding among components to be late and dynamic. Data exchanged between components is defined through the use of a meta-language that can describe the semantics of the information without being bound to any specific programming language's type representation. Late binding to functions is accomplished by maintaining domain-based semantics as component meta-information. This information allows clients of components to map generically requested services to specific functions.
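    The late-binding idea described above can be pictured with a small sketch (not the COMPILE language itself): a component advertises a service under a domain-level name, a client binds to it by name at run time, and results are delivered through an asynchronous callback. The registry mechanism and all names below are hypothetical illustrations.

```python
from typing import Any, Callable, Dict
import threading

# Hypothetical illustration of late, dynamic binding between components.
# Services are located by domain-level name (metadata), not by a
# compile-time interface, and results arrive via asynchronous callbacks.

class ComponentRegistry:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {}

    def register(self, service_name: str, handler) -> None:
        # A component advertises a service under a domain-based name.
        self._services[service_name] = handler

    def invoke(self, service_name: str, payload: Dict[str, Any],
               callback: Callable[[Dict[str, Any]], None]) -> None:
        # Binding happens at call time; the reply comes back asynchronously.
        handler = self._services[service_name]
        threading.Thread(target=lambda: callback(handler(payload))).start()

# A provider component registers a service described only by its name.
registry = ComponentRegistry()
registry.register("order.price_quote",
                  lambda req: {"sku": req["sku"], "price": 9.99})

# A client binds to the service by name and handles the result in a callback.
registry.invoke("order.price_quote", {"sku": "A-42"},
                callback=lambda reply: print("quote received:", reply))
```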

    Linked Data based Health Information Representation, Visualization and Retrieval System on the Semantic Web

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. To better facilitate health information dissemination, flexible ways to represent, query and visualize health data become increasingly important. Semantic Web technologies, which provide a common framework allowing data to be shared and reused between applications, can be applied to the management of health data. Linked open data, a Semantic Web standard for publishing and linking heterogeneous data, allows not only humans but also machines to browse data in an unrestricted way. Through a use case of World Health Organization HIV data for sub-Saharan Africa, a region severely affected by the HIV epidemic, this thesis built a linked data based health information representation, querying and visualization system. All the data was represented in RDF and interlinked with other related datasets already available on the linked data cloud. Overall, the system has more than 21,000 triples, a SPARQL endpoint where users can download and use the data, and a SPARQL query interface where users can issue different types of queries and retrieve the results. Additionally, it has a visualization interface where users can visualize the SPARQL results with a tool of their preference. Users who are not familiar with SPARQL queries can use the linked data search engine interface to search and browse the data. This system shows that current linked open data technologies have great potential to represent heterogeneous health data in a flexible and reusable manner and to serve intelligent queries that support decision-making. However, in order to get the best from these technologies, improvements are needed both in triple store performance and in domain-specific ontological vocabularies.
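    As a small illustration of the kind of RDF representation and SPARQL querying described above, the sketch below builds a few triples about HIV prevalence with rdflib and runs a query against them. The vocabulary URIs, property names and figures are invented for the example; they are not taken from the thesis dataset or the WHO data.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical vocabulary and figures, used only to illustrate the approach.
EX = Namespace("http://example.org/health#")

g = Graph()
g.bind("ex", EX)

# Represent each country's HIV prevalence as linked data triples.
g.add((EX.CountryA, RDF.type, EX.Country))
g.add((EX.CountryA, EX.hivPrevalence, Literal(5.2, datatype=XSD.decimal)))
g.add((EX.CountryB, RDF.type, EX.Country))
g.add((EX.CountryB, EX.hivPrevalence, Literal(1.1, datatype=XSD.decimal)))

# SPARQL query: countries with prevalence above a threshold.
results = g.query("""
    PREFIX ex: <http://example.org/health#>
    SELECT ?country ?rate WHERE {
        ?country a ex:Country ;
                 ex:hivPrevalence ?rate .
        FILTER (?rate > 2.0)
    }
""")

for country, rate in results:
    print(country, rate)
```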

    A MODEL FOR ESTIMATING THE COST TRADEOFFS ASSOCIATED WITH OPEN ELECTRONIC SYSTEMS

    An open systems approach (OSA), especially when used in conjunction with modular architecture, reuse, and harnessing of existing (COTS or proprietary) technologies, is commonly associated with cost avoidances resulting from: more efficient design; increased competition among suppliers; more efficient innovation and technology insertion; and modularization of qualification. However, OSA strategies require investment and may increase risk exposure. To determine if openness should be pursued, and to what degree, a quantitative model assessing the costs associated with openness is required. Previous attempts to measure openness rely on qualitative measures, and cannot be used to estimate the life cycle cost impacts of openness. The model developed in this thesis quantitatively determines the effects of openness on life cycle cost. The life cycle cost difference between two implementations with differing levels of openness was calculated for a case study of an ARCI sonar system, providing insight into the value of openness. The case study performed in this thesis provides the first known quantitative support for Abts' COTS-LIMO hypothesis that increasing CFD increases cost avoidance. However, these results challenge Henderson's implicit assumption that marginal openness is always positive (increasing openness is always beneficial)
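    The thesis model itself is not reproduced in the abstract, but the kind of comparison it supports can be sketched as simple arithmetic: compute the life cycle cost of two candidate implementations with different degrees of openness and compare the totals. The cost categories, parameter names and figures below are illustrative assumptions only, not values from the ARCI case study or the actual model.

```python
# Illustrative life cycle cost (LCC) comparison between two implementations
# with different levels of openness. Categories and numbers are hypothetical.

def life_cycle_cost(nonrecurring, integration, tech_refresh_per_cycle,
                    refresh_cycles, sustainment_per_year, years):
    """Total LCC = up-front costs + recurring refresh + sustainment."""
    return (nonrecurring + integration
            + tech_refresh_per_cycle * refresh_cycles
            + sustainment_per_year * years)

# Closed implementation: lower up-front cost, costlier refresh and sustainment.
closed = life_cycle_cost(nonrecurring=10.0, integration=2.0,
                         tech_refresh_per_cycle=4.0, refresh_cycles=5,
                         sustainment_per_year=1.5, years=20)

# Open implementation: extra investment in open interfaces, cheaper refresh.
open_ = life_cycle_cost(nonrecurring=12.0, integration=3.0,
                        tech_refresh_per_cycle=2.0, refresh_cycles=5,
                        sustainment_per_year=1.0, years=20)

# A positive delta means openness avoids cost over the life cycle.
print(f"cost avoidance from openness: {closed - open_:.1f} (notional units)")
```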

    A holonic manufacturing architecture for line-less mobile assembly systems operations planning and control

    Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2022. Line-Less Mobile Assembly Systems (LMAS) is a manufacturing paradigm aiming to maximize responsiveness to market trends (product individualization and ever-shortening product lifecycles) through adaptive factory configurations utilizing mobile assembly resources. 
Such responsive systems can be characterized as holonic manufacturing systems (HMS), whose so-called holonic control architectures (HCA) have recently been portrayed as Industry 4.0-enabling approaches due to their mixed hierarchical and heterarchical temporary entity relationships. They are particularly suitable for distributed and flexible systems such as Line-Less Mobile Assembly or Matrix Production, as they provide reconfigurability capabilities. Though HCA reference structures such as PROSA or ADACOR/ADACOR² have been heavily discussed in the literature, none of them can be applied directly to the LMAS context. Methodologies such as ANEMONA provide guidelines and best practices for the development of holonic multi-agent systems. Accordingly, this dissertation aims to answer the question "How does an LMAS production and control system architecture need to be designed?" by presenting the architecture design models developed according to the steps of the ANEMONA methodology. The ANEMONA analysis phase results in a use case specification, requirements, system goals, simplifications, and assumptions. The design phase results in an LMAS architecture design consisting of the organization, interaction, and agent models, followed by a brief analysis of its behavioral coverage. The implementation phase result is an LMAS ontology, which reuses elements from the widespread manufacturing domain ontologies MAnufacturing's Semantics Ontology (MASON) and Manufacturing Resource Capability Ontology (MaRCO), enriched with essential holonic concepts. The architecture approach and ontology are implemented using the Robot Operating System (ROS) robotic framework. In order to create test data sets for validation, an algorithm for test generation based on product complexity and shopfloor flexibility is presented, considering a maximum number of operations per workstation and a maximum number of simultaneous stations, as sketched below. The validation phase presents a two-fold validation: qualitative and quantitative. The qualitative validation of the HCA models is based on how the proposed HCA meets specific criteria for evaluating HCA systems (e.g., modularity, integrability, diagnosability, fault tolerance, distributability, developer training requirements). The validation is complemented by a quantitative analysis considering the behavior of the implemented models during normal execution and disrupted execution (e.g., defective equipment) in a simulated environment (in the form of a software prototype). The normal execution validation focuses on the time drift between the planned and executed schedules, which proved to be negligible within the simulated case considering the order of magnitude of the typical demanded operations. Subsequently, during the disrupted case execution, the system is tested under the simulation of a failure, where two strategies are applied, LOCAL_FIX and REORGANIZATION, and their outcome is compared to decide which one is the appropriate option when the goal is to reduce the overall execution time. Ultimately, an analysis of the coverage of this dissertation is presented, culminating in guidelines that can be seen as one possible answer (among many others) to the presented research question. Furthermore, strong and weak points of the developed models are presented, along with possible improvements and ideas for future contributions towards the implementation of holonic control systems for LMAS.
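    The test-data generation algorithm is not specified in the abstract; the sketch below only illustrates the stated idea of generating assembly orders whose complexity is bounded by a maximum number of operations per workstation and a maximum number of simultaneously used stations. Function and parameter names are assumptions for illustration.

```python
import random

# Hypothetical sketch of test-data generation for an LMAS scenario:
# product complexity is expressed as a number of assembly operations,
# bounded by shopfloor flexibility (max operations per workstation and
# max simultaneously available mobile stations).

def generate_test_orders(num_products, max_ops_per_station,
                         max_simultaneous_stations, seed=42):
    rng = random.Random(seed)
    orders = []
    for i in range(num_products):
        stations = rng.randint(1, max_simultaneous_stations)
        operations = [
            {"station": s, "ops": rng.randint(1, max_ops_per_station)}
            for s in range(stations)
        ]
        orders.append({"product_id": f"P{i:03d}", "plan": operations})
    return orders

# Example: 5 products on a shopfloor with up to 3 mobile stations,
# each handling at most 4 operations.
for order in generate_test_orders(5, max_ops_per_station=4,
                                  max_simultaneous_stations=3):
    print(order)
```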

    A Multi-Agent Architecture for An Intelligent Web-Based Educational System

    An intelligent educational system must be an adaptive system built on a multi-agent architecture. The multi-agent architecture component provides self-organization, self-direction, and other control functionalities that are crucially important for an educational system. On the other hand, the adaptiveness of the system is necessary to provide customization, diversification, and interactional functionalities. Therefore, an educational system architecture that integrates multi-agent functionality [50] with adaptiveness can offer the learner the required independent learning experience. An educational system architecture is a complex structure with an intricate hierarchical organization in which the functional components of the system undergo sophisticated and unpredictable internal interactions to perform their functions. Hence, the system architecture must consist of adaptive and autonomous agents differentiated according to their functions, i.e., a multi-agent system (MAS). This paper proposes an adaptive hierarchical multi-agent educational system (AHMAES) [51] as an alternative to the traditional education delivery method. The document explains the various architectural characteristics of an adaptive multi-agent educational system and critically analyzes the system against software quality attributes.
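    As a minimal illustration of the hierarchical multi-agent idea, the sketch below shows agents differentiated by function and coordinated by a parent agent. It is a hypothetical outline, not the AHMAES implementation; all class names are invented.

```python
# Minimal illustrative sketch of a hierarchical multi-agent structure in
# which agents are differentiated by function; not the AHMAES implementation.

class Agent:
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        raise NotImplementedError

class ContentAgent(Agent):
    def handle(self, request):
        return f"{self.name}: selected material for topic '{request['topic']}'"

class AssessmentAgent(Agent):
    def handle(self, request):
        return f"{self.name}: generated quiz for topic '{request['topic']}'"

class TutorAgent(Agent):
    """Parent agent that coordinates specialized child agents."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

    def handle(self, request):
        # Delegate to every specialized agent and aggregate the results, so
        # behaviour can adapt by adding or replacing child agents.
        return [child.handle(request) for child in self.children]

tutor = TutorAgent("tutor", [ContentAgent("content"), AssessmentAgent("assessment")])
for line in tutor.handle({"topic": "recursion"}):
    print(line)
```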

    A Generic Network and System Management Framework

    Networks and distributed systems have formed the basis of an ongoing communications revolution that has led to the genesis of a wide variety of services. The constantly increasing size and complexity of these systems do not come without problems. In some organisations, the deployment of Information Technology has reached a state where the benefits from downsizing and rightsizing by adding new services are undermined by the effort required to keep the system running. Management of networks and distributed systems in general has a straightforward goal: to provide a productive environment in which work can be performed effectively. The work required for management should be a small fraction of the total effort. Most IT systems are still managed in an ad hoc style without any carefully elaborated plan. In such an environment the success of management decisions depends entirely on the qualification and knowledge of the administrator. The thesis provides an analysis of the state of the art in the area of Network and System Management and identifies the key requirements that must be addressed for the provisioning of Integrated Management Services. These include the integration of the different management-related aspects (i.e. integration of heterogeneous Network, System and Service Management). The thesis then proposes a new framework, INSMware, for the provision of Management Services. It provides a fundamental basis for the realisation of a new approach to Network and System Management. It is argued that Management Systems can be derived from a set of pre-fabricated and reusable Building Blocks that break up the required functionality into a number of separate entities, rather than being developed from scratch. It proposes a high-level logical model, which can be used as a reference model, to accommodate the range of requirements and environments applicable to Integrated Network and System Management. A development methodology is introduced that reflects the principles of the proposed approach and provides guidelines to structure the analysis, design and implementation phases of a management system. The INSMware approach can further be combined with the componentware paradigm for the implementation of the management system. Based on these principles, a prototype for the management of SNMP systems has been implemented using industry standard middleware technologies. It is argued that developing a management system based on componentware principles can offer a number of benefits: INSMware Components may be re-used, and system solutions become more modular and thereby easier to construct and maintain.
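    To make the building-block idea concrete, the sketch below shows one possible shape of a reusable management component interface that separate monitoring and configuration blocks could implement. It is an illustrative assumption, not the INSMware API; all names are invented.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

# Illustrative sketch of the "building block" idea: management functionality
# split into reusable components behind a common interface. This is an
# assumption for illustration, not the INSMware API.

class ManagementComponent(ABC):
    """Common interface every management building block implements."""

    @abstractmethod
    def poll(self, target: str) -> Dict[str, Any]:
        """Collect management data from a managed resource."""

    @abstractmethod
    def configure(self, target: str, settings: Dict[str, Any]) -> None:
        """Push configuration to a managed resource."""

class InterfaceStatusBlock(ManagementComponent):
    def poll(self, target: str) -> Dict[str, Any]:
        # A real block would issue e.g. SNMP requests; here we return a stub.
        return {"target": target, "ifOperStatus": "up"}

    def configure(self, target: str, settings: Dict[str, Any]) -> None:
        print(f"applying {settings} to {target}")

# Building blocks are composed into a management system rather than the
# system being written from scratch.
blocks = [InterfaceStatusBlock()]
for block in blocks:
    print(block.poll("router-1"))
```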

    A component-based approach to human–machine interface systems that support agile manufacturing

    The development of next generation manufacturing systems is currently an active area of research worldwide. Globalisation is placing new demands on the manufacturing industry, with products having shorter lifecycles and being required in more variants. Manufacturing systems must therefore be agile to support frequent manufacturing system reconfiguration involving globally distributed engineering partners. The research described in this thesis addresses one aspect within this research area: the Human Machine Interface (HMI) systems that support the personnel involved in the monitoring, diagnostics and reconfiguration of automated manufacturing production machinery. Current HMI systems are monolithic in their design, generally offer poor connectivity to other manufacturing systems, and require highly skilled personnel to develop and maintain them. The new approach established in the research and presented in this thesis provides a specification capture technique (using a novel storyboarding modelling notation) that enables the end user's HMI functionality to be specified and rapidly developed into fully functional end-user HMIs via automated generation tools. A novel feature of this HMI system architecture is that all machine information is stored in a common unified machine data model, which ensures that consistent, accurate machine data is available to all machine lifecycle engineering tools, including the HMI. The system's run-time architecture enables remote monitoring and diagnostics capabilities to be made available to geographically distributed engineering partners using standard internet technologies. The implementation of this novel HMI approach has been prototyped and evaluated using the industrial collaborator's full-scale demonstrator machines within cylinder head machining and engine assembly applications.
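    One way to picture the common machine data model is as a single structure that the HMI and other lifecycle tools all read. The sketch below is a hypothetical illustration of that idea, not the data model described in the thesis; all class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of a unified machine data model shared by lifecycle
# tools (including the HMI), so every tool reads the same machine data.

@dataclass
class ComponentState:
    name: str                      # e.g. "clamp_1"
    state: str                     # e.g. "extended", "retracted", "fault"
    diagnostics: Dict[str, str] = field(default_factory=dict)

@dataclass
class MachineDataModel:
    machine_id: str
    components: List[ComponentState] = field(default_factory=list)

    def faulted(self) -> List[ComponentState]:
        return [c for c in self.components if c.state == "fault"]

# The HMI layer renders views from the shared model rather than holding
# its own copy of machine data.
model = MachineDataModel("cylinder_head_station", [
    ComponentState("clamp_1", "extended"),
    ComponentState("spindle", "fault", {"code": "overtemp"}),
])

for component in model.faulted():
    print(f"ALERT: {component.name} -> {component.diagnostics}")
```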

    An Agile Roadmap for Live, Virtual and Constructive-Integrating Training Architecture (LVC-ITA): A Case Study Using a Component based Integrated Simulation Engine

    Conducting seamless Live Virtual Constructive (LVC) simulation remains the most challenging issue in Modeling and Simulation (M&S). There is a lack of interoperability, limited reuse, and loose integration between Live, Virtual and/or Constructive assets across multiple Standard Simulation Architectures (SSAs). There have been various theoretical research endeavors aimed at solving these problems, but their solutions resulted in complex and inflexible integration, long user-usage time, and high cost for LVC simulation. The goal of this research is to provide an Agile Roadmap for the Live Virtual Constructive-Integrating Training Architecture (LVC-ITA) that addresses the above problems and introduces interoperable LVC simulation. Accordingly, this research describes how the newest M&S technologies can be utilized for LVC simulation interoperability and integration, and then examines the optimal procedure for developing an agile roadmap for the LVC-ITA. In addition, this research illustrates a case study using the Adaptive distributed parallel Simulation environment for Interoperable and reusable Model (AddSIM), a component-based integrated simulation engine. The agile roadmap for the LVC-ITA, which reflects the lessons learned from the case study, will help guide M&S communities along an efficient path towards increased interaction of M&S simulations across systems.