112 research outputs found

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, topographic and GNSS instruments, etc., or by non-conventional systems and instruments such as UAVs, mobile mapping, etc. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    Emergency Response Information System Interoperability: Development of Chemical Incident Response Data Model

    Emergency response requires an efficient information supply chain for the smooth operation of intra- and inter-organizational emergency management processes. However, the breakdown of this information supply chain due to the lack of consistent data standards presents a significant problem. In this paper, we adopt a novel theory-driven approach to develop an XML-based data model that prescribes a comprehensive set of data standards (semantics and internal structures) for emergency management, to better address the challenges of information interoperability. Actual documents currently used in mitigating chemical emergencies from a large number of incidents are used in the analysis stage. The data model development is guided by Activity Theory and is validated through an RFC-like process used in standards development. This paper applies the standards to the real case of a chemical incident scenario. Further, it complies with the leading national initiatives in emergency standards (National Information Exchange Model).
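An XML-based data model like the one described typically fixes element names and nesting so that systems on both sides of an exchange parse the same structure. The sketch below builds and serializes a minimal incident document with Python's standard library; the element names (`ChemicalIncident`, `Substance`, `Severity`) are illustrative assumptions, not the schema actually defined in the paper.

```python
import xml.etree.ElementTree as ET

# Build a minimal incident report; element names are hypothetical,
# chosen only to illustrate a shared, machine-parsable structure.
incident = ET.Element("ChemicalIncident", id="INC-001")
ET.SubElement(incident, "Substance").text = "chlorine"
ET.SubElement(incident, "Location").text = "45.0,7.6"
ET.SubElement(incident, "Severity").text = "high"

# Serialize to text for exchange between responder systems.
xml_text = ET.tostring(incident, encoding="unicode")
print(xml_text)
```

Because both producer and consumer agree on the structure in advance, a receiving system can parse the document and validate it against the shared standard before acting on it.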

    A holonic manufacturing architecture for line-less mobile assembly systems operations planning and control

    Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Automation and Systems Engineering, Florianópolis, 2022.
    Abstract: The Line-Less Mobile Assembly System (LMAS) is a manufacturing paradigm aiming to maximize responsiveness to market trends (product individualization and ever-shortening product lifecycles) through adaptive factory configurations utilizing mobile assembly resources. Such responsive systems can be characterized as holonic manufacturing systems (HMS), whose so-called holonic control architectures (HCA) have recently been portrayed as Industry 4.0-enabling approaches due to their mixed hierarchical and heterarchical temporary entity relationships. They are particularly suitable for distributed and flexible systems such as Line-Less Mobile Assembly or Matrix Production, as they provide reconfigurability capabilities.
Though HCA reference structures such as PROSA or ADACOR/ADACOR² have been heavily discussed in the literature, neither can be directly applied to the LMAS context. Methodologies such as ANEMONA provide guidelines and best practices for the development of holonic multi-agent systems. Accordingly, this dissertation aims to answer the question "How does an LMAS production and control system architecture need to be designed?" by presenting the architecture design models developed according to the steps of the ANEMONA methodology. The ANEMONA analysis phase results in a use case specification, requirements, system goals, simplifications, and assumptions. The design phase results in an LMAS architecture design consisting of the organization, interaction, and agent models, followed by a brief analysis of its behavioral coverage. The implementation phase results in an LMAS ontology, which reuses elements from the widespread manufacturing domain ontologies MAnufacturing's Semantics Ontology (MASON) and Manufacturing Resource Capability Ontology (MaRCO), enriched with essential holonic concepts. The architecture approach and ontology are implemented using the Robot Operating System (ROS) robotic framework. In order to create test data sets for validation, an algorithm for test data generation based on the complexity of products and the shopfloor flexibility is presented, considering a maximum number of operations per workstation and a maximum number of simultaneous stations. The validation phase presents a two-fold validation: qualitative and quantitative. The qualitative validation of the HCA models is based on how the proposed HCA meets specific criteria for evaluating HCA systems (e.g., modularity, integrability, diagnosability, fault tolerance, distributability, developer training requirements).
The validation is complemented by a quantitative analysis considering the behavior of the implemented models during normal execution and disrupted execution (e.g., defective equipment) in a simulated environment (in the form of a software prototype). The normal-execution validation focuses on the time drift between the planned and executed schedules, which proved to be negligible within the simulated case considering the order of magnitude of the typically demanded operations. Subsequently, during the disrupted-case execution, the system is tested under the simulation of a failure, where two strategies are applied, LOCAL_FIX and REORGANIZATION, and their outcomes are compared to decide which one is the appropriate option when the goal is to reduce the overall execution time. Ultimately, an analysis of the coverage of this dissertation is presented, culminating in guidelines that can be seen as one possible answer (among many others) to the presented research question. Furthermore, strong and weak points of the developed models are presented, along with possible improvements and ideas for future contributions towards the implementation of holonic control systems for LMAS.
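The LOCAL_FIX vs. REORGANIZATION comparison described above can be sketched as a makespan comparison after a simulated failure: either operations on the failed station wait for repair, or they are reassigned to other stations at a fixed overhead. The repair delay, overhead, and schedule below are invented numbers for illustration; they are not the dissertation's actual models or parameters.

```python
# Hypothetical sketch of the disrupted-execution comparison.
# A schedule is a list of (station, duration) pairs.

def local_fix(schedule, failed_station, repair_delay=30.0):
    """Keep the schedule; operations on the failed station incur the repair delay."""
    return sum(d + (repair_delay if s == failed_station else 0.0)
               for s, d in schedule)

def reorganization(schedule, failed_station, reallocation_overhead=20.0):
    """Reassign the failed station's operations elsewhere at a fixed overhead."""
    return reallocation_overhead + sum(d for _, d in schedule)

schedule = [("A", 10.0), ("B", 15.0), ("A", 12.0)]  # illustrative schedule
t_local = local_fix(schedule, "A")
t_reorg = reorganization(schedule, "A")
best = "LOCAL_FIX" if t_local <= t_reorg else "REORGANIZATION"
print(best, t_local, t_reorg)
```

With these assumed numbers, reassigning work beats waiting for repair; with a cheap repair or an expensive reorganization the decision would flip, which is exactly why the two strategies are compared at runtime.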

    INVESTIGATION OF THE ROLE OF SERVICE LEVEL AGREEMENTS IN WEB SERVICE QUALITY

    Context/Background: The use of Service Level Agreements (SLAs) is crucial to providing value-added services that successfully meet consumers' requirements. SLAs also assure consumers of the expected Quality of Service. Aim: This study investigates how efficient structural representation and management of SLAs can help to ensure Quality of Service (QoS) in Web services during Web service composition. Method: Existing specifications and structures for SLAs for Web services do not fully formalize or support the various automatic and dynamic behavioral aspects needed for QoS calculation. This study addresses how to formalize and document the structures of SLAs for better service utilization and improved QoS results. The Service Oriented Architecture (SOA) is extended in this study with the addition of an SLAAgent, which helps to automate QoS calculation using Fuzzy Inference Systems, as well as service discovery, service selection, and SLA monitoring and management during service composition, with the help of structured SLA documents. Results: The proposed framework improves how SLAs are structured, managed and monitored during Web service composition to achieve better Quality of Service effectively and efficiently. Conclusions: Automating SLAs to deal with different types of computational requirements is a challenge during Web service composition. This study shows the significance of SLAs for better QoS during the composition of services in SOA.
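A fuzzy inference step like the one the SLAAgent automates maps measured SLA metrics onto graded linguistic terms and combines them with rules. The sketch below uses shoulder-shaped membership functions and a single "IF fast AND available THEN QoS is high" rule; the metric names, thresholds, and rule are assumptions for illustration, not the study's actual fuzzy inference system.

```python
# Minimal fuzzy-style QoS score from two hypothetical SLA metrics.

def falling(x, lo, hi):
    """Membership 1 at or below lo, 0 at or above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def rising(x, lo, hi):
    """Membership 0 at or below lo, 1 at or above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def qos_score(response_ms, availability):
    fast = falling(response_ms, 100, 500)       # degree the service is "fast"
    up = rising(availability, 0.95, 0.999)      # degree it is "highly available"
    # Rule: IF fast AND available THEN QoS is high (AND as min, Mamdani-style).
    return min(fast, up)
```

During composition, a score like this lets candidate services be ranked against their SLA promises instead of being judged by a single hard threshold.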

    Framework for collaborative knowledge management in organizations

    Nowadays, organizations are pushed to speed up the rate of industrial transformation towards high-value products and services. The capability to respond agilely to new market demands has become a strategic pillar for innovation, and knowledge management can support organizations in achieving that goal. However, current knowledge management approaches tend to be overly complex or too academic, with interfaces that are difficult to manage, even more so when cooperative handling is required. Nevertheless, in an ideal framework, both tacit and explicit knowledge management should be addressed to achieve knowledge handling with precise and semantically meaningful definitions. Moreover, with the increase in Internet usage, the amount of available information has exploded, leading to the observed progress in the creation of mechanisms to retrieve useful knowledge from the huge number of existing information sources. However, the same knowledge representation of a thing can mean different things to different people and applications. Contributing in this direction, this thesis proposes a framework capable of gathering the knowledge held by domain experts and domain sources through a knowledge management system and transforming it into explicit ontologies. This makes it possible to build tools with advanced reasoning capabilities to support enterprises' decision-making processes. The author also intends to address the problem of knowledge transfer within and among organizations. This is done through a module (part of the proposed framework) for establishing the domain's lexicon, whose purpose is to represent and unify the understanding of the semantics used in the domain.
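At its core, a domain-lexicon module like the one described resolves the variant terms different experts use for the same thing onto one shared concept, so that "the same knowledge representation" stops meaning different things to different applications. The toy vocabulary and concept names below are hypothetical, not the thesis's actual lexicon or ontology.

```python
# Minimal sketch of a lexicon that unifies variant domain terms
# onto canonical concepts shared across an organization.

LEXICON = {
    "cnc machine": "MachineTool",
    "machining centre": "MachineTool",
    "machining center": "MachineTool",
    "operator": "HumanResource",
    "worker": "HumanResource",
}

def canonical(term: str) -> str:
    """Map a free-text domain term to its shared concept, or flag it unknown."""
    return LEXICON.get(term.strip().lower(), "UNKNOWN")

print(canonical("Machining Center"))  # spelling variants resolve to one concept
```

Terms that fall outside the lexicon surface as `UNKNOWN`, which is the point where a domain expert would be asked to extend the shared vocabulary rather than let two representations silently diverge.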