740 research outputs found

    Governance of Autonomous Agents on the Web: Challenges and Opportunities

    The study of autonomous agents has a long tradition in the Multiagent System and the Semantic Web communities, with applications ranging from automating business processes to personal assistants. More recently, the Web of Things (WoT), an extension of the Internet of Things (IoT) with metadata expressed in Web standards, and its community provide further motivation for pushing the autonomous agents research agenda forward. Although representing and reasoning about norms, policies and preferences is crucial to ensuring that autonomous agents act in a manner that satisfies stakeholder requirements, normative concepts, policies and preferences have yet to be considered as first-class abstractions in Web-based multiagent systems. Towards this end, this paper motivates the need for alignment and joint research across the Multiagent Systems, Semantic Web, and WoT communities, introduces a conceptual framework for governance of autonomous agents on the Web, and identifies several research challenges and opportunities.

    Software Technologies - 8th International Joint Conference, ICSOFT 2013 : Revised Selected Papers


    Team Learning: A Theoretical Integration and Review

    With the increasing emphasis on work teams as the primary architecture of organizational structure, scholars have begun to focus attention on team learning, the processes that support it, and the important outcomes that depend on it. Although the literature addressing learning in teams is broad, it is also messy and fraught with conceptual confusion. This chapter presents a theoretical integration and review. The goal is to organize theory and research on team learning, identify actionable frameworks and findings, and emphasize promising targets for future research. We emphasize three theoretical foci in our examination of team learning, treating it as multilevel (individual and team, not individual or team), dynamic (iterative and progressive; a process not an outcome), and emergent (outcomes of team learning can manifest in different ways over time). The integrative theoretical heuristic distinguishes team learning process theories, supporting emergent states, team knowledge representations, and respective influences on team performance and effectiveness. Promising directions for theory development and research are discussed.

    A holonic manufacturing architecture for line-less mobile assembly systems operations planning and control

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2022.
Abstract: The Line-Less Mobile Assembly Systems (LMAS) is a manufacturing paradigm aiming to maximize responsiveness to market trends (product individualization and ever-shortening product lifecycles) through adaptive factory configurations utilizing mobile assembly resources. Such responsive systems can be characterized as holonic manufacturing systems (HMS), whose so-called holonic control architectures (HCA) have recently been portrayed as Industry 4.0-enabling approaches due to their mixed hierarchical and heterarchical temporary entity relationships. They are particularly suitable for distributed and flexible systems such as Line-Less Mobile Assembly or Matrix Production, as they provide reconfigurability capabilities.
Though HCA reference structures such as PROSA or ADACOR/ADACOR² have been heavily discussed in the literature, none of them can be applied directly to the LMAS context. Methodologies such as ANEMONA provide guidelines and best practices for the development of holonic multi-agent systems. Accordingly, this dissertation aims to answer the question "How does an LMAS production and control system architecture need to be designed?" by presenting the architecture design models developed according to the steps of the ANEMONA methodology. The ANEMONA analysis phase results in a use case specification, requirements, system goals, simplifications, and assumptions. The design phase results in an LMAS architecture design consisting of the organization, interaction, and agent models, followed by a brief analysis of its behavioral coverage. The implementation phase results in an LMAS ontology, which reuses elements from the widespread manufacturing domain ontologies MAnufacturing's Semantics Ontology (MASON) and Manufacturing Resource Capability Ontology (MaRCO), enriched with essential holonic concepts. The architecture approach and ontology are implemented using the Robot Operating System (ROS) robotic framework. In order to create test data sets for validation, an algorithm for test generation based on the complexity of products and the shopfloor flexibility is presented, considering a maximum number of operations per work station and a maximum number of simultaneous stations. The validation phase presents a twofold validation: qualitative and quantitative. The qualitative validation of the HCA models is based on how the proposed HCA meets specific criteria for evaluating HCA systems (e.g., modularity, integrability, diagnosability, fault tolerance, distributability, developer training requirements).
The validation is complemented by a quantitative analysis considering the behavior of the implemented models during normal execution and disrupted execution (e.g., defective equipment) in a simulated environment (in the form of a software prototype). The normal execution validation focuses on the time drift between the planned and executed schedules, which proved to be irrelevant within the simulated case considering the order of magnitude of the typical demanded operations. Subsequently, during the disrupted case execution, the system is tested under the simulation of a failure, where two strategies are applied, LOCAL_FIX and REORGANIZATION, and their outcomes are compared to decide which one is the appropriate option when the goal is to reduce the overall execution time. Ultimately, an analysis of the coverage of this dissertation is presented, culminating in guidelines that can be seen as one possible answer (among many others) to the presented research question. Furthermore, strong and weak points of the developed models are presented, along with possible improvements and ideas for future contributions towards the implementation of holonic control systems for LMAS.

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and material discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications would prefer only those CSPs who follow guidelines from HIPAA (Health Insurance Portability and Accountability Act) for implementing their data services, and might want services from other CSPs for economic viability. With the generation of more and more data, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in deciding upon CSP resource selection, the scientific workflows often process proprietary/confidential data that introduces additional constraints of security postures. Thus, users have to make an informed decision on the selection of resources that are most suited for their applications while trading off between the key factors of resource selection, which are performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multi-cloud, the cost to solution might not be economical for all users, which has led to the development of new paradigms of computing such as volunteer computing, where users utilize volunteered cyber resources to meet their computing requirements.
For economical and readily available resources, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users. In this dissertation, the individual stages in the lifecycle of resource brokering for users, such as user requirement collection, users' resource preferences, resource brokering, and task scheduling, are tackled. For the collection of user requirements, a novel approach through an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise for guiding resource selection for their applications. The results showed an improvement in performance (i.e., time to execute) in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming-based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while also optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a Naive Bayes classifier, which captures the true QoS of cloud resources for guiding template solution creation. The proposed solution was able to improve the time to execute for as much as 96 percent of the largest applications. As discussed above, to fulfill the necessity of economical computing resources, a new paradigm of computing, namely Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. The initial results have shown improved execution time for application workflows against state-of-the-art solutions while utilizing only the most secure VEC resources.
Consequently, we have utilized reinforcement learning-based solutions to characterize volunteered resources for their availability and flexibility towards the implementation of security policies. The characterization of volunteered resources facilitates efficient allocation of resources and scheduling of workflow tasks, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics workflows and manufacturing workflows. Includes bibliographical references.

    Annual Report Of Research and Creative Productions by Faculty and Staff from January to December, 2005.

    Annual Report Of Research and Creative Productions by Faculty and Staff from January to December, 2005

    Factories of the Future

    Engineering; Industrial engineering; Production engineering