
    Virtual Organization Clusters: Self-Provisioned Clouds on the Grid

    Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual-Organization basis, without the cost of physical cluster construction or the overhead of per-job containers. Administrative access and overlay network capabilities are granted to Virtual Organizations (VOs) that choose to implement VOC technology, while the system remains completely transparent to end users and non-participating VOs. Unlike alternative systems that require explicit leases, VOCs are autonomically self-provisioned according to configurable usage policies. As a grid computing architecture, VOCs are designed to be technology agnostic and are implementable by any combination of software and services that follows the Virtual Organization Cluster Model. As demonstrated through simulation testing and evaluation of an implemented prototype, VOCs are a viable mechanism for increasing end-user job compatibility on grid sites. On existing production grids, where jobs are frequently submitted to a small subset of sites and thus experience high queuing delays relative to average job length, the grid-wide addition of VOCs does not adversely affect mean job sojourn time. By load-balancing jobs among grid sites, VOCs can reduce the total amount of queuing on a grid to a level sufficient to counteract the performance overhead introduced by virtualization.

    Collision-free path coordination and cycle time optimization of industrial robot cells

    In industry, short ramp-up times, product quality, product customization and high production rates are among the main drivers of technological progress. This is especially true for automotive manufacturers, whose market is very competitive and constantly pushes for new solutions. In this industry, many processes are carried out by robots: for example, operations such as stud/spot welding, sealing, painting and inspection. Besides higher production rates, improving these processes matters from a sustainability perspective, since optimized equipment utilization can be achieved in terms of the resources used, including robots, energy and physical prototyping. Such goals can nowadays also be reached thanks to virtual methods, which make modeling, simulation and optimization of industrial processes possible. The work in this thesis is positioned in this area and focuses on virtual product and production development for throughput improvement of robotic processes in the automotive industry. Specifically, the thesis presents methods, algorithms and tools to avoid collisions and minimize cycle time in multi-robot stations. It starts with an overview of the problem, providing insights into the relationship between the volumes shared by the robots' workspaces and more abstract modeling spaces. It then describes a computational method for minimizing cycle time when robot paths are geometrically fixed and only velocity tuning is allowed to avoid collisions. Additional requirements are considered for running these solutions in industrial setups, specifically the time delays introduced when stopping robots to exchange information with a programmable logic controller (PLC). A post-processing step is suggested, with algorithms taking these practical constraints into account. When avoiding all communication with the PLC is highly desirable, a method is described that generates programs with completely separated robot workspaces. Finally, when this is not possible (in very cluttered environments and with densely distributed tasks, for example), robot routes are modified by changing the order of operations to avoid collisions between robots. In summary, by requiring fewer iterations between different planning stages, using automatic tools to optimize the process and reducing physical prototyping, the research presented in this thesis (and the corresponding implementation in software platforms) will improve virtual product and production realization for robotic applications.
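In its simplest form, the velocity-tuning idea above reduces to a timing question: if two geometrically fixed paths cross a shared zone, one robot's entry can be postponed until the other has left. The sketch below is purely illustrative (hypothetical function and numbers, not the thesis's algorithm), ignoring any knock-on overlaps elsewhere on the paths:

```python
def delay_to_avoid_overlap(enter_a, exit_a, enter_b, exit_b):
    """Return the minimal delay for robot B so that its occupancy of a
    shared zone no longer overlaps robot A's (paths stay fixed)."""
    overlaps = enter_b < exit_a and enter_a < exit_b
    if not overlaps:
        return 0.0
    # Postpone B's entry until the instant A leaves the zone.
    return exit_a - enter_b

# Robot A occupies the zone during [2, 5]; robot B would enter at t=3.
print(delay_to_avoid_overlap(2.0, 5.0, 3.0, 6.0))  # 2.0
```

A real coordination scheme must also propagate the delay along the rest of B's path and re-check every other shared zone, which is where the cycle-time optimization becomes non-trivial.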

    Multi-objective task allocation for collaborative robot systems with an Industry 5.0 human-centered perspective

    The migration from Industry 4.0 to Industry 5.0 is becoming more relevant nowadays, with a consequent increase in interest in operators' wellness in their working environment. In modern industry, some activities require the flexibility of human operators to perform different tasks, while others can be performed by collaborative robots (cobots), which promote a fair division of tasks among the resources in industrial applications. Initially, these robots were used to increase productivity, in particular in assembly systems; currently, new goals have been introduced, such as reducing operators' fatigue so that they can be more effective in the tasks that require their flexibility. For this purpose, a model is proposed here that realizes a multi-objective optimization for task allocation. It includes makespan minimization, but also reduction of the operator's energy expenditure and average mental workload. The first objective targets the required high productivity standards, while the latter two realize a human-centered workplace, as required by the Industry 5.0 paradigms. A method for average mental workload evaluation over the entire assembly process and a new constraint related to resources' idleness are suggested, together with an evaluation of the methodology in a real case study. The results show that it is possible to combine all these elements, finding a procedure to define the optimal task allocation that improves the performance of the systems, both in efficiency and in workers' well-being.
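A weighted sum is one simple way to combine the three objectives the abstract lists. The sketch below is purely illustrative (toy task data, brute-force enumeration, assumed equal weights; not the paper's model): each task goes to the human operator or the cobot, and complete allocations are scored by makespan plus the operator's energy and mental load.

```python
from itertools import product

def best_allocation(tasks, weights=(1.0, 1.0, 1.0)):
    """Exhaustively score every human/cobot assignment by a weighted sum
    of makespan, operator energy expenditure and operator mental workload."""
    w_time, w_energy, w_mental = weights
    best = None
    for assign in product(("human", "cobot"), repeat=len(tasks)):
        human = [t for t, a in zip(tasks, assign) if a == "human"]
        cobot = [t for t, a in zip(tasks, assign) if a == "cobot"]
        # The two resources work in parallel, so makespan is the slower one.
        makespan = max(sum(t["time"] for t in human),
                       sum(t["time"] for t in cobot))
        energy = sum(t["energy"] for t in human)
        mental = sum(t["mental"] for t in human)
        score = w_time * makespan + w_energy * energy + w_mental * mental
        if best is None or score < best[0]:
            best = (score, assign)
    return best

tasks = [{"time": 4, "energy": 3, "mental": 2},
         {"time": 2, "energy": 1, "mental": 4},
         {"time": 3, "energy": 2, "mental": 1}]
print(best_allocation(tasks))
```

Brute force is only viable for small stations; the paper's setting calls for a proper optimization model, but the scoring idea is the same.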

    HDeepRM: Deep Reinforcement Learning for Workload Management in Heterogeneous Clusters

    High Performance Computing (HPC) environments offer users computational capability as a service. They are constituted by computing clusters: groups of resources available for processing jobs sent by the users. Heterogeneous configurations of these clusters provide resources fitted to a wider spectrum of workloads than traditional homogeneous approaches, which in turn improves the computational and energetic efficiency of the service. Scheduling of resources for incoming jobs is undertaken by a workload manager following an established policy. Classic policies have been developed for homogeneous environments, with the literature focusing on improving job selection policies. Nevertheless, in heterogeneous configurations resource selection is just as relevant for optimizing the offered service. The complexity of scheduling policies grows with the number of resources and the degree of heterogeneity in the service. Deep Reinforcement Learning (DRL) has recently been evaluated in homogeneous workload management scenarios as an alternative for dealing with complex patterns. It introduces an artificial agent which learns to estimate the optimal scheduling policy for a given system. In this thesis, HDeepRM, a novel framework for the study of DRL agents in heterogeneous clusters, is designed, implemented, tested and distributed. It leverages a state-of-the-art simulator and offers users a clean interface for developing their own bespoke agents, as well as evaluating them before going into production. Evaluations have been undertaken to demonstrate the validity of the framework: two agents based on well-known reinforcement learning algorithms are implemented over HDeepRM, and the results show the research potential of this area for the scientific community. (Máster en Ingeniería Informática)
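The core idea of an agent learning a resource-selection policy can be shown with a tabular toy (illustrative service rates and reward, not HDeepRM's deep agent): the agent repeatedly picks one of two heterogeneous resources for an incoming job and learns, from negative completion times, which one serves jobs faster.

```python
import random

random.seed(0)
SPEEDS = {"fast_node": 2.0, "slow_node": 1.0}  # hypothetical service rates
q = {name: 0.0 for name in SPEEDS}             # action-value estimates
alpha, epsilon = 0.1, 0.2                      # learning rate, exploration

for episode in range(500):
    job_size = random.uniform(1.0, 3.0)
    if random.random() < epsilon:              # explore a random resource
        action = random.choice(list(SPEEDS))
    else:                                      # exploit the current estimate
        action = max(q, key=q.get)
    reward = -job_size / SPEEDS[action]        # negative completion time
    q[action] += alpha * (reward - q[action])  # one-step value update

print(max(q, key=q.get))
```

A DRL agent replaces the table with a neural network so the policy can condition on rich state (queue contents, resource status), which is what makes the heterogeneous case tractable.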

    A simulation modelling approach to improve the OEE of a bottling line

    This dissertation presents a simulation approach to improving the efficiency performance, in terms of OEE (Overall Equipment Effectiveness), of an automated bottling line. A simulation model of the system is created with the AnyLogic software and used to solve the case. The problems faced are a sequencing problem, related to the order in which bottle formats are processed, and a buffer-sizing problem. Both theoretical aspects (OEE, job sequencing and simulation) and practical aspects are presented.
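OEE, as commonly defined, is the product of three factors. The figures below are illustrative, not from the dissertation:

```python
def oee(availability, performance, quality):
    """OEE = Availability x Performance x Quality, each as a fraction."""
    return availability * performance * quality

# A line running 90% of planned time, at 95% of ideal speed,
# producing 98% good bottles:
print(round(oee(0.90, 0.95, 0.98), 4))  # 0.8379
```

Because the three factors multiply, a simulation that raises any one of them (e.g. less starving/blocking via better buffer sizing) lifts the overall OEE proportionally.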

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book collects the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary area.
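The update rules the abstract describes can be sketched in a few lines. This is the textbook global-best form with assumed parameters (inertia w, cognitive/social coefficients c1, c2), not code from the book: each particle is pulled toward its own best-known position and toward the swarm's global best.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with global-best PSO."""
    random.seed(1)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position
    gbest = min(pbest, key=f)[:]           # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))  # converges toward the minimum at the origin
```

Note the absence of crossover and mutation: the only operators are the velocity and position updates, which is the contrast with GA drawn above.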

    Toolpath Planning Methodology for Multi-Gantry Fused Filament Fabrication 3D Printing

    Additive manufacturing (AM) has revolutionized the way industries manufacture and prototype products. Fused filament fabrication (FFF) is one of the most popular AM processes, as it is inexpensive, requires little maintenance and has high material utilization. However, the biggest drawback preventing FFF printing from being widely adopted in large-scale production is its cycle time. The most practical remedy is to allow multiple collaborating printheads to work simultaneously on different parts of the same object; however, little research has addressed this approach. Hence, a new toolpath planning methodology is proposed in this paper. The objectives are to create a collision-free toolpath for each printhead while maintaining the mechanical performance of the printed model. The proposed method utilizes the Tabu Search heuristic combined with two subroutines, collision checking and collision resolution (TS-CCR). A computer simulation was used to compare the cycle time of the proposed method with the industry-standard approach. Physical experimentation was conducted to validate the mechanical strength of the TS-CCR specimens and to confirm that the proposed toolpath can be executed on a custom multi-gantry setup without collisions. Experimental results indicate that TS-CCR creates toolpaths with shorter makespans than the current standard approach while achieving better ultimate tensile strength (UTS). This research opens opportunities for developing general toolpath planning for concurrent 3D printing.

    Efficient multilevel scheduling in grids and clouds with dynamic provisioning

    Thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 12-01-2016. The consolidation of large Distributed Computing infrastructures has resulted in a High-Throughput Computing platform that is ready for high loads, whose best exponents are the current grid federations. On the other hand, Cloud Computing promises to be more flexible, usable, available and simple than Grid Computing, while also covering many more computational needs than those required to carry out distributed calculations. In any case, because of the dynamism and heterogeneity present in grids and clouds, calculating the best match between computational tasks and resources in an effectively characterised infrastructure is, by definition, an NP-complete problem, and only sub-optimal solutions (schedules) can be found for these environments. Nevertheless, the characterisation of the resources of both kinds of infrastructures is far from complete. The available information systems do not provide accurate data about the status of the resources, which prevents the advanced scheduling required by the different needs of distributed applications. The issue was not solved for grids during the last decade, and the recently established cloud infrastructures have the same problem. In this framework, brokers can only improve the throughput of very long calculations, but do not provide estimations of their duration. Complex scheduling has traditionally been tackled by other tools such as workflow managers, self-schedulers and the production management systems of certain research communities. Nevertheless, the low performance achieved by these early-binding methods is noticeable. Moreover, the diversity of cloud providers and, mainly, their lack of standardised programming interfaces and brokering tools to distribute the workload hinder the massive portability of legacy applications to cloud environments...

    Acta Cybernetica: Volume 15, Number 2.


    Systems Engineering: Availability and Reliability

    Current trends in Industry 4.0 are largely related to issues of reliability and availability. As a result of these trends and the complexity of engineering systems, research and development in this area needs to focus on new solutions for integrating intelligent machines or systems, with an emphasis on changes in production processes aimed at increasing production efficiency or equipment reliability. The emergence of innovative technologies and new business models, based on innovation, cooperation networks and the enhancement of endogenous resources, is expected to contribute strongly to the development of competitive economies around the world. Innovation and engineering focused on sustainability, reliability and availability of resources have a key role in this context. The scope of this Special Issue is closely associated with that of the ICIE'2020 conference. The aim of the conference and of this Special Issue is to present current innovations and engineering achievements of top scientists and industrial practitioners in thematic areas related to reliability and risk assessment, innovations in maintenance strategies, production process scheduling, management and maintenance, or systems analysis, simulation, design and modelling.