485 research outputs found

    Artificial Intelligence as an Enabler of Quick and Effective Production Repurposing Manufacturing: An Exploratory Review and Future Research Propositions

    The outbreak of Covid-19 disrupted manufacturing operations; one of the most serious negative impacts was the shortage of critical medical supplies. Manufacturing firms faced pressure from governments to repurpose their production capacity to meet the critical demand for necessary products. Recent advances in technology and artificial intelligence (AI) could act as response solutions to overcome the threats linked with repurposing manufacturing (RM). The purpose of this study is to investigate the significance of AI in RM through a systematic literature review (SLR). Around 453 articles were gathered from the SCOPUS database in the selected research field. Structural Topic Modeling (STM) was utilized to generate emerging research themes from the selected documents on AI in RM. In addition, a bibliometric analysis was undertaken using an R package to study research trends in the field. The findings show that there is vast scope for research in this area, as the yearly global production of articles in the field remains limited; nevertheless, it is an evolving field and many research collaborations were identified. The study proposes a comprehensive research framework and propositions for future research development.
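
The bibliometric trend analysis mentioned above, yearly article production in the field, reduces to counting publications per year. A minimal sketch follows; the year list is an illustrative assumption, not data from the study:

```python
# Count publications per year to expose the field's growth trend.
from collections import Counter

# Hypothetical publication years extracted from a bibliographic export
publication_years = [2018, 2019, 2020, 2020, 2020, 2021, 2021, 2021, 2021, 2022]

yearly_counts = Counter(publication_years)
for year in sorted(yearly_counts):
    print(year, yearly_counts[year])

# Simple growth indicator: change between the last two full years
growth = yearly_counts[2021] - yearly_counts[2020]
```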

    EFFICIENCY OF FLEXIBLE FIXTURES: DESIGN AND CONTROL

    The manufacturing industries have been using flexible production technologies to meet the demand for customisation. As a part of production, fixtures have remained limited to dedicated technologies, even though numerous flexible fixtures have been studied and proposed by both academia and industry. Attempts at integrating flexible fixtures have not yielded the anticipated performance and have resulted in cost and time inefficiencies. The fundamental formulation of this thesis addresses this issue and aims to increase the efficiency of flexible fixtures. To realise this aim, the thesis poses three research questions. The first investigates how the efficiency of flexible fixtures can be described in terms of criteria. Building on this, the second investigates the use of efficiency metrics to integrate those criteria into a design procedure. Once the efficiency and design aspects have been established, the third investigates the active control of flexible fixtures to increase their efficiency. The results of this thesis derive from seven studies investigating the automotive and aerospace industries. In answer to the first research question, five criteria are used to establish the efficiency of flexible fixtures: fundamental, flexibility, cost, time and quality. By incorporating design characteristics with respect to production system paradigms, each criterion is elaborated upon using relevant sub-criteria and metrics. For the second research question, a comparative design procedure comprising four stages (covering mechanical, control and software aspects) is presented. Initially, the design procedure proposes conceptual design and verification stages to determine the most promising flexible fixture for a target production system. By executing detailed design and verification, the procedure enables a fixture designer to finalise the flexible fixture and determine its efficiency. Furthermore, a novel parallel kinematics machine is presented to demonstrate the applicability of the design procedure's analytical steps and to illustrate how appropriate kinematic structures can facilitate the efficiency-orientated design of flexible fixtures. Based on the correlation established by the design procedure, the active control of flexible fixtures directly affects the quality criterion of flexible fixture efficiency. This answers the third research question, on general strategies for the active control of flexible fixtures. Force and position control strategies are proposed from a system model and manipulator dynamics. It is shown that any flexible fixture belonging to a kinematic class can be controlled to regulate the force and position of a workpiece and ensure that process nominals are preserved. Moreover, using both direct and indirect force control strategies, a flexible fixture's role in active control can be expanded into a system of actively controlled fixtures useful in various processes. Finally, a position controller is presented that can regulate both periodic and non-periodic signals. This controller uses an additional feedforward scheme (based on the Hilbert transform) in parallel with a feedback mechanism, enabling flexible fixtures to regulate the position of a workpiece against any kind of disturbance.
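
The feedback-plus-feedforward position control idea in the final paragraph can be sketched in discrete time. This is a minimal illustration, not the thesis's controller: the Hilbert-transform feedforward is replaced by a feedforward term for a known sinusoidal disturbance, and the first-order plant, gains and disturbance amplitude are all assumptions.

```python
import math

def simulate(steps=200, dt=0.01, kp=40.0, ki=60.0):
    """PI feedback in parallel with a feedforward term that cancels a
    known sinusoidal disturbance acting on a first-order plant."""
    target = 1.0                  # desired workpiece position
    position, integral = 0.0, 0.0
    history = []
    for n in range(steps):
        t = n * dt
        disturbance = 0.2 * math.sin(2 * math.pi * t)    # periodic disturbance
        error = target - position
        integral += error * dt
        feedback = kp * error + ki * integral            # PI feedback path
        feedforward = -0.2 * math.sin(2 * math.pi * t)   # cancels the known disturbance
        velocity = feedback + feedforward + disturbance  # plant: dx/dt = u + d
        position += velocity * dt
        history.append(position)
    return history

trajectory = simulate()  # settles near the 1.0 target despite the disturbance
```

Because the feedforward path cancels the disturbance before the plant integrates it, the feedback loop only has to handle the setpoint change, which is the division of labour the thesis's parallel scheme relies on.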

    Stochastic Model Predictive Control and Machine Learning for the Participation of Virtual Power Plants in Simultaneous Energy Markets

    The emergence of distributed energy resources in the electricity system creates new scenarios in which domestic consumers (end-users) can be aggregated to participate in energy markets, acting as prosumers. Every prosumer is considered an individual energy node, with its own renewable generation source, its controllable and non-controllable energy loads, and even its own individual tariffs. The nodes can form aggregations that are managed by a system operator. Participation in energy markets is not trivial for individual prosumers, owing to the technical requirements that must be satisfied and the need to trade a minimum volume of energy. These requirements can be met through aggregated participation. In this context, aggregators handle the difficult task of coordinating and stabilizing the prosumers' operations, not only at the individual level but also at the system level, so that the set of energy nodes behaves as a single entity with respect to the market. The system operator can act as a combined trading and distribution company, or as a trading company only. For this reason, the optimization model must consider not only aggregated tariffs but also individual tariffs, to allow individual billing for each energy node. Each energy node must have the required technical and legal competences, as well as the equipment needed to manage its participation in energy markets or to delegate it to the system operator. This aggregation, defined according to business rules rather than only physical location, is known as a virtual power plant. Optimizing the aggregated participation in the different energy markets requires the introduction of the concept of dynamic storage virtualization. Therefore, every energy node in the system under study has a battery installed to store excess energy.
This dynamic virtualization defines logical partitions in the storage system so that it can serve different purposes. As an example, two partitions can be defined: one for aggregated participation in the day-ahead market and one for the demand-response program. Several criteria must be considered when defining the participation strategy. A risky strategy yields more benefits in terms of trading, but it is also more likely to incur penalties for failing to meet the contract due to uncertainties or operation errors. A conservative strategy, on the other hand, performs worse economically in terms of trading but reduces these potential penalties. The inclusion of dynamic intent profiles allows risky bids to be placed when the potential forecast error in generation, load or failures is low, and conservative bids otherwise. The system operator is the agent who decides how much energy is reserved for trading, how much for energy node self-consumption, how much for demand-response program participation, and so on. The large number of variables and states makes this problem too complex to solve with classical methods, especially considering that slightly wrong decisions can have significant economic consequences in the short term. The concept of dynamic storage virtualization has been studied and implemented to allow simultaneous participation in multiple energy markets. The simultaneous participations can be optimized with respect to potential profits, potential risks, or a combination of both using more advanced criteria derived from the system operator's know-how. Day-ahead bidding algorithms, demand-response participation optimization and a penalty-reduction operation control algorithm have been developed. A stochastic layer has been defined and implemented to improve the robustness of this forecast-dependent system.
This layer has been developed with chance constraints and includes the possibility of incorporating an intelligent agent based on an encoder-decoder architecture built with neural networks composed of gated recurrent units. The formulation and implementation fully decouple the algorithms, with no dependencies among them; nevertheless, they remain closely coordinated, because each one's execution considers both the current scenario and the selected strategy. This enables a wider and better context definition and more accurate situation awareness. In addition to the relevant simulation runs, the platform has been tested on a real system of 40 energy nodes for one year on the German island of Borkum. This experience allowed very satisfactory conclusions to be drawn about deploying the platform in real environments.
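
Two of the ideas above, dynamic storage virtualization and chance-constrained bidding, can be sketched together. This is a minimal illustration under assumptions not taken from the thesis: partition names, capacities, and a normally distributed forecast error with known standard deviation.

```python
from statistics import NormalDist

class VirtualizedBattery:
    """One physical battery split into logical partitions whose shares
    can be re-balanced at run time (dynamic storage virtualization)."""
    def __init__(self, capacity_kwh):
        self.capacity = capacity_kwh
        self.partitions = {}  # partition name -> fraction of capacity

    def set_partitions(self, shares):
        if abs(sum(shares.values()) - 1.0) > 1e-9:
            raise ValueError("partition shares must sum to 1")
        self.partitions = dict(shares)

    def allocation_kwh(self, name):
        return self.capacity * self.partitions[name]

def chance_constrained_bid(forecast_kwh, sigma_kwh, confidence=0.95):
    """Bid the volume deliverable with the given probability, assuming a
    normally distributed forecast error: the chance-constraint margin
    grows with uncertainty and with the required confidence."""
    z = NormalDist().inv_cdf(confidence)
    return forecast_kwh - z * sigma_kwh

battery = VirtualizedBattery(capacity_kwh=10.0)
battery.set_partitions({"day_ahead": 0.7, "demand_response": 0.3})
# Risky vs. conservative bids for the day-ahead partition's forecast surplus
risky = chance_constrained_bid(7.0, sigma_kwh=0.5, confidence=0.80)
conservative = chance_constrained_bid(7.0, sigma_kwh=0.5, confidence=0.99)
# A later re-split (e.g. more demand-response exposure) is just a new share map
battery.set_partitions({"day_ahead": 0.4, "demand_response": 0.6})
```

The confidence level is where the risky-versus-conservative trade-off described above shows up: raising it shrinks the bid and the penalty risk together.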

    Smart Agents in Industrial Cyber–Physical Systems


    Bandwidth Allocation Mechanism based on Users' Web Usage Patterns for Campus Networks

    Managing bandwidth in campus networks has become a challenge in recent years. Limited bandwidth resources and the continuous growth in users force IT managers to rethink their bandwidth allocation strategies. This paper introduces a mechanism for allocating bandwidth based on users' web usage patterns. The main purpose is to assign higher bandwidth to users who are inclined to browse educational websites than to those who are not. The proposed technique involves several stages: preprocessing of the weblogs, class labeling of the dataset, computation of the feature subspaces, training of the ANN for the LDA/GSVD algorithm, visualization, and bandwidth allocation. The proposed method was applied to real weblogs from the university's proxy servers. The results indicate that the method is useful in distinguishing users who used the internet for educational purposes from those who did not. The developed ANN for LDA/GSVD outperformed the existing algorithm by up to 50%, which indicates that the approach is efficient. Further, the results show that few users browsed educational content. Through this mechanism, users will be encouraged to use the internet for educational purposes, and IT managers can make better plans to optimize the distribution of bandwidth.
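
The final allocation stage described above can be sketched independently of the classifier. This is a minimal illustration: the ANN for LDA/GSVD classifier itself is out of scope, and the score threshold and bandwidth caps are assumptions, not values from the paper.

```python
# Map a classifier's "educational browsing" score to a bandwidth cap.
def allocate_bandwidth(educational_score, high_mbps=20.0, low_mbps=5.0, threshold=0.5):
    """educational_score: classifier output in [0, 1]; users at or above
    the threshold get the higher cap."""
    return high_mbps if educational_score >= threshold else low_mbps

# Hypothetical per-user scores produced by the classification stage
users = {"alice": 0.82, "bob": 0.31}
caps = {name: allocate_bandwidth(score) for name, score in users.items()}
```

In practice the threshold and caps would be policy knobs tuned by the IT managers; a graded mapping (proportional to the score) would be a natural refinement of this two-tier scheme.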

    Artificial cognitive architecture with self-learning and self-optimization capabilities. Case studies in micromachining processes

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 22-09-201

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, most of which have emerged within the last five years. It introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book deals with reliability challenges across levels, from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. The book provides readers with the latest insights into novel cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that can improve reliability through techniques proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts of self-organization for achieving error resiliency in complex, future many-core systems.
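
One classic error-resiliency technique from the toolbox such books survey is triple modular redundancy (TMR): run a computation three times and majority-vote the results so a single soft error is masked. This pure-software sketch is an illustration of the general idea, not an example from the book.

```python
# Software TMR: execute a computation three times and majority-vote.
def tmr(compute, *args):
    results = [compute(*args) for _ in range(3)]
    # Majority vote: return any value that appears at least twice
    for r in results:
        if results.count(r) >= 2:
            return r
    raise RuntimeError("no majority: more than one replica failed")

# Simulate a soft error (bit flip) corrupting exactly one of the three runs
calls = iter([42, 42 ^ 8, 42])  # second replica hit by a single-event upset
voted = tmr(lambda: next(calls))
```

Hardware TMR triplicates circuits and votes in logic; the software variant above trades time redundancy for the same single-fault masking guarantee.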

    Mass Production Processes

    It is always hard to set up manufacturing systems to produce large quantities of standardized parts. Controlling these mass production lines requires deep knowledge, long experience, and the right supporting tools. Using modern methods and techniques to produce large quantities of products within productive manufacturing processes improves manufacturing costs and product quality. To serve these purposes, this book reflects on advanced manufacturing systems for different alloys in production, together with the related components and automation technologies. Additionally, it focuses on mass production processes designed according to Industry 4.0, considering different kinds of quality and improvement work in mass production systems for highly productive and sustainable manufacturing. This book may interest researchers, industrial employees, and other partners who work for better-quality manufacturing at any stage of the mass production processes.

    Failure analysis informing intelligent asset management

    With increasing demands on the UK's power grid, it has become increasingly important to reform the asset management methods used to maintain it. The science of Prognostics and Health Management (PHM) presents interesting possibilities by allowing the online diagnosis of faults in a component and the dynamic trending of its remaining useful life (RUL). Before a PHM system can be developed, an extensive failure analysis must be conducted on the asset in question to determine its failure mechanisms and the data precursors that precede them. To gain experience in the development of prognostic systems, we conducted a study of commercial power relays, using a data capture regime that revealed precursors to relay failure. We were able to determine important failure precursors both for stuck-open failures caused by contact erosion and for stuck-closed failures caused by material transfer, and we are in a position to develop a more detailed prognostic system from this base. This research, when expanded and applied to a system such as the power grid, presents an opportunity for more efficient asset management compared with maintenance based on time to replacement or purely on condition.
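
The dynamic RUL trending described above can be sketched in its simplest form: fit a linear degradation trend to a monitored failure precursor and extrapolate to a failure threshold. The precursor (contact resistance), data values and threshold below are illustrative assumptions, not measurements from the study.

```python
# Least-squares linear trend on a degradation precursor, extrapolated
# to the failure threshold to estimate remaining useful life (RUL).
def estimate_rul(times, values, failure_threshold):
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
             / sum((t - mean_t) ** 2 for t in times))
    intercept = mean_v - slope * mean_t
    if slope <= 0:
        return None  # precursor is not trending toward failure
    t_fail = (failure_threshold - intercept) / slope  # trend crosses threshold
    return max(0.0, t_fail - times[-1])               # life left after last sample

# Hypothetical precursor: contact resistance (mOhm) drifting up with use
hours = [0, 100, 200, 300, 400]
resistance = [10.0, 10.5, 11.1, 11.4, 12.0]
rul_hours = estimate_rul(hours, resistance, failure_threshold=15.0)
```

Real PHM systems replace the linear fit with probabilistic degradation models that update the RUL estimate, with confidence bounds, as each new measurement arrives.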