
    Application of reinforcement learning in robotic disassembly operations

    Disassembly is a key step in remanufacturing. To increase the level of automation in disassembly, it is necessary to use robots that can learn to perform new tasks by themselves rather than being manually reprogrammed for every new job. Reinforcement Learning (RL) is a machine learning technique that enables robots to learn by trial and error rather than through explicit programming. In this thesis, the application of RL to robotic disassembly operations has been studied. Firstly, a literature review on robotic disassembly and on the application of RL to contact-rich tasks has been conducted in Chapter 2. To implement RL physically in robotic disassembly, the task of removing a bolt from a door chain lock has been selected as a case study, and a robotic training platform has been built for this implementation in Chapter 3. This task is chosen because it demonstrates the capability of RL to find paths and to deal with reaction forces without explicitly specified target coordinates or a force feedback controller. The robustness of the learned policies against the imprecision of the robot has been studied using a proposed method that actively lowers the precision of the robot; it has been found that the robot can learn successfully even when the precision is lowered to ±0.5 mm. This work also investigates whether learned policies can be transferred between robots with different precisions. Experiments have been performed by training a robot of a certain precision on the task and replaying the learned skills on a robot of a different precision. Skills learned by a low-precision robot perform better on a higher-precision robot, whereas skills learned by a high-precision robot perform worse on lower-precision robots; it is suspected that policies trained on high-precision robots overfit to those robots. In Chapter 4, a digital-twin-assisted simulation-to-reality (sim-to-real) transfer approach to accelerating RL has been investigated. System parameters such as the stiffness and damping of the contact models are difficult to measure directly yet critical for building digital twins of the environment; to identify them, a system identification method based on the Bees Algorithm is used to minimise the discrepancy between the responses generated by the physical and digital environments. The proposed method is found to effectively increase the learning performance of RL. It is also found that sim-to-real transfer can degrade performance if the reality gap is not effectively addressed; however, increasing the size of the dataset and the number of optimisation cycles has been demonstrated to reduce the reality gap and lead to successful sim-to-real transfers. Based on the training task described in the preceding chapters, a full factorial study has been conducted in Chapter 5 to identify patterns in selecting appropriate hyper-parameters when applying the Deep Deterministic Policy Gradient (DDPG) algorithm to the robotic disassembly task. Four hyper-parameters that directly influence the update of the decision-making Artificial Neural Networks (ANNs) have been chosen for the study, with three levels assigned to each. After running 241 simulations, it is found that, for this particular task, the learning rates of the actor and critic networks are the most influential hyper-parameters, while the batch size and soft update rate have comparatively limited influence. Finally, the thesis is concluded in Chapter 6 with a summary of findings and suggested future research directions.
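    A minimal sketch of how such a three-level full factorial over the four DDPG hyper-parameters can be enumerated (3^4 = 81 combinations per replication); the level values and the training routine below are hypothetical placeholders, not values reported in the thesis.

        # Sketch of the full-factorial hyper-parameter study: four DDPG
        # hyper-parameters, three levels each (3^4 = 81 cells). All level
        # values below are assumed for illustration only.
        from itertools import product

        GRID = {
            "actor_lr":   [1e-4, 1e-3, 1e-2],   # actor learning rate (assumed levels)
            "critic_lr":  [1e-4, 1e-3, 1e-2],   # critic learning rate (assumed levels)
            "batch_size": [32, 64, 128],        # mini-batch size (assumed levels)
            "tau":        [1e-3, 5e-3, 1e-2],   # soft update rate (assumed levels)
        }

        def train_ddpg(actor_lr, critic_lr, batch_size, tau):
            """Hypothetical placeholder: run one training and return a score."""
            raise NotImplementedError

        for combo in product(*GRID.values()):
            params = dict(zip(GRID, combo))
            # score = train_ddpg(**params)   # one simulation run per cell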

    Systemic Circular Economy Solutions for Fiber Reinforced Composites

    This open access book provides an overview of the work undertaken within the FiberEUse project, which developed solutions for enhancing the profitability of composite recycling and reuse in value-added products through a cross-sectorial approach. Glass and carbon fiber reinforced polymers, or composites, are increasingly used as structural materials in many manufacturing sectors such as transport, construction and energy, owing to their lighter weight and better corrosion resistance compared to metals. However, composite recycling remains a challenge, since recycling and reprocessing composites has yet to demonstrate significant added value. FiberEUse developed innovative solutions and business models towards sustainable Circular Economy solutions for post-use composite-made products. Three strategies are presented: mechanical recycling of short fibers, thermal recycling of long fibers, and modular car-part design for sustainable disassembly and remanufacturing. The validation of the FiberEUse approach within eight industrial demonstrators shows the potential for new Circular Economy value chains for composite materials.

    Investigating signalling pathways that influence stem cell self-renewal and differentiation

    Adult tissue homeostasis relies on stem cells dividing to provide cells that differentiate and replenish lost cells. To prevent depletion of the stem cell pool, some of the daughter cells resulting from stem cell divisions retain stem cell identity and continue to proliferate, or self-renew. How stem cell self-renewal and differentiation are balanced is poorly understood. The niche, in which stem cells reside, provides spatially restricted signals that promote self-renewal. Daughter cells displaced away from the niche are thought to differentiate upon losing access to such signals. The sequence of events leading up to and following stem cell division is not well understood. The Drosophila testis is a well-characterised model to study these behaviours. Here, cyst stem cells (CySCs) give rise to cyst cells that support germline development. Previous work in fixed tissues has shown that dysregulating CySC signalling can bias CySC fate outcomes. It remains to be demonstrated that, under normal physiological conditions, endogenous differences in signalling pathway activity are responsible for stem cell fate. In this thesis, I present evidence that differentiation in the Drosophila testis is not a default state upon losing access to niche-derived signals but is actively induced by signalling, and that the germline also plays a role in CySC differentiation. To visualise signalling dynamics in vivo, we have adapted a kinase activity biosensor from mammalian cell culture. I demonstrate that this biosensor faithfully reports kinase activity in both larval and adult tissues, and that it can be implemented with live imaging to study real-time signalling dynamics in individual cells. Finally, I characterise CySC behaviours under normal physiological circumstances using time-lapse live imaging of adult Drosophila testis explants. Based on these data, I discuss the role that signalling pathways play in maintaining the balance between self-renewal and differentiation of stem cells during normal homeostasis.

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system-level and microarchitectural attacks. These threats, if realized, are expected to push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after showing its vulnerability to obscured latency extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems-on-chip. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
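    A minimal sketch of the run-time latency monitoring idea behind the second framework, substituting a simple quantile profile for the machine-learning characterisation used in the thesis; the quantile, window size, and tolerance below are illustrative assumptions.

        # Sketch: learn the normal latency profile of a hardware element from
        # attack-free measurements, then flag run-time windows whose latencies
        # exceed it as candidate latency-extension / denial-of-service attacks.
        import numpy as np

        def fit_latency_bound(trusted_latencies_ns, quantile=0.999):
            """Upper latency bound learned from trusted measurements."""
            return float(np.quantile(trusted_latencies_ns, quantile))

        def monitor(latency_stream_ns, bound, window=64, tolerance=0.05):
            """Yield an alert whenever a window has too many over-bound latencies."""
            buf = []
            for latency in latency_stream_ns:
                buf.append(latency > bound)
                if len(buf) == window:
                    if sum(buf) / window > tolerance:
                        yield "possible latency-extension attack"
                    buf.clear()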

    SET2022: 19th International Conference on Sustainable Energy Technologies, 16th to 18th August 2022, Turkey: Sustainable Energy Technologies 2022 Conference Proceedings, Volume 4

    Papers submitted and presented at SET2022, the 19th International Conference on Sustainable Energy Technologies, held in Istanbul, Turkey, in August 2022.

    Multi-Scale Fluctuations in Non-Equilibrium Systems: Statistical Physics and Biological Application

    Understanding how fluctuations continuously propagate across spatial scales is fundamental for our understanding of inanimate matter. This is exemplified by self-similar fluctuations in critical phenomena and by the propagation of energy fluctuations described by the Kolmogorov law in turbulence. Our understanding is based on powerful theoretical frameworks that integrate fluctuations on intermediary scales, as in the renormalisation group or coupled-mode theory. In striking contrast to typical inanimate systems, living matter is organised into a hierarchy of processes on a discrete set of spatial scales: from biochemical processes embedded in dynamic subcellular compartments to cells giving rise to tissues. Therefore, the understanding of living matter requires novel theories that predict the interplay of fluctuations on multiple scales of biological organisation and the ensuing emergent degrees of freedom. In this thesis, we derive a general theory of the multi-scale propagation of fluctuations in non-equilibrium systems and show that such processes underlie the regulation of cellular behaviour. Specifically, we draw on paradigmatic systems comprising stochastic many-particle systems undergoing dynamic compartmentalisation. We first derive a theory for emergent degrees of freedom in open systems, where the total mass is not conserved. We show that the compartment dynamics give rise to the localisation of probability densities in phase space, resembling quasi-particle behaviour. This emergent quasi-particle exhibits fundamentally different response kinetics and steady states compared to systems lacking compartment dynamics. To investigate a potential biological function of such quasi-particle dynamics, we then apply this theory to the regulation of cell death. We derive a model describing the subcellular processes that regulate cell death and show that the quasi-particle dynamics give rise to a kinetic low-pass filter which suppresses the response of the cell to fast fluctuations in cellular stress signals. We test our predictions experimentally by quantifying cell death in cell cultures subjected to stress stimuli varying in strength and duration. In closed systems, where the total mass is conserved, the effect of dynamic compartmentalisation depends on details of the kinetics of the stochastic many-particle dynamics. Using a second-quantisation approach, we derive a commutator relation between the kinetic operators and the change in total entropy. Drawing on this, we show that the compartment dynamics alter the total entropy if the kinetics of the stochastic many-particle dynamics violate detailed balance. We apply this mechanism to the activation of cellular immune responses to RNA-virus infections. We show that dynamic compartmentalisation in closed systems gives rise to giant density fluctuations, which facilitates the emergence of gelation under conditions that violate theoretical gelation criteria in the absence of compartment dynamics. We show that such multi-scale gelation of protein complexes on the membranes of dynamic mitochondria governs the innate immune response. Taken together, we provide a general theory describing the multi-scale propagation of fluctuations in biological systems. Our work pioneers the development of a statistical physics of such systems and highlights emergent degrees of freedom spanning different scales of biological organisation. By demonstrating that cells manipulate how fluctuations propagate across these scales, our work motivates a rethinking of how the behaviour of cells is regulated.
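    As an illustration of what such kinetic low-pass filtering means, a generic first-order low-pass relation is sketched below; this is a textbook analogy only, not the compartment-derived filter of the thesis.

        % Generic first-order low-pass filter (illustration only): a state
        % x(t) relaxing towards a stress signal s(t) with relaxation time \tau
        \dot{x}(t) = \frac{s(t) - x(t)}{\tau},
        \qquad |H(\omega)| = \frac{1}{\sqrt{1 + \omega^{2}\tau^{2}}}
        % Signal components with \omega \gg 1/\tau are strongly attenuated,
        % so only slow variations of s(t) elicit a response.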

    Fitness landscape analysis of the simple assembly line balancing problem type 1

    As the simple assembly line balancing problem type 1 (SALBP1) has been proven to be NP-hard, heuristic and metaheuristic approaches are widely used for solving medium to large instances. Nevertheless, the characteristics (fitness landscape) of the problem's search space have not been studied so far, and no rigorous justification for applying various metaheuristic methods has been presented. Aiming to fill this gap in the literature, this study presents the first comprehensive and in-depth Fitness Landscape Analysis (FLA) of SALBP1. The FLA was performed by generating a population of 1000 random solutions, improving each to a locally optimal solution, and then measuring various statistical indices, such as average distance, gap, entropy, amplitude, length of the walk, autocorrelation, and fitness-distance correlation among all solutions, to understand the complexity, structure, and topology of the solution space. We solved 83 benchmark problems with various cycle times taken from Scholl's dataset, which required 83,000 local searches from initial to optimal solutions. The analysis showed that locally optimal assembly line balances in SALBP1 are distributed nearly uniformly in the landscape of the problem, and the small average difference between the amplitudes of the initial and optimal solutions implies that the landscape is almost flat. In addition, the large average gap between local and global solutions showed that globally optimal solutions of SALBP1 are difficult to find, but the problem can be effectively solved to near-optimality using a single-solution-based metaheuristic. In addition to the FLA, a new mathematical formulation for the entropy (diversity) of solutions in the search space of SALBP1 is also presented in this paper.
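    Two of the indices listed above can be made concrete as follows, assuming solutions are encoded as task-to-station assignment vectors; the encoding and the per-task entropy below are illustrative assumptions, not necessarily the paper's exact formulation.

        # Sketch: two fitness-landscape indices over a population of locally
        # optimal SALBP1 solutions, each encoded as a vector that maps every
        # task to its assigned station.
        from collections import Counter
        from itertools import combinations
        from math import log2

        def avg_pairwise_distance(population):
            """Mean Hamming distance between all pairs of assignment vectors."""
            pairs = list(combinations(population, 2))
            return sum(
                sum(a != b for a, b in zip(s, t)) for s, t in pairs
            ) / len(pairs)

        def positional_entropy(population):
            """Mean Shannon entropy of the station assigned to each task."""
            n_tasks, n_sols = len(population[0]), len(population)
            total = 0.0
            for i in range(n_tasks):
                counts = Counter(sol[i] for sol in population)
                total -= sum((c / n_sols) * log2(c / n_sols)
                             for c in counts.values())
            return total / n_tasks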

    A methodology for developing design aid tools for aluminium extrusion dies, based on the analysis of the parameters that determine their performance

    Doctoral Programme in Industrial and Telecommunication Technologies. The production of extruded aluminium in the world continues to grow year after year due to the continuous introduction of new commercial applications of this metal. The main commercial factors in aluminium extrusion (productivity, cost-effectiveness, and the quality grade of the extruded profile) are directly related to the performance of the die in the extrusion press. It is therefore very important to optimise the die design and manufacturing process as far as possible. The design process involves several critical tasks. The traditional approach to achieving an optimal design is based on the use of empirical formulas and on the designer's experience. In some cases, Finite Element Method (FEM) simulation is also used to perform virtual extrusion trials. This research first addresses the determination of the main parameters to be considered when designing a die, and the relationships that exist between them. From there, it develops a simple, reliable design aid tool usable in the initial phases of die design. The methodology follows several sequential stages: definition of the general methodology for determining the parameters and their relationships; proposal of models for the die design aid tool; validation of the models by applying them to practical design cases; comparison of the models based on the results of FEM simulation; and, finally, conclusions and definition of the optimal model or models. Geometric data from 596 ports of 88 proven dies, provided by a prestigious company, have been used. From these data, the fundamental variables have been extracted and two design aid models have been defined. They are tools to help size the ports of porthole die mandrels with 4 cavities and 4 ports per cavity: one based on inferential statistics and the other on Machine Learning (ML). Using these models, the designs of two real dies have been optimised. For each die, the results obtained with both models have been analysed and compared by means of FEM simulation, using the deviation of the velocity at the extrusion press outlet as the evaluation parameter. The conclusion is that both models achieve their purpose: they limit the velocity deviations during the process to an acceptable level and can be used in the initial phases of the design process, reducing the time and resources that would otherwise be consumed. Of the two, the ML-based model achieves larger reductions in the velocity deviation and, thanks to its explainability, can be used by designers with limited experience of the production process. As limitations, the models are only useful for the specific type of die for which they were developed, and they require an iterative process of design modification to achieve the optimum fit to the model. Future research lines include the creation of new models for other types of dies, or for other production processes, based on the same general methodology, as well as exploring the possibility of automating the iterative calculation and modification process using a parametric CAD tool.
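    A minimal sketch of what the ML-based sizing aid could look like, assuming each of the 596 proven ports is described by a geometric feature vector and the target is a chamber dimension; the synthetic data, feature count, and model choice below are hypothetical stand-ins, not taken from the thesis.

        # Sketch of an ML design aid: regress a feeder-chamber (port) dimension
        # on geometric features of proven ports. The synthetic arrays stand in
        # for the real measurements of the 596 ports from 88 dies.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(596, 6))    # placeholder geometric features
        y = rng.normal(size=596)         # placeholder port widths

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X, y)

        # Feature importances provide the kind of explainability the abstract
        # attributes to the ML model.
        print(model.feature_importances_)

        # A candidate design would then be modified iteratively until its
        # geometry matches the predicted dimension:
        # predicted_width = model.predict(candidate_features)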

    Digital Twins of production systems - Automated validation and update of material flow simulation models with real data

    To achieve good economic performance and sustainability, production systems must be operated at high productivity over long periods. This poses major challenges for manufacturing companies, particularly in times of increased volatility triggered, for example, by technological disruption in mobility as well as political and societal change, because the requirements placed on the production system are constantly evolving. The frequency of necessary adaptation decisions and subsequent optimisation measures rises, and with it the need for ways to evaluate scenarios and possible system configurations. Material flow simulation is a powerful tool for this purpose, but its use is currently limited by the labour-intensive manual creation of models and by their temporary, project-based use. Longer-term use across the entire life cycle is currently hindered by the labour-intensive maintenance of the simulation model, i.e. the manual adaptation of the model whenever the real system changes. The aim of this work is to develop and implement a concept, including the required methods, for automating the maintenance and adaptation of the simulation model to reality. To this end, the real data that are increasingly available due to trends such as Industrie 4.0 and digitalisation in general are exploited. The vision pursued in this work is a Digital Twin of the production system that, through this data input, provides a realistic image of the system at any point in time and can be used for the realistic evaluation of scenarios. For this purpose, the required overall concept was designed and the mechanisms for the automatic validation and updating of the model were developed. The focus lay, among other things, on developing algorithms for detecting changes in the structure and processes of the production system, and on investigating the influence of the available data. The developed components were successfully applied to a real use case at Robert Bosch GmbH, increasing the fidelity of the Digital Twin, which was then successfully used for production planning and optimisation. The potential of localisation data for creating Digital Twins of production systems was demonstrated in the test environment of the learning factory of the wbk Institut für Produktionstechnik.
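    A minimal sketch of the automated-validation idea described above, comparing real and simulated cycle-time distributions with a two-sample test; the Kolmogorov-Smirnov test, the cycle-time metric, and the 0.05 threshold are illustrative assumptions rather than the method developed in the thesis.

        # Sketch: validate the material flow simulation against shop-floor data
        # by comparing cycle-time distributions, and trigger a model update
        # when they diverge. Test choice and threshold are assumed.
        from scipy.stats import ks_2samp

        def simulation_is_valid(real_cycle_times, simulated_cycle_times,
                                alpha=0.05):
            """True if simulated output is statistically consistent with reality."""
            _, p_value = ks_2samp(real_cycle_times, simulated_cycle_times)
            return p_value >= alpha   # small p-value => distributions diverge

        # if not simulation_is_valid(real, simulated):
        #     update_digital_twin()   # hypothetical hook into the update mechanism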