
    Experimental evaluation into novel, low cost, modular PEMFC stack

    Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)
    The Polymer Electrolyte Membrane Fuel Cell (PEMFC), despite being regarded as an ideal replacement for the internal combustion engine, is still not an economically attractive prime mover due to a number of key challenges that have yet to be fully resolved, including degradation of cell components resulting in inadequate lifetimes, specialised and costly manufacturing processes, and poor gravimetric/volumetric energy densities. This paper presents a novel stack concept which removes the conventional bipolar plate (BPP), a component responsible for up to 80% of total stack weight and over 90% of stack volume in some designs. Removing this component not only improves the volumetric and gravimetric energy density of the PEMFC stack but also drastically reduces its cost by eliminating the costly manufacturing processes associated with BPP machining, while the functionality of the traditional BPP is retained by the unique stack design. The stack architecture is first presented, and the characterisation of the PEMFC is then shown over a wide range of operating scenarios. The experimental studies suggest that the performance of the new design is comparable to that of traditional stacks, but at a significantly lower cost.

    A Framework of Dynamic Data Driven Digital Twin for Complex Engineering Products: the Example of Aircraft Engine Health Management

    Digital twin is a vital enabling technology for smart manufacturing in the era of Industry 4.0. A digital twin effectively replicates its physical asset, enabling easy visualisation, smart decision-making and cognitive capability in the system. In this paper, a framework of dynamic data-driven digital twin for complex engineering products is proposed. To illustrate the framework, an example of health management for aircraft engines is studied. The framework models the digital twin by extracting information from the various sensors and Industrial Internet of Things (IIoT) devices that monitor the remaining useful life (RUL) of an engine in both the cyber and physical domains. Then, with sensor measurements selected from linear degradation models, a long short-term memory (LSTM) neural network is proposed to dynamically update the digital twin, which can estimate the most up-to-date RUL of the physical aircraft engine. Through comparison with other machine-learning algorithms for RUL modelling, including similarity-based linear regression and a feed-forward neural network, this LSTM-based dynamic data-driven digital twin provides a promising tool to accurately replicate the health status of aircraft engines. The digital-twin-based RUL technique can also be extended to health management and remote operation of manufacturing systems.
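    The core mechanism described above, feeding a multivariate sensor window through an LSTM and reading a scalar RUL estimate off the final hidden state, can be illustrated with a minimal forward pass. This is a hedged sketch only: the weights are randomly initialised rather than trained, and the layer sizes, sensor count and linear read-out head are illustrative assumptions, not the paper's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_rul_estimate(sensor_seq, params):
    """One forward pass of a single-layer LSTM over a sensor window,
    followed by a linear head mapping the final hidden state to RUL."""
    W, U, b, w_out, b_out = params
    hidden = W.shape[0] // 4
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x_t in sensor_seq:                      # iterate over time steps
        z = W @ x_t + U @ h + b                 # all four gate pre-activations
        i, f, o = (sigmoid(z[k * hidden:(k + 1) * hidden]) for k in range(3))
        g = np.tanh(z[3 * hidden:])             # candidate cell update
        c = f * c + i * g                       # cell-state update
        h = o * np.tanh(c)                      # new hidden state
    return float(w_out @ h + b_out)             # scalar RUL estimate

rng = np.random.default_rng(0)
n_sensors, hidden = 14, 8                       # illustrative sensor count
params = (rng.normal(0, 0.1, (4 * hidden, n_sensors)),
          rng.normal(0, 0.1, (4 * hidden, hidden)),
          np.zeros(4 * hidden),
          rng.normal(0, 0.1, hidden),
          100.0)                                # bias near a typical RUL scale
window = rng.normal(0, 1, (30, n_sensors))      # 30 time steps of sensor data
rul = lstm_rul_estimate(window, params)
```

    In a trained version, the parameters would be fitted to run-to-failure sensor histories, and the same forward pass would be re-run whenever new IIoT measurements arrive, which is what keeps the twin's RUL estimate up to date.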

    Further Developing a Procedural Digital Twin for Road Tunnels

    A Digital Twin is an as-close-as-possible replica, in the digital world, of a real-world application. It aims to simulate a process in real time so that the user can generate information and value from that process; it is a representation of a physical asset, object or service, and often takes data from real sensors to mimic the live process in a digital representation. In this thesis, the focus is on Digital Twin representations of road tunnels in Norway. The thesis builds on a previously designed implementation and thesis written at the University of Stavanger, which creates tunnel twins for any given Norwegian tunnel using Statens Vegvesen's public API for tunnel information, and which extends earlier work connecting static models with generated sensor data to display sensors in a tunnel built in Unity. This thesis develops that project further, aiming to improve the architecture of the current implementation while also identifying new features to add based on what is seen in other Digital Twins in the academic literature, and implementing these features and changes to see how they help the continuing work of this Digital Twin project.
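    The pattern the abstract describes, a twin that mirrors a live process by ingesting sensor readings and exposing the latest state to a front end, can be sketched in a few lines. All names here (the reading fields, the `TunnelTwin` class, the example tunnel and sensor IDs) are hypothetical illustrations, not the thesis's actual data model or the Statens Vegvesen API schema.

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    sensor_id: str
    kind: str          # e.g. "co_level" or "airflow" (illustrative kinds)
    value: float
    timestamp: float

@dataclass
class TunnelTwin:
    """Digital representation of one road tunnel, updated from live readings."""
    tunnel_name: str
    state: dict = field(default_factory=dict)   # latest reading per sensor

    def ingest(self, reading: SensorReading) -> None:
        # Keep only the newest reading per sensor, mirroring the live process.
        current = self.state.get(reading.sensor_id)
        if current is None or reading.timestamp >= current.timestamp:
            self.state[reading.sensor_id] = reading

    def snapshot(self) -> dict:
        # What a rendering front end (e.g. Unity) would poll for display.
        return {sid: r.value for sid, r in self.state.items()}

twin = TunnelTwin("Byfjordtunnelen")            # example tunnel name
twin.ingest(SensorReading("co-01", "co_level", 4.2, timestamp=100.0))
twin.ingest(SensorReading("co-01", "co_level", 5.1, timestamp=160.0))
twin.ingest(SensorReading("fan-02", "airflow", 12.0, timestamp=120.0))
snap = twin.snapshot()
```

    The out-of-order guard in `ingest` matters in practice: sensor feeds over a network can deliver stale readings, and a twin that overwrites newer state with older data stops being a faithful mirror.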

    A comparison of processing techniques for producing prototype injection moulding inserts.

    This project investigates processing techniques for producing low-cost moulding inserts used in the particulate injection moulding (PIM) process. Prototype moulds were made by both additive and subtractive processes, as well as a combination of the two; the general motivation was to reduce the entry cost for users considering PIM. PIM cavity inserts were first made by conventional machining from a polymer block using the Pocket NC desktop mill, and also by fused deposition modelling using the Tiertime UP Plus 3D printer. The injection moulding trials resulted in surface-finish and part-removal defects. The feedstock was a titanium metal blend, which is brittle in comparison to commodity polymers; this, in combination with the mesoscale features, small cross-sections and complex geometries, was considered the main source of the problems. For both processing methods, fixes were identified and applied to test this theory, consisting of a blended approach that combined the additive and subtractive processes. The parts produced by the three processing methods are examined and their respective merits and issues discussed.

    Reducing risk in pre-production investigations through undergraduate engineering projects.

    This poster is the culmination of final-year Bachelor of Engineering Technology (B.Eng.Tech) student projects in 2017 and 2018. The B.Eng.Tech is a level-seven qualification that aligns with the Sydney Accord for a three-year engineering degree and is hence internationally benchmarked. The enabling mechanism of these projects is the industry connectivity that creates real-world projects and highlights the benefits of process investigation at the technologist level. The methodologies used are basic and transparent, with enough depth of technical knowledge to ensure the industry partners gain from the collaboration. The process minimises the disconnect between the student and the industry supervisor while maintaining the academic freedom of the student and the commercial sensitivities of the supervisor. The general motivation for this approach is reducing industry's entry cost to the consideration of new technologies, thereby reducing risk to core business and shareholder profits. The poster presents several images with interpretive dialogue to explain the positive and negative aspects of the student process.

    Modelling granule size distribution produced on a continuous manufacturing line with non-linear autoregressive artificial neural networks

    Master's thesis, Pharmaceutical Engineering, Universidade de Lisboa, Faculdade de Farmácia, 2018.
    Particle size is a critical quality parameter in several pharmaceutical unit operations. An adequate particle size distribution is essential to ensure optimal manufacturability, which in turn has an important impact on the safety, efficacy and quality of the end product. Thus, monitoring and control of particle size via in-process size measurements is crucial to the pharmaceutical industry. A wide range of techniques is currently available for determining particle size distribution; however, a technique that provides relevant real-time process data is highly preferable, as it offers better understanding of, and control over, the process. The pharmaceutical industry follows the "technology-push" model, as it depends on scientific and technological advances; optimisation of product-monitoring technologies for drug products has therefore been receiving more attention, as it helps increase profitability. In recent years, interest has grown in the use of virtual instruments as an alternative to physical instruments. A software sensor uses information collected from a process operation to estimate a property of interest that is typically difficult to measure experimentally; one of the most significant benefits of this computational approach is the possibility of adapting the measuring system through several optimisation solutions. The present thesis focuses on developing a dynamic mathematical model capable of predicting particle size distribution in real time. For this purpose, multivariate data from univariate sensors placed at multiple locations of the continuous production line ConsiGma™-25 were used to determine the size distribution (d50) of granules evaluated at a specific site within the line. The ConsiGma™-25, developed by GEA Pharma, is a continuous granulation line consisting of three modules: a continuous twin-screw granulation module, a six-segmented-cell fluid bed dryer and a product control unit. In the continuous granulation module, granules are produced inside the twin-screw granulator by mixing the powder and the granulation liquid (water) fed into the granulation barrel. Once granulation is finished, the granules are pneumatically transferred to the fluid bed dryer module, where they are routed to one specific dryer cell and dried for a pre-defined period of time. The dry granules are then transported to the product control hopper, with an integrated mill, situated in the product control unit; the granules are milled, and the resulting product is gravitationally discharged and can undergo further processing steps such as blending, tableting and coating. The size distribution (d50) of the granules determined in this work was assessed inside dryer cell no. 4 in the dryer module, and was measured every ten seconds by a focused beam reflectance measurement (FBRM) technique. A non-linear autoregressive with exogenous inputs (NARX) network was developed to achieve accurate predictions of granule size distribution. Development of the predictive model consisted of implementing an optimisation strategy in terms of topology, inputs, delays and training methodology. The network was trained against the d50 obtained from the particle size distribution collected in situ by the FBRM technique. The model was able to predict the d50 value from the beginning to the end of the several drying cycles. The accuracy of the artificial neural network was characterised by a root mean squared error of prediction of 6.9%, demonstrating its capability to produce results close to the experimental data of the cycles/runs included in the testing set.
    The predictive ability of the neural network, however, could not be extended to drying cycles that presented irregular fluctuations. Given the importance of precise monitoring of the size distribution within pharmaceutical operations, a future adjustment of the optimisation strategy is of great interest. In the future, a higher number of experimental runs/cycles could be used during training to enable the network to identify and predict atypical cases more easily. In addition, a more realistic optimisation strategy could be applied to all process parameters simultaneously, for example through the implementation of a genetic algorithm; changes to the network topology can also be considered.
    Particle size is a critical quality parameter in several unit operations of the pharmaceutical industry. An adequate particle size distribution is essential to guarantee ideal manufacturing conditions, which in turn have a significant impact on the safety, efficacy and quality of the final product. Monitoring and control of particle size through in-process measurements are therefore considered crucial to the industry. A wide range of techniques is currently available for determining particle size distribution; however, a technique that provides relevant data in real time is highly preferable, since it gives better understanding of, and control over, the process. The pharmaceutical industry is highly dependent on scientific and technological advances, and in recent years a growing interest in virtual instruments as an alternative to physical instrumentation for product monitoring has become evident. A virtual sensor uses the information contained in a given data set to make adequate measurements of a property of interest. One of the most important advantages of this computational approach is the possibility of adapting the measurement system using various optimisation methods. This thesis focuses on developing a dynamic mathematical model capable of predicting the particle size distribution in real time. To this end, multivariate data generated every second by sensors located at multiple points of the continuous production line ConsiGma™-25 were used to determine the granule size distribution (d50) evaluated at a specific point of the line. The ConsiGma™-25 is a continuous granule production line that can be divided into three main modules: a continuous granulator, a fluid bed dryer and a product control unit. In the granulation module, granules are produced by mixing powder and water (the granulation liquid). Once this unit operation is finished, the granules are pneumatically transferred to the fluid bed dryer, where they are introduced into one of six drying cells and dried for a pre-defined period of time. The resulting dry granules are then transferred to the product control unit, which incorporates a mill responsible for the milling operation; the milled material is gravitationally discharged and can be further processed through operations such as blending, compression or coating. The granule size distribution (d50) determined in this work was measured every ten seconds by the focused beam reflectance measurement technique. A total of sixteen runs carried out in August were used in this work. For each run, data on process parameters such as pressures, temperatures and air flows, as well as the granule size distribution (d50), were made available. Owing to the temporal discrepancy between the process data and the d50 values, several data-processing steps were carried out, essentially in three distinct phases: alignment, filtering and organisation/fragmentation. Once processing was complete, the data were used to develop the predictive model (a neural network). A non-linear autoregressive neural network with three exogenous inputs was developed to predict the granule size distribution (d50). Development of the predictive model consisted of implementing an optimisation strategy in terms of topology, delays, input data, run selection and training methodology. For each process variable (input), a delay was assigned based on assumptions grounded in residence-time studies of the three modules of the continuous line. The inputs were defined from the result of a mathematical model developed to identify the set of variables giving the lowest mean squared error of prediction of the property of interest, d50. To enable training, the fragmented data were divided into two main sets, training and test; the network was trained and validated on the training data, and the test data were then used to evaluate the predictive capability of the optimised model. The model was able to predict the d50 value throughout the various drying cycles. The accuracy of the neural network was characterised by a root mean squared error of prediction of 6.9%, demonstrating its capability to produce results close to the experimental data in the test set. The predictive capability of the network, however, did not extend to atypical cases. Considering the importance of precise monitoring of the size distribution in pharmaceutical operations, a future change to the optimisation strategy is highly advisable: a larger number of drying cycles/runs during training may allow the network to identify and predict atypical cases more easily; a more realistic optimisation strategy could be applied to all process parameters simultaneously through a genetic algorithm; and changes to the network topology could also be considered.
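    The NARX idea at the heart of this thesis, predicting the next d50 from its own past values plus delayed exogenous process variables, can be sketched with a linear stand-in for the neural network. Everything below is an assumption for illustration: the data are synthetic, the two "process signals", the lag depths and the coefficients are invented, and an ordinary least-squares fit replaces the thesis's trained NARX network.

```python
import numpy as np

def build_lagged_matrix(y, X, y_lags, x_lags):
    """Assemble one-step-ahead NARX regressors: past target values (y)
    plus delayed exogenous process variables (columns of X)."""
    start = max(y_lags, x_lags)
    rows = []
    for t in range(start, len(y)):
        past_y = y[t - y_lags:t]                # autoregressive terms
        past_x = X[t - x_lags:t].ravel()        # delayed exogenous terms
        rows.append(np.concatenate([past_y, past_x, [1.0]]))  # + bias
    return np.array(rows), y[start:]

rng = np.random.default_rng(1)
n = 400
u = rng.normal(0, 1, (n, 2))                    # two synthetic process signals
d50 = np.zeros(n)
for t in range(2, n):                           # invented granule-size dynamics
    d50[t] = 0.6 * d50[t - 1] + 0.2 * d50[t - 2] \
             + 0.5 * u[t - 1, 0] - 0.3 * u[t - 2, 1]

Phi, target = build_lagged_matrix(d50, u, y_lags=2, x_lags=2)
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # linear NARX stand-in
pred = Phi @ theta
rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
```

    Because the synthetic dynamics are exactly linear in the lagged regressors, the fit here is essentially perfect; the thesis's motivation for a neural NARX model is precisely that real granulation dynamics are not, so the linear map is replaced by a network trained on the same lagged inputs.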

    Towards a Cyber-Physical Manufacturing Cloud through Operable Digital Twins and Virtual Production Lines

    In the last decade, the paradigm of Cyber-Physical Systems (CPS) has integrated industrial manufacturing systems with cloud computing technologies for Cloud Manufacturing. Up to 2015, many CPS-based manufacturing systems collected real-time machining data to perform remote monitoring, prognostics and health management, and predictive maintenance. However, these CPS-integrated, network-ready machines were not directly connected to the elements of Cloud Manufacturing and required a human in the loop. Addressing this gap, in 2017 we introduced the new paradigm of the Cyber-Physical Manufacturing Cloud (CPMC), which bridges the gap between physical machines and virtual space. CPMC virtualises machine tools in the cloud through web services for direct monitoring and operation over the Internet. CPMC differs fundamentally from contemporary manufacturing paradigms: it virtualises machine tools in the cloud using remote services and establishes direct Internet-based communication, which existing Cloud Manufacturing systems overlook; and whereas cyber-physical production systems merely enable networked access to machine tools, CPMC virtualises manufacturing resources in the cloud and monitors and operates them over the Internet. This dissertation defines the fundamental concepts of CPMC and expands its horizon into different aspects of cloud-based virtual manufacturing, such as Digital Twins and Virtual Production Lines. The Digital Twin (DT) is another concept, evolving since 2002, that creates as-is replicas of machine tools in cyber space. Up to 2018, many researchers proposed state-of-the-art DTs that focused only on monitoring product lifecycle management through simulations and data-driven analytics, overlooking the execution of manufacturing processes through DTs from virtual space.
    This dissertation identifies that DTs can be made more productive if they engage in the direct execution of manufacturing operations besides monitoring. Towards this novel approach, it proposes a new operable DT model of CPMC that inherits the features of direct monitoring and operation from the cloud. This research envisages, and opens the door to, future manufacturing systems in which resources are developed as cloud-based DTs for remote and distributed manufacturing. The proposed concepts and visions of DTs have spawned the following fundamental research. First, this dissertation proposed, in 2019, a novel concept of DT-based Virtual Production Lines (VPLs) in CPMC. It presents a service-oriented architecture of DTs that virtualises physical manufacturing resources in CPMC, offering a more compact and integral service-oriented virtual representation of those resources. To re-configure a VPL, one requirement is to establish DT-to-DT collaborations in manufacturing clouds, mirroring concurrent resource-to-resource collaborations on shop floors; satisfying this requirement, the research designs a novel framework to easily re-configure, monitor and operate VPLs using the DTs of CPMC. CPMC publishes individual web services for machine tools, a traditional approach in the domain of service computing, but one that overcrowds service-registry databases. This dissertation therefore introduced, in 2020, a novel service publication and discovery approach, OpenDT, which publishes DTs with collections of services; experimental results show easier discovery and remote access of DTs while re-configuring VPLs. The research proposed in this dissertation has received numerous citations from both industry and academia, demonstrating the impact of its contributions.
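    The registry idea behind OpenDT, publishing one record per digital twin that bundles its whole service collection, rather than one registry record per service, can be sketched as a toy in-memory registry. The class, twin ID and service names below are hypothetical illustrations of the pattern, not the dissertation's actual OpenDT API.

```python
class DTRegistry:
    """Toy registry illustrating twin-level publication: one entry per
    digital twin, bundling all of its callable services."""
    def __init__(self):
        self.entries = {}

    def publish_twin(self, twin_id, services):
        # One registry record per twin keeps the registry compact,
        # versus one record per service crowding the database.
        self.entries[twin_id] = dict(services)

    def discover(self, twin_id):
        # A single lookup reveals every service the twin offers.
        return sorted(self.entries.get(twin_id, {}))

    def invoke(self, twin_id, service, *args):
        # Operating the physical resource through its twin's service.
        return self.entries[twin_id][service](*args)

registry = DTRegistry()
registry.publish_twin("cnc-mill-01", {          # hypothetical machine tool
    "status": lambda: "idle",
    "feed":   lambda mm_per_min: f"feed set to {mm_per_min} mm/min",
})
found = registry.discover("cnc-mill-01")
result = registry.invoke("cnc-mill-01", "feed", 250)
```

    Re-configuring a virtual production line then reduces to discovering the twins involved and wiring their services together, which is why bundled publication makes discovery cheaper than scanning a registry of individually published services.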