8,606 research outputs found

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to ensuring that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are not experts in advanced solid mechanics or process-tailored materials was developed from the results of this project.
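    As a rough, self-contained illustration of the element-layout-to-toolpath step alluded to above (the g-code generator), the Python sketch below emits minimal G-code for one raster-filled rectangular layer. The bead width, feed rate, and extrusion factor are assumed placeholder values, not parameters taken from the dissertation.

# Minimal, illustrative G-code generator for one raster-filled rectangular
# layer of an FDM part. All numbers (bead width, feed rate, extrusion factor)
# are assumed placeholders, not values taken from the dissertation.

def raster_layer_gcode(width, depth, z, bead_width=0.4,
                       feed=1800, extrusion_per_mm=0.05):
    """Return G-code lines for a simple back-and-forth raster at height z."""
    lines = [f"G1 Z{z:.3f} F600"]    # move to the layer height
    e = 0.0                          # cumulative extrusion length
    y, direction = 0.0, 1
    while y <= depth + 1e-9:
        x_start, x_end = (0.0, width) if direction > 0 else (width, 0.0)
        lines.append(f"G0 X{x_start:.3f} Y{y:.3f}")                  # travel move
        e += abs(x_end - x_start) * extrusion_per_mm
        lines.append(f"G1 X{x_end:.3f} Y{y:.3f} E{e:.4f} F{feed}")   # extrude trace
        y += bead_width
        direction *= -1
    return lines

if __name__ == "__main__":
    for line in raster_layer_gcode(20.0, 10.0, z=0.2):
        print(line)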

    Multiscale structural optimisation with concurrent coupling between scales

    A robust three-dimensional multiscale topology optimisation framework with concurrent coupling between scales is presented. Concurrent coupling ensures that only the microscale data required to evaluate the macroscale model during each iteration of optimisation is collected, resulting in considerable computational savings. This represents the principal novelty of the framework and permits a previously intractable number of design variables to be used in the parametrisation of the microscale geometry, which in turn gives access to a greater range of mechanical point properties during optimisation. Additionally, the microscale data collected during optimisation is stored in a re-usable database, further reducing the computational expense of subsequent iterations or entirely new optimisation problems. Application of this methodology enables structures with precise functionally-graded mechanical properties over two scales to be derived, which satisfy one or multiple functional objectives. For all applications of the framework presented within this thesis, only a small fraction of the microstructure database is required to derive the optimised multiscale solutions, which demonstrates a significant reduction in the computational expense of optimisation in comparison to contemporary sequential frameworks. The derivation and integration of novel additive manufacturing constraints for open-walled microstructures within the concurrently coupled multiscale topology optimisation framework is also presented. Problematic fabrication features are discouraged through the application of an augmented projection filter and two relaxed binary integral constraints, which prohibit the formation of unsupported members, isolated assemblies of overhanging members, and slender members during optimisation. Through the application of these constraints, it is possible to derive self-supporting, hierarchical structures with varying topology, suitable for fabrication through additive manufacturing processes.
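    A schematic of the concurrent-coupling idea described above, namely on-demand evaluation of microscale properties combined with a reusable database, might look like the Python sketch below. The homogenisation call and the interpolation rule are toy placeholders, not the thesis's actual implementation.

# Sketch of concurrent coupling between scales: microscale homogenisation is
# only performed for the parameter combinations the current macroscale design
# actually needs, and every result is cached in a reusable database.
# `homogenise` is a placeholder for a real microscale FE/homogenisation solver.

microstructure_db = {}   # reusable database of microscale results

def homogenise(params):
    """Placeholder: return an effective stiffness for a microstructure geometry."""
    density = sum(params) / len(params)
    return density ** 3     # toy SIMP-like stiffness interpolation

def effective_property(params, resolution=0.05):
    """Look up (or compute and store) the microscale response for `params`."""
    key = tuple(round(p / resolution) * resolution for p in params)
    if key not in microstructure_db:
        microstructure_db[key] = homogenise(key)   # computed only when needed
    return microstructure_db[key]

def macroscale_iteration(design):
    """One macroscale evaluation: query only the points present in the design."""
    return [effective_property(elem_params) for elem_params in design]

# Example: two optimisation iterations share most of their database entries.
design_iter_1 = [(0.2, 0.4), (0.5, 0.5), (0.9, 0.1)]
design_iter_2 = [(0.2, 0.4), (0.6, 0.5), (0.9, 0.1)]
macroscale_iteration(design_iter_1)
macroscale_iteration(design_iter_2)
print(f"database entries after two iterations: {len(microstructure_db)}")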

    Direct gasification of biomass for fuel gas production

    The excessive consumption of fossil fuels to satisfy the world's necessities of energy and commodities has led to the emission of large amounts of greenhouse gases in the last decades, contributing significantly to the greatest environmental threat of the 21st century: Climate Change. The answer to this man-made disaster is not simple and can only be found if distinct stakeholders and governments cooperate and work together. This is mandatory if we want to move to a more sustainable economy based on renewable materials, with energy provided by perpetual natural sources (e.g., wind, solar). In this regard, biomass can play a major role as an adjustable, renewable feedstock that allows the replacement of fossil fuels in various applications, and its conversion by gasification provides the necessary flexibility for that purpose. In fact, fossil fuels are just biomass that underwent extreme pressures and heat for millions of years. Furthermore, biomass is a resource that, if not used or managed, increases wildfire risks; consequently, we also have an obligation to valorize and use this resource. In this work, new scientific knowledge was obtained to support the development of direct (air) gasification of biomass in bubbling fluidized bed reactors, with the aim of producing a fuel gas with suitable properties to replace natural gas in industrial gas burners. This is the first step for the integration and development of gasification-based biorefineries, which will produce a diverse range of value-added products from biomass and compete with current petrochemical refineries in the future. In this regard, solutions for the improvement of the raw producer gas quality and process efficiency parameters were defined and analyzed. First, the addition of superheated steam as a primary measure increased the H2 concentration and the H2/CO molar ratio in the producer gas without compromising the stability of the process. However, this measure mainly showed potential for the direct (air) gasification of high-density biomass (e.g., pellets), because char must accumulate in the reactor bottom bed for char-steam reforming reactions to occur. Second, the addition of refuse-derived fuel to the biomass feedstock led to enhanced gasification products, making it a highly promising strategy for the economic viability and environmental performance of future gasification-based biorefineries, given the high availability and low cost of wastes. Nevertheless, integrated techno-economic and life-cycle analyses must be performed to fully characterize the process. Third, the application of low-cost catalysts as a primary measure improved the producer gas quality (e.g., H2 and CO concentrations, lower heating value) and the process efficiency parameters for distinct solid materials; in particular, concrete, synthetic fayalite, and wood pellet chars showed promising results. Finally, the economic viability of integrating direct (air) biomass gasification processes in the pulp and paper industry was also shown, although the resulting indicators remain unattractive to potential investors.
    In this context, the role of government policies and appropriate economic instruments is of major relevance in increasing the implementation of these projects. This work was financed by The Navigator Company and by national funds through the Fundação para a Ciência e a Tecnologia (FCT). Programa Doutoral em Engenharia da Refinação, Petroquímica e Química.
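    For orientation, the two producer-gas quality indicators mentioned above (H2/CO molar ratio and lower heating value) can be estimated directly from a measured dry-gas composition, as in the Python sketch below. The composition and per-species heating values are assumed illustrative figures, not results from this thesis.

# Illustrative calculation of producer-gas quality indicators from a dry,
# N2-diluted gas composition (molar fractions). The composition below is an
# assumed example, not a measured result from the thesis; per-species lower
# heating values are approximate literature values in MJ/Nm3.

composition = {"H2": 0.12, "CO": 0.16, "CH4": 0.04, "CO2": 0.15, "N2": 0.53}
lhv_species = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}   # MJ/Nm3, approximate

h2_co_ratio = composition["H2"] / composition["CO"]
lhv_gas = sum(composition[g] * lhv_species[g] for g in lhv_species)

print(f"H2/CO molar ratio: {h2_co_ratio:.2f}")
print(f"Lower heating value: {lhv_gas:.2f} MJ/Nm3")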

    Digital asset management via distributed ledgers

    Distributed ledgers rose to prominence with the advent of Bitcoin, the first provably secure protocol to solve consensus in an open-participation setting. Subsequently, active research and engineering efforts have proposed a multitude of applications and alternative designs, the most prominent being Proof-of-Stake (PoS). This thesis expands the scope of secure and efficient asset management over a distributed ledger around three axes: i) cryptography; ii) distributed systems; iii) game theory and economics. First, we analyze the security of various wallets. We start with a formal model of hardware wallets, followed by an analytical framework of PoS wallets, each outlining the unique properties of Proof-of-Work (PoW) and PoS, respectively. The latter also provides a rigorous design to form collaborative participating entities, called stake pools. We then propose Conclave, a stake pool design which enables a group of parties to participate in a PoS system in a collaborative manner, without a central operator. Second, we focus on efficiency. Decentralized systems are aimed at thousands of users across the globe, so a rigorous design for minimizing memory and storage consumption is a prerequisite for scalability. To that end, we frame ledger maintenance as an optimization problem and design a multi-tier framework for wallets which ensures that updates increase the ledger's global state only to a minimal extent, while preserving the security guarantees outlined in the security analysis. Third, we explore incentive-compatibility and analyze blockchain systems from a micro- and macroeconomic perspective. We enrich our cryptographic and systems results by analyzing the incentives of collective pools and designing a state-efficient Bitcoin fee function. We then analyze the Nash dynamics of distributed ledgers, introducing a formal model that evaluates whether rational, utility-maximizing participants are disincentivized from exhibiting undesirable infractions, and highlighting the differences between PoW- and PoS-based ledgers, both in a standalone setting and under external parameters, like market price fluctuations. We conclude by introducing a macroeconomic principle, cryptocurrency egalitarianism, and then describing two mechanisms for enabling taxation in blockchain-based currency systems.
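    As a toy illustration of the stake-based participation that the wallet and stake-pool analyses above build on, the Python sketch below selects slot leaders with probability proportional to stake. The hash-based lottery and the stake values are simplifications assumed for illustration; this is not the protocol construction analysed in the thesis.

# Toy stake-weighted leader selection: each party's chance of being elected
# for a slot is proportional to its stake. The lottery below is a deliberate
# simplification (it ignores verifiable randomness and adversarial behaviour);
# the stake values are assumed example numbers.

import hashlib

stakes = {"alice": 40, "bob": 35, "pool_conclave": 25}   # assumed stake units
total = sum(stakes.values())

def slot_leader(slot: int) -> str:
    """Map a slot number to a leader, weighted by relative stake."""
    digest = hashlib.sha256(f"slot-{slot}".encode()).digest()
    point = int.from_bytes(digest, "big") % total
    cumulative = 0
    for party, stake in stakes.items():
        cumulative += stake
        if point < cumulative:
            return party
    raise RuntimeError("unreachable")

leaders = [slot_leader(s) for s in range(10_000)]
for party in stakes:
    share = leaders.count(party) / len(leaders)
    print(f"{party}: elected in {share:.1%} of slots")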

    Thermodynamics and inflammation: insights into quantum biology and ageing

    Inflammation as a biological concept has been around a long time; the term derives from the Latin "to set on fire" and refers to the redness, heat, and usually swelling that accompany injury and infection. Chronic inflammation is also associated with ageing and is described by the term "inflammaging". Likewise, the biological concept of hormesis, in the guise of "what does not kill you makes you stronger", has long been recognized but, in contrast, seems to have anti-inflammatory and age-slowing characteristics. As both phenomena act to restore homeostasis, they may share some common underlying principles. Thermodynamics describes the relationship between heat and energy, but is also intimately related to quantum mechanics. Life can be viewed as a series of self-renewing dissipative structures existing far from equilibrium as vortexes of "negentropy" that age and die; but, through reproduction and speciation, new robust structures are created, enabling life to adapt and continue in response to ever-changing environments. In short, life can be viewed as a natural consequence of the thermodynamic drive to dissipate energy and restore equilibrium; each component of this system is replaceable. However, at the molecular level, there is perhaps a deeper question: is life dependent on, or has it enhanced, quantum effects in space and time beyond those normally expected at the atomistic scale and at the temperatures at which life operates? There is some evidence that it has. Certainly, the dissipative adaptive mechanism described by thermodynamics is now being extended into the quantum realm. Fascinating though this topic is, does exploring the relationship between quantum mechanics, thermodynamics, and biology give us a greater insight into ageing and, thus, medicine? It could be said that hormesis and inflammation are expressions of thermodynamic and quantum principles that control ageing via natural selection and that could operate at all scales of life. Inflammation could be viewed as a mechanism to remove inefficient systems in response to stress to enable the rebuilding of more functional dissipative structures, and hormesis as the process describing the ability to adapt; underlying both is the manipulation of fundamental quantum principles. Defining what "quantum biological normality" is has been a long-term problem, but perhaps we do not need to, as it is simply an expression of one end of the normal quantum mechanical spectrum, implying that biology could inform us as to how we can define the quantum world.

    Regularized interior point methods for convex programming

    Interior point methods (IPMs) constitute one of the most important classes of optimization methods, due to their unparalleled robustness, as well as their generality. It is well known that a very large class of convex optimization problems can be solved by means of IPMs in a polynomial number of iterations. As a result, IPMs are being used to solve problems arising in a plethora of fields, ranging from physics, engineering, and mathematics, to the social sciences, to name just a few. Nevertheless, there remain certain numerical issues that have not yet been addressed. More specifically, the main drawback of IPMs is that the linear algebra task involved is inherently ill-conditioned. At every iteration of the method, one has to solve a (possibly large-scale) linear system of equations (also known as the Newton system), the conditioning of which deteriorates as the IPM converges to an optimal solution. If these linear systems are of very large dimension, prohibiting the use of direct factorization, then iterative schemes may have to be employed. Such schemes are significantly affected by the inherent ill-conditioning within IPMs. One common approach for alleviating the aforementioned numerical issues is to employ regularized IPM variants. Such methods tend to be more robust and numerically stable in practice. Over the last two decades, the theory behind regularization has been significantly advanced. In particular, it is well known that regularized IPM variants can be interpreted as hybrid approaches combining IPMs with the proximal point method. However, it remained unknown whether regularized IPMs retain the polynomial complexity of their non-regularized counterparts. Furthermore, the very important issue of tuning the regularization parameters appropriately, which is also crucial in augmented Lagrangian methods, was not addressed. In this thesis, we focus on addressing the previous open questions, as well as on creating robust implementations that solve various convex optimization problems. We discuss in detail the effect of regularization, and derive two different regularization strategies: one based on the proximal method of multipliers, and another based on a Bregman proximal point method. The latter tends to be more efficient, while the former is more robust and has better convergence guarantees. In addition, we discuss the use of iterative linear algebra within the presented algorithms, by proposing some general-purpose preconditioning strategies (used to accelerate the iterative schemes) that take advantage of the regularized nature of the systems being solved. In Chapter 2 we present a dynamic non-diagonal regularization for IPMs. The non-diagonal aspect of this regularization is implicit, since all the off-diagonal elements of the regularization matrices are cancelled out by those elements present in the Newton system, which do not contribute important information in the computation of the Newton direction. Such a regularization, which can be interpreted as the application of a Bregman proximal point method, has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each IPM iteration. On the other hand, the regularization matrices introduce sparsity to the aforementioned linear system, allowing for more efficient factorizations. We propose a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly.
    This alleviates the need to find specific regularization values through experimentation, which is the most common approach in the literature. We provide perturbation bounds for the eigenvalues of the non-regularized system matrix, and then discuss the spectral properties of the regularized matrix. Finally, we demonstrate the efficiency of the method applied to solve standard small- and medium-scale linear and convex quadratic programming test problems. In Chapter 3 we combine an IPM with the proximal method of multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by the use of a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strong convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM and hence are well tuned and do not depend on the problem being solved. Furthermore, we study the behavior of the method when it is applied to an infeasible problem, and identify a necessary condition for infeasibility. The latter is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it over a set of small- to large-scale linear and convex quadratic programming problems, demonstrating the benefits of using regularization in IPMs as well as the reliability of the approach. In Chapter 4 we extend IP-PMM to the case of linear semi-definite programming (SDP) problems. In particular, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM. In Chapter 5 we present general-purpose preconditioners for regularized Newton systems arising within regularized interior point methods. We discuss positive definite preconditioners, suitable for iterative schemes like the conjugate gradient (CG) or the minimal residual (MINRES) method. We study the spectral properties of the preconditioned systems, and discuss the use of each presented approach, depending on the properties of the problem under consideration. All preconditioning strategies are numerically tested on various medium- to large-scale problems coming from standard test sets, as well as problems arising from partial differential equation (PDE) optimization. In Chapter 6 we apply specialized regularized IPM variants to problems arising from portfolio optimization, machine learning, image processing, and statistics. Such problems are usually solved by specialized first-order approaches. The efficiency of the proposed regularized IPM variants is confirmed by comparing them against problem-specific state-of-the-art first-order alternatives given in the literature.
    Finally, in Chapter 7 we present some conclusions, open questions, and possible future research directions.
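    To make the regularisation discussed above more concrete, a standard, generic way of writing the regularised Newton (saddle-point) system for a convex quadratic program, minimise (1/2)x'Qx + c'x subject to Ax = b, x >= 0, is sketched below in LaTeX. This is a textbook-style form under the stated assumptions; the thesis's specific parameter-tuning rules are not reproduced here.

% Regularised primal-dual Newton system for a convex QP (generic sketch).
% \rho and \delta are the primal and dual regularisation parameters;
% \Theta = X S^{-1} collects the usual IPM scaling of the nonnegative variables.
\begin{equation*}
  \begin{bmatrix}
    -(Q + \Theta^{-1} + \rho I) & A^{\top} \\
    A & \delta I
  \end{bmatrix}
  \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
  =
  \begin{bmatrix} \xi_{d} \\ \xi_{p} \end{bmatrix},
  \qquad \rho, \delta > 0 .
\end{equation*}

    Setting rho = delta = 0 recovers the usual IPM augmented system, whose conditioning deteriorates as the iterates approach an optimal solution; positive rho and delta keep the eigenvalues of the diagonal blocks bounded away from zero.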

    Three decades of statistical pattern recognition paradigm for SHM of bridges

    This is the author accepted manuscript; the final version is available from SAGE Publications via the DOI in this record. Bridges play a crucial role in modern societies, regardless of their culture, geographical location, or economic development. The safest, most economical, and most resilient bridges are those that are well managed and maintained. In the last three decades, structural health monitoring (SHM) has been a promising tool in bridge management activities, as it potentially permits condition assessment that reduces uncertainty in the planning and design of maintenance activities and increases service performance and safety of operation. The general idea has been the transformation of massive data obtained from monitoring systems and numerical models into meaningful information. To deal with large amounts of data and perform damage identification automatically, SHM has been cast in the context of the statistical pattern recognition (SPR) paradigm, where machine learning plays an important role. Meanwhile, recent technologies have unveiled alternative sensing opportunities and new perspectives to manage and observe the response of bridges, but it is widely recognized that bridge SHM is not yet fully capable of producing reliable global information on the presence of damage. While there have been multiple review studies published on SHM and vibration-based structural damage detection for wider scopes, there have been far fewer reviews of bridge SHM in the context of the SPR paradigm. Moreover, some of those reviews become obsolete quite quickly, and they are usually biased towards applications falling outside bridge engineering. Therefore, the main goal of this article is to summarize the concept of SHM and point out key developments in research and applications of the SPR paradigm observed in bridges in the last three decades, including developments in sensing technology and data analysis, and to identify current and future trends to promote more coordinated and interdisciplinary research in the SHM of bridges.
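    As a minimal illustration of the SPR paradigm described above, with damage detection cast as novelty detection on monitored features, the Python sketch below learns baseline statistics from the undamaged condition and flags new observations by Mahalanobis distance. The features, numbers, and threshold are assumed placeholders, not data from the article.

# Minimal illustration of damage detection as novelty detection in the SPR
# paradigm: learn the statistics of features from the undamaged (baseline)
# state, then flag observations whose Mahalanobis distance exceeds a threshold.
# Features, values, and the threshold are assumed placeholders for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Baseline features (e.g. identified natural frequencies, Hz) from the healthy state.
baseline = rng.normal(loc=[2.1, 5.3, 8.7], scale=0.02, size=(200, 3))
mean = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of an observation from the baseline."""
    d = x - mean
    return float(d @ cov_inv @ d)

threshold = np.quantile([mahalanobis_sq(x) for x in baseline], 0.99)

new_obs = np.array([2.0, 5.1, 8.4])   # shifted frequencies (possible damage)
print("damage indicated:", mahalanobis_sq(new_obs) > threshold)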

    Mixed Criticality Systems - A Review (13th Edition, February 2022)

    This review covers research on the topic of mixed criticality systems that has been published since Vestal's 2007 paper. It covers the period up to the end of 2021. The review is organised into the following topics: introduction and motivation, models, single processor analysis (including job-based, hard and soft tasks, fixed priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, systems issues, industrial practice, and research beyond mixed-criticality. A list of PhDs awarded for research relating to mixed-criticality systems is also included.
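    As a pointer for readers new to the area, the Python sketch below encodes the basic Vestal-style task model that underlies much of the analysis surveyed above: each task carries a period, a deadline, a criticality level, and one WCET estimate per criticality level. The example tasks and numbers are assumed values, not drawn from the review.

# Basic Vestal-style mixed-criticality task model: each task has a period,
# deadline, criticality level, and one WCET estimate per criticality level
# (higher levels use more pessimistic estimates). Example values are assumed.

from dataclasses import dataclass

@dataclass
class MCTask:
    name: str
    period: float
    deadline: float
    criticality: str   # e.g. "LO" or "HI"
    wcet: dict         # criticality level -> WCET estimate

tasks = [
    MCTask("sensor_filter", period=10, deadline=10, criticality="LO",
           wcet={"LO": 2.0}),
    MCTask("flight_control", period=20, deadline=20, criticality="HI",
           wcet={"LO": 3.0, "HI": 5.0}),
]

def utilisation(tasks, mode):
    """Total utilisation in a mode (after a switch to HI, LO tasks may be dropped)."""
    return sum(t.wcet.get(mode, t.wcet["LO"]) / t.period
               for t in tasks if mode == "LO" or t.criticality == "HI")

print(f"LO-mode utilisation: {utilisation(tasks, 'LO'):.2f}")
print(f"HI-mode utilisation (HI tasks only): {utilisation(tasks, 'HI'):.2f}")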

    A Syntactical Reverse Engineering Approach to Fourth Generation Programming Languages Using Formal Methods

    Fourth-generation programming languages (4GLs) feature rapid development with minimal configuration required from developers. However, 4GLs can suffer from limitations such as high maintenance costs and legacy software practices. Reverse engineering an existing large legacy 4GL system into a currently maintainable programming language can be a cheaper and more effective solution than rewriting from scratch. No tools exist so far for reverse engineering proprietary XML-like and model-driven 4GLs whose full language specification is not in the public domain. This research developed a novel method of reverse engineering some of the syntax of such 4GLs (with Uniface as an exemplar) derived from a particular system, with a view to providing a reliable method to translate/transpile that system's code and data structures into a modern object-oriented language (such as C#). The method was also applied, although only to a limited extent, to some other 4GLs, Informix and Apex, to show that it was in principle more broadly applicable. A novel method for testing that the syntax had been successfully translated was provided using abstract syntax trees. The method took manually crafted grammar rules, together with Encapsulated Document Object Model based data from the source language, and then used parsers to produce syntactically valid and equivalent code in the target/output language. This proof-of-concept research has provided a methodology plus sample code to automate part of the process. The methodology comprised a set of manual or semi-automated steps; further automation is left for future research. In principle, the author's method could be extended to allow the recovery of the syntax of systems developed in other proprietary 4GLs. This would reduce the time and cost of the ongoing maintenance of such systems by enabling their software engineers to work with modern object-oriented languages, methodologies, tools, and techniques.
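    As a much-simplified illustration of the pipeline the abstract describes, parsing XML-like 4GL source into an AST and then emitting object-oriented code, the Python sketch below walks a toy XML component definition and prints a C#-like class. The element and attribute names are invented placeholders and bear no relation to Uniface's actual, proprietary format.

# Toy illustration of the reverse-engineering pipeline: parse an XML-like
# component definition into a small AST (here, an element tree), then emit
# C#-like code from it. The element and attribute names below are invented
# placeholders; they do not correspond to Uniface's actual (proprietary) format.

import xml.etree.ElementTree as ET

source = """
<component name="Customer">
  <field name="Id" type="numeric"/>
  <field name="Name" type="string"/>
</component>
"""

TYPE_MAP = {"numeric": "int", "string": "string"}   # assumed type mapping

def emit_csharp(xml_text: str) -> str:
    root = ET.fromstring(xml_text)                  # AST-like element tree
    lines = [f"public class {root.get('name')}", "{"]
    for field in root.findall("field"):
        cs_type = TYPE_MAP.get(field.get("type"), "object")
        lines.append(f"    public {cs_type} {field.get('name')} {{ get; set; }}")
    lines.append("}")
    return "\n".join(lines)

print(emit_csharp(source))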