13,396 research outputs found

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to ensuring that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially available software. The resulting design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
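As a concrete illustration of the element layouts described above (and not the dissertation's own g-code generator, which is built on commercially available software), the following is a minimal sketch of emitting G-code for one layer of parallel deposition traces; the bead width, layer height, feed rate, and extrusion rate are assumed placeholder values.

```python
# Minimal illustrative sketch: G-code for one layer of parallel deposition
# traces ("elements"). Not the dissertation's generator; bead width, layer
# height, feed rate, and extrusion rate are assumed placeholder values.

def raster_layer(width_mm, height_mm, bead_w=0.4, z=0.2, feed=1800, e_per_mm=0.033):
    lines = [f"G1 Z{z:.2f} F600 ; move to layer height"]
    e, y, direction = 0.0, 0.0, 1
    while y <= height_mm + 1e-9:
        x_start, x_end = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
        lines.append(f"G0 X{x_start:.2f} Y{y:.2f} ; travel to trace start")
        e += abs(x_end - x_start) * e_per_mm          # accumulate extruded filament
        lines.append(f"G1 X{x_end:.2f} Y{y:.2f} E{e:.4f} F{feed} ; deposit one element")
        y += bead_w
        direction *= -1                               # alternate trace direction
    return "\n".join(lines)

print(raster_layer(20.0, 10.0))
```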

    Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion

    We apply a global sensitivity method, the Hilbert-Schmidt independence criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field in order to identify the most appropriate parameters for reparameterization. Parameter selection remains a challenge in this context, as high-dimensional optimizations are prone to overfitting and take a long time, but selecting too few parameters leads to poor-quality force fields. We show that the HSIC correctly and quickly identifies the most sensitive parameters, and that optimizations done using a small number of sensitive parameters outperform those done using a higher-dimensional, reasonable-user parameter selection. Optimizations using only sensitive parameters: 1) converge faster, 2) have loss values comparable to those found with the naive selection, 3) have similar accuracy in validation tests, and 4) do not suffer from problems of overfitting. We demonstrate that an HSIC global sensitivity analysis is a cheap pre-processing step for optimization that has both qualitative and quantitative benefits and can substantially simplify and speed up ReaxFF reparameterizations. Comment: author accepted manuscript.
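For context, the sketch below shows the standard biased HSIC estimator with Gaussian kernels used to rank parameters by their sensitivity to a scalar loss; the parameter samples and loss values are synthetic placeholders, not the paper's Zn/S/H ReaxFF training data.

```python
# Illustrative HSIC sensitivity ranking (standard biased estimator with
# Gaussian kernels). The samples below are synthetic placeholders, not the
# paper's ReaxFF data.
import numpy as np

def gaussian_gram(x, sigma=None):
    d2 = (x[:, None] - x[None, :]) ** 2
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]) / 2)    # median heuristic bandwidth
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y):
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    return np.trace(gaussian_gram(x) @ H @ gaussian_gram(y) @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
params = rng.uniform(size=(200, 5))                    # 200 samples of 5 parameters
loss = params[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # only parameter 0 matters here
scores = [hsic(params[:, j], loss) for j in range(params.shape[1])]
print(np.argsort(scores)[::-1])                        # most sensitive parameters first
```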

    Increased lifetime of Organic Photovoltaics (OPVs) and the impact of degradation, efficiency and costs in the LCOE of Emerging PVs

    Emerging photovoltaic (PV) technologies such as organic photovoltaics (OPVs) and perovskites (PVKs) have the potential to disrupt the PV market due to their ease of fabrication (compatible with cheap roll-to-roll processing) and installation, as well as their significant efficiency improvements in recent years. However, rapid degradation is still an issue in many emerging PVs, which must be addressed to enable their commercialisation. This thesis presents an OPV lifetime-enhancing technique based on adding the insulating polymer PMMA to the active layer, and a novel model for quantifying the impact of degradation (alongside efficiency and cost) on the levelized cost of energy (LCOE) of real-world emerging PV installations. The effect of PMMA morphology on the success of a ternary strategy was investigated, leading to device design guidelines. It was found that increasing either the weight percent (wt%) or the molecular weight (MW) of PMMA increased the volume of PMMA-rich islands, which protected the OPV against water and oxygen ingress. It was also found that adding PMMA can be effective in enhancing the lifetime of different active material combinations, although not to the same extent, and that processing additives can have a negative impact on device lifetime. A novel model was developed that takes into account realistic degradation profiles sourced from a literature review of state-of-the-art OPV and PVK devices. It was found that optimal strategies to improve LCOE depend on the present characteristics of a device, and that panels with a good balance of efficiency and degradation outperformed panels with higher efficiency but also higher degradation. Further, it was found that low-cost locations benefited more from reductions in degradation rate and module cost, whilst high-cost locations benefited more from improvements in initial efficiency, lower discount rates and reductions in installation costs.
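To make the dependence of LCOE on degradation concrete, here is a minimal sketch of a generic discounted-cash-flow LCOE calculation with a constant annual degradation rate; the thesis's model instead uses realistic, literature-sourced degradation profiles, and every number below is a placeholder assumption.

```python
# Generic LCOE sketch with a constant annual degradation rate d.
# (The thesis uses realistic degradation profiles; all inputs here are
# placeholder assumptions for a hypothetical installation.)

def lcoe(capex, opex_per_year, energy_year1_kwh, d, r, years):
    cost = capex + sum(opex_per_year / (1 + r) ** t for t in range(1, years + 1))
    energy = sum(energy_year1_kwh * (1 - d) ** (t - 1) / (1 + r) ** t
                 for t in range(1, years + 1))
    return cost / energy                              # currency units per kWh

print(lcoe(capex=800.0, opex_per_year=10.0, energy_year1_kwh=1200.0,
           d=0.03, r=0.05, years=20))
```

In this simplified form, lowering the degradation rate d increases the discounted lifetime energy yield and so lowers the LCOE, which is the kind of trade-off against efficiency and cost that the model in the thesis quantifies in more detail.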

    Addressing infrastructure challenges posed by the Harwich Formation through understanding its geological origins

    Variable deposits known to make up the sequence of the Harwich Formation in London have been the subject of ongoing uncertainty within the engineering industry. Current stratigraphical subdivisions do not account for the systematic recognition of individual members in unexposed ground, where recovered material is usually disturbed: fines are flushed out during the drilling process and loose materials are often lost or mixed with the surrounding layers. Most engineering problems associated with the Harwich Formation deposits stem from their unconsolidated nature and irregular cementation within layers. The consequent engineering hazards commonly include high permeability, raised groundwater pressures, ground settlement when the deposits are found near the surface, and poor stability when they are exposed during excavations or tunnelling operations. This frequently leads to sudden design changes or requires contingency measures during construction, all of which can result in damaged equipment, slow progress, and unforeseen costs. This research proposes a facies-based approach in which lithological facies were assigned based on reinterpretation of available borehole data from various ground investigations in London, supported by visual inspection of deposits in situ and a selection of laboratory tests including Particle Size Distribution, Optical and Scanning Electron Microscopy and X-ray Diffraction analyses. Two ground models were developed as a result: first, a 3D geological model (MOVE model) of the stratigraphy within the study area that explores the influence of local structural processes controlling these sediments pre-, syn- and post-deposition; and second, a sequence stratigraphic model (Dionisos Flow model) unveiling the stratal geometries of facies at various stages of accretion. The models present a series of sediment distribution maps, localised 3D views and cross-sections that aim to provide a novel approach to assist the geotechnical industry in predicting the likely distribution of the Harwich Formation deposits, decreasing the engineering risks associated with this stratum.

    Digital asset management via distributed ledgers

    Distributed ledgers rose to prominence with the advent of Bitcoin, the first provably secure protocol to solve consensus in an open-participation setting. Since then, active research and engineering efforts have proposed a multitude of applications and alternative designs, the most prominent being Proof-of-Stake (PoS). This thesis expands the scope of secure and efficient asset management over a distributed ledger around three axes: i) cryptography; ii) distributed systems; iii) game theory and economics. First, we analyze the security of various wallets. We start with a formal model of hardware wallets, followed by an analytical framework for PoS wallets, outlining the unique properties of Proof-of-Work (PoW) and PoS respectively. The latter also provides a rigorous design to form collaborative participating entities, called stake pools. We then propose Conclave, a stake pool design which enables a group of parties to participate in a PoS system in a collaborative manner, without a central operator. Second, we focus on efficiency. Decentralized systems are aimed at thousands of users across the globe, so a rigorous design for minimizing memory and storage consumption is a prerequisite for scalability. To that end, we frame ledger maintenance as an optimization problem and design a multi-tier framework for designing wallets which ensure that updates increase the ledger's global state only to a minimal extent, while preserving the security guarantees outlined in the security analysis. Third, we explore incentive-compatibility and analyze blockchain systems from a micro- and a macroeconomic perspective. We enrich our cryptographic and systems results by analyzing the incentives of collective pools and designing a state-efficient Bitcoin fee function. We then analyze the Nash dynamics of distributed ledgers, introducing a formal model that evaluates whether rational, utility-maximizing participants are disincentivized from exhibiting undesirable infractions, and highlighting the differences between PoW- and PoS-based ledgers, both in a standalone setting and under external parameters, like market price fluctuations. We conclude by introducing a macroeconomic principle, cryptocurrency egalitarianism, and then describing two mechanisms for enabling taxation in blockchain-based currency systems.

    Control of adventitious root formation in Arabidopsis

    Adventitious or de novo root organogenesis is a process that occurs from wounded or detached plant tissues or organs. In tissue culture experiments, the available hormone concentrations in the medium play significant roles in inducing adventitious roots. However, regeneration from detached organs in natural conditions depends on endogenous hormones. To imitate natural conditions, Arabidopsis thaliana Col-0 leaf explants were cultured on B5 medium without any added hormones, in order to investigate the endogenous hormonal signalling and molecular mechanisms that lead to de novo root organogenesis. A series of hormone signalling reporter lines in transgenic Arabidopsis was used to better understand the roles of auxin, cytokinin, ethylene and gibberellin signalling. Cell proliferation was monitored over a developmental time course, and the expression of a number of genes, along with their functional roles (through mutant analysis), was also investigated during the regeneration process. It was demonstrated that auxin, gibberellin and cytokinin signalling becomes focused at the wound site in the petiole, associated with the induction of adventitious roots. Auxin signalling-defective mutants such as axr1, axr3 and pls were less able than wild type to form adventitious roots, reflected in defective expression of auxin pathway genes such as the YUC family genes and WOX5. pls and axr1 were also found to be defective in the expression of the transcription factor gene NAC1. Mutants and transgenic overexpression lines for the transcriptional regulators RAP2.7, MDF1 and NAC1 showed that the three genes are required for adventitious root formation, and function in an auxin-independent manner to mediate root regeneration. Adventitious root formation from the Arabidopsis leaf therefore requires coordinated expression of a number of transcription factors that work in both an auxin-dependent and -independent manner, and cross-talk between auxin and other hormones is important for correct organogenesis.

    Understanding novel EGFP-Ubx protein-based film formation

    Protein-based materials are currently the subject of intense research interest since they have an extended range of potential applications, such as improved bio-membrane biocompatibility for implanted medical devices and the creation of platform materials for novel biosensors. Monomers from the Ultrabithorax (Ubx) transcription factor are known to spontaneously self-assemble at an air-water interface to form a monolayer, which has then been used as a basis for forming biopolymeric fibers. Here we used the Langmuir trough technique, Brewster angle microscopy (BAM), ellipsometry and neutron reflectometry (NR) to investigate the influences of different experimental conditions on EGFP-Ubx monolayer formation and the impact on biopolymeric fiber structure. We varied protein concentration, buffer properties and waiting times prior to forming biopolymeric fibers. Interestingly, we found 3 phases of material formation, which brought us to a new protocol for forming fibers that reduced protein concentration by 5-fold and waiting times by 100-fold. Moreover, an in-house developed MATLAB code was used to analyze SEM images and obtain quantitative structural information about the biopolymeric fibers that was correlated directly to the surface film characteristics measured in the LB trough. These new insights into fiber formation and structure enhance the usefulness of the Ubx-based biopolymer for biomedical applications.
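As an illustration of the kind of quantitative SEM analysis mentioned above (not the in-house MATLAB code itself), the following sketch estimates mean fiber width from a thresholded SEM image using a skeleton and a distance transform; the file name and the assumption that fibers appear as the bright phase are hypothetical.

```python
# Illustrative sketch (not the in-house MATLAB analysis): estimate mean fiber
# width from an SEM image via Otsu thresholding, skeletonization, and a
# distance transform. File name and bright-phase assumption are hypothetical.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage import filters, io
from skimage.morphology import skeletonize

img = io.imread("sem_fibers.tif", as_gray=True)   # hypothetical input image
mask = img > filters.threshold_otsu(img)          # assume fibers are the bright phase
dist = distance_transform_edt(mask)               # distance to nearest background pixel
skel = skeletonize(mask)                          # fiber centerlines
widths_px = 2 * dist[skel]                        # local fiber width at each centerline pixel
print(f"mean fiber width: {widths_px.mean():.1f} px")
```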

    Synthesis and Characterisation of Low-cost Biopolymeric/mineral Composite Systems and Evaluation of their Potential Application for Heavy Metal Removal

    Heavy metal pollution and waste management are two major environmental problems faced in the world today. Anthropogenic sources of heavy metals, especially effluent from industries, pose serious environmental and health concerns by polluting surface and ground waters. Similarly, on a global scale, thousands of tonnes of industrial and agricultural waste are discarded into the environment annually. There are several conventional methods to treat industrial effluents, including reverse osmosis, oxidation, filtration, flotation, chemical precipitation, ion exchange resins and adsorption. Among them, adsorption and ion exchange are known to be effective mechanisms for removing heavy metal pollution, especially if low-cost materials can be used. This thesis studied materials that can be used to remove heavy metals from water using low-cost feedstock materials. The synthesis of low-cost composite matrices from agricultural and industrial by-products and low-cost organic and mineral sources was carried out. The feedstock materials considered include chitosan (generated from industrial seafood waste), coir fibre (an agricultural by-product), spent coffee grounds (a by-product from coffee machines), hydroxyapatite (from bovine bone), and naturally sourced aluminosilicate minerals such as zeolite. The novel composite adsorbents were prepared using commercially sourced HAp and bovine-sourced HAp, with two types of adsorbents being synthesized: two- and three-component composites. Standard synthetic methods such as precipitation were developed to synthesize these materials, followed by characterization of their structural, physical, and chemical properties (using FTIR, TGA, SEM, EDX and XRD). The synthesized materials were then evaluated for their ability to remove metal ions from solutions of heavy metals, using single-metal ion type and two-metal ion type model solution systems, with quantification of their removal efficiency. This was followed by experiments using the synthesized adsorbents for metal ion removal in complex systems, such as an industrial input stream solution obtained from a local timber treatment company. Two-component composites were used as controls against which the removal efficiency of the three-component composites was compared. The heavy metal removal experiments were conducted under a range of experimental conditions (e.g., pH, sorbent dose, initial metal ion concentration, time of contact). Of the four metal ion systems considered in this study (Cd2+, Pb2+, Cu2+ and Cr as chromate ions), Pb2+ ion removal by the composites was found to be the highest in single-metal and two-metal ion type solution systems, while chromate ion removal was found to be the lowest. The bovine bone-based hydroxyapatite (bHAp) composites were more efficient at removing the metal cations than composites formed from commercially sourced hydroxyapatite (cHAp). In industrial input stream solution systems (containing Cu, Cr and As), Cu2+ ion removal was the highest, which aligned with the observations recorded in the single- and two-metal ion type solution systems. Arsenate ion was removed to a higher extent than chromate ion when using the three-component composites, while chromate ion removal was higher than arsenate ion removal when using the two-component composites (i.e., the control system).
The project also aimed to elucidate the removal mechanisms of these synthesized composite materials by using appropriate adsorption and kinetic models. The adsorption of metal ions exhibited a range of adsorption behaviours, as both models (Langmuir and Freundlich) were found to fit most of the data recorded in the different adsorption systems studied. The pseudo-second-order model was found to best describe the kinetics of heavy metal ion adsorption in all the composite adsorbent systems studied, in both single-metal ion type and two-metal ion type solution systems. Ion exchange was considered to be one of the dominant mechanisms for the removal of cations (in single-metal and two-metal ion type solution systems) and arsenate ions (in industrial input stream solution systems), along with other adsorption mechanisms. In contrast, electrostatic attraction was considered to be the dominant removal mechanism for chromate ions.
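For reference, the sketch below fits the standard Langmuir and Freundlich isotherms and the pseudo-second-order kinetic model by non-linear least squares; the concentration, uptake, and time values are illustrative placeholders, not measurements from the thesis.

```python
# Fit the standard Langmuir and Freundlich isotherms and the pseudo-second-order
# kinetic model by least squares. All data points are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

langmuir   = lambda Ce, qmax, KL: qmax * KL * Ce / (1 + KL * Ce)
freundlich = lambda Ce, KF, n: KF * Ce ** (1 / n)
pso        = lambda t, qe, k2: (k2 * qe ** 2 * t) / (1 + k2 * qe * t)   # integrated form

Ce = np.array([1.0, 5.0, 10.0, 25.0, 50.0])       # equilibrium concentration (mg/L)
qe = np.array([8.0, 25.0, 35.0, 45.0, 48.0])      # equilibrium uptake (mg/g)
t  = np.array([5, 15, 30, 60, 120, 240], float)   # contact time (min)
qt = np.array([10, 22, 33, 42, 46, 47], float)    # uptake over time (mg/g)

for name, model, x, y, p0 in [("Langmuir", langmuir, Ce, qe, (50.0, 0.1)),
                              ("Freundlich", freundlich, Ce, qe, (10.0, 2.0)),
                              ("Pseudo-second-order", pso, t, qt, (50.0, 0.01))]:
    popt, _ = curve_fit(model, x, y, p0=p0, maxfev=10000)
    print(name, np.round(popt, 3))
```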

    Machine learning for managing structured and semi-structured data

    As the digitalization of the private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, a deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g. whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance. These can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs. Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs. In this thesis, we present important advances in the field of knowledge graph completion. For Entity Alignment, we show how novel active learning techniques can reduce the number of required seed alignments while maintaining performance. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve the prediction for entities unknown at training time by exploiting additional metadata on individual statements, which is often available in modern graphs. Supported with results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g., the interaction function or loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields such as the life sciences. Finally, we propose a novel metric for evaluating ranking results, as used for both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de-facto standard evaluation protocols for both tasks.
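As an illustration of one of the model components the abstract mentions (the interaction function), the sketch below scores and ranks candidate triples with a DistMult-style trilinear product; the embeddings are random placeholders rather than trained models from the thesis or its software library.

```python
# Illustrative link-prediction scoring with a DistMult-style interaction
# function (trilinear product). Embeddings are random placeholders, not
# trained models from the thesis.
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 16, 100, 10
E = rng.normal(size=(n_entities, dim))    # entity embeddings
R = rng.normal(size=(n_relations, dim))   # relation embeddings

def score(h, r, t):
    """Higher score means the triple (head, relation, tail) is ranked as more plausible."""
    return float(np.sum(E[h] * R[r] * E[t]))

# Rank all candidate tail entities for the query (h, r, ?).
h, r = 3, 1
ranking = np.argsort([-score(h, r, t) for t in range(n_entities)])
print(ranking[:5])                        # top-5 predicted tails
```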