
    AI-based design methodologies for hot form quench (HFQ®)

    This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) stamping process for forming complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, such simulations are not suitable for early-phase design due to their high computational cost and the expertise they require. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine-learning-based platform that can optimise 3D geometries under hot stamping constraints, for both early- and late-stage design. These methodologies are intended to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits. To achieve these objectives, two main efforts were undertaken. First, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution; new equation sets were proposed to model the observed trends, and design maps were created to guide component design at early stages. Second, a platform was developed to optimise 3D geometries for stamping, using deep learning to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs), and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs of the geometry generator according to the surrogate model's manufacturability predictions. Its effectiveness was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints. Results showed that the platform allowed free morphing of complex geometries, leading to significant improvements in component quality. The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. These methodologies are applicable to real-world industrial settings and can contribute significantly to the continued advancement of the manufacturing sector.
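    The core loop described above, updating a generator's latent inputs by backpropagating through a manufacturability surrogate, can be sketched as follows. The network shapes, the 0.2 thinning limit, and the untrained `generator`/`surrogate` stand-ins are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of gradient-based geometry optimisation through a
# manufacturability surrogate. `generator` and `surrogate` stand in for
# the pre-trained SDF geometry generator and the image-based thinning
# predictor; their architectures and the thinning limit are assumptions.
import torch

generator = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                                torch.nn.Linear(128, 32 * 32))  # latent -> SDF grid
surrogate = torch.nn.Sequential(torch.nn.Linear(32 * 32, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 1))         # SDF grid -> max thinning

z = torch.randn(1, 64, requires_grad=True)   # latent code describing the geometry
opt = torch.optim.Adam([z], lr=1e-2)
thinning_limit = 0.2                          # assumed manufacturability constraint

for step in range(500):
    opt.zero_grad()
    sdf = generator(z)                        # differentiable geometry representation
    predicted_thinning = surrogate(sdf)
    # Penalise only violations of the thinning constraint, so feasible
    # geometries are free to morph without resistance.
    loss = torch.relu(predicted_thinning - thinning_limit).sum()
    loss.backward()                           # gradients flow back to the latent code
    opt.step()
```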

    Modular lifelong machine learning

    Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems. First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improvements in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
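    The central modular-reuse idea, accumulating a library of frozen modules and searching over combinations of reused and new modules for each incoming task, can be illustrated with a small sketch. The module shapes, toy data, and exhaustive candidate loop are illustrative assumptions; HOUDINI and PICLE use program synthesis and probabilistic search rather than this brute-force selection.

```python
# Minimal sketch of modular lifelong learning: keep a library of frozen,
# pre-trained feature modules and, for a new task, compare reusing each of
# them against training a fresh module, selecting whichever performs best.
import torch

library = {  # modules accumulated over previous tasks (kept frozen)
    "features_task_a": torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64)),
    "features_task_b": torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64)),
}

def candidates():
    """Yield (name, feature_module, trainable) options for the new task."""
    for name, module in library.items():
        for p in module.parameters():
            p.requires_grad = False          # reuse without catastrophic forgetting
        yield name, module, False
    yield "new", torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64)), True

def evaluate(features, trainable, data, labels):
    """Briefly train a fresh head (plus the features, if new); return loss."""
    head = torch.nn.Linear(64, 10)
    params = list(head.parameters()) + (list(features.parameters()) if trainable else [])
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(head(features(data)), labels)
        loss.backward()
        opt.step()
    return loss.item()

data, labels = torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,))
best = min(candidates(), key=lambda c: evaluate(c[1], c[2], data, labels))
print("selected feature module:", best[0])
```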

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, that radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes must be investigated to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep-neural-network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models on training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
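    The train-validate-score loop behind such a workflow can be sketched briefly. The synthetic features and labels, the gradient-boosted classifier, and the small hyperparameter grid are illustrative assumptions; CADD's actual feature set and model are described in its publications.

```python
# Minimal sketch of training and validating a variant effect classifier,
# followed by the scoring step that would be applied genome-wide.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Each row is a variant annotated with numeric features
# (e.g. conservation score, distance to splice site, ...).
X = rng.normal(size=(5000, 8))
y = rng.integers(0, 2, size=5000)            # proxy labels: benign vs. deleterious

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Hyperparameter optimisation over a small grid, as in the workflow's tuning step.
search = GridSearchCV(
    GradientBoostingClassifier(),
    {"n_estimators": [100, 300], "max_depth": [2, 3]},
    scoring="roc_auc", cv=3,
)
search.fit(X_train, y_train)

print("validation AUC:", roc_auc_score(y_val, search.predict_proba(X_val)[:, 1]))
# Deployment would then score every variant genome-wide with search.predict_proba.
```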

    Novel 129Xe Magnetic Resonance Imaging and Spectroscopy Measurements of Pulmonary Gas-Exchange

    Gas exchange is the primary function of the lungs and involves removing carbon dioxide from the body and exchanging it within the alveoli for inhaled oxygen. Several different pulmonary, cardiac and cardiovascular abnormalities have negative effects on pulmonary gas exchange. Unfortunately, clinical tests do not always pinpoint the problem; sensitive and specific measurements are needed to probe the individual components participating in gas exchange for a better understanding of pathophysiology, disease progression and response to therapy. In vivo Xenon-129 gas-exchange magnetic resonance imaging (129Xe gas-exchange MRI) has the potential to overcome these challenges. When participants inhale hyperpolarized 129Xe gas, it exhibits different MR spectral properties as a gas, as it diffuses through the alveolar membrane, and as it binds to red blood cells. 129Xe MR spectroscopy and imaging provide a way to tease out the different anatomic components of gas exchange simultaneously, along with spatial information about where abnormalities may occur. In this thesis, I developed and applied 129Xe MR spectroscopy and imaging to measure gas exchange in the lungs alongside other clinical and imaging measurements. I measured 129Xe gas exchange in asymptomatic congenital heart disease and in prospective, controlled studies of long COVID. I also developed mathematical tools to model 129Xe MR signals during acquisition and reconstruction. The insights gained from my work underscore the potential of 129Xe gas-exchange MRI biomarkers for a better understanding of cardiopulmonary disease. My work also provides a way to generate a deeper imaging and physiologic understanding of gas exchange in vivo in healthy participants and patients with chronic lung and heart disease.
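    The spectral separation described above is commonly modelled as a sum of resonances for the gas, membrane, and red-blood-cell compartments. The following sketch fits Lorentzian peaks to a synthetic spectrum; the chemical shifts (~0, ~197, ~217 ppm) are approximate literature values, and the data and fitting setup are illustrative assumptions, not the thesis's actual pipeline.

```python
# Minimal sketch: fit a three-compartment Lorentzian model to a 129Xe
# spectrum and report a compartment amplitude ratio.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(ppm, amp, centre, width):
    return amp * (width / 2) ** 2 / ((ppm - centre) ** 2 + (width / 2) ** 2)

def spectrum(ppm, a_gas, a_mem, a_rbc, w_gas, w_mem, w_rbc):
    return (lorentzian(ppm, a_gas, 0.0, w_gas)        # gas-phase peak
            + lorentzian(ppm, a_mem, 197.0, w_mem)    # membrane/barrier peak
            + lorentzian(ppm, a_rbc, 217.0, w_rbc))   # red-blood-cell peak

ppm = np.linspace(-50, 300, 2000)
rng = np.random.default_rng(1)
noisy = spectrum(ppm, 10, 3, 2, 8, 10, 12) + rng.normal(0, 0.05, ppm.size)

popt, _ = curve_fit(spectrum, ppm, noisy, p0=[5, 1, 1, 5, 5, 5])
a_gas, a_mem, a_rbc = popt[:3]
# Ratios such as RBC/membrane are common gas-exchange biomarkers.
print("RBC/membrane amplitude ratio:", a_rbc / a_mem)
```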

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combined application of synthetic aperture radar and deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in computer vision, e.g., in face recognition, driverless vehicles and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.

    Leveraging Manifold Theory for Trajectory Design - A Focus on Futuristic Cislunar Missions

    Optimal control methods for designing trajectories have been studied extensively by astrodynamicists. Direct and indirect methods provide separate approaches to arrive at the optimal solution, each having its associated advantages and challenges. Among optimized transfer trajectories, fuel-optimal trajectories are typically the most sought after and are characterized by sequential thrust and coast arcs. On the other hand, it is well known that a simplified dynamical model like the CR3BP, analyzed in a rotating coordinate system, reveals fixed points known as Lagrange points. These spatial points can be orbited, with researchers categorizing periodic orbits around them, from the simple planar Lyapunov orbits to the more enigmatic butterfly orbits. Studying the linearized dynamics via eigenanalysis in the vicinity of a point on these periodic orbits reveals interesting departure directions that manifest spatially as the invariant manifolds. This thesis delves into the novel idea of merging aspects of invariant manifold theory and indirect optimal control methods to compute feasible transfer trajectories efficiently. The marriage of these ideas offers the possibility of alleviating the challenges of an end-to-end optimization using indirect methods for a long mission, by utilizing pre-computed and analyzed manifolds as insertion points of a long terminal coast arc. In addition, realistic and accurate mission scenarios require consideration of a high-fidelity dynamical model as well as shadow constraints. A methodology to use "manifold analogues" in such cases is discussed and utilized in this thesis, along with the modelling of eclipses during optimization, providing mission designers a basis for efficient, accurate, mission-ready trajectory design. This overcomes shortcomings in state-of-the-art software packages such as MYSTIC and COPERNICUS.
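    The CR3BP dynamics underlying the Lagrange points, periodic orbits, and invariant manifolds discussed above take a standard nondimensional form in the rotating frame, sketched below. The mass ratio is the approximate Earth-Moon value; the initial state is an arbitrary illustration, not a thesis trajectory.

```python
# Minimal sketch of propagating the circular restricted three-body problem
# (CR3BP) in the rotating frame with a high-accuracy integrator.
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.012150585  # approximate Earth-Moon mass ratio (nondimensional)

def cr3bp(t, s, mu=MU):
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu) ** 2 + y ** 2 + z ** 2)        # distance to Earth
    r2 = np.sqrt((x - 1 + mu) ** 2 + y ** 2 + z ** 2)    # distance to Moon
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    az = -(1 - mu) * z / r1**3 - mu * z / r2**3
    return [vx, vy, vz, ax, ay, az]

# Manifold trajectories are generated by perturbing a periodic-orbit state
# along its stable/unstable eigendirections and propagating; here we simply
# propagate one illustrative state for a few nondimensional time units.
s0 = [0.85, 0.0, 0.0, 0.0, 0.2, 0.0]
sol = solve_ivp(cr3bp, (0.0, 10.0), s0, rtol=1e-10, atol=1e-12)
print("final rotating-frame position:", sol.y[:3, -1])
```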

    Markov field models of molecular kinetics

    Computer simulations such as molecular dynamics (MD) provide a possible means to understand protein dynamics and mechanisms on an atomistic scale. The resulting simulation data can be analyzed with Markov state models (MSMs), yielding a quantitative kinetic model that, e.g., encodes state populations and transition rates. However, the larger the investigated system, the more data is required to estimate a valid kinetic model. In this work, we show that this scaling problem can be escaped by decomposing a system into smaller ones, leveraging weak couplings between local domains. Our approach, termed independent Markov decomposition (IMD), is a first-order approximation that neglects couplings, i.e., it represents a decomposition of the underlying global dynamics into a set of independent local ones. We demonstrate that for truly independent systems, IMD can reduce the required sampling by three orders of magnitude. IMD is applied to two biomolecular systems. First, synaptotagmin-1 is analyzed, a rapid calcium switch from the neurotransmitter release machinery. Within its C2A domain, local conformational switches are identified and modeled with independent MSMs, shedding light on the mechanism of its calcium-mediated activation. Second, the catalytic site of the serine protease TMPRSS2 is analyzed with a local drug-binding model. Equilibrium populations of different drug-binding modes are derived for three inhibitors, mirroring experimentally determined drug efficiencies. IMD is subsequently extended to an end-to-end deep learning framework called iVAMPnets, which learns a domain decomposition from simulation data and simultaneously models the kinetics in the local domains. We finally classify IMD and iVAMPnets as Markov field models (MFMs), which we define as a class of models that describe dynamics by decomposing systems into local domains. Overall, this thesis introduces a local approach to Markov modeling that makes it possible to quantitatively assess the kinetics of large macromolecular complexes, opening up possibilities to tackle current and future questions in computational molecular biology.
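    A brief sketch can make the MSM estimation step and the IMD first-order approximation concrete. The synthetic discrete trajectories and the two-domain split are illustrative assumptions; real analyses would use per-domain conformational state assignments from MD data.

```python
# Minimal sketch: estimate one MSM per local domain, then form the global
# model as a product of local models instead of one MSM over the much
# larger product state space.
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-normalized transition matrix from a discrete state trajectory."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
# One discrete trajectory per (assumed weakly coupled) local domain.
domain_trajs = [rng.integers(0, 3, size=10000), rng.integers(0, 2, size=10000)]

# IMD first-order approximation: independent local MSMs whose Kronecker
# product describes the global dynamics under the no-coupling assumption.
local_msms = [estimate_msm(d, d.max() + 1) for d in domain_trajs]
global_transition = np.kron(local_msms[0], local_msms[1])
print("global model size via IMD:", global_transition.shape)
```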

    Taylor University Catalog 2023-2024

    The 2023-2024 academic catalog of Taylor University in Upland, Indiana.