6,864 research outputs found

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
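The abstract above mentions a fast and simple g-code generator for laying out deposition traces. As a hedged illustration of what element-layout control involves, the sketch below emits a serpentine raster toolpath for a single rectangular layer; the function name and all numbers (bead width, feed rate, extrusion factor) are illustrative assumptions, not values from the dissertation.

```python
# Hypothetical minimal raster-path g-code generator for one rectangular FDM
# layer. Bead width, feed rate, and extrusion factor are assumed values.
def raster_layer_gcode(width_mm, height_mm, bead_width_mm, z_mm,
                       feed_mm_min=1800, extrude_per_mm=0.05):
    lines = [f"G1 Z{z_mm:.3f} F{feed_mm_min}"]  # move to layer height
    e = 0.0                                     # cumulative extrusion (E axis)
    y = 0.0
    x_ends = (0.0, width_mm)
    direction = 0                               # alternate pass direction
    while y <= height_mm + 1e-9:
        x_target = x_ends[1 - direction]
        # travel move to the start of this pass, then an extruding move
        lines.append(f"G1 X{x_ends[direction]:.3f} Y{y:.3f} F{feed_mm_min}")
        e += width_mm * extrude_per_mm
        lines.append(f"G1 X{x_target:.3f} Y{y:.3f} E{e:.4f}")
        direction = 1 - direction               # serpentine: reverse next pass
        y += bead_width_mm
    return lines

layer = raster_layer_gcode(20.0, 10.0, 0.5, z_mm=0.2)
```

Real layouts for design-for-fracture would of course vary trace orientation and spacing per region rather than rastering uniformly; this only shows the bookkeeping involved.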

    In-situ crack and keyhole pore detection in laser directed energy deposition through acoustic signal and deep learning

    Cracks and keyhole pores are detrimental defects in alloys produced by laser directed energy deposition (LDED). Laser-material interaction sound may hold information about underlying complex physical events such as crack propagation and pore formation. However, due to the noisy environment and intricate signal content, acoustic-based monitoring in LDED has received little attention. This paper proposes a novel acoustic-based in-situ defect detection strategy for LDED. The key contribution of this study is an in-situ acoustic signal denoising, feature extraction, and sound classification pipeline that incorporates convolutional neural networks (CNNs) for online defect prediction. Microscope images are used to identify the locations of cracks and keyhole pores within a part, and the defect locations are spatiotemporally registered with the acoustic signal. Various acoustic features corresponding to defect-free regions, cracks, and keyhole pores are extracted and analysed in time-domain, frequency-domain, and time-frequency representations. The CNN model is trained to predict defect occurrences using the Mel-frequency cepstral coefficients (MFCCs) of the laser-material interaction sound, and is compared to various classic machine learning models trained on the denoised and raw acoustic datasets. The validation results show that the CNN model trained on the denoised dataset outperforms the others, with the highest overall accuracy (89%), keyhole pore prediction accuracy (93%), and AUC-ROC score (98%). Furthermore, the trained CNN model can be deployed into an in-house developed software platform for online quality monitoring. The proposed strategy is the first to use acoustic signals with deep learning for in-situ defect detection in the LDED process.
Comment: 36 pages, 16 figures, accepted at the journal Additive Manufacturing
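The spatiotemporal registration step described above amounts to mapping a defect's position along the known deposition path to a window of acoustic samples. A minimal sketch of that mapping, with assumed (not the paper's) scan speed, sample rate, and window length:

```python
# Illustrative spatiotemporal registration: a defect found at a given distance
# along the toolpath is mapped to the acoustic samples recorded while that
# material was being deposited. All parameter values are assumptions.
def defect_to_sample_window(defect_dist_mm, scan_speed_mm_s=10.0,
                            sample_rate_hz=44100, window_s=0.1):
    t = defect_dist_mm / scan_speed_mm_s      # time the defect was deposited
    centre = int(round(t * sample_rate_hz))   # nearest acoustic sample index
    half = int(window_s * sample_rate_hz / 2)
    return max(0, centre - half), centre + half  # span to label as "defect"

start, end = defect_to_sample_window(25.0)
```

Windows labelled this way would then feed the feature-extraction and CNN stages.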

    A framework for experimental-data-driven assessment of Magnetized Liner Inertial Fusion stagnation image metrics

    A variety of spherical crystal x-ray imager (SCXI) diagnostics have been developed and fielded on Magnetized Liner Inertial Fusion (MagLIF) experiments at the Sandia National Laboratories Z-facility. These different imaging modalities provide detailed insight into different physical phenomena, such as mix of liner material into the hot fuel, cold liner emission, or the reduced impact of liner opacity. However, several practical considerations make it difficult to develop broadly applicable metrics to compare experiments across our stagnation image database without significant unverified assumptions: the lack of a consistent spatial fiducial for registration, different point-spread functions, and the tuning of crystals or use of filters to highlight specific spectral regions. We leverage experimental data for a model-free assessment of sensitivities to instrumentation-based features for any specified image metric. In particular, we utilize a database of historical and recent MagLIF data, including $N_{\text{scans}} = 139$ image plate scans gathered across $N_{\text{exp}} = 67$ different experiments, to assess the impact of a variety of features in the experimental observations arising from uncertainties in registration as well as discrepancies in signal-to-noise ratio and instrument resolution. We choose a wavelet-based image metric known as the Mallat scattering transform for the study and highlight how alternate metric choices could also be studied. In particular, we demonstrate a capability to understand and mitigate the impact of signal-to-noise, image registration, and resolution differences between images. This is achieved by utilizing multiple scans of the same image plate, sampling random translations and rotations, and applying instrument-specific point-spread functions found by ray tracing to high-resolution datasets, augmenting our data in an effectively model-free fashion.
Comment: 17 pages, 14 figures
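The augmentation idea above (sampling random translations to probe a metric's registration sensitivity) can be illustrated in miniature: apply a known shift to an image and compare a registration-sensitive summary (the centroid) with a translation-invariant one (the intensity histogram). Pure-Python 2-D lists stand in for image-plate scans; this is a toy, not the paper's pipeline.

```python
# Shift an image by (dy, dx) pixels, padding with `fill` outside the frame.
def translate(img, dy, dx, fill=0.0):
    h, w = len(img), len(img[0])
    return [[img[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w
             else fill for x in range(w)] for y in range(h)]

# Intensity-weighted centroid: sensitive to registration errors.
def centroid(img):
    total = sum(v for row in img for v in row)
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    return cy, cx

img = [[0.0] * 4 for _ in range(4)]
img[1][1] = 1.0                     # one bright "feature" at (row 1, col 1)
shifted = translate(img, 1, 2)      # mis-registration: down 1, right 2
```

The centroid moves exactly with the shift while the sorted pixel values (a histogram) do not, which is the kind of metric-by-metric sensitivity the paper quantifies at scale.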

    The cosmic waltz of Coma Berenices and Latyshev 2 (Group X). Membership, phase-space structure, mass, and energy distributions

    Context. Open clusters (OCs) are fundamental benchmarks where theories of star formation and stellar evolution can be tested and validated. Coma Ber and Latyshev 2 (Group X) are the second- and third-closest OCs to the Sun, making them excellent targets in which to search for low-mass stars and ultra-cool dwarfs. In addition, this pair will experience a flyby in 10-16 Myr, which makes it a benchmark for testing OC pair interactions. Aims. We aim to analyse the membership, luminosity, mass, phase-space (i.e., positions and velocities), and energy distributions of Coma Ber and Latyshev 2, and to test the hypothesis of the mixing of their populations at the encounter time. Methods. We develop a new phase-space membership methodology and apply it to Gaia data. With the recovered members we infer the phase-space, luminosity, and mass distributions using publicly available Bayesian inference codes. Then, with a publicly available orbit integration code and the members' positions and velocities, we integrate their orbits 20 Myr into the future. Results. In Coma Ber, we identify 302 candidate members distributed in the core and tidal tails. The tails are dynamically cold and asymmetrically populated. The stellar system called Group X is made of two structures: the disrupted OC Latyshev 2 (186 candidate members) and a loose stellar association called Mecayotl 1 (146 candidate members); these will fly by Coma Ber in 11.3 ± 0.5 Myr and 14.0 ± 0.6 Myr, respectively, and by each other in 8.1 ± 1.3 Myr. Conclusions. We study the dynamical properties of the core and tails of Coma Ber and confirm the existence of the OC Latyshev 2 and its neighbouring stellar association Mecayotl 1. Although these three systems will experience encounters, we find no evidence supporting the mixing of their populations.
Comment: 25 pages, 19 figures, accepted for publication in Astronomy & Astrophysics
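The flyby times quoted above come from integrating orbits in a Galactic potential; as a back-of-envelope stand-in, for straight-line motion the closest-approach time of two systems follows from their relative position dr and relative velocity dv as t* = -(dr · dv)/|dv|². The sketch below implements only that free-motion approximation, with made-up numbers:

```python
# Closest-approach time for unaccelerated relative motion:
# separation |dr + t*dv| is minimised at t* = -(dr . dv) / |dv|^2.
def closest_approach_time(dr, dv):
    dot = sum(a * b for a, b in zip(dr, dv))
    v2 = sum(v * v for v in dv)
    return -dot / v2

# Illustrative: clusters 10 units apart, closing at 1 unit per time step.
t_star = closest_approach_time(dr=(10.0, 0.0, 0.0), dv=(-1.0, 0.0, 0.0))
```

A real calculation, as in the paper, would integrate in the Galactic potential and propagate member-level uncertainties to get the quoted ± errors.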

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and could benefit complex sectors which have only scarce data with which to predict business viability. To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Lessons learned from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which are organised into a risk taxonomy. Labour was the most commonly reported top challenge, so research was conducted into lean principles for improving productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects.
This enabled flexible computation without precise production or financial data, improving economic estimation accuracy. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability for both the UK and Japan cases. The environmental impact assessment model was developed to allow VPF operators to evaluate their carbon footprint, compared to traditional agriculture, using life-cycle assessment, and I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably powered VPF can reduce carbon emissions compared to field-based agriculture when land-use change is considered. The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the 'problem of implementation' and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector's emergence.
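Risk-empowered viability analysis under imprecise data can be sketched very simply: represent each uncertain input as a (min, mode, max) triangular estimate, the kind an interviewed practitioner can actually supply, and Monte Carlo sample the probability of a positive margin. All figures below are placeholders, not data from the thesis, and the thesis's imprecise-data techniques are richer than plain triangular sampling.

```python
import random

# Probability of a positive annual margin when revenue and operating cost are
# each known only as triangular (low, high, mode) estimates. Hypothetical
# helper and numbers for illustration only.
def prob_profitable(revenue, opex, n=20000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        r = rng.triangular(*revenue)   # stdlib order: (low, high, mode)
        c = rng.triangular(*opex)
        wins += r > c
    return wins / n

p = prob_profitable(revenue=(80, 140, 110), opex=(70, 120, 95))
```

The output is a risk statement ("the venture is profitable in X% of sampled scenarios") rather than a single-point forecast, which is the spirit of the DSS framework.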

    The MeerKAT Galaxy Cluster Legacy Survey: Survey overview and highlights

    MeerKAT's 64 antennas of 13.5 m diameter, spanning 8 km with a densely packed 1 km core, create a powerful instrument for wide-area surveys, with high sensitivity over a wide range of angular scales. The MeerKAT Galaxy Cluster Legacy Survey (MGCLS) is a programme of long-track MeerKAT L-band (900−1670 MHz) observations of 115 galaxy clusters, observed for ∌6−10 h each in full polarisation. The first legacy product data release (DR1), made available with this paper, includes the MeerKAT visibilities, basic image cubes at ∌8″ resolution, and enhanced spectral and polarisation image cubes at ∌8″ and 15″ resolutions. Typical sensitivities for the full-resolution MGCLS image products range from ∌3−5 ÎŒJy beam⁻¹. The basic cubes are full-field and span 2° × 2°. The enhanced products consist of the inner 1.2° × 1.2° field of view, corrected for the primary beam. The survey is fully sensitive to structures up to ∌10â€Č scales, and the wide bandwidth allows spectral and Faraday rotation mapping. Relatively narrow frequency channels (209 kHz) also provide H I mapping in windows of 0 < z < 0.09 and 0.19 < z < 0.48. In this paper, we provide an overview of the survey and the DR1 products, including caveats for usage. We present some initial results from the survey, both for their intrinsic scientific value and to highlight the capabilities for further exploration with these data. These include a primary-beam-corrected compact source catalogue of ∌626 000 sources for the full survey and an optical and infrared cross-matched catalogue for compact sources in the primary-beam-corrected areas of Abell 209 and Abell S295. We examine dust-unbiased star-formation rates as a function of cluster-centric radius in Abell 209, extending out to 3.5 R200.
We find no dependence of the star-formation rate on distance from the cluster centre, and we observe a small excess of the radio-to-100 ÎŒm flux ratio towards the centre of Abell 209 that may reflect a ram-pressure enhancement in the denser environment. We detect diffuse cluster radio emission in 62 of the surveyed systems and present a catalogue of 99 diffuse cluster emission structures, of which 56 are new. These include mini-halos, halos, relics, and other diffuse structures for which no suitable characterisation currently exists. We highlight some of the radio galaxies that challenge current paradigms, such as trident-shaped structures, jets that remain well collimated far beyond their bending radius, and filamentary features linked to radio galaxies that likely illuminate magnetic flux tubes in the intracluster medium. We also present early results from the H I analysis of four clusters, which show a wide variety of H I mass distributions that reflect both sensitivity and intrinsic cluster effects, and the serendipitous discovery of a group in the foreground of Abell 3365.

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which factors are desired and how they will be used. In addition, some methods capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstruction, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counterfactual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; and in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
This thesis explores a variety of use-cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work and has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.

    Discovering the hidden structure of financial markets through Bayesian modelling

    Understanding what drives the price of a financial asset is a question that remains largely unanswered. In this work we go beyond classic one-step-ahead prediction and instead construct models that create new information on the behaviour of these time series. Our aim is to better understand the hidden structures that drive the moves of each financial time series, and thus the market as a whole. We propose a tool to decompose multiple time series into economically meaningful variables that explain the endogenous and exogenous factors driving their underlying variability. The methodology we introduce goes beyond the direct model forecast: since our model continuously adapts its variables and coefficients, we can study the time series of the coefficients and selected variables themselves. We also present a model to construct the causal graph of relations between these time series and include them in the exogenous factors. Hence, we obtain a model able to explain what is driving the move of each specific time series and of the market as a whole. In addition, the obtained graph of the time series provides new information on the underlying risk structure of this environment. With this deeper understanding of the hidden structure, we propose novel ways to detect and forecast risks in the market. We evaluate our results with inferences up to one month into the future using stocks, FX futures, and ETF futures, demonstrating superior performance in terms of accuracy on large moves, longer-term prediction, and consistency over time. We also discuss in more detail the economic interpretation of the new variables and the resulting graph structure of the market.
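"Studying the time series of coefficients" presupposes a model whose coefficients adapt as data arrive. A minimal stand-in for that idea (not the thesis's Bayesian machinery) is a recursive least-squares fit that records its coefficient estimate at every step:

```python
# One-regressor recursive least squares with forgetting factor `lam`.
# Returns the path of coefficient estimates, which can itself be analysed
# as a time series. Purely illustrative data and parameters.
def rls_coefficient_path(xs, ys, lam=0.99):
    theta, p = 0.0, 1e6          # coefficient estimate and its variance proxy
    path = []
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)     # gain
        theta += k * (y - theta * x)      # update from prediction error
        p = (p - k * x * p) / lam         # inflate variance (forgetting)
        path.append(theta)
    return path

path = rls_coefficient_path(xs=[1, 2, 3, 4], ys=[2, 4, 6, 8])
```

On data that exactly follow y = 2x the coefficient path converges to 2; on real asset data the interesting object is how and when the path moves, which is what the adaptive-coefficient analysis in the abstract exploits.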

    Statistical-dynamical analyses and modelling of multi-scale ocean variability

    This thesis aims to provide a comprehensive analysis of multi-scale oceanic variability using various statistical and dynamical tools, and to explore data-driven methods for the correct statistical emulation of the oceans. We consider the classical, wind-driven, double-gyre ocean circulation model in the quasi-geostrophic approximation and obtain its eddy-resolving solutions in terms of the potential vorticity anomaly (PVA) and geostrophic streamfunction. The reference solutions possess two asymmetric gyres of opposite circulation and a strong meandering eastward jet separating them, with rich eddy activity around it, analogous to the Gulf Stream in the North Atlantic and the Kuroshio in the North Pacific. This thesis is divided into two parts. The first part discusses a novel scale-separation method based on local spatial correlations, called correlation-based decomposition (CBD), and provides a comprehensive analysis of mesoscale eddy forcing. In particular, we analyse the instantaneous and time-lagged interactions between the diagnosed eddy forcing and the evolving large-scale PVA using novel 'product integral' characteristics. The product-integral time series uncover robust causality between two drastically different yet interacting flow quantities, termed 'eddy backscatter'. We also show data-driven augmentation of non-eddy-resolving ocean models by feeding them the eddy fields to restore missing eddy-driven features, such as the merging western boundary currents, their eastward extension, and the low-frequency variability of the gyres. In the second part, we present a systematic inter-comparison of linear regression (LR), stochastic, and deep-learning methods for building low-cost reduced-order statistical emulators of the oceans. We obtain forecasts on seasonal and centennial timescales and assess them for skill, cost, and complexity.
We find that the multi-level linear stochastic model performs best, followed by the hybrid stochastically-augmented deep learning models. The superiority of these methods underscores the importance of incorporating core dynamics, memory effects, and model errors for the robust emulation of multi-scale dynamical systems such as the oceans.
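Reduced-order statistical emulation can be shown in miniature with the simplest possible linear stochastic model: fit an AR(1) coefficient to a scalar series, then use it for a cheap stochastic forecast. This is an assumption-laden stand-in for the thesis's multi-level models, not their implementation:

```python
import random

# Least-squares AR(1) coefficient: x_t ~ phi * x_{t-1}.
def fit_ar1(series):
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

# Stochastic rollout of the fitted emulator; noise_std = 0 gives the
# deterministic skeleton of the forecast.
def forecast(last, phi, steps, noise_std=0.0, seed=0):
    rng = random.Random(seed)
    out, x = [], last
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, noise_std)
        out.append(x)
    return out

phi = fit_ar1([1.0, 0.5, 0.25, 0.125])   # series is exactly AR(1), phi = 0.5
```

The multi-level models in the thesis add memory (higher-order lags in the noise) precisely because a one-lag emulator like this cannot capture low-frequency variability.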

    Seasonal variations in air concentrations of 27 organochlorine pesticides (OCPs) and 25 current-use pesticides (CUPs) across three agricultural areas of South Africa

    For decades, pesticides have been used in agriculture; however, the occurrence of legacy organochlorine pesticides (OCPs) and current-use pesticides (CUPs) is poorly understood in Africa. This study investigates air concentrations of OCPs and CUPs in three South African agricultural areas, along with their spatial/seasonal variations and mixture profiles. Between 2017 and 2018, 54 polyurethane foam disk passive air samplers (PUF-PAS) were deployed in three agricultural areas of the Western Cape, producing mainly apples, table grapes, and wheat. Within each area, 25 CUPs were measured at two sites (farm and village), and 27 OCPs at one site (farm). Kruskal-Wallis tests investigated area differences in OCP concentrations, and linear mixed-effects models examined differences in CUP concentrations between areas, sites, and sampling rounds.
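The Kruskal-Wallis test used for the area comparisons is a rank-based analogue of one-way ANOVA. A bare-bones sketch of its H statistic (no tie correction; a real analysis would use a statistics package such as SciPy's `scipy.stats.kruskal`):

```python
# Kruskal-Wallis H statistic for k independent groups, assuming no tied
# values: H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), where R_i is the
# rank sum of group i and N the pooled sample size.
def kruskal_wallis_h(*groups):
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # assumes no ties
    n = len(pooled)
    h = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Illustrative concentrations for three areas (made-up numbers):
h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```

Under the null hypothesis H is approximately chi-squared with k-1 degrees of freedom, so the value is compared to that distribution to test for area differences.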
