16,916 research outputs found

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    Get PDF
    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially-available software, were developed and refined to support the design of MPDSMs under fracture conditions. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are not experts in advanced solid mechanics nor process-tailored materials was developed from the results of this project.
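    As a point of reference for the g-code generator mentioned above, the sketch below shows what such a tool does at its simplest: it turns a rectangular region into a back-and-forth raster of extrusion traces. It is an illustrative toy, not the dissertation's tool, and the bead width, layer height, filament diameter, and feed rate are assumed placeholder values.

import math

def raster_layer_gcode(width_mm, height_mm, z_mm, bead_w=0.4, layer_h=0.2,
                       filament_d=1.75, feed=1800):
    """Emit G0/G1 moves for a simple back-and-forth raster fill of a rectangle."""
    # Cross-sectional area ratio converts deposited bead length into filament length.
    area_ratio = (bead_w * layer_h) / (math.pi * (filament_d / 2) ** 2)
    lines = [f"G1 Z{z_mm:.3f} F{feed}"]
    e = 0.0                      # cumulative extrusion length (mm of filament)
    y, direction = 0.0, 1
    while y <= height_mm + 1e-9:
        x0, x1 = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
        lines.append(f"G0 X{x0:.3f} Y{y:.3f}")             # travel to trace start
        e += width_mm * area_ratio                          # filament used by this trace
        lines.append(f"G1 X{x1:.3f} Y{y:.3f} E{e:.5f} F{feed}")
        y += bead_w
        direction *= -1
    return "\n".join(lines)

print(raster_layer_gcode(20.0, 10.0, z_mm=0.2))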

    Many Physical Design Problems are Sparse QCQPs

    Full text link
    Physical design refers to mathematical optimization of a desired objective (e.g. strong light-matter interactions, or complete quantum state transfer) subject to the governing dynamical equations, such as Maxwell's or Schrödinger's differential equations. Computing an optimal design is challenging: generically, these problems are highly nonconvex and finding global optima is NP-hard. Here we show that for linear-differential-equation dynamics (as in linear electromagnetism, elasticity, quantum mechanics, etc.), the physical-design optimization problem can be transformed to a sparse-matrix, quadratically constrained quadratic program (QCQP). Sparse QCQPs can be tackled with convex optimization techniques (such as semidefinite programming) that have thrived for identifying global bounds and high-performance designs in other areas of science and engineering, but seemed inapplicable to the design problems of wave physics. We apply our formulation to prototypical photonic design problems, showing that it enables the computation of fundamental limits for large-area metasurfaces as well as the identification of designs approaching global optimality. Looking forward, our approach highlights the promise of developing bespoke algorithms tailored to specific physical design problems. Comment: 9 pages, 4 figures, plus references and Supplementary Material.
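    For readers unfamiliar with the acronym, a QCQP over a real design vector x has the generic form below; this is the textbook definition rather than the paper's specific formulation, and sparsity means that the matrices A_i have few nonzero entries. A semidefinite relaxation replaces the rank-one matrix x x^T by a positive-semidefinite matrix variable, which is how convex solvers produce the global bounds mentioned above.

\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & x^{\mathsf{T}} A_0 x + b_0^{\mathsf{T}} x \\
\text{subject to} \quad & x^{\mathsf{T}} A_i x + b_i^{\mathsf{T}} x + c_i \le 0, \qquad i = 1, \dots, m.
\end{aligned}
\]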

    Uniformly ergodic probability measures

    Full text link
    Let $G$ be a locally compact group and $\mu$ a probability measure on $G$. We consider the convolution operator $\lambda_1(\mu)\colon L_1(G)\to L_1(G)$ given by $\lambda_1(\mu)f=\mu \ast f$ and its restriction $\lambda_1^0(\mu)$ to the augmentation ideal $L_1^0(G)$. Say that $\mu$ is uniformly ergodic if the Cesàro means of the operator $\lambda_1^0(\mu)$ converge uniformly to 0, that is, if $\lambda_1^0(\mu)$ is a uniformly mean ergodic operator with limit 0, and that $\mu$ is uniformly completely mixing if the powers of the operator $\lambda_1^0(\mu)$ converge uniformly to 0. We completely characterize the uniform mean ergodicity of the operator $\lambda_1(\mu)$ and the uniform convergence of its powers and see that there is no difference between $\lambda_1(\mu)$ and $\lambda_1^0(\mu)$ in this regard. We prove in particular that $\mu$ is uniformly ergodic if and only if $G$ is compact, $\mu$ is adapted (its support is not contained in a proper closed subgroup of $G$) and 1 is an isolated point of the spectrum of $\mu$. The last of these three conditions is actually equivalent to $\mu$ being spread-out (some convolution power of $\mu$ is not singular). The measure $\mu$ is uniformly completely mixing if and only if $G$ is compact, $\mu$ is spread-out and the only unimodular value of the spectrum of $\mu$ is 1. Comment: Updated version. References to previous related results are improved. 21 pages.
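    In operator-norm terms, the two notions compared above can be written as follows; this is simply a restatement of the definitions in the abstract, with the norm taken on $L_1^0(G)$:

\[
\mu \text{ uniformly ergodic:} \quad \Bigl\| \tfrac{1}{n} \sum_{k=1}^{n} \lambda_1^0(\mu)^k \Bigr\| \xrightarrow[n \to \infty]{} 0,
\qquad
\mu \text{ uniformly completely mixing:} \quad \bigl\| \lambda_1^0(\mu)^n \bigr\| \xrightarrow[n \to \infty]{} 0.
\]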

    Properties of Non-Equilibrium Steady States for the non-linear BGK equation on the torus

    Full text link
    We study the non-linear BGK model in 1D coupled to a spatially varying thermostat. We show existence, local uniqueness and linear stability of a steady state when the linear coupling term is large compared to the non-linear self-interaction term. This model possesses a non-explicit, spatially dependent non-equilibrium steady state. We are able to successfully use hypocoercivity theory in this case to prove that the linearised operator around this steady state possesses a spectral gap. Comment: 41 pages.
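    For orientation, the classical one-dimensional BGK relaxation (with unit collision frequency, and without the thermostat coupling studied in the paper, which the abstract does not spell out) reads:

\[
\partial_t f + v\,\partial_x f = M[f] - f,
\qquad
M[f](x,v,t) = \frac{\rho(x,t)}{\sqrt{2\pi T(x,t)}} \exp\!\left( -\frac{(v - u(x,t))^2}{2\,T(x,t)} \right),
\]

    where rho, u, and T are the local mass, velocity, and temperature moments of f. The paper's model adds a linear coupling to a prescribed, spatially varying thermostat and works in the regime where that coupling dominates the non-linear self-interaction term M[f] - f.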

    Properties of a model of sequential random allocation

    Get PDF
    Probabilistic models of allocating shots to boxes according to a certain probability distribution have commonly been used for processes involving agglomeration. Such processes are of interest in many areas of research such as ecology, physiology, chemistry and genetics. Time can be incorporated into the shots-and-boxes model by considering multiple layers of boxes through which the shots move, where the layers represent the passing of time. Such a scheme with multiple layers, each with a certain number of occupied boxes, is naturally associated with a random tree. It lends itself to genetic applications where the number of ancestral lineages of a sample changes through the generations. This multiple-layer scheme also allows us to explore the difference in the number of occupied boxes between layers, which gives a measure of how quickly merges are happening. In particular, we derive results for the multiple-layer scheme corresponding to those known for the single-layer scheme, where, under certain conditions, the limiting distribution of the number of occupied boxes is either Poisson or normal. To provide motivation and demonstrate which methods work well, a detailed study of a small, finite example is provided. A common approach for establishing a limiting distribution for a random variable of interest is to first show that it can be written as a sum of independent Bernoulli random variables, as this then allows us to apply standard central limit theorems. Additionally, it allows us to, for example, provide an upper bound on the distance to a Poisson distribution. One way of showing that a random variable can be written as a sum of independent Bernoulli random variables is to show that its probability generating function (p.g.f.) has all real roots. Various methods are presented and considered for proving that the p.g.f. of the number of occupied boxes in any given layer of the scheme has all real roots. By considering small finite examples, some of these methods could be ruled out for general N. Finally, the scheme for general N boxes and n shots is considered, where again a uniform allocation of shots is used. It is shown that, under certain conditions, the distribution of the number of occupied boxes tends towards either a normal or Poisson limit. Equivalent results are also demonstrated for the distribution of the difference in the number of occupied boxes between consecutive layers.
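    As a rough illustration of the scheme (this is one reading of the description above, not code or notation from the thesis): shots are allocated uniformly at random to N boxes, and the occupied boxes of one layer become the shots thrown into the next layer, so the occupancy counts trace how lineages merge over time. A minimal Monte Carlo sketch:

import random

def layer_occupancy(n_shots, n_boxes, n_layers, rng=random):
    """Number of occupied boxes in each layer of a uniform shots-and-boxes scheme.

    Assumed dynamics (an interpretation of the abstract, not the thesis' exact model):
    the occupied boxes of layer t become the shots allocated in layer t + 1.
    """
    occupied = []
    shots = n_shots
    for _ in range(n_layers):
        boxes = {rng.randrange(n_boxes) for _ in range(shots)}  # uniform allocation
        occupied.append(len(boxes))
        shots = len(boxes)       # merged shots continue as a single lineage
    return occupied

# Example: 50 shots into 100 boxes over 10 layers; the differences between
# consecutive layers measure how quickly merges are happening.
counts = layer_occupancy(50, 100, 10)
print(counts, [a - b for a, b in zip(counts, counts[1:])])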

    Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics

    Get PDF
    Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts. In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories and hence its use is limited. In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R^2 in least squares regression. In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
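    For the binary case, the isotonic-regression route to reliability diagrams and the miscalibration/discrimination/uncertainty decomposition can be sketched as follows. This is a generic illustration of the approach using scikit-learn's PAV-based IsotonicRegression, not the thesis' code, and the simulated forecasts are placeholders.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
p = rng.uniform(size=1000)                            # forecast probabilities
y = (rng.uniform(size=1000) < p**1.3).astype(float)   # deliberately miscalibrated outcomes

# PAV fit of outcomes on forecasts: the fitted values are recalibrated probabilities,
# and plotting them against p gives the empirical reliability diagram.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
p_rc = iso.fit_transform(p, y)

def brier(q):
    """Mean Brier score of forecasts q against the outcomes y."""
    return float(np.mean((q - y) ** 2))

# Score decomposition: mean score = miscalibration - discrimination + uncertainty.
clim = np.full_like(p, y.mean())        # constant climatological forecast
mcb = brier(p) - brier(p_rc)            # miscalibration (>= 0 by PAV optimality)
dsc = brier(clim) - brier(p_rc)         # discrimination
unc = brier(clim)                       # uncertainty
print(brier(p), mcb - dsc + unc)        # the two numbers agree (algebraic identity)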

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Get PDF
    Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and they could benefit complex sectors that have only scarce data with which to predict business viability. To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks that form a risk taxonomy. Labour was the most commonly reported top challenge; therefore, research was conducted to explore lean principles to improve productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and the viability of the UK and Japan cases. An environmental impact assessment model was developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when land-use change is considered. The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
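    The probabilistic risk model described above propagates uncertain inputs through to an economic outcome. The toy Monte Carlo sketch below shows the general shape of such a calculation; every variable name, distribution, and figure in it is an illustrative placeholder, not data or structure taken from the thesis.

import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo samples

# Placeholder uncertain inputs for a hypothetical vertical-farm business case.
yield_kg     = rng.normal(50_000, 8_000, n)      # annual crop yield (kg)
price_per_kg = rng.triangular(4.0, 6.0, 9.0, n)  # selling price (GBP/kg)
labour_cost  = rng.normal(120_000, 20_000, n)    # annual labour (GBP)
energy_cost  = rng.normal(90_000, 25_000, n)     # annual energy (GBP)
capex        = 600_000                           # up-front investment (GBP)

annual_profit = yield_kg * price_per_kg - labour_cost - energy_cost
payback_years = capex / np.clip(annual_profit, 1e-9, None)

# Risk summaries: probability of a loss-making year and payback-time quantiles.
print("P(loss-making year):", np.mean(annual_profit < 0))
print("Payback years (5%, 50%, 95%):",
      np.quantile(payback_years[annual_profit > 0], [0.05, 0.5, 0.95]))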

    Statistical-dynamical analyses and modelling of multi-scale ocean variability

    Get PDF
    This thesis aims to provide a comprehensive analysis of multi-scale oceanic variabilities using various statistical and dynamical tools and explore the data-driven methods for correct statistical emulation of the oceans. We considered the classical, wind-driven, double-gyre ocean circulation model in quasi-geostrophic approximation and obtained its eddy-resolving solutions in terms of potential vorticity anomaly and geostrophic streamfunctions. The reference solutions possess two asymmetric gyres of opposite circulations and a strong meandering eastward jet separating them with rich eddy activities around it, such as the Gulf Stream in the North Atlantic and Kuroshio in the North Pacific. This thesis is divided into two parts. The first part discusses a novel scale-separation method based on the local spatial correlations, called correlation-based decomposition (CBD), and provides a comprehensive analysis of mesoscale eddy forcing. In particular, we analyse the instantaneous and time-lagged interactions between the diagnosed eddy forcing and the evolving large-scale PVA using the novel 'product integral' characteristics. The product integral time series uncover robust causality between two drastically different yet interacting flow quantities, termed 'eddy backscatter'. We also show data-driven augmentation of non-eddy-resolving ocean models by feeding them the eddy fields to restore the missing eddy-driven features, such as the merging western boundary currents, their eastward extension and low-frequency variabilities of gyres. In the second part, we present a systematic inter-comparison of Linear Regression (LR), stochastic and deep-learning methods to build low-cost reduced-order statistical emulators of the oceans. We obtain the forecasts on seasonal and centennial timescales and assess them for their skill, cost and complexity. We found that the multi-level linear stochastic model performs the best, followed by the "hybrid stochastically-augmented deep learning models". The superiority of these methods underscores the importance of incorporating core dynamics, memory effects and model errors for robust emulation of multi-scale dynamical systems, such as the oceans.
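    For context, the wind-driven double-gyre model referred to above is usually posed as quasi-geostrophic potential-vorticity dynamics of roughly the following single-layer form (a standard textbook statement, not the thesis' exact multi-layer configuration):

\[
\frac{\partial q}{\partial t} + J(\psi, q) = \nu \nabla^4 \psi - \mu \nabla^2 \psi + F_{\mathrm{wind}},
\qquad
q = \nabla^2 \psi + \beta y - \frac{\psi}{L_d^{2}},
\]

    where psi is the geostrophic streamfunction, q the potential vorticity anomaly, J the Jacobian (advection) operator, beta the planetary-vorticity gradient, L_d the deformation radius, nu and mu the eddy-viscosity and bottom-friction coefficients, and F_wind the wind-stress curl forcing.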

    Critical Exponents in Sandpiles: (Alternative Format Thesis)

    Get PDF

    A suite of quantum algorithms for the shortest vector problem

    Get PDF
    Cryptography has come to be an essential part of the cybersecurity infrastructure that provides a safe environment for communications in an increasingly connected world. The advent of quantum computing poses a threat to the foundations of the current widely-used cryptographic model, due to the breaking of most of the cryptographic algorithms used to provide confidentiality, authenticity, and more. Consequently, a new set of cryptographic protocols has been designed to be secure against quantum computers; these are collectively known as post-quantum cryptography (PQC). A forerunner among PQC is lattice-based cryptography, whose security relies upon the hardness of a number of closely related mathematical problems, one of which is known as the shortest vector problem (SVP). In this thesis I describe a suite of quantum algorithms that utilize the energy minimization principle to attack the shortest vector problem. The algorithms outlined span gate-model and continuous-time quantum computing, and explore methods of parameter optimization via variational methods, which are thought to be effective on near-term quantum computers. The performance of the algorithms is analyzed numerically, analytically, and on quantum hardware where possible. I explain how the results obtained in the pursuit of solving SVP apply more broadly to quantum algorithms seeking to solve general real-world problems; minimize the effect of noise on imperfect hardware; and improve efficiency of parameter optimization.
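    For reference, the shortest vector problem that these algorithms target is, for a lattice with basis matrix B, the problem of finding

\[
\lambda_1\bigl(\mathcal{L}(B)\bigr) \;=\; \min_{\mathbf{x} \in \mathbb{Z}^{n} \setminus \{\mathbf{0}\}} \lVert B \mathbf{x} \rVert_2 ,
\]

    and the energy-minimization reading used by such approaches is to encode the squared length x^T B^T B x as the energy (Hamiltonian) of a quantum system, so that its lowest-energy state over non-zero integer vectors corresponds to the shortest vector. This framing is a general description of energy-minimization attacks on SVP, not a summary of the specific algorithms in the thesis.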