
    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
To support the design of MPDSMs under fracture conditions, several new design tools were developed and refined: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast, simple g-code generator based on commercially available software. The refined design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, the results of the project were distilled into a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
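The abstract mentions a fast, simple g-code generator for element layouts. As a purely hypothetical illustration (the function name, parameters, and raster strategy are all assumptions, and real FDM g-code also needs extrusion, temperature, and travel commands), a minimal generator for one raster-filled layer might look like:

```python
def raster_layer(width, height, bead_w, feed=1800, z=0.2):
    """Emit G-code G1 moves for a simple back-and-forth raster fill.

    width/height: layer extents in mm; bead_w: bead spacing in mm.
    Hypothetical sketch only -- real slicers also compute the extrusion
    (E) axis, perimeters, and retraction.
    """
    lines = [f"G1 Z{z:.2f} F{feed}"]          # move to layer height
    y, direction = 0.0, 1
    while y <= height:
        x0, x1 = (0.0, width) if direction > 0 else (width, 0.0)
        lines.append(f"G1 X{x0:.2f} Y{y:.2f} F{feed}")  # start of trace
        lines.append(f"G1 X{x1:.2f} Y{y:.2f} F{feed}")  # deposit bead
        y += bead_w
        direction *= -1                        # alternate direction
    return lines
```

Designing the element layout then amounts to choosing which traces to emit and in what orientation, layer by layer, within the process limits the dissertation maps.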

    Convex Optimization-based Policy Adaptation to Compensate for Distributional Shifts

    Many real-world systems involve physical components or operating environments with highly nonlinear and uncertain dynamics. A number of control algorithms can be used to design optimal controllers for such systems, assuming a reasonably high-fidelity model of the actual system. However, the assumptions made about the stochastic dynamics of the model when designing the optimal controller may no longer be valid when the system is deployed in the real world. This paper addresses the following problem: given an optimal trajectory obtained by solving a control problem in the training environment, how do we ensure that the real-world system tracks this trajectory with minimal error in the deployment environment? In other words, we want to learn how to adapt an optimally trained policy to distribution shifts in the environment. Distribution shifts are problematic in safety-critical systems, where a trained policy may lead to unsafe outcomes during deployment. We show that this problem can be cast as a nonlinear optimization problem solvable with heuristic methods such as particle swarm optimization (PSO). However, if we instead consider a convex relaxation of the problem, we can learn policies that track the optimal trajectory with much lower error and faster computation times. We demonstrate the efficacy of our approach on tracking an optimal path with a Dubins car model, and on collision avoidance using both linear and nonlinear models for adaptive cruise control.
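The contrast between the heuristic and convex routes can be made concrete in the linear case, where re-planning controls to track a reference trajectory reduces to an unconstrained least-squares problem. This is an illustrative numpy sketch under assumed linear dynamics, not the paper's formulation; the function name and stacking are assumptions:

```python
import numpy as np

def adapt_controls(A, B, x0, x_ref):
    """Re-plan controls so a linear system x_{t+1} = A x_t + B u_t
    tracks the reference states x_ref (shape (T, n)).

    The states are linear in the stacked controls, so minimising the
    tracking error is a convex least-squares problem solved in one
    lstsq call -- no population-based search such as PSO is needed.
    """
    T, n = x_ref.shape
    m = B.shape[1]
    G = np.zeros((T * n, T * m))   # maps stacked controls to states
    d = np.zeros(T * n)            # free response A^{t+1} x0
    x_free = x0
    for t in range(T):
        x_free = A @ x_free
        d[t * n:(t + 1) * n] = x_free
        Ak = np.eye(n)
        for k in range(t, -1, -1):     # coefficient of u_k is A^{t-k} B
            G[t * n:(t + 1) * n, k * m:(k + 1) * m] = Ak @ B
            Ak = A @ Ak
    u, *_ = np.linalg.lstsq(G, x_ref.ravel() - d, rcond=None)
    return u.reshape(T, m)
```

When the reference is reachable, the least-squares residual is zero and the recovered controls reproduce it exactly; otherwise the solution is the best tracking in the 2-norm sense.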

    Many Physical Design Problems are Sparse QCQPs

    Physical design refers to the mathematical optimization of a desired objective (e.g., strong light-matter interactions, or complete quantum state transfer) subject to the governing dynamical equations, such as Maxwell's or Schrödinger's differential equations. Computing an optimal design is challenging: generically, these problems are highly nonconvex, and finding global optima is NP-hard. Here we show that for linear-differential-equation dynamics (as in linear electromagnetism, elasticity, quantum mechanics, etc.), the physical-design optimization problem can be transformed into a sparse-matrix, quadratically constrained quadratic program (QCQP). Sparse QCQPs can be tackled with convex optimization techniques (such as semidefinite programming) that have thrived for identifying global bounds and high-performance designs in other areas of science and engineering, but seemed inapplicable to the design problems of wave physics. We apply our formulation to prototypical photonic design problems, showing that it is possible to compute fundamental limits for large-area metasurfaces and to identify designs approaching global optimality. Looking forward, our approach highlights the promise of developing bespoke algorithms tailored to specific physical design problems.
    Comment: 9 pages, 4 figures, plus references and Supplementary Material
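To make the QCQP structure concrete, here is a toy example unrelated to the paper's photonic setting: for a quadratic objective with a single sphere constraint, the QCQP's semidefinite relaxation is known to be tight, and the global optimum reduces to an eigenvalue computation. A minimal sketch:

```python
import numpy as np

def min_quadratic_on_sphere(Q):
    """Globally solve the toy QCQP  min x^T Q x  s.t.  x^T x = 1.

    With a single quadratic constraint the SDP relaxation is exact
    (S-procedure), and the optimum is the smallest eigenvalue of the
    symmetric part of Q, attained at the matching eigenvector. This
    only illustrates the QCQP/relaxation idea, not the paper's
    large-scale sparse formulation.
    """
    Qs = 0.5 * (Q + Q.T)            # symmetrise: x^T Q x = x^T Qs x
    w, V = np.linalg.eigh(Qs)       # eigenvalues in ascending order
    return w[0], V[:, 0]
```

In the paper's setting the quadratic constraints instead encode the linear physics (e.g., Maxwell's equations), and sparsity of those matrices is what keeps the relaxation tractable at scale.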

    Optimal Transmit Power and Channel-Information Bit Allocation With Zeroforcing Beamforming in MIMO-NOMA and MIMO-OMA Downlinks

    In the downlink, a base station (BS) with multiple transmit antennas applies zero-forcing beamforming to transmit to single-antenna mobile users in a cell. We propose schemes that optimize the transmit power and the number of channel direction information (CDI) bits for all users to achieve max-min signal-to-interference-plus-noise ratio (SINR) fairness. The optimal allocation can be obtained by a geometric program for both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). For NOMA, two users with highly correlated channels are paired and share the same transmit beamforming. In some small total-CDI-rate regimes, we show that NOMA can outperform OMA by as much as 3 dB. The performance gain over OMA increases when the correlation-coefficient threshold for user pairing is set higher. To reduce computational complexity, we propose allocating transmit power and CDI rate to groups of multiple users instead of to individual users. The user-grouping scheme is based on K-means over the user SINRs. We also propose a progressive filling scheme that performs close to the optimum but reduces the computation time by almost three orders of magnitude in some numerical examples.
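The max-min SINR power allocation can be illustrated with the classical bisection-plus-linear-solve formulation, used here as a stand-in for the paper's geometric program; the gain-matrix model, noise level, and function name are assumptions:

```python
import numpy as np

def max_min_sinr(G, sigma, P_total, tol=1e-6):
    """Bisection on a common SINR target gamma.

    For fixed gamma, the powers achieving SINR_i = gamma for all i
    solve a linear system; gamma is feasible iff that solution is
    non-negative and within the total power budget.
    G[i, i]: direct channel gain of user i; G[i, j]: interference gain.
    """
    n = G.shape[0]

    def powers(gamma):
        # SINR_i = G_ii p_i / (sigma + sum_{j!=i} G_ij p_j) = gamma
        off = G - np.diag(np.diag(G))
        M = np.diag(np.diag(G)) / gamma - off
        try:
            p = np.linalg.solve(M, np.full(n, sigma))
        except np.linalg.LinAlgError:
            return None
        return p if (p >= 0).all() and p.sum() <= P_total else None

    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if powers(mid) is not None:
            lo = mid            # mid is achievable: raise the target
        else:
            hi = mid            # infeasible: lower the target
    return lo, powers(lo)
```

For two symmetric users the optimum splits the budget equally, which gives a quick sanity check on the routine.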

    Differentially private partitioned variational inference

    Learning a privacy-preserving model from sensitive data distributed across multiple devices is an increasingly important problem. The problem is often formulated in the federated learning context, with the aim of learning a single global model while keeping the data distributed. Bayesian learning is a popular approach in this setting, since it naturally supports reliable uncertainty estimates. However, Bayesian learning is generally intractable even with centralised non-private data, so approximation techniques such as variational inference are a necessity. Variational inference has recently been extended to the non-private federated learning setting via the partitioned variational inference algorithm. For privacy protection, the current gold standard is differential privacy, which guarantees privacy in a strong, mathematically well-defined sense. In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting while minimising the number of communication rounds and providing differential privacy guarantees for data subjects. We propose three alternative implementations within the general framework: one based on perturbing the local optimisation runs done by individual parties, and two based on perturbing updates to the global model (one using a version of federated averaging, the other adding virtual parties to the protocol), and we compare their properties both theoretically and empirically.
    Comment: Published in TMLR 04/2023: https://openreview.net/forum?id=55Bcghgic
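The perturbed-update variants can be sketched as the standard Gaussian mechanism applied to a clipped parameter update. This is an illustrative sketch only; the privacy accounting and the exact partitioned-VI update rule are omitted, and the names are assumptions:

```python
import numpy as np

def dp_perturb_update(delta, clip_norm, noise_mult, rng):
    """Gaussian-mechanism perturbation of one party's update to the
    global variational parameters.

    delta: the party's raw parameter update (1-D array);
    clip_norm: L2 clipping bound C (bounds each party's sensitivity);
    noise_mult: sigma/C ratio, chosen from the (epsilon, delta)-DP
    budget by a separate accountant (not shown).
    """
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip_norm / max(norm, 1e-12))  # ||.|| <= C
    noise = rng.normal(0.0, noise_mult * clip_norm, size=delta.shape)
    return clipped + noise
```

The server then aggregates the noisy updates into the global approximate posterior; because each party's contribution is clipped, the added noise calibrates to a known sensitivity.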

    A Spatio-temporal Decomposition Method for the Coordinated Economic Dispatch of Integrated Transmission and Distribution Grids

    With numerous distributed energy resources (DERs) integrated into distribution networks (DNs), coordinated economic dispatch (C-ED) is essential for integrated transmission and distribution grids. For large-scale power grids, centralized C-ED faces a heavy computational burden and information-privacy issues. To tackle these issues, this paper proposes a spatio-temporal decomposition algorithm that solves the C-ED in a distributed and parallel manner. In the temporal dimension, the multi-period economic dispatch (ED) of the transmission grid (TG) is decomposed into several subproblems by introducing auxiliary variables and overlapping time intervals to handle the temporal coupling constraints. An accelerated alternating direction method of multipliers (A-ADMM) based temporal decomposition algorithm with a warm-start strategy is developed to solve the ED subproblems of the TG in parallel. In the spatial dimension, a multi-parametric-programming-projection-based spatial decomposition algorithm is developed to coordinate the ED problems of the TG and DNs in a distributed manner. To further improve the convergence of the spatial decomposition algorithm, an aggregate equivalence approach is used to determine the feasible range of the boundary variables of the TG and DNs. Moreover, we prove that the proposed spatio-temporal decomposition method obtains the optimal solution for bilevel convex optimization problems with continuously differentiable objectives and constraints. Numerical tests on three systems of different scales demonstrate the high computational efficiency and scalability of the proposed method.
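The coupling idea behind the decomposition can be illustrated with a toy consensus ADMM in which two dispatch subproblems must agree on a shared boundary value. This is a deliberately minimal scalar sketch (the paper's algorithm is accelerated, warm-started, and multi-period); the quadratic costs stand in for each window's local dispatch problem:

```python
import numpy as np

def admm_consensus(q1, q2, rho=1.0, iters=200):
    """Toy ADMM for  min (x - q1)^2 + (z - q2)^2  s.t.  x = z.

    x and z are the boundary variable as seen by two adjacent
    subproblems; u is the scaled dual (a 'price' on disagreement).
    Each update is a cheap local solve, mirroring how decomposition
    lets subproblems be solved in parallel.
    """
    x = z = u = 0.0
    for _ in range(iters):
        x = (2 * q1 + rho * (z - u)) / (2 + rho)   # subproblem 1 solve
        z = (2 * q2 + rho * (x + u)) / (2 + rho)   # subproblem 2 solve
        u = u + x - z                              # dual update
    return x, z
```

At convergence both copies meet at the consensus optimum (q1 + q2) / 2, and warm-starting u across outer iterations is what the warm-start strategy accelerates in the full algorithm.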

    Advancing Model Pruning via Bi-level Optimization

    The deployment constraints of practical applications necessitate pruning large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket Hypothesis (LTH), pruning can also improve generalization. At the core of LTH, iterative magnitude pruning (IMP) is the predominant method for finding 'winning tickets', yet its computational cost grows prohibitively as the target pruning ratio increases. To reduce this overhead, various efficient 'one-shot' pruning methods have been developed, but they are usually unable to find winning tickets as good as IMP's. This raises the question: how can we close the gap between pruning accuracy and pruning efficiency? To answer it, we pursue an algorithmic advancement of model pruning by formulating the problem from a fresh viewpoint: bi-level optimization (BLO). We show that the BLO interpretation provides a technically grounded optimization basis for an efficient implementation of the pruning-retraining paradigm used in IMP. We also show that the proposed bi-level optimization-oriented pruning method (termed BiP) corresponds to a special class of BLO problems with a bi-linear structure. By leveraging this bi-linearity, we show theoretically that BiP can be solved as easily as a first-order optimization problem, thus inheriting its computational efficiency. Through extensive experiments on both structured and unstructured pruning with 5 model architectures and 4 datasets, we demonstrate that BiP finds better winning tickets than IMP in most cases and is computationally as efficient as one-shot pruning schemes, demonstrating a 2-7x speedup over IMP at the same level of model accuracy and sparsity.
    Comment: Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022)
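For reference, the IMP baseline that BiP is compared against can be sketched in a few lines; the retraining step, whose cost is exactly what motivates BiP, is abstracted into a callback, and the geometric sparsity schedule and names are assumptions:

```python
import numpy as np

def iterative_magnitude_prune(w, target_sparsity, rounds, retrain):
    """Schematic IMP: alternate magnitude pruning and retraining.

    Each round removes an equal fraction of the remaining weights
    (geometric schedule) until the target sparsity is reached.
    `retrain` stands in for the costly retraining pass that one-shot
    methods skip and that BiP replaces with a cheap first-order step.
    """
    mask = np.ones_like(w, dtype=bool)
    per_round = 1.0 - (1.0 - target_sparsity) ** (1.0 / rounds)
    for _ in range(rounds):
        alive = np.abs(w[mask])
        k = int(round(per_round * alive.size))   # weights to cut this round
        if k > 0:
            thresh = np.partition(alive, k - 1)[k - 1]
            mask &= np.abs(w) > thresh           # drop the k smallest
        w = retrain(w * mask) * mask             # retrain surviving weights
    return w, mask
```

Running `rounds` retraining passes is what makes IMP's cost scale with the pruning ratio; a one-shot method corresponds to `rounds=1`.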

    Extended mixed integer quadratic programming for simultaneous distributed generation location and network reconfiguration

    Introduction. Reconfiguration and optimal distributed generation placement are presented to minimise power loss, keep the voltage within the acceptable range, and improve power quality in power distribution networks. Distribution-network reconfiguration studies require power-flow analysis and advanced optimization techniques capable of handling large combinatorial problems, and the choice of optimization approach depends on the size of the network. Our methodology simultaneously addresses two nonlinear discrete optimization problems, building an intelligent algorithm to identify the best solution. The novelty of the proposed work is that it uses the Extended Mixed-Integer Quadratic Programming (EMIQP) technique, a deterministic approach for determining the topology that effectively minimizes power losses in the distribution system by strategically sizing and positioning Distributed Generation (DG) while taking network reconfiguration into account. The resulting optimization problem has a quadratic form and is solved with an efficient Quadratic Mixed Integer Programming (QMIP) solver (IBM®). Extensive numerical validation on the standard IEEE 33- and 69-bus systems at three different load factors shows that our methodology outperforms state-of-the-art algorithms reported in the literature in terms of the achieved power-loss reduction. Practical value. Test cases are used to compare the effectiveness of concurrent reconfiguration and DG allocation against reconfiguration alone. The findings show that network reconfiguration combined with installing a distributed generator of the proper size at the proper location is effective, yielding lower losses and a better voltage profile.
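The combinatorial core of simultaneous DG sizing and placement can be illustrated on a toy radial feeder, with exhaustive enumeration standing in for the QMIP solver's branch-and-bound. The loss model and all names are simplifying assumptions, and reconfiguration switches are omitted:

```python
import itertools
import numpy as np

def best_dg_placement(loads, r, dg_sizes):
    """Exhaustive search for the DG bus and size minimising I^2 R
    losses on a single radial feeder (slack at bus 0).

    loads[i]: load at bus i+1; r[i]: resistance of branch i -> i+1.
    A toy stand-in for the EMIQP model: the objective is quadratic in
    the branch flows, and bus/size choices are the integer variables.
    """
    n = len(loads)

    def losses(inj):
        net = loads - inj                    # net demand per bus
        flow = np.cumsum(net[::-1])[::-1]    # flow on each branch
        return float(np.sum(r * flow ** 2))  # quadratic loss model

    best = (losses(np.zeros(n)), None, 0.0)  # no-DG baseline
    for bus, size in itertools.product(range(n), dg_sizes):
        inj = np.zeros(n)
        inj[bus] = size
        cand = losses(inj)
        if cand < best[0]:
            best = (cand, bus, size)
    return best                              # (loss, bus index, DG size)
```

On realistic 33- or 69-bus systems this enumeration explodes combinatorially, which is why the paper resorts to a deterministic MIQP formulation instead.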

    Safe Zeroth-Order Optimization Using Quadratic Local Approximations

    This paper addresses black-box smooth optimization problems, where the objective and constraint functions are not explicitly known but can be queried. The main goal of this work is to generate a sequence of feasible points converging towards a KKT primal-dual pair. Assuming prior knowledge of the smoothness of the unknown objective and constraints, we propose a novel zeroth-order method that iteratively computes quadratic approximations of the constraint functions, constructs local feasible sets, and optimizes over them. Under some mild assumptions, we prove that this method returns an η-KKT pair (a property reflecting how close a primal-dual pair is to the exact KKT condition) within O(1/η²) iterations. Moreover, we numerically show that our method can achieve faster convergence compared with some state-of-the-art zeroth-order approaches. The effectiveness of the proposed approach is also illustrated by applying it to nonconvex optimization problems in optimal control and power system operation.
    Comment: arXiv admin note: text overlap with arXiv:2211.0264
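One iteration of the certified-feasible-step idea can be sketched as follows. This is a simplified single-constraint variant built from finite differences and a smoothness bound, not the paper's exact algorithm; the step rule and names are assumptions:

```python
import numpy as np

def safe_zo_step(f, g, x, L_g, delta=1e-4, step=0.1):
    """One safe zeroth-order iteration for  min f(x)  s.t.  g(x) <= 0.

    Gradients are estimated by central finite differences (queries
    only), then the candidate step d is shrunk until the smoothness
    surrogate  g(x) + grad_g.d + (L_g/2)||d||^2 <= 0  certifies that
    the new point stays feasible. Assumes g(x) < 0 at the start.
    """
    n = x.size
    grad_f = np.array([(f(x + delta * e) - f(x - delta * e)) / (2 * delta)
                       for e in np.eye(n)])
    grad_g = np.array([(g(x + delta * e) - g(x - delta * e)) / (2 * delta)
                       for e in np.eye(n)])
    gx = g(x)
    d = -step * grad_f                       # descent direction on f
    while gx + grad_g @ d + 0.5 * L_g * (d @ d) > 0:
        d *= 0.5                             # backtrack to stay certified
    return x + d
```

Because the surrogate upper-bounds g along the step, every iterate remains feasible without ever evaluating the true constraint at an unsafe point first.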