Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large qualitative and quantitative multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials was developed from the results of this project.
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning and control. The paper also discusses
the potential benefits of integrating AI, soft robotics, and data-driven
methods to enhance the performance and robustness of SHR systems. Finally, the
paper identifies several open research questions in the field and highlights
the need for further research and development efforts to advance SHR
technologies to meet the challenges of global food production. Overall, this
paper provides a starting point for researchers and practitioners interested in
developing SHRs and highlights the need for more research in this field.
Comment: Preprint, to appear in the Journal of Field Robotics
Projected Multi-Agent Consensus Equilibrium (PMACE) for Distributed Reconstruction with Application to Ptychography
Multi-Agent Consensus Equilibrium (MACE) formulates an inverse imaging
problem as a balance among multiple update agents such as data-fitting terms
and denoisers. However, each such agent operates on a separate copy of the full
image, leading to redundant memory use and slow convergence when each agent
affects only a small subset of the full image. In this paper, we extend MACE to
Projected Multi-Agent Consensus Equilibrium (PMACE), in which each agent
updates only a projected component of the full image, thus greatly reducing
memory use for some applications. We describe PMACE in terms of an equilibrium
problem and an equivalent fixed point problem and show that in most cases the
PMACE equilibrium is not the solution of an optimization problem. To
demonstrate the value of PMACE, we apply it to the problem of ptychography, in
which a sample is reconstructed from the diffraction patterns resulting from
coherent X-ray illumination at multiple overlapping spots. In our PMACE
formulation, each spot corresponds to a separate data-fitting agent, with the
final solution found as an equilibrium among all the agents. Our results
demonstrate that the PMACE reconstruction algorithm generates more accurate
reconstructions at a lower computational cost than existing ptychography
algorithms when the spots are sparsely sampled.
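The projected update-and-average idea behind PMACE can be illustrated on a toy 1-D problem. The sketch below is a simplification under assumed window layout, step size, and averaging rule, not the authors' ptychographic algorithm:

```python
import numpy as np

def pmace_sketch(y_patches, windows, n, n_iter=200, alpha=0.5):
    """Toy projected-consensus iteration: each agent sees only its own
    patch (a projection of the full image), pulls that patch toward its
    local data, and overlapping patch updates are averaged back."""
    x = np.zeros(n)
    counts = np.zeros(n)
    for w in windows:
        counts[w] += 1.0
    for _ in range(n_iter):
        x_new = np.zeros(n)
        for y, w in zip(y_patches, windows):
            patch = x[w]                         # project out this agent's component
            patch = patch + alpha * (y - patch)  # local data-fitting step
            x_new[w] += patch
        x = x_new / counts                       # consensus: average the overlaps
    return x

# Two overlapping agents jointly reconstructing a length-6 signal.
truth = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
windows = [np.arange(0, 4), np.arange(2, 6)]
patches = [truth[w] for w in windows]
x_hat = pmace_sketch(patches, windows, n=6)
```

Because each agent touches only its own window rather than a full copy of the image, per-agent memory stays proportional to the patch size, which is the saving PMACE targets.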
Multi-Attribute Utility Preference Robust Optimization: A Continuous Piecewise Linear Approximation Approach
In this paper, we consider a multi-attribute decision making problem where
the decision maker's (DM's) objective is to maximize the expected utility of
outcomes but the true utility function which captures the DM's risk preference
is ambiguous. We propose a maximin multi-attribute utility preference robust
optimization (UPRO) model where the optimal decision is based on the worst-case
utility function in an ambiguity set of plausible utility functions constructed
using partially available information such as the DM's specific preferences
between some lotteries. Specifically, we consider a UPRO model with two
attributes, where the DM's risk attitude is multivariate risk-averse and the
ambiguity set is defined by a linear system of inequalities represented by the
Lebesgue-Stieltjes (LS) integrals of the DM's utility functions. To solve the
maximin problem, we propose an explicit piecewise linear approximation (EPLA)
scheme to approximate the DM's true unknown utility so that the inner
minimization problem reduces to a linear program, and we solve the approximate
maximin problem by a derivative-free (Dfree) method. Moreover, by introducing
binary variables to locate the position of the reward function in a family of
simplices, we propose an implicit piecewise linear approximation (IPLA)
representation of the approximate UPRO and solve it using the Dfree method.
This IPLA technique prompts us to reformulate the approximate UPRO as a single
mixed-integer program (MIP) and extend the tractability of the approximate UPRO
to the multi-attribute case. Furthermore, we extend the model to the expected
utility maximization problem with expected utility constraints where the
worst-case utility functions in the objective and constraints are considered
simultaneously. Finally, we report numerical results on the performance of
the proposed models.
Comment: 50 pages, 18 figures
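In the EPLA scheme, the inner worst-case problem becomes a linear program over the utility's values at the breakpoints. A minimal one-attribute sketch follows (the paper's model is bivariate and uses Lebesgue-Stieltjes integrals; the breakpoints, candidate lottery, and elicited preference below are invented):

```python
import numpy as np
from scipy.optimize import linprog

# Breakpoints of the piecewise linear utility on [0, 1] (equally spaced).
t = np.linspace(0.0, 1.0, 6)
m = len(t)

# Candidate lottery whose worst-case expected utility we evaluate.
p = np.array([0.1, 0.2, 0.2, 0.2, 0.2, 0.1])

A_ub, b_ub = [], []
# Monotonicity: u_j <= u_{j+1}.
for j in range(m - 1):
    row = np.zeros(m)
    row[j], row[j + 1] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(0.0)
# Concavity (risk aversion): u_j - 2 u_{j+1} + u_{j+2} <= 0.
for j in range(m - 2):
    row = np.zeros(m)
    row[j], row[j + 1], row[j + 2] = 1.0, -2.0, 1.0
    A_ub.append(row)
    b_ub.append(0.0)
# One elicited preference: a sure middle outcome is weakly preferred to
# a 50/50 gamble on the extremes, i.e. E_B[u] - E_A[u] <= 0.
pA = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
pB = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.5])
A_ub.append(pB - pA)
b_ub.append(0.0)

# Normalisation u(0) = 0 and u(1) = 1.
A_eq = np.zeros((2, m))
A_eq[0, 0], A_eq[1, -1] = 1.0, 1.0
b_eq = np.array([0.0, 1.0])

# Inner minimisation: worst-case expected utility of the candidate lottery.
res = linprog(c=p, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * m)
worst_case_eu = res.fun
```

The monotonicity and concavity rows encode the one-dimensional analogue of risk aversion; the single preference row is what shrinks the ambiguity set, lifting the worst-case utility above the linear (risk-neutral) one.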
Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics
Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts.
In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories and hence its use is limited.
In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R^2 in least squares regression.
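The pool-adjacent-violators algorithm underlying these estimators fits a nondecreasing (isotonic) sequence by least squares. A minimal sketch, with invented forecasts and outcomes:

```python
def pav(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    blocks = []  # each block: [mean, weight, count]
    for yi in y:
        blocks.append([float(yi), 1.0, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out

# Recalibration sketch: order cases by forecast value, then isotonic-fit
# the binary outcomes to obtain calibrated event frequencies.
forecasts = [0.1, 0.4, 0.35, 0.8]
outcomes = [0, 1, 0, 1]
order = sorted(range(len(forecasts)), key=lambda i: forecasts[i])
calibrated = pav([outcomes[i] for i in order])
```

The merged block means are exactly the values plotted in a reliability diagram, which is why PAV yields stable estimators of the diagram and of the score components.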
In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
Interference mitigation in LiFi networks
Due to the increasing demand for wireless data, the radio frequency (RF) spectrum has
become a very limited resource. Alternative approaches are under investigation to support
the future growth in data traffic and next-generation high-speed wireless communication
systems. Techniques such as massive multiple-input multiple-output (MIMO), millimeter
wave (mmWave) communications and light-fidelity (LiFi) are being explored. Among
these technologies, LiFi is a novel bi-directional, high-speed and fully networked wireless
communication technology. However, inter-cell interference (ICI) can significantly restrict the
system performance of LiFi attocell networks. This thesis focuses on interference mitigation
in LiFi attocell networks.
The angle diversity receiver (ADR) is one solution to address the issue of ICI as well as
frequency reuse in LiFi attocell networks. With the property of high concentration gain and
narrow field of view (FOV), the ADR is very beneficial for interference mitigation. However,
the optimum structure of the ADR has not been investigated. This motivates us to propose the
optimum structures for the ADRs in order to fully exploit the performance gain. The impact
of random device orientation and diffuse link signal propagation is taken into consideration.
The performance comparison between the select best combining (SBC) and maximum ratio
combining (MRC) is carried out under different noise levels. In addition, the double source
(DS) system, where each LiFi access point (AP) consists of two sources transmitting the same
information signals but with opposite polarity, is proven to outperform the single source (SS)
system under certain conditions.
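Under the simplest independent-noise model, the SBC-versus-MRC comparison reduces to a max versus a sum of per-branch SNRs; the branch values below are invented for illustration:

```python
import math

def db(x):
    """Convert a linear power ratio to decibels."""
    return 10 * math.log10(x)

# Hypothetical per-photodiode electrical SNRs (linear scale) on a 4-element ADR.
branch_snr = [8.0, 3.0, 0.5, 0.2]

snr_sbc = max(branch_snr)            # SBC: keep only the strongest branch
snr_mrc = sum(branch_snr)            # MRC: branch SNRs add under independent noise

gain_db = db(snr_mrc) - db(snr_sbc)  # combining gain of MRC over SBC
```

In an interference-limited attocell the branch quantities would be SINRs rather than SNRs, and weak branches may carry mostly ICI, so MRC's advantage depends on the noise level; this is why the thesis compares SBC and MRC under different noise levels.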
Then, to overcome issues around ICI, random device orientation and link blockage, hybrid
LiFi/WiFi networks (HLWNs) are considered. In this thesis, dynamic load balancing (LB)
considering handover in HLWNs is studied. The orientation-based random waypoint (ORWP)
mobility model is considered to provide a more realistic framework to evaluate the performance
of HLWNs. Based on the low-pass filtering effect of the LiFi channel, we first propose
an orthogonal frequency division multiple access (OFDMA)-based resource allocation (RA)
method in LiFi systems. Also, an enhanced evolutionary game theory (EGT)-based LB scheme
with handover in HLWNs is proposed.
Finally, due to the characteristic of high directivity and narrow beams, a vertical-cavity
surface-emitting laser (VCSEL) array transmission system has been proposed to mitigate
ICI. In order to support mobile users, two beam activation methods are proposed. The
beam activation based on the corner-cube retroreflector (CCR) can achieve low power
consumption and almost-zero delay, allowing real-time beam activation for high-speed users.
The mechanism based on the omnidirectional transmitter (ODTx) is suitable for low-speed
users and is very robust to random orientation.
Early Neanderthal social and behavioural complexity during the Purfleet Interglacial: handaxes in the latest Lower Palaeolithic.
Only a handful of 'flagship' sites from the Purfleet Interglacial (Marine Isotope Stage 9, c. 350-290,000 years ago) have been properly examined, but the archaeological succession at the proposed type-site at Purfleet suggests a period of complexity and transition, with three techno-cultural groups represented in Britain. The first was a simple toolkit lacking handaxes (the Clactonian), and the last a more sophisticated technology presaging the coming Middle Palaeolithic (simple prepared core or proto-Levallois technology). Sandwiched between were Acheulean groups, whose handaxes comprise the great majority of the extant archaeological record of the period; these are the focus of this study. It has previously been suggested that some features of the Acheulean in the Purfleet Interglacial were chronologically restricted, particularly the co-occurrence of ficrons and cleavers. These distinctive forms may have exceeded pure functionality and were perhaps imbued with a deeper social and cultural meaning. This study supports both the previously suggested preference for narrow, pointed morphologies, and the chronologically restricted pairing of ficrons and cleavers. By drawing on a wide spatial and temporal range of sites, these patterns could be identified beyond the handful of 'flagship' sites previously studied. Hypertrophic 'giants' have now also been identified as a chronologically restricted form. Greater metrical variability was found than had been anticipated, leading to the creation of two new sub-groups (IA and IB), which are tentatively suggested to represent spatial and perhaps temporal patterning. The picture in the far west of Britain remains unclear, but the possibility of different Acheulean groups operating in the Solent area, and a late survival of the Acheulean, are both suggested. Handaxes with backing and macroscopic asymmetry may represent prehensile or ergonomic considerations not commonly found on handaxes from earlier interglacial periods. It is argued that these forms anticipate similar developments in the Late Middle Palaeolithic in an example of convergent evolution.
Underfill reliability and lifetime estimation of microelectronic assemblies (Fiabilité de l'underfill et estimation de la durée de vie d'assemblages microélectroniques)
Abstract: In order to protect the interconnections in flip-chip packages, an underfill material layer
is used to fill the volumes and provide mechanical support between the silicon chip and
the substrate. Due to the chip corner geometry and the mismatch of coefficient of thermal
expansion (CTE), the underfill suffers from a stress concentration at the chip corners when
the temperature is lower than the curing temperature. This stress concentration leads
to subsequent mechanical failures in flip-chip packages, such as chip-underfill interfacial
delamination and underfill cracking. Local stresses and strains are the most important
parameters for understanding the mechanism of underfill failures. As a result, the industry
currently relies on the finite element method (FEM) to calculate the stress components, but
the FEM may not be accurate enough compared to the actual stresses in underfill. FEM
simulations require a careful consideration of important geometrical details and material
properties. This thesis proposes a modeling approach that can accurately estimate the underfill delamination
areas and crack trajectories, with the following three objectives. The first
objective was to develop an experimental technique capable of measuring underfill deformations
around the chip corner region. This technique combined confocal microscopy and
the digital image correlation (DIC) method to enable tri-dimensional strain measurements
at different temperatures, and was named the confocal-DIC technique. This technique was
first validated by a theoretical analysis on thermal strains. In a test component similar
to a flip-chip package, the strain distribution obtained by the FEM model was in good
agreement with the results measured by the confocal-DIC technique, with relative errors
less than 20% at chip corners. Then, the second objective was to measure the strain near
a crack in underfills. Artificial cracks with lengths of 160 μm and 640 μm were fabricated
from the chip corner along the 45° diagonal direction. The confocal-DIC-measured
maximum hoop strains and first principal strains were located at the crack front area for
both the 160 μm and 640 μm cracks. A crack model was developed using the extended
finite element method (XFEM), and the strain distribution in the simulation had the same
trend as the experimental results. The distribution of hoop strains was in good agreement
with the measured values when the model element size was smaller than 22 μm, small enough to
capture the strong strain gradient near the crack tip. The third objective was to propose
a modeling approach for underfill delamination and cracking with the effects of manufacturing
variables. A deep thermal cycling test was performed on 13 test cells to obtain the
reference chip-underfill delamination areas and crack profiles. An artificial neural network
(ANN) was trained to relate the effects of manufacturing variables and the number of
cycles to first delamination of each cell. The predicted numbers of cycles for all 6 cells in
the test dataset fell within the intervals of the experimental observations. The growth
of delamination was simulated in FEM by evaluating the strain energy amplitude at
the interface elements between the chip and underfill. For 5 out of 6 cells in validation,
the delamination growth model was consistent with the experimental observations. The
cracks in bulk underfill were modelled by XFEM without predefined paths. The directions of edge cracks were in good agreement with the experimental observations, with an error
of less than 2.5°. This approach met the thesis goal of estimating initial underfill
delamination, delaminated areas, and crack paths in actual industrial flip-chip
assemblies.
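The CTE-mismatch mechanism that drives the corner stress concentration admits a standard back-of-envelope estimate; every number below is an assumed, typical value rather than one taken from this thesis:

```python
# Back-of-envelope CTE-mismatch strain between chip and substrate.
# All values are assumed typical numbers, not data from the thesis.
cte_chip = 2.6e-6       # 1/K, silicon
cte_substrate = 17e-6   # 1/K, hypothetical organic substrate
t_cure = 150.0          # degC, assumed stress-free underfill cure temperature
t_cold = -40.0          # degC, cold extreme of a thermal cycle

# Free thermal strain mismatch accumulated on cooling below cure.
mismatch_strain = (cte_substrate - cte_chip) * (t_cold - t_cure)
```

The negative sign indicates contraction relative to the stress-free cure state; a mismatch of this order (a few tenths of a percent) is what concentrates at the chip corners and drives the delamination and cracking studied here.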
Mixture Models in Machine Learning
Modeling with mixtures is a powerful method in the statistical toolkit that can be used for representing the presence of sub-populations within an overall population. In many applications ranging from financial models to genetics, a mixture model is used to fit the data. The primary difficulty in learning mixture models is that the observed data set does not identify the sub-population to which an individual observation belongs. Despite being studied for more than a century, the theoretical guarantees of mixture models remain unknown for several important settings.
In this thesis, we look at three groups of problems. The first part is aimed at estimating the parameters of a mixture of simple distributions. We ask the following question: How many samples are necessary and sufficient to learn the latent parameters? We propose several approaches for this problem that include complex analytic tools to connect statistical distances between pairs of mixtures with the characteristic function. We show sufficient sample complexity guarantees for mixtures of popular distributions (including Gaussian, Poisson and Geometric). For many distributions, our results provide the first sample complexity guarantees for parameter estimation in the corresponding mixture. Using these techniques, we also provide improved lower bounds on the Total Variation distance between Gaussian mixtures with two components and demonstrate new results in some sequence reconstruction problems.
In the second part, we study Mixtures of Sparse Linear Regressions, where the goal is to learn the best set of linear relationships between the scalar responses (i.e., labels) and the explanatory variables (i.e., features). We focus on a scenario where a learner is able to choose the features to get the labels. To tackle the high dimensionality of data, we further assume that the linear maps are also sparse, i.e., have only a few prominent features among many. For this setting, we devise algorithms with sub-linear (as a function of the dimension) sample complexity guarantees that are also robust to noise.
In the final part, we study Mixtures of Sparse Linear Classifiers in the same setting as above. Given a set of features and the binary labels, the objective of this task is to find a set of hyperplanes in the space of features such that for any (feature, label) pair, there exists a hyperplane in the set that justifies the mapping. We devise efficient algorithms with sub-linear sample complexity guarantees for learning the unknown hyperplanes under similar sparsity assumptions as above. To that end, we propose several novel techniques that include tensor decomposition methods and combinatorial designs.
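As a point of contrast for the parameter-estimation problem in the first part, the standard EM baseline for a two-component 1-D Gaussian mixture fits in a few lines. EM is the common workhorse here, not the thesis's characteristic-function method; the shared unit variance and the data are invented for the sketch:

```python
import math, random

def em_gmm_1d(xs, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture with shared unit
    variance: a standard baseline, not the thesis's estimator."""
    mu = [min(xs), max(xs)]  # crude but effective initialisation
    pi = 0.5                 # mixing weight of component 1
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point.
        r = [pi * math.exp(-0.5 * (x - mu[1]) ** 2) /
             (pi * math.exp(-0.5 * (x - mu[1]) ** 2)
              + (1 - pi) * math.exp(-0.5 * (x - mu[0]) ** 2))
             for x in xs]
        # M-step: reweighted mixing weight and component means.
        s = sum(r)
        pi = s / len(xs)
        mu[1] = sum(ri * x for ri, x in zip(r, xs)) / s
        mu[0] = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - s)
    return sorted(mu), pi

random.seed(0)
xs = ([random.gauss(-2.0, 1.0) for _ in range(500)]
      + [random.gauss(2.0, 1.0) for _ in range(500)])
(mu_lo, mu_hi), _ = em_gmm_1d(xs)
```

The hard part, which the thesis addresses, is not running such an iteration but proving how many samples are necessary and sufficient before the latent parameters are identifiable at all.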
MODELING CHAIN PACKING IN COMPLEX PHASES OF SELF-ASSEMBLED BLOCK COPOLYMERS
Block copolymer (BCP) melts undergo microphase separation and form ordered soft matter crystals with varying domain shapes and symmetries. We study the connection between diblock copolymer molecular designs and the thermodynamic selection of ordered crystals by modeling features of variable sub-domain geometry filled with individual blocks within non-canonical sphere-like and network phases, which together with layered, cylindrical, and canonical spherical phases form the 'natural forms' of self-assembled amphiphilic soft matter at large. First, we present a model that revises our understanding of optimal Frank-Kasper sphere-like morphologies by advancing theory to account for varying domain volumes. We then develop generic approaches to quantify local changes in domain thickness or packing frustration using medial sets and show their application to morphologies with arbitrary domain topologies and symmetries in both theoretical models and experimental data. We further use medial sets as a proxy for the terminal boundaries of blocks within different domains and revise thermodynamic models of BCP assembly in the strong segregation limit. Finally, we use this revised model to study the effect of elastic stiffness asymmetry on relaxing the packing frustration experienced by BCPs in tubular and matrix domains, leading to the equilibrium double gyroid network morphology in diblock copolymers.