Thermal Design of Three-Dimensional Electronic Assemblies
Three-dimensional electronic assemblies (3D Packages) are currently a key technology for enabling heterogeneous integration and “more than Moore” functionality. A critical bottleneck to the viability of 3D Packages is their thermal design. Traditionally, heat spreaders are used as a passive means of reducing both the peak temperature and the temperature gradient on the chip. However, heat spreaders by themselves are often insufficient in stacked, multi-die 3D Packages. To remove heat more efficiently, silicon interposers with through-silicon vias (TSVs) are therefore used. However, the number and location of the TSVs must be designed carefully, and the heat spreader design as well as the selection of thermal interface materials also require careful consideration. At present, no automated tools are available to carry out such thermal design of 3D Packages.
The present study focuses on the development of an efficient tool that determines the optimal configuration of heat spreading elements subject to constraints on the allowable copper heat spreading area or metal volume. To achieve this goal, a three-dimensional finite element analysis (FEA) code for steady-state heat conduction is coupled with a sequential quadratic programming (SQP) algorithm, and both are implemented within the MATLAB environment. Considerable effort was devoted to ensuring efficient matrix solution using a sparse matrix solver during FEA. Several example problems are solved, and the results are compared against solutions obtained using Simulia iSight in combination with the Simulia ABAQUS FEA tool. The developed tool is demonstrated to be nearly two orders of magnitude faster for the same level of accuracy in the final solution.
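To make the coupling concrete, here is a minimal sketch in the spirit of the tool described above, not the authors' MATLAB implementation: a sparse steady-state conduction solve is nested inside SciPy's SLSQP optimizer (an SQP method). The one-dimensional toy mesh, the nodal-thickness design variables, and the metal-volume cap are all illustrative assumptions.

```python
# Sketch: sparse steady-state conduction FEA nested inside an SQP loop.
# All problem data (mesh, loads, volume cap) are illustrative placeholders.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve
from scipy.optimize import minimize

n = 50                       # 1-D toy mesh: n nodes along the spreader
q = np.full(n, 1.0)          # uniform heat load (placeholder)
v_max = 30.0                 # allowable metal "volume" (sum of thicknesses)

def solve_temperature(t):
    """Assemble a sparse conductance matrix from thicknesses t and solve K T = q."""
    k = 0.5 * (t[:-1] + t[1:])              # edge conductances from nodal thickness
    main = np.zeros(n); main[:-1] += k; main[1:] += k
    K = diags([main, -k, -k], [0, -1, 1], format="csc")
    K = K + diags(np.full(n, 1e-3))         # weak loss to ambient (regularizes K)
    return spsolve(K, q)

def peak_temperature(t):
    return solve_temperature(t).max()       # objective: hot-spot temperature

res = minimize(peak_temperature, x0=np.ones(n), method="SLSQP",
               bounds=[(0.1, 2.0)] * n,
               constraints=[{"type": "ineq", "fun": lambda t: v_max - t.sum()}])
print("peak temperature at optimum:", peak_temperature(res.x))
```

As in the abstract, every optimizer iterate triggers a sparse FEA solve, and the metal-volume budget enters as an inequality constraint.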
Reliability of Lead-Free Solder Joints Under Combined Shear and Compressive Loads
In electronic assemblies, solder joints create electrical connections, remove heat, and mechanically support the components. When an electronic device is powered on, the solder joints and the board they are attached to heat up and expand at different rates. This expansion mismatch imposes shear stress on the solder joints. As the device is cycled on and off, this shear stress can eventually fracture a solder joint, causing the device to fail. To increase the lifespan of electronics, it is therefore important to investigate the mechanical properties of solder alloys. The present study investigates how the SAC 305 solder alloy (96.5% tin, 3% silver, 0.5% copper) degrades under simultaneous compressive loading and shear cycling. The effect of compressive load on solder joint life has not been systematically studied in prior work, but it is critical to understand as large heat sinks are bolted onto increasingly large electronic assemblies, adding compressive stress to the solder joints. To gather data, we constructed a custom shear tester in which shear loads were applied by a programmable motor and compressive loads by a pulley system. Tests were conducted on a large number of samples under varying shear and compressive loads. The data showed that, for compressive loads below 30 N, increasing the compressive load decreased the rate of damage to the sample. However, at the highest compressive load of 45 N, the sample fractured immediately. This suggests that applying small compressive loads to critical components of electronic devices could improve their long-term reliability.
The Effect of Polydispersivity on the Thermal Conductivity of Particulate Thermal Interface Materials
A critical need in developing thermal interface materials (TIMs) is an understanding of how the particle and matrix conductivities, the volume loading of the particles, the particle size distribution, and the random arrangement of the particles in the matrix affect the homogenized thermal conductivity. TIM systems commonly contain random spatial distributions of particles of a polydisperse (usually bimodal) nature. A detailed analysis of the microstructural characteristics that influence the effective thermal conductivity of TIMs is the goal of this paper. Random microstructural arrangements consisting of lognormal size distributions of alumina particles in a silicone matrix were generated using a drop-fall-shake algorithm. The generated microstructures were statistically characterized using the matrix-exclusion probability function. The filler particle volume loading was varied over a range of 40–55%. For a given filler volume loading, the effect of polydispersivity in the microstructures was captured by varying the standard deviation of the filler particle size distribution function. For each particle arrangement, the effective thermal conductivity of the microstructure was evaluated through numerical simulations using a network model previously developed by the authors. Counter to expectation, increased polydispersivity was observed to increase the effective conductivity up to a volume loading of 50%. At a volume loading of 55%, however, beyond a limiting standard deviation of 0.9, the effective thermal conductivity decreased with increasing standard deviation, suggesting that the observed effects reflect a trade-off between resistance to transport through the particles and transport through the inter-particle matrix gap in a percolation chain.
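The role of the size-distribution spread can be made concrete with a small sampling sketch; this is a toy stand-in for the drop-fall-shake microstructures, not the authors' network model, and the domain size, mean radius, and sigma values are assumptions.

```python
# Toy sketch: lognormal particle radii rescaled to a fixed volume loading,
# so that only the spread (sigma) of the distribution varies between runs.
import numpy as np

rng = np.random.default_rng(0)
box_volume = 1.0
target_loading = 0.50            # 50% filler volume fraction

def sample_radii(sigma, mean_radius=0.02, n=4000):
    """Draw lognormal radii, then rescale so total particle volume hits the target."""
    r = rng.lognormal(np.log(mean_radius), sigma, n)
    vol = (4.0 / 3.0) * np.pi * (r ** 3).sum()
    scale = (target_loading * box_volume / vol) ** (1.0 / 3.0)
    return r * scale             # identical loading regardless of sigma

for sigma in (0.3, 0.6, 0.9):
    r = sample_radii(sigma)
    print(f"sigma={sigma}: largest/smallest radius ratio = {r.max() / r.min():.1f}")
```

Holding the loading fixed while sweeping sigma isolates polydispersivity as the experimental variable, which is the comparison the paper's conductivity simulations perform.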
Applications of a Decomposed Analysis Procedure for Area-Array Packages
The goal of the present paper is to apply the recently developed decomposed analysis procedure using a computer code developed in this study. New products are pushing the density and speed of electronic packages to their limits. To support this evolution, packaging technologies are moving at a rapid pace: it is believed that by the year 2012, complex electronic packages will contain over 5000 I/Os (SIA [1]). Analysis techniques must evolve at an even faster pace if quick decisions regarding package reliability are to be made. Numerical techniques, such as finite element analysis, are commonly used to determine stresses and strains, and through these the package reliability. However, with decreasing package size, increasing I/O count, and complex package configurations and materials, finite element analysis has become computationally expensive; analysis of a package can currently take hours or even days to complete, owing primarily to the nonlinear behavior of the materials. Deshpande and Subbarayan [2] developed and demonstrated an efficient technique for the design/analysis of electronic packages. Their procedure is inspired by the domain decomposition and design decomposition literature. The former methods largely address the partitioning of a large domain (approximated by a finite element mesh) into well-balanced subdomains (submeshes) that can be solved efficiently on a parallel computer (see, for example, Farhat et al. [3,4]). The latter methods aim to divide a large design problem (often posed as an optimization problem) into a series of simpler subproblems for solution efficiency (Haftka and Gurdal [5]). The procedure of Deshpande and Subbarayan [2] is among the first for simultaneous design and domain decomposition. They demonstrated a speed-up of approximately 350 percent at an accuracy loss of only 6 percent on a hypothetical 5×5 area-array package, and their analysis of the floating-point operations demonstrated an unbounded increase in speed-up for larger array sizes. In the present paper, we apply the technique of Deshpande and Subbarayan to a wide variety of 2-D and 3-D example problems, including a “real world” 225 I/O PBGA package. To enable this application, a general code capable of efficiently solving arbitrarily sized array packages is developed. The code and its underlying methodology address a critical need in the electronics industry for quick reliability decisions. One important aspect of the implementation is the parametric representation of the overall problem, which allows the system response to be determined by relatively few unknowns; this is in contrast to nonlinear finite element analysis schemes, which require an iterative solution for a large number of nodal unknowns. A second important aspect is the optimization procedure, which approximately solves the energy balance equation at the system level while allowing independent analysis of the subsystems.
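Schematically, and in our own notation rather than the paper's, the simultaneous design/domain decomposition replaces one large energy minimization with coupled subdomain problems:

```latex
% Decomposed energy balance (schematic notation, not taken from the paper):
% minimize total potential energy subject to interface compatibility.
\begin{align*}
  \min_{u}\; \Pi(u)
  \;\approx\;
  \min_{u_1,\dots,u_m}\; \sum_{i=1}^{m} \Pi_i(u_i)
  \quad \text{s.t.} \quad
  u_i = u_j \ \text{on } \Gamma_{ij},
\end{align*}
```

where $\Pi_i$ is the potential energy of subdomain $i$ and $\Gamma_{ij}$ is the interface it shares with subdomain $j$; each inner minimization can then be carried out independently within an outer coordination loop, which is what permits the independent subsystem analyses mentioned above.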
Maximum Entropy Models for Fatigue Damage in Metals with Application to Low-Cycle Fatigue of Aluminum 2024-T351
In the present work, we propose using cumulative distribution functions derived from maximum entropy formalisms, with thermodynamic entropy as the measure of damage, to fit the low-cycle fatigue data of metals. The thermodynamic entropy is measured from hysteresis loops of cyclic tension–compression fatigue tests on aluminum 2024-T351. The plastic dissipation per cyclic reversal is estimated from Ramberg–Osgood constitutive model fits to the hysteresis loops and correlated to the experimentally measured average damage per reversal. The developed damage models are shown to describe fatigue life more accurately and consistently than several alternative damage models, including the Weibull distribution function and the Coffin–Manson relation. The formalism is founded on treating the failure process as a consequence of the increase in the entropy of the material due to plastic deformation. This argument leads to using inelastic dissipation, rather than the more commonly used plastic strain, as the independent variable for predicting low-cycle fatigue damage. The entropy of the microstructural state of the material is modeled by statistical cumulative distribution functions, following examples in the recent literature. We demonstrate the utility of a broader class of maximum entropy statistical distributions, including the truncated exponential and the truncated normal distributions. Not only do these functions have the necessary qualitative features to model damage, but they are also shown to capture the random nature of damage processes with greater fidelity.
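For reference, the main ingredients named above can be written out; these are standard textbook forms with our own symbols, not equations reproduced from the paper:

```latex
% Ramberg–Osgood fit, plastic dissipation per reversal, and one maximum
% entropy damage CDF on bounded support (standard forms; symbols are ours).
\begin{align*}
  \varepsilon &= \frac{\sigma}{E} + \left(\frac{\sigma}{K}\right)^{1/n},
  \qquad
  \Delta W_p = \oint \sigma \, d\varepsilon_p ,
  \qquad
  F(w) = \frac{1 - e^{-\lambda w}}{1 - e^{-\lambda w_f}}, \quad 0 \le w \le w_f ,
\end{align*}
```

where $E$ is the elastic modulus, $K$ and $n$ are the Ramberg–Osgood parameters, $\Delta W_p$ is the hysteresis-loop area used as the dissipation per reversal, and $F$ is the truncated-exponential form, the maximum entropy distribution on a bounded support with a prescribed mean, written here over accumulated dissipation $w$ up to failure at $w_f$.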
Maintaining an Accurate Printer Characterization
In this study, the problem of updating a printer characterization in response to systematic changes in print-device characteristics is addressed with two distinct approaches: the creation of corrective models used in conjunction with an existing device model, and the re-evaluation of regression-model parameters using an augmented characterization data set. Several types of corrective models are evaluated, including polynomial models and neural-network models. A significant reduction in error was realized by incorporating these techniques into the color-management program NeuralColor. The most successful of these methods was a quadratic polynomial correction model, which removed 90% of the error introduced by a change of paper stock, and all of the error introduced by a change of toner cartridge. A general conclusion is that simple corrective models exhibiting global control are preferable to more complex models, which may introduce local errors.
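As a hedged illustration of the corrective-model idea (NeuralColor's internals are not described in this abstract), the sketch below fits a quadratic polynomial correction that maps the existing device model's predictions onto newly measured values by ordinary least squares; the data shapes, the synthetic drift, and all variable names are assumptions.

```python
# Sketch: quadratic polynomial correction layered on an existing device model.
# The "measured" drift here is synthetic; real use would take new patch readings.
import numpy as np

rng = np.random.default_rng(1)
predicted = rng.uniform(0, 1, size=(60, 3))                 # device-model output, 3 channels
measured = 0.95 * predicted + 0.02 + 0.05 * predicted ** 2  # synthetic paper-stock shift

def quadratic_features(x):
    """Feature matrix [1, x, x^2, cross terms] for 3-channel samples."""
    ones = np.ones((x.shape[0], 1))
    cross = np.stack([x[:, 0] * x[:, 1], x[:, 0] * x[:, 2], x[:, 1] * x[:, 2]], axis=1)
    return np.hstack([ones, x, x ** 2, cross])

A = quadratic_features(predicted)
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)       # one coefficient column per channel
corrected = A @ coeffs
print("residual RMS:", np.sqrt(np.mean((corrected - measured) ** 2)))
```

Because the correction has only ten coefficients per channel, it exerts the kind of global control the conclusion favors, rather than risking the local errors of a more flexible model.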
Efficient Local Refinement near Parametric Boundaries Using kd-Tree Data Structure and Algebraic Level Sets
In the analysis of problems with parametric spline boundaries that are immersed or inserted into an underlying domain, the discretization of the underlying domain usually does not conform to the inserted boundaries. While a fixed underlying discretization is of great convenience as the immersed boundaries evolve, the field approximations near the inserted boundaries require refinement in the underlying domain, as do the quadrature cells. In this paper, a kd-tree data structure together with a sign-based and/or distance-based refinement strategy is proposed for local refinement near the inserted boundaries as well as for adaptive quadrature near the boundaries. The developed algorithms construct and use implicit forms of parametric Non-Uniform Rational B-Spline (NURBS) surfaces to estimate, algebraically and non-iteratively, both the distance and the sign relative to an inserted boundary. The kd-tree local refinement is demonstrated to produce fewer sub-cells for the same solution accuracy than classical quad/oct-tree subdivision. Consistent with the kd-tree data structure, we describe a new a priori refinement algorithm based on the signed and unsigned distance from the inserted boundary. We first demonstrate the local refinement strategy coupled with the kd-tree data structure by constructing Truncated Hierarchical B-spline (THB-spline) “meshes”. We then demonstrate the accuracy and efficiency of the developed local refinement strategy through adaptive quadrature near NURBS boundaries inserted within volumetric three-dimensional NURBS discretizations.
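A minimal sketch of sign-driven kd-tree refinement follows; a circle's signed distance stands in for the paper's algebraic NURBS level sets, and the corner-sign test, depth limit, and domain are assumptions.

```python
# Sketch: kd-tree refinement of cells straddling an implicit boundary.
# A circle's signed distance stands in for an algebraic NURBS level set.
import numpy as np

def signed_distance(p):
    return np.hypot(p[0] - 0.5, p[1] - 0.5) - 0.3   # toy boundary: circle, r = 0.3

def refine(lo, hi, depth=0, max_depth=8, leaves=None):
    """Split a cell on alternating axes while its corners disagree in sign."""
    if leaves is None:
        leaves = []
    corners = [(x, y) for x in (lo[0], hi[0]) for y in (lo[1], hi[1])]
    signs = {signed_distance(c) > 0 for c in corners}
    if len(signs) == 1 or depth == max_depth:       # uniform sign: no boundary detected
        leaves.append((lo, hi))
        return leaves
    axis = depth % 2                                # kd-tree: alternate the split axis
    mid = 0.5 * (lo[axis] + hi[axis])
    lo2, hi2 = list(lo), list(hi)
    lo2[axis] = hi2[axis] = mid
    refine(lo, tuple(hi2), depth + 1, max_depth, leaves)
    refine(tuple(lo2), hi, depth + 1, max_depth, leaves)
    return leaves

cells = refine((0.0, 0.0), (1.0, 1.0))
print(len(cells), "leaf cells after sign-based refinement")
```

Note that a corner-sign test alone can miss a boundary lying entirely inside a cell; the paper's unsigned-distance criterion addresses exactly that case.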