
    A recommendation system for CAD assembly modeling based on graph neural networks

    In computer-aided design (CAD), software tools support design engineers during the modeling of assemblies, i.e., products that consist of multiple components. Selecting the right components is a cumbersome task for design engineers, as they have to pick from a large number of possibilities. Therefore, we propose to analyze a data set of past assemblies composed of components from the same component catalog, represented as connected, undirected graphs of components, in order to suggest the next needed component. In terms of graph machine learning, we formulate this as a graph classification problem where each class corresponds to a component ID from a catalog, and the models are trained to predict the next required component. In addition to pretraining of component embeddings, we recursively decompose the graphs to obtain data instances in a self-supervised fashion without imposing any node insertion order. Our results indicate that models based on graph convolutional networks and graph attention networks achieve high predictive performance, reducing the cognitive load of choosing among catalogs of 2,000 and 3,000 components by recommending the ten most likely components with 82-92% accuracy, depending on the chosen catalog.
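    As a rough illustration of the self-supervised instance generation described above, the sketch below repeatedly removes a component from an assembly graph and uses the removed component's catalog ID as the prediction target; a graph classifier is then trained on the resulting (subgraph, component ID) pairs. The function names and adjacency representation are illustrative assumptions, not the paper's actual code.

```python
import random

def decompose(graph, labels, min_size=1):
    """Recursively decompose an assembly into training instances.

    graph:  dict node -> set of neighbouring nodes (connected, undirected)
    labels: dict node -> catalog component ID
    Returns a list of (subgraph, target_component_id) pairs.
    """
    instances = []
    nodes = set(graph)
    while len(nodes) > min_size:
        # Only remove nodes whose deletion keeps the graph connected, so
        # every instance is still a valid partial assembly. Picking at
        # random avoids imposing a fixed node insertion order.
        candidates = [n for n in nodes if is_connected(nodes - {n}, graph)]
        removed = random.choice(candidates)
        nodes.discard(removed)
        subgraph = {n: graph[n] & nodes for n in nodes}
        instances.append((subgraph, labels[removed]))
    return instances

def is_connected(nodes, graph):
    """Breadth-first check that `nodes` induce a connected subgraph."""
    if not nodes:
        return True
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        n = frontier.pop()
        if n not in seen:
            seen.add(n)
            frontier.extend((graph[n] & nodes) - seen)
    return seen == nodes
```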

    A Generic Model Driven Methodology for Extending Component Models

    Software components have interesting properties for the development of scientific applications, such as easing code reuse and code coupling. In classical component models, however, component assemblies are still tightly coupled to the execution resources they target. Dedicated concepts to abstract assemblies from resources and to enable high-performance component implementations have thus been proposed. These concepts have not achieved widespread use, mainly because of the lack of a suitable approach to extending component models. Existing approaches -- based on ad-hoc modifications of component run-times or compilation chains -- are complex, difficult to port from one implementation to another, and prevent mixing distinct extensions in a single model. An interesting trend toward separating application logic from the underlying execution resources exists; it is based on meta-modeling and on the manipulation of the resulting models. This report studies how a model-driven approach could be applied to implement abstract concepts in component models. The proposed approach is based on a two-step transformation from an abstract model to a concrete one. In the first step, all abstract concepts of the source model are rewritten using the limited set of abstract concepts of an intermediate model. In the second step, resources are taken into account to transform these intermediate concepts into concrete ones. A prototype implementation is described to evaluate the feasibility of this approach.
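    As a hypothetical illustration of the two-step transformation, the sketch below first rewrites an abstract "parallel" concept into a small set of intermediate concepts without consulting resources, and only then binds the intermediate concepts to concrete ones using a resource description. All concept and class names are invented placeholders, not the report's metamodel.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    concept: str   # e.g. "parallel", "replicated", "plain"
    params: dict

def to_intermediate(model):
    """Step 1: resource-independent rewriting into intermediate concepts."""
    out = []
    for c in model:
        if c.concept == "parallel":
            # A parallel component becomes a replicated plain component
            # plus an explicit data-redistribution connector.
            out.append(Component(c.name, "replicated", c.params))
            out.append(Component(c.name + "_redist", "connector", {}))
        else:
            out.append(c)
    return out

def to_concrete(model, resources):
    """Step 2: bind intermediate concepts to the execution resources."""
    out = []
    for c in model:
        if c.concept == "replicated":
            n = resources["cores"]  # replication degree comes from resources
            out.extend(Component(f"{c.name}[{i}]", "plain", c.params)
                       for i in range(n))
        else:
            out.append(c)
    return out

concrete = to_concrete(to_intermediate([Component("solver", "parallel", {})]),
                       {"cores": 4})
```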

    Experimental implementation of a controlled SPWM inverter based on the harmony search algorithm

    An optimal PI controller tuned with the harmony search (HS) optimization algorithm is developed in this research for a single-phase bipolar SPWM inverter. The aim of the algorithm is to avoid the conventional trial-and-error procedure usually applied to find the PI coefficients that yield the desired performance. The control algorithm of the inverter prototype is implemented experimentally on an eZdsp F28335 board, using bipolar sinusoidal pulse width modulation (SPWM) to regulate the output voltage under different load conditions. The overall inverter design and the control algorithm are modeled in the MATLAB environment (Simulink/m-file code). The mean absolute error (MAE) is used as the objective function for the HS algorithm in finding adaptive values of the Kp and Ki parameters that minimize the error of the inverter output voltage. Based on the output results, the proposed HS-based PI voltage controller (HS-PI) improves the inverter output performance in terms of voltage amplitude, robustness, and convergence speed compared to a PSO-based PI controller (PSO-PI). That is, the proposed controller provides good dynamic response in both the transient and steady-state cases. Finally, experimental results from the inverter prototype are used to validate the simulation results.
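    The following is a minimal, self-contained sketch of harmony search tuning the PI gains against an MAE objective, in the spirit of the approach above. The `mae` function is a dummy stand-in: in the paper, the objective evaluates the simulated inverter's output-voltage error, which cannot be reproduced here.

```python
import random

def mae(kp, ki):
    # Placeholder objective; the real one would run the inverter model and
    # return the mean absolute error of the output voltage.
    return abs(kp - 1.2) + abs(ki - 35.0)

def harmony_search(bounds, iters=2000, hms=10, hmcr=0.9, par=0.3, bw=0.05):
    """bounds: list of (low, high) per decision variable, here (Kp, Ki)."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]                      # harmony memory
    scores = [mae(*h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                  # recall from memory
                x = random.choice(memory)[d]
                if random.random() < par:               # pitch adjustment
                    x += random.uniform(-bw, bw) * (hi - lo)
            else:                                       # random improvisation
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = mae(*new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                           # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

(kp, ki), err = harmony_search([(0.0, 10.0), (0.0, 200.0)])
```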

    Improving Loss Estimation for Woodframe Buildings. Volume 2: Appendices

    This report documents Tasks 4.1 and 4.5 of the CUREE-Caltech Woodframe Project. It presents a theoretical and empirical methodology for creating probabilistic relationships between seismic shaking severity and physical damage and loss for buildings in general, and for woodframe buildings in particular. The methodology, called assembly-based vulnerability (ABV), is illustrated for 19 specific woodframe buildings of varying age, size, configuration, quality of construction, and retrofit and redesign condition. The study employs variations on four basic floorplans, called index buildings: a small house, a large house, a townhouse, and an apartment building. The resulting seismic vulnerability functions give the probability distribution of repair cost as a function of instrumental ground-motion severity. These vulnerability functions are useful by themselves, and are also transformed into seismic fragility functions compatible with the HAZUS software. The methods and data employed here use well-accepted structural engineering techniques, laboratory test data and computer programs produced by Element 1 of the CUREE-Caltech Woodframe Project, other recently published research, and standard construction cost-estimating methods. While based on such well-established principles, this report represents a substantially new contribution to the field of earthquake loss estimation. Its methodology is notable in that it calculates detailed structural response using nonlinear time-history structural analysis, as opposed to the simplifying assumptions required by nonlinear pushover methods. It models physical damage at the level of individual building assemblies, such as individual windows and segments of wall, for which detailed laboratory testing is available, as opposed to two or three broad component categories that cannot be directly tested. And it explicitly models uncertainty in ground motion, structural response, component damageability, and contractor costs. Consequently, a very detailed, verifiable, probabilistic picture of physical performance and repair cost is produced, capable of informing a variety of decisions regarding seismic retrofit, code development, code enforcement, performance-based design for above-code applications, and insurance practices.
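    A highly simplified Monte Carlo sketch of the ABV idea follows: sample structural response at a given shaking intensity, map it through per-assembly capacity (fragility) distributions, and sum uncertain repair costs. All numbers and distributions are invented placeholders, not CUREE data, and the lognormal response stands in for a full nonlinear time-history analysis.

```python
import math
import random

# (name, count, median drift capacity, lognormal beta, unit repair cost)
ASSEMBLIES = [
    ("window",       20, 0.010, 0.4,  300.0),
    ("wall_segment", 12, 0.015, 0.5, 1200.0),
]

def repair_cost_distribution(drift_median, beta_response=0.4, n=10000):
    """Empirical distribution of repair cost at one ground-motion intensity."""
    costs = []
    for _ in range(n):
        # Lognormal structural response given the intensity.
        drift = drift_median * math.exp(random.gauss(0.0, beta_response))
        total = 0.0
        for _, count, cap_med, cap_beta, unit_cost in ASSEMBLIES:
            for _ in range(count):
                # Each assembly has its own uncertain damage capacity.
                capacity = cap_med * math.exp(random.gauss(0.0, cap_beta))
                if drift > capacity:
                    total += unit_cost
        # Uncertain contractor pricing, here +/-20% around the estimate.
        costs.append(total * random.uniform(0.8, 1.2))
    return costs
```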

    Kernel architecture for CAD/CAM in shipbuilding environments

    The capabilities of complex software products such as CAD/CAM systems are strongly supported by basic information technologies related to data management, visualization, communication, geometry modeling, and the development process itself. These basic information technologies are involved in a continuous evolution process, but over recent years this evolution has been dramatic. The main reason is that new hardware capabilities (including graphics cards) are available at very low cost, but the evolution of the prices of basic software has also been a contributing factor. To take advantage of these new features, existing CAD/CAM systems must undergo a complete and drastic redesign. This process is complicated but strategic for the future evolution of a system. There are several examples in the market of how a bad decision has led to a cul-de-sac (both technically and commercially). This paper describes what the authors consider to be the basic architectural components of a kernel for a CAD/CAM system oriented to shipbuilding. The proposed solution is a combination of in-house developed frameworks together with commercial products that are accepted as standard components. The proportion of in-house frameworks within this combination of products is a key factor, especially when considering CAD/CAM systems oriented to shipbuilding. General-purpose CAD/CAM systems are mainly oriented to the mechanical CAD market, and for this reason several basic products exist devoted to geometry modelling in this context. But these basic products are not well suited to the very specific geometry modelling requirements of a CAD/CAM system oriented to shipbuilding. The complexity of the ship model, the different model requirements through its short and changing life cycle, and the many different disciplines involved in the process are reasons for this inadequacy. Apart from these basic frameworks, specific shipbuilding frameworks are also required. This second layer is built over the basic technology components mentioned above. This paper describes in detail the technological frameworks that have been used to develop the latest FORAN version.

    Void Formation Study of Flip Chip in Package Using No-Flow Underfill

    DOI: 10.1109/TEPM.2008.2002951

    The advanced flip chip in package (FCIP) process using no-flow underfill material for high-I/O-density and fine-pitch interconnect applications presents challenges for an assembly process that must achieve high electrical interconnect yield and high reliability performance. With respect to reliability, voids formed in the underfill between solder bumps or inside the solder bumps during the no-flow underfill assembly process of FCIP devices are typically considered one of the critical concerns affecting assembly yield and reliability performance. In this paper, the plausible causes of underfill void formation in FCIP using no-flow underfill were investigated through systematic experimentation with different types of test vehicles. In particular, the effects of process conditions, material properties, and the chemical reaction between the solder bumps and no-flow underfill materials on void formation behavior were investigated in advanced FCIP assemblies. The chemical reaction between solder and underfill during the solder wetting and underfill cure process was found to be one of the most significant factors for void formation in high-I/O, fine-pitch FCIP assembly using no-flow underfill materials.

    Pore-scale Modeling of Viscous Flow and Induced Forces in Dense Sphere Packings

    We propose a method for effectively upscaling incompressible viscous flow in large random polydisperse sphere packings; the emphasis of the method is on determining the forces applied to the solid particles by the fluid. Pore bodies and their connections are defined locally through a regular Delaunay triangulation of the packings. Viscous flow equations are upscaled at the pore level and approximated with a finite volume numerical scheme. We compare numerical simulations of the proposed method to detailed finite element (FEM) simulations of the Stokes equations for assemblies of 8 to 200 spheres. Good agreement is found both in terms of the forces exerted on the solid particles and the effective permeability coefficients.
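    The sketch below illustrates the pore-network solution step under simple assumptions: one pressure unknown per pore, one hydraulic conductance per throat, and mass conservation assembled into a linear system; the forces on the particles would then be integrated from the resulting pressure field. Conductance values and boundary handling are placeholders rather than the paper's geometry-derived quantities.

```python
import numpy as np

def solve_pore_pressures(throats, g, n_pores, inlet, outlet,
                         p_inlet=1.0, p_outlet=0.0):
    """throats: list of (i, j) pore index pairs; g: conductance per throat.
    inlet/outlet: iterables of pore indices with fixed boundary pressures."""
    A = np.zeros((n_pores, n_pores))
    b = np.zeros(n_pores)
    for (i, j), gij in zip(throats, g):   # assemble mass-conservation rows
        A[i, i] += gij; A[j, j] += gij
        A[i, j] -= gij; A[j, i] -= gij
    for i in inlet:                       # Dirichlet boundary conditions
        A[i, :] = 0.0; A[i, i] = 1.0; b[i] = p_inlet
    for i in outlet:
        A[i, :] = 0.0; A[i, i] = 1.0; b[i] = p_outlet
    p = np.linalg.solve(A, b)
    fluxes = [gij * (p[i] - p[j]) for (i, j), gij in zip(throats, g)]
    return p, fluxes

# Tiny example: four pores in a chain, inlet at pore 0, outlet at pore 3.
p, q = solve_pore_pressures([(0, 1), (1, 2), (2, 3)], [1.0, 1.0, 1.0],
                            4, inlet=[0], outlet=[3])
```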

    Sensitivity of Building Loss Estimates to Major Uncertain Variables

    This paper examines the question of which sources of uncertainty most strongly affect the repair cost of a building in a future earthquake. Uncertainties examined here include spectral acceleration, ground-motion details, mass, damping, structural force-deformation behavior, building-component fragility, contractor costs, and the contractor's overhead and profit. We measure the variation (or swing) of the repair cost when every basic input variable except one is taken at its median value and the remaining variable is taken at its 10th and 90th percentiles. We perform this study using a 1960s high-rise nonductile reinforced-concrete moment-frame building. Repair costs are estimated using the assembly-based vulnerability (ABV) method. We find that the top three contributors to uncertainty are assembly capacity (the structural response at which a component exceeds some damage state), shaking intensity (measured here in terms of damped elastic spectral acceleration, Sa), and the details of the ground motion with a given Sa.
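    The swing measurement lends itself to a compact sketch: hold every input at its median, perturb one variable at a time to its 10th and 90th percentiles, and rank the resulting cost swings as in a tornado diagram. The `repair_cost` callable below is a hypothetical stand-in for the full ABV analysis.

```python
def tornado_swings(variables, repair_cost):
    """variables: dict name -> (p10, median, p90).
    repair_cost: callable taking a dict of input values.
    Returns (name, swing) pairs sorted largest first."""
    medians = {name: v[1] for name, v in variables.items()}
    swings = {}
    for name, (p10, _, p90) in variables.items():
        lo = repair_cost({**medians, name: p10})   # all else held at median
        hi = repair_cost({**medians, name: p90})
        swings[name] = abs(hi - lo)
    return sorted(swings.items(), key=lambda kv: -kv[1])
```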