
    Numerical methods in Nonlinear Analysis of shell structures

    The design of shell and spatial structures represents an important challenge even with the use of modern computer technology. If we concentrate on concrete shell structures, many problems must be faced, such as the conceptual and structural disposition, optimal shape design, analysis, construction methods, details, etc., and all of these problems are interconnected. As an example, shape optimization requires the use of several disciplines, such as structural analysis, sensitivity analysis, optimization strategies and geometrical design concepts. Similar comments apply to other space structures, such as steel trusses of single or double curvature and tension structures. Regarding the analysis, the Finite Element Method appears to be the most widespread and versatile technique used in practice. Several issues arise in the application of this method. First, either a pertinent shell theory must be derived or, alternatively, the degenerated 3-D solid approach chosen. According to this choice, a suitable FE model has to be adopted, i.e. a displacement, stress or mixed formulated element. The good behavior of shell structures under dead loads, which are carried to the supports mainly by compressive stresses, is impaired by the high imperfection sensitivity these structures usually exhibit. This effect is particularly important if large deformations and material nonlinearities of the shell interact unfavorably, as can be the case for thin reinforced shells. In this respect, the study of the stability of the shell represents a compulsory step in the analysis. There are therefore several very active fields of research, such as the different descriptions of consistent nonlinear shell models given by Simo, Fox and Rifai, Mantzenmiller, and Büchter and Ramm, among others; the consistent formulation of efficient tangent stiffness, as presented by Ortiz and by Schweizerhof and Wriggers, with application to concrete shells exhibiting creep behavior given by Scordelis and coworkers; and, finally, the development of the numerical techniques needed to trace the nonlinear response of the structure. The objective of this paper is this last research aspect, i.e. the presentation of a state of the art of the existing solution techniques for nonlinear analysis of structures. The following excellent reviews on this subject are mainly used in this presentation.
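    The final topic, tracing the nonlinear response, is easy to picture with a toy example. The sketch below is my own illustration, not taken from the paper: it applies the basic incremental-iterative scheme, where the external load is applied in steps and the out-of-balance force at each level is driven to zero by Newton corrections using a consistent tangent. A one-degree-of-freedom "structure" with internal force N(u) = ku + cu^3 stands in for the assembled finite element equations.

```python
# Minimal incremental-iterative (load-stepping Newton) path-following sketch.
# Illustrative only: a one-DOF softening model replaces the FE assembly.
import numpy as np

def internal_force(u, k=1.0, c=-0.15):
    return k * u + c * u**3           # softening "structure" N(u)

def tangent_stiffness(u, k=1.0, c=-0.15):
    return k + 3.0 * c * u**2         # consistent tangent dN/du

def solve_path(f_max=0.8, steps=8, tol=1e-10, max_iter=25):
    u, path = 0.0, []
    for f_ext in np.linspace(f_max / steps, f_max, steps):
        for _ in range(max_iter):                 # Newton corrections
            r = f_ext - internal_force(u)         # out-of-balance force
            if abs(r) < tol:
                break
            u += r / tangent_stiffness(u)         # Newton update
        path.append((f_ext, u))
    return path

for f, u in solve_path():
    print(f"load {f:.3f} -> displacement {u:.6f}")
```

    The more advanced techniques such reviews survey (arc-length and other continuation methods) replace the fixed load increments with a constraint equation, which lets the solver pass limit points where the tangent stiffness becomes singular.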

    Computational issues in process optimisation using historical data.

    This thesis presents a new generic approach for improving the computational efficiency of neural-network training algorithms and investigates the applicability of its 'learning from examples' feature to improving the performance of a current intelligent diagnostic system. The contribution of this thesis is summarised in the following two points. First, it is shown for the first time in the literature that significant improvements in the computational efficiency of neural-network algorithms can be achieved using the proposed methodology, which is based on adaptive-gain variation. Second, the capabilities of the current Knowledge Hyper-surface method (Meghana R. Ransing, 2002) are enhanced to overcome its existing limitations in modelling an exponential increase in the shape of the hyper-surface.

    Neural-network techniques, particularly back-propagation algorithms, have been widely used as a tool for discovering a mapping function between a known set of input and output examples. Neural networks learn from the known example set by adjusting their internal parameters, referred to as weights, using an optimisation procedure based on the least-squares-fit principle. The optimisation procedure normally involves thousands of iterations to converge to an acceptable solution; hence, improving the computational efficiency of a neural-network algorithm is an active area of research. Various options for improving the computational efficiency of neural networks are reviewed in this thesis. The existing literature shows that varying the gain parameter improves the learning efficiency of the gradient-descent method, and previous researchers attributed this improvement to a better learning rate. It is discovered in this thesis that the gain variation has no influence on the learning rate; rather, it influences the search direction. This insight made it possible to develop a novel approach that modifies the gradient-search direction by introducing adaptive-gain variation. The proposed method is robust, can easily be implemented in all commonly used gradient-based optimisation algorithms, and is shown to significantly improve computational efficiency compared with existing neural-network training algorithms. Computer simulations on a number of benchmark problems are used throughout to illustrate the proposed improvement.

    In a foundry, a large amount of data is generated every time a casting is poured. With the increasing availability of computing tools and power, there is a need for an efficient, intelligent diagnostic tool that can learn from this historical data to gain further insight into cause-and-effect relationships. In this study, the performance of the current Knowledge Hyper-surface method was reviewed and its mathematical formulation analysed to identify its limitations. An enhancement is proposed by introducing mid-points into the existing shape formulation. It is shown that the mid-points' shape function can successfully constrain the shape of the decision hyper-surface to become more realistic, with acceptable results in a multi-dimensional case. This is a novel and original approach of direct relevance to the foundry industry.
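    The central observation, that a per-parameter gain reshapes the search direction rather than the learning rate, can be illustrated with a small sketch. The code below is not the thesis algorithm: it uses a classic delta-bar-delta-style sign rule on an invented two-parameter quadratic, purely to show that elementwise gains turn the step -g into -Gg, a genuinely different direction.

```python
# Per-parameter adaptive gain in gradient descent (delta-bar-delta-style
# sign rule, used here only to demonstrate the direction-changing effect).
import numpy as np

def grad(w):
    # gradient of an ill-conditioned quadratic f(w) = 0.5*(100*w0^2 + w1^2)
    return np.array([100.0 * w[0], w[1]])

w = np.array([1.0, 1.0])
gain = np.ones(2)            # one adaptive gain per parameter
prev_g = grad(w)
lr = 0.004

for _ in range(200):
    g = grad(w)
    # grow the gain where the gradient keeps its sign, shrink it on a flip
    gain = np.where(g * prev_g > 0, gain * 1.05, gain * 0.7)
    w = w - lr * gain * g    # elementwise gain: new direction, not just length
    prev_g = g

print("final w:", w)         # approaches the minimum at the origin
```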

    NLTGCR: A class of Nonlinear Acceleration Procedures based on Conjugate Residuals

    This paper develops a new class of nonlinear acceleration algorithms based on extending conjugate residual-type procedures from linear to nonlinear equations. The main algorithm has strong similarities with both Anderson acceleration and inexact Newton methods, depending on which variant is implemented. We prove theoretically, and verify experimentally on a variety of problems ranging from simulation experiments to deep learning applications, that our method is a powerful accelerated iterative algorithm.
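    As a rough illustration of the inexact-Newton variant mentioned above, the sketch below is schematic and written for this summary rather than taken from the paper: it solves an invented two-equation system by approximately solving each Newton system J(x)d = -F(x) with a few truncated GCR iterations, using finite differences for the Jacobian-vector products.

```python
# Schematic nonlinear solver: outer Newton steps, inner truncated GCR solves
# with finite-difference Jacobian-vector products. Not the paper's NLTGCR.
import numpy as np

def F(x):                        # toy nonlinear system F(x) = 0
    return np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                     x[1] + 0.5 * np.cos(x[0])])

def jvp(x, v, eps=1e-7):         # finite-difference approximation of J(x) @ v
    return (F(x + eps * v) - F(x)) / eps

def solve(x, inner=2, outer=30, tol=1e-10):
    for _ in range(outer):
        r = -F(x)
        if np.linalg.norm(r) < tol:
            break
        d, rl = np.zeros_like(x), r.copy()   # Newton correction, linear residual
        P, AP = [], []
        for _ in range(inner):               # truncated GCR on J d = r
            p, Ap = rl.copy(), jvp(x, rl)
            for pj, Apj in zip(P, AP):       # orthonormalize Ap against earlier Ap's
                beta = Ap @ Apj
                p, Ap = p - beta * pj, Ap - beta * Apj
            nrm = np.linalg.norm(Ap)
            if nrm < 1e-14:
                break
            p, Ap = p / nrm, Ap / nrm
            P.append(p); AP.append(Ap)
            alpha = rl @ Ap                  # residual-minimizing step length
            d, rl = d + alpha * p, rl - alpha * Ap
        x = x + d
    return x

print(solve(np.zeros(2)))
```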

    Semi-automatic parameterisation of dynamic models using plant data

    The aim of this thesis was to develop a new methodology for estimating the parameters of NAPCON ProsDS dynamic simulator models so that they better represent data containing several operating points. Before this thesis, no methodology existed for combining operating-point identification with parameter estimation for NAPCON ProsDS simulator models. The methodology was designed by assessing and selecting suitable methods for operating-space partitioning, parameter estimation and parameter scheduling. Previously implemented clustering algorithms were utilised for the operating-space partition. Parameter estimation was implemented as a new tool in the NAPCON ProsDS dynamic simulator, applying iterative parameter-estimation methods. Finally, lookup tables were applied to tune the model parameters according to the operating state. The methodology was tested by tuning a heat exchanger model to several operating points based on plant process data. The results indicated that the developed methodology was able to tune the simulator model to better represent several operating states; however, more testing with different models is required to verify the general applicability of the methodology.
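    The overall workflow (partition the operating space, estimate parameters per operating point, schedule them from a lookup table) can be sketched generically. NAPCON ProsDS is proprietary, so the data, the one-parameter heat-duty model, and the method choices below (k-means, nearest-centre lookup) are invented stand-ins, assuming NumPy and SciPy are available.

```python
# Generic sketch: cluster plant data into operating points, fit one model
# parameter per cluster, and serve it from a lookup table keyed by the state.
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
# synthetic "plant data": flow rate and measured heat duty at two operating points
flow = np.concatenate([rng.normal(10, 0.5, 100), rng.normal(30, 1.0, 100)])
duty = np.where(flow < 20, 1.8, 2.3) * flow + rng.normal(0, 0.5, flow.size)

labels = kmeans2(flow.reshape(-1, 1), 2, seed=1)[1]   # operating-space partition

lookup = {}                    # cluster centre -> estimated model parameter
for c in range(2):
    f, d = flow[labels == c], duty[labels == c]
    # iterative estimation of a heat-transfer-like coefficient U in duty = U*flow
    fit = least_squares(lambda U: U[0] * f - d, x0=[1.0])
    lookup[f.mean()] = fit.x[0]

def scheduled_parameter(current_flow):
    # choose the parameter of the nearest operating point (simplest lookup table)
    key = min(lookup, key=lambda centre: abs(centre - current_flow))
    return lookup[key]

print(lookup, scheduled_parameter(12.0))
```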

    Agent Based Individual Traffic Guidance


    Development of an Optimal Replenishment Policy for Human Capital Inventory

    A unique approach is developed for evaluating Human Capital (workforce) requirements. With this approach, new ways of measuring personnel availability are proposed to ensure that an organization remains ready to provide timely, relevant, and accurate products and services in support of its strategic objectives over its planning horizon. The analysis and methodology were developed as an alternative to existing approaches for determining appropriate hiring and attrition rates and for maintaining personnel levels adequate to support existing and future missions. The contribution of this research is a prescribed method by which a strategic analyst can incorporate a personnel and cost simulation model within the framework of Human Resources Human Capital forecasting, in order to project personnel requirements and evaluate least-cost workforce sustainment through time. This allows personnel managers to evaluate multiple resource strategies, present and future, and to maintain near-“perfect” hiring and attrition policies in support of future Human Capital assets.
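    As a toy illustration of the kind of personnel-flow simulation the abstract describes (all numbers and the policy form are invented), the sketch below simulates headcount under a fixed attrition rate and compares candidate hiring policies by their cumulative staffing shortfall over the planning horizon.

```python
# Toy workforce-inventory simulation: headcount evolves under attrition and
# a candidate hiring policy; shortfall measures missed staffing requirements.
def simulate(headcount, hires_per_period, attrition_rate, required, periods):
    shortfalls = []
    for _ in range(periods):
        headcount = headcount - round(headcount * attrition_rate) + hires_per_period
        shortfalls.append(max(0, required - headcount))
    return headcount, sum(shortfalls)

for hires in (8, 10, 12):        # compare candidate replenishment policies
    final, shortfall = simulate(100, hires, 0.10, required=100, periods=20)
    print(f"hires/period={hires}: final={final}, cumulative shortfall={shortfall}")
```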

    3D Subject-Atlas Image Registration for Micro-Computed Tomography Based Characterization of Drug Delivery in the Murine Cochlea

    A wide variety of hearing problems can potentially be treated with local drug delivery systems capable of delivering drugs directly to the cochlea over extended periods of time. Developing and testing such systems requires accurate quantification of drug concentration over time. A variety of techniques have been proposed for both direct and indirect measurement of drug pharmacokinetics; direct techniques are invasive, whereas many indirect techniques are imprecise because they rely on assumptions about the relationship between physiological response and drug concentration. One indirect technique, however, is capable of quantifying drug pharmacokinetics very precisely: micro-computed tomography (micro-CT) provides a non-invasive way to measure the concentration of a contrast agent in the cochlea over time. In this thesis, we propose a systematic approach for analyzing micro-CT images to measure concentrations of the contrast agent ioversol in the mouse cochlea. This approach requires segmenting and classifying the intra-cochlear structures in micro-CT images, which is done via 3D atlas-subject registration to a published atlas of the mouse cochlea. Labels of each intra-cochlear structure in the atlas are propagated through the registration transformation to the corresponding structures in the micro-CT images. Pixel intensities are extracted from three key intra-cochlear structures, the scala tympani (ST), scala vestibuli (SV), and scala media (SM), and these intensities are mapped to concentrations using a linear model between solution concentration and image intensity, determined in a prior calibration step. To localize the analysis, the ST, SV, and SM are divided into several discrete components, and the concentration in each component is estimated using a weighted average, with weights determined by solving a nonhomogeneous Poisson equation with Dirichlet boundary conditions on the component boundaries. We illustrate the entire system on a series of micro-CT images of an anesthetized mouse, including a baseline scan (with no contrast agent) and a series of scans after injection of ioversol into the cochlea.
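    The calibration and mapping step lends itself to a short sketch. The values below are invented for illustration: a linear model is fitted to phantom scans of known ioversol concentrations, then used to convert a mean intensity extracted from a registered structure into a concentration estimate.

```python
# Linear intensity-to-concentration calibration (all values invented).
import numpy as np

# phantom scans: known concentrations (mg/ml) vs. mean micro-CT intensity
conc  = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
inten = np.array([12.0, 96.0, 181.0, 352.0, 690.0])

slope, intercept = np.polyfit(inten, conc, deg=1)   # concentration = a*I + b

def intensity_to_concentration(mean_intensity):
    return slope * mean_intensity + intercept

# e.g. mean intensity extracted from the scala tympani label after registration
print(intensity_to_concentration(250.0))
```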

    Acta physica et chemica Tomus XXXV. Fasciculus 1-4.
