30 research outputs found

    Zuverlässige numerische Berechnungen mit dem Spigot-Ansatz (Reliable numerical computations with the spigot approach)

    The spigot approach is an elegant way to compute special numerical values reliably, efficiently, and to arbitrary accuracy. Its strength is its efficiency, its total correctness, and its mathematically exact guarantee of a desired absolute accuracy; its weakness is perhaps its restricted applicability. Spigot computations for e and pi exist in the literature; the computation of roots and logarithms is among the main results of this dissertation. In combination with the series-acceleration methods of Gosper and of Zeilberger and Wilf, its use for the numerical summation of hypergeometric series is very promising. An interesting open question is the computation of the Feigenbaum constant with this approach. 'Spigot' means 'successive extraction of portions of value': the portions of value are extracted as if they were pumped through a spigot. It is particularly interesting that in certain cases each portion of value can be identified with a digit of the encoding of the result. The spigot approach thus stands in contrast to conventional iterative methods: in one step of the spigot approach a portion of value is extracted and the overall result is the sum of these portions, whereas a step of an iterative method computes a better overall result from the result of the previous step. The basic scheme of a spigot computation is as follows: first, a well-converging series with rational terms is derived for the value to be computed, using symbolic-algebraic methods; then a partial sum is chosen for the desired accuracy; finally, portions of value are extracted iteratively from the partial sum. This extraction is performed by the spigot algorithm, which goes back to Sale, requires only integer arithmetic, and can be understood as a generalised form of radix conversion in that the partial sum is interpreted as an encoding of the value in an inhomogeneous (mixed-radix) base. The spigot idea also applies to transforming a series into a better-converging one: portions of value are extracted successively from the original series so that it is transformed, value-preservingly, into the series of extracted portions. This transformation is carried out with the series-acceleration methods of Gosper and Wilf. The dissertation essentially comprises the formalisation of the spigot algorithm and of Gosper's series transformation, a systematic presentation of the approaches, methods and techniques for the derivation and acceleration of series (derivation of series via characteristic functional equations; acceleration methods of Euler, Kummer, Markov, Gosper, Zeilberger and Wilf), and the methods for computing roots and logarithms with the spigot approach. It is interesting to see how the basic ideas of the spigot algorithm, of root computation and of logarithm computation can each be expressed essentially by a single equation, and how the various series-acceleration methods can be traced back to a few simple basic ideas. The examples of proofs of total correctness (for the iterative computation of roots) may also be of strong interest.
Comparing results with those of the spigot approach is possibly the best way to verify the reliability of other methods for computing natural logarithms, because, unlike root computation, verification by inverse computation is not applicable here.
Keywords: spigot, total correctness, acceleration of series, computation of roots, computation of logarithms
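As an illustration of the 'extraction of portions of value' described above, here is a minimal sketch (in Python, not the dissertation's own formalisation) of a spigot computation of the decimal digits of e using only integer arithmetic. The tail e - 2 = sum_{k>=2} 1/k! is stored as a mixed-radix fraction whose position i has radix i; each pass multiplies the fraction by 10 and the integer part that carries out is the next digit. The function name and the guard-term heuristic are illustrative assumptions.

```python
import math

def e_digits(n):
    """First n decimal digits of e after the decimal point, computed with a
    spigot-style extraction that uses only integer arithmetic (sketch)."""
    # Keep enough terms of e - 2 = sum_{k>=2} 1/k! that the truncated tail
    # cannot disturb the requested digits (simple guard heuristic).
    m = 2
    while math.factorial(m) < 10 ** (n + 2):
        m += 1
    a = [1] * (m + 1)               # mixed-radix digits: value = sum_{i=2..m} a[i] / i!
    digits = []
    for _ in range(n):
        carry = 0
        for i in range(m, 1, -1):   # multiply by 10, renormalise right to left
            x = 10 * a[i] + carry
            a[i] = x % i            # position i has radix i
            carry = x // i          # overflow moves one position to the left
        digits.append(carry)        # the integer part that "drips" out of the spigot
    return digits

print(e_digits(10))   # [7, 1, 8, 2, 8, 1, 8, 2, 8, 4]
```

The renormalisation step is exactly a radix conversion: the partial sum is read as an encoding of the value in the inhomogeneous base (1/2!, 1/3!, ...) and converted, digit by digit, into base 10.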

    Tolerance analysis and synthesis of assemblies subject to loading with process integration and design optimization tools

    Manufacturing variation results in uncertainty in the functionality and performance of mechanical assemblies, and managing this uncertainty is of paramount importance for manufacturing efficiency. Methods for managing uncertainty and variation in the design of mechanical assemblies, such as tolerance analysis and synthesis, have been the subject of extensive research and development. However, due to the challenges involved, limitations in the capability of these methods remain. These limitations are associated with the following problems:
    - Identification of Key Product Characteristics (KPCs) in mechanical assemblies (required for measuring functional performance) without imposing significant modelling demands.
    - Accommodation of the high computational cost of traditional statistical tolerance analysis in early design, where analysis budgets are limited.
    - Efficient identification of feasible regions and optimum performance within the large design spaces associated with early design stages.
    - The ability to comprehensively accommodate tolerance analysis problems in which assembly functionality depends on the effects of loading (such as compliance or multi-body dynamics). Current Computer Aided Tolerancing (CAT) is limited by: the ability to accommodate only specific loading effects; reliance on custom simulation codes with limited practical implementation in accessible software tools; and the need for additional expertise in formulating specific assembly tolerance models and interpreting results.
    - Accommodation of the often impractically high computational cost of tolerance synthesis involving demanding assembly models (particularly assemblies under loading), which is associated with traditional statistical tolerancing Uncertainty Quantification (UQ) methods that rely on low-efficiency Monte Carlo (MC) sampling.
    This research addresses these limitations by developing novel methods for enhancing the engineering design of mechanical assemblies involving uncertainty or variation in design parameters, utilising the emerging design analysis and refinement capabilities of Process Integration and Design Optimization (PIDO) tools. The main contributions fall into three themes: design analysis and refinement accommodating uncertainty in early design; tolerancing of assemblies subject to loading; and efficient Uncertainty Quantification (UQ) in tolerance analysis and synthesis. The contributions within each theme are outlined below.
    Design analysis and refinement accommodating uncertainty in early design:
    - A PIDO-tool-based visualization method to aid designers in identifying assembly KPCs in early design stages. The method integrates CAD software functionality with the process integration, UQ, data logging and statistical analysis capabilities of PIDO tools to simulate manufacturing variation in an assembly and visualise assembly clearances, contacts or interferences. This visualization assists the designer in specifying critical assembly dimensions as KPCs.
    - A computationally efficient method for manufacturing sensitivity analysis of assemblies with linear-compliant elements. Reductions in computational cost are achieved by utilising linear-compliant assembly stiffness measures, reuse of CAD models created in early design stages, and PIDO-tool-based tolerance analysis. The associated increase in computational efficiency allows an estimate of sensitivity to manufacturing variation to be made earlier in the design process with low effort.
    - Refinement of concept design embodiments through PIDO-based DOE analysis and optimization. PIDO tools enable CAE tool integration and efficient reuse of models created in early design stages to rapidly identify feasible and optimal regions in the design space. A case study on the conceptual design of automotive seat kinematics is presented, in which an optimal design is identified and subsequently selected for commercialisation in the Tesla Motors Model S full-sized electric sedan.
    These contributions can be directly applied to improve the design of mechanical assemblies involving uncertainty or variation in design parameters in the early stages of design. The use of native CAD/E models developed as part of an established design modelling procedure imposes low additional modelling effort.
    Tolerancing of assemblies subject to loading:
    - A novel tolerance analysis platform which integrates CAD/E and statistical analysis tools using PIDO tool capabilities to facilitate tolerance analysis of assemblies subject to loading. The platform extends the capabilities of traditional CAT tools and methods by enabling tolerance analysis of assemblies whose functionality depends on the effects of loads, allowing an increased level of capability in estimating the effects of variation on functionality.
    - The interdisciplinary integration capabilities of the PIDO-based platform allow CAD/E models created as part of the standard design process to be used for tolerance analysis, reducing the need for additional modelling tools and expertise.
    - Application of the platform produced effective solutions to practical, industry-based tolerance analysis problems, including an automotive actuator mechanism assembly consisting of rigid and compliant components subject to external forces, and a rotary switch with a spring-loaded radial detent assembly in which functionality is defined by external forces and internal multi-body dynamics. In both case studies the platform was applied to specify nominal dimensions and the tolerances required to achieve the desired assembly yield. The computational platform offers an accessible tolerance analysis approach for assemblies subject to loading with low implementation demands.
    Efficient Uncertainty Quantification (UQ) in tolerance analysis and synthesis:
    - A novel approach for addressing the high computational cost of Monte Carlo (MC) sampling in statistical tolerance analysis and synthesis, using Polynomial Chaos Expansion (PCE) uncertainty quantification. Compared to MC sampling, PCE offers significantly higher efficiency. The feasibility of PCE-based UQ in tolerance synthesis is established through theoretical analysis of the PCE method identifying working principles, implementation requirements, advantages and limitations; identification of a preferred method for determining PCE expansion coefficients in tolerance analysis; and formulation of an approach for validating PCE statistical moment estimates.
    - PCE-based UQ is implemented in a PIDO-based tolerance synthesis platform for assemblies subject to loading. The resulting platform integrates highly efficient sparse-grid-based PCE UQ, parametric CAD/E models accommodating the effects of loading, cost-tolerance modelling, yield quantification with Process Capability Indices (PCI), and optimization of tolerance cost and yield with a multi-objective Genetic Algorithm (GA).
    - To demonstrate the capabilities of the platform, two industry-based case studies are used for validation: an automotive seat rail assembly consisting of compliant components subject to loading, and an automotive switch assembly in which functionality is defined by external forces and multi-body dynamics. In both case studies optimal tolerances were identified which satisfied the desired yield and tolerance cost objectives. The addition of PCE to the tolerance synthesis platform resulted in large reductions in computational cost without compromising accuracy compared to traditional MC methods, for which the required computational expense is impractically high. The resulting tolerance synthesis platform can therefore be applied to tolerance analysis and synthesis with significantly reduced computation time while maintaining accuracy.
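For context on the Monte Carlo baseline that the PCE approach replaces, the following is a minimal sketch of plain MC statistical tolerance analysis for a hypothetical linear gap stack-up with normally distributed dimensions. The function name, the ±3-sigma interpretation of the tolerances and the numbers are illustrative assumptions, not the thesis's loaded-assembly models.

```python
import numpy as np

def mc_stackup_yield(nominals, tols, limit, n_samples=100_000, seed=0):
    """Monte Carlo estimate of assembly yield for a linear tolerance stack-up.

    The assembly gap is taken as the sum of the component dimensions, each
    varying normally with sigma = tol / 3 (a common +/-3-sigma reading of the
    tolerance band). Yield is the fraction of sampled assemblies whose gap
    deviates from nominal by at most `limit`.
    """
    rng = np.random.default_rng(seed)
    nominals = np.asarray(nominals, dtype=float)
    sigmas = np.asarray(tols, dtype=float) / 3.0
    dims = rng.normal(nominals, sigmas, size=(n_samples, nominals.size))
    gap_deviation = dims.sum(axis=1) - nominals.sum()
    return float(np.mean(np.abs(gap_deviation) <= limit))

# Three hypothetical components with +/-0.1 mm tolerances; the gap (their sum)
# must stay within +/-0.25 mm of its nominal value.
print(mc_stackup_yield([10.0, 5.0, 2.5], [0.1, 0.1, 0.1], 0.25))
```

Each yield estimate here costs tens of thousands of assembly-model evaluations, which is what becomes impractical when every evaluation is a loaded CAD/E simulation and what motivates the far cheaper PCE surrogate.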

    Reconhecimento de padrões baseado em compressão: um exemplo de biometria utilizando ECG (Compression-based pattern recognition: an example of biometrics using ECG)

    The amount of data collected by the sensors and smart devices that people use in their daily lives has been increasing at higher rates than ever before. This makes it possible to use biomedical signals in several practical applications, with the aid of pattern recognition algorithms. In this thesis we investigate the use of compression-based methods to perform classification of one-dimensional signals. To test these methods, we use electrocardiographic (ECG) signals and the task of biometric identification as a testbed. First and foremost, we introduce the notion of Kolmogorov complexity and how it relates to compression methods. We then explain how these methods can be useful for pattern recognition by exploring different compression-based measures, namely the Normalized Relative Compression (NRC), a measure based on the relative similarity between strings. For this purpose, we present finite-context models and explain the theory behind a generalized version of those models, the extended-alphabet finite-context models (xaFCM), a novel contribution. Since the testbed application is based on ECG signals, we also explain what constitutes such a signal and the processing that must be applied before data compression, such as filtering and quantization. Finally, we explore the application of biometric identification using the ECG signal in more depth, performing tests regarding the acquisition of signals and benchmarking different proposals based on compression methods, namely non-fiducial ones. We also highlight the advantages of this alternative to machine learning methods, namely its low computational cost and the fact that it requires no feature extraction, which makes the approach easily transferable to different applications and signals.
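The thesis's measure is the Normalized Relative Compression computed with (extended-alphabet) finite-context models; as a rough illustration of the underlying compression-based similarity idea only, the sketch below uses the related Normalized Compression Distance with a general-purpose compressor (zlib). The byte strings are toy placeholders, not quantized ECG data.

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed string (a computable stand-in
    for the uncomputable Kolmogorov complexity)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two strings; smaller means
    more similar. Illustrative only; the thesis uses NRC with finite-context
    models rather than this measure."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy "signals" encoded as byte strings.
a = bytes([1, 2, 3, 4] * 100)
b = bytes([1, 2, 3, 4] * 90 + [9, 9, 9, 9] * 10)
c = bytes([7, 5, 7, 5] * 100)
print(ncd(a, b))   # relatively small: similar structure
print(ncd(a, c))   # larger: different structure
```

In a biometric setting one would compare a query signal against a reference model per subject and pick the subject with the smallest distance (or, for NRC, the highest relative compression).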

    Towards more effective simulation of minerals processing systems

    Two aspects of the computer simulation of minerals processing systems were investigated in order to facilitate more effective use of simulation technology. A user interface was designed and combined with an existing simulator executive, resulting in the implementation of a user-friendly, microcomputer-based minerals processing simulator, MicroSim. Ease of use was achieved by considering the needs of the user of such a program; this led to the use of graphical methods for information input and output, and efficient form-filling techniques were developed for numerical data entry and editing. Models for the carbon-in-pulp (CIP) adsorption process and for continuous gold leaching were derived. The CIP models were derived using a population balance approach; the method of characteristics and the method of moments were found to be particularly useful in solving the resulting equations. Besides being important processes in themselves, the integration of these models into MicroSim provided valuable experience regarding the use of such models in a simulator.

    Radiation Transport Measurements in Methanol Pool Fires with Fourier Transform Infrared Spectroscopy

    Pool fires rely on radiation and conduction heat feedback from the combustion process to the liquid surface to vaporize the fuel. This coupled relationship determines the fuel burning rate and thus the fire structure and size. Radiative heat transfer is the dominant heat feedback in large pool fires, and species concentrations and temperatures have a large influence on the radiative heat transfer in the fuel-rich core between the flame and the pool surface. To study radiative transport in the fuel-rich core, an experimental method was developed to measure radiative absorption through various pathlengths inside a 30 cm diameter methanol pool fire, using a Fourier Transform Infrared Spectrometer with N2-purged optical probes. The measured spectra are used to estimate species concentration profiles of methanol, CO, and CO2 in the fuel-rich core by fitting predictions of a spectrally resolved radiation transport model to the measured spectra. The results show the importance of reliable temperature measurements for fitting the data and the need for further measurements to better understand the structure of fuel-rich cores in pool fires.

    Decomposition-Based Decision Making for Aerospace Vehicle Design

    Most practical engineering systems design problems have multiple and conflicting objectives. Furthermore, the satisfactory attainment level for each objective (the "requirement") is likely uncertain early in the design process. Systems with long design cycle times will exhibit more of this uncertainty throughout the design process. This is further complicated if the system is expected to perform for a relatively long period of time, as it will then need to grow as new requirements are identified and new technologies are introduced. These points identify a need for a systems design technique that enables decision making amongst multiple objectives in the presence of uncertainty. Traditional design techniques deal with a single objective or a small number of objectives that are often aggregates of the overarching goals sought through the generation of a new system. Other requirements, although uncertain, are viewed as static constraints on this single- or multiple-objective optimization problem. With either formulation, enabling tradeoffs between the requirements, objectives, or combinations thereof is a slow, serial process that becomes increasingly complex as more criteria are added. This research proposal outlines a technique that attempts to address these and other idiosyncrasies associated with modern aerospace systems design. The proposed formulation first recasts systems design as a multiple criteria decision making problem. The now multiple objectives are decomposed to discover the critical characteristics of the objective space. Tradeoffs between the objectives are considered amongst these critical characteristics by comparison to a probabilistic ideal tradeoff solution. The proposed formulation represents a radical departure from traditional methods. A pitfall of this technique lies in the validation of the solution: in a multi-objective sense, how can a decision maker justify a choice between non-dominated alternatives? A series of examples helps the reader observe how this technique can be applied to aerospace systems design and compares the results of this so-called Decomposition-Based Decision Making to those of more traditional design approaches.
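As a hedged illustration of the ideal-point idea mentioned above (comparing alternatives to an ideal tradeoff solution), the sketch below uses the standard TOPSIS method to rank a few hypothetical vehicle concepts; it is not the author's Decomposition-Based Decision Making formulation, and all scores and weights are made up for illustration.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives by relative closeness to an ideal point (TOPSIS).

    scores : (n_alternatives, n_criteria) performance matrix
    weights: criterion weights summing to 1
    benefit: True where the criterion is to be maximised, False to minimise
    """
    norm = scores / np.linalg.norm(scores, axis=0)   # column-wise normalisation
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)   # closeness coefficient, higher is better

# Three hypothetical concepts scored on range (maximise), cost (minimise), mass (minimise).
scores = np.array([[1200.0, 4.5, 310.0],
                   [1500.0, 6.0, 340.0],
                   [1100.0, 3.8, 300.0]])
print(topsis(scores, np.array([0.5, 0.3, 0.2]), np.array([True, False, False])))
```

A decomposition of the objective space, as described in the abstract, would reduce the number of criteria fed into such a comparison before the distance-to-ideal step is applied.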

    Mathematical modelling and investigation of explosive pinch, friction and shear problems

    The mechanisms which lead to the accidental ignition of explosive materials in response to low-energy stimuli are still not well understood. It is widely agreed that localised regions of increased temperature, so-called 'hot spots', are responsible. Many mechanisms for hot spot generation have been suggested as a result of experimental studies, but the understanding of such processes remains incomplete. In this thesis, we use asymptotic and numerical techniques to investigate hot spot mechanisms, with a particular focus on those arising from impacts which pinch and shear explosives. First, a model is developed which accounts for the effect of friction as an explosive material is pinched between two flat plates. An analytical solution is found by exploiting the small aspect ratio of the explosive sample. Numerical solution of the thermal part of the problem demonstrates that our model is able to predict important features observed in experiments, such as additional heating near the plates. We then go on to study how the presence of a chemical reaction affects the development of shear bands as explosive materials are deformed. Through a boundary layer analysis, we extract the key non-dimensional parameters which control the development of shear bands in explosives, and discuss how this may inform the design of materials that are less susceptible to accidental ignition due to mechanical insults. Finally, we investigate how molten layers of explosive, which can form between sliding surfaces during shear deformation, may act as sites for hot spot generation. In particular, we consider how the inhomogeneous nature of explosive materials affects the propagation of the melt front. Through a lubrication-type analysis, we demonstrate that the melt front is unstable to perturbations in the presence of a chemical reaction, and that material non-uniformities lead to localised heating within the molten layer.

    Law and Policy for the Quantum Age

    Law and Policy for the Quantum Age is for readers interested in the political and business strategies underlying quantum sensing, computing, and communication. The work explains how these quantum technologies work, surveys the future national defense and legal landscapes for nations seeking strategic advantage, and outlines paths to profit for companies.

    CEAS/AIAA/ICASE/NASA Langley International Forum on Aeroelasticity and Structural Dynamics 1999

    These proceedings of a workshop sponsored by the Confederation of European Aerospace Societies (CEAS), the American Institute of Aeronautics and Astronautics (AIAA), the National Aeronautics and Space Administration (NASA), Washington, D.C., and the Institute for Computer Applications in Science and Engineering (ICASE), Hampton, Virginia, held in Williamsburg, Virginia, June 22-25, 1999, represent a collection of the latest advances in aeroelasticity and structural dynamics from the world community. Research in the areas of unsteady aerodynamics and aeroelasticity, structural modeling and optimization, active control and adaptive structures, landing dynamics, certification and qualification, and validation testing is highlighted in the collection of papers. The wide range of results will lead to advances in the prediction and control of the structural response of aircraft and spacecraft.

    Physical mechanism of silk strength and design of ultra-strong silk

    Ph.D. (Doctor of Philosophy)