240 research outputs found

    Development of advanced criteria for blade root design and optimization

    In gas and steam turbine engines, blade root attachments are critical components that require special attention during design. The traditional approach to root design relied on highly experienced engineers, yet in most cases the strength of the material was not fully exploited. In this thesis, different methodologies for automatic design and optimization of the blade root are evaluated, and several methods for reducing the computational time are proposed. First, a simplified analytical model of the fir-tree was developed to evaluate the mean stress in different sections of the blade root and disc groove. Then, a more detailed two-dimensional model of the attachment, suitable for finite element (FE) analysis, was developed for both dovetail and fir-tree geometries; the model was kept general so that it covers all possible attachment shapes. The analytical model was then projected onto the 2D model to compare the results obtained from the analytical and FE methods. This comparison is essential for the later use of the analytical evaluation of the fir-tree as a technique for reducing the optimization search domain. The possibility of predicting the contact normal stress of the blade and disc attachment by means of a punch test was also evaluated: a punch composed of a flat surface with a rounded edge was simulated as the equivalent of a sample dovetail case, and the contact stress profiles obtained analytically, in 2D and in 3D, for the punch and the dovetail were compared. The Genetic Algorithm (GA) used as the optimizer is described, together with the different rules affecting it. To reduce the number of calls to the high-fidelity finite element (FE) method, several surrogate functions were evaluated; among them, the Kriging function was selected for use in this study, and its efficiency was assessed in a numerical optimization of a single lobe. The surrogate model is not used on its own to find the optimum attachment shape, as it may provide low accuracy; instead, to benefit from its fast evaluation while mitigating this drawback, the Kriging function (KRG) is used within the GA as a pre-evaluation of each candidate before the FE analysis is performed. Furthermore, the feasible and non-feasible regions of the multi-dimensional, complex search domain of the attachment geometry are described, and the challenge of a multi-district domain is tackled with a new mutation operation. To handle the non-continuous domain, an adaptive penalty method based on Latin Hypercube Sampling (LHS) is proposed, which successfully improves the optimization convergence. Different contact topologies in a dovetail were also assessed: four types of contact were modeled and optimized under the same loading and boundary conditions, the punch test was repeated with the different contact shapes, and the state of stress in the dovetail at different rotational speeds was evaluated for each contact type. In the results and discussion, an optimization of a dovetail with the analytical approach was performed and the optimum was compared with the one obtained by FE analysis. The analytical approach has the advantage of fast evaluation, and if the constraints are well defined its results are comparable to the FE solution. A Kriging function was then embedded within the GA optimization and the approach was evaluated on a dovetail optimization.
The results revealed that the low computational cost of the surrogate model is an advantage, and its low accuracy is mitigated by the collaboration of the FE and surrogate models. The capability of employing the analytical approach in a fir-tree optimization was then assessed; since the fir-tree has a more complex working domain than the dovetail, the conclusions should also hold for the dovetail. Several methods were assessed and compared. In a first attempt, the analytical approach was used as a filter to screen out the least promising candidates, which improved convergence by 7%. In another attempt, the proposed adaptive penalty method was added to the optimization, which successfully found a reasonable optimum with a 47% reduction in computational cost. A combination of the analytical and FE models was then joined in a multi-objective, multi-level optimization, which provided a 32% improvement with less error than the previous method. In the last evaluation of this type, the analytical approach was used alone in a multi-objective optimization in which the final results were selected according to an FE evaluation of the fittest candidates. Although this approach reduced the computational time by 86%, it depends strongly on the case under investigation and provides low accuracy in the final solution. Finally, a robust optimum was found for both the dovetail and the fir-tree in a multi-objective optimization in which the proposed adaptive penalty method and the surrogate model were both involved.
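    The surrogate-assisted pre-evaluation described above can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration (not the thesis code) of a Kriging (Gaussian-process) surrogate used inside a genetic-algorithm loop to pre-screen candidates before an expensive FE call; the function name fe_stress_analysis, the GA operators and all parameters are assumptions for illustration only.

# Minimal sketch (assumed names, not the thesis code): a Kriging surrogate
# pre-screens GA candidates so that only the most promising ones reach the
# expensive finite element (FE) evaluation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fe_stress_analysis(x):
    # Placeholder for the expensive FE call; here a cheap analytic stand-in.
    return np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
dim, pop_size, n_gen, n_fe_per_gen = 6, 40, 20, 8

# Initial design evaluated with FE to train the surrogate.
X = rng.uniform(0.0, 1.0, size=(pop_size, dim))
y = np.array([fe_stress_analysis(x) for x in X])

for gen in range(n_gen):
    kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    kriging.fit(X, y)

    # Generate offspring by mutating the current best individuals.
    parents = X[np.argsort(y)[:pop_size // 2]]
    offspring = parents[rng.integers(len(parents), size=pop_size)] \
        + rng.normal(scale=0.05, size=(pop_size, dim))
    offspring = np.clip(offspring, 0.0, 1.0)

    # Surrogate pre-evaluation: rank candidates cheaply, then run FE only
    # on the few that the Kriging model predicts to be the fittest.
    predicted = kriging.predict(offspring)
    best_idx = np.argsort(predicted)[:n_fe_per_gen]
    new_X = offspring[best_idx]
    new_y = np.array([fe_stress_analysis(x) for x in new_X])

    X, y = np.vstack([X, new_X]), np.concatenate([y, new_y])

print("best objective found:", y.min())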

    Wavelet domain inversion and joint deconvolution/interpolation of geophysical data

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2003. Includes bibliographical references (leaves 168-174). This thesis presents two innovations in geophysical inversion. The first provides a framework and an algorithm for combining linear deconvolution methods with geostatistical interpolation techniques. This allows sparsely sampled data to aid in image deblurring problems or, conversely, noisy and blurred data to aid in sample interpolation. In order to overcome difficulties arising from high dimensionality, the solution must be derived in the correct framework and the structure of the problem must be exploited by an iterative solution algorithm. The effectiveness of the method is demonstrated first on a synthetic problem involving remotely sensed satellite data, and then on a real 3-D seismic data set combined with well logs. The second innovation addresses how to use wavelets in a linear geophysical inverse problem. Wavelets have led to great successes in image compression and denoising, so it is interesting to see what, if anything, they can do for a general linear inverse problem. It is shown that a simple nonlinear operation of weighting and thresholding wavelet coefficients can consistently outperform classical linear inverse methods in terms of mean-square error across a broad range of noise magnitudes in the data. Wavelets allow for an adaptively smoothed solution: smoothed more in uninteresting regions, less at geologically important transitions. A third, somewhat separate issue is also addressed: the correct manipulation of discrete geophysical data. The theory of fractional splines is introduced, which allows for optimal approximation of real signals on a digital computer. Using splines, it can be shown that a linear operation on a spline can be equivalently represented by a matrix operating on the coefficients of a certain spline basis function. The form of the matrix, however, depends completely on the spline basis, and incorrect discretization of the operator into a matrix can lead to large errors in the resulting matrix/vector product. By Jonathan A. Kane, Ph.D.
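    As a rough illustration of the wavelet-domain operation described above (weighting and thresholding of coefficients), the following Python sketch applies soft thresholding to the wavelet coefficients of a noisy 1-D signal using PyWavelets. It is a generic denoising example, not the thesis' inversion algorithm; the noise estimate and threshold rule are assumptions.

# Minimal sketch (assumed setup, not the thesis algorithm): soft-thresholding
# of wavelet coefficients as a simple nonlinear alternative to linear smoothing.
import numpy as np
import pywt

rng = np.random.default_rng(1)
n = 1024
t = np.linspace(0.0, 1.0, n)
clean = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7],
                     [0.0, 1.0, 0.2])            # blocky "geological" profile
noisy = clean + 0.1 * rng.standard_normal(n)

# Decompose, threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate (assumed rule)
thresh = sigma * np.sqrt(2.0 * np.log(n))        # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                 for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")[:n]

print("MSE noisy    :", np.mean((noisy - clean) ** 2))
print("MSE denoised :", np.mean((denoised - clean) ** 2))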

    Spatio-temporal rainfall estimation and nowcasting for flash flood forecasting.

    Thesis (Ph.D.Eng.), University of KwaZulu-Natal, Durban, 2007. Floods cannot be prevented, but their devastating effects can be minimized if advance warning of the event is available. The South African Disaster Management Act (Act 57 of 2002) advocates a paradigm shift from the current "bucket and blanket brigade" response-based mindset to one where disaster prevention or mitigation are the preferred options. It is in the context of mitigating the effects of floods that the development and implementation of a reliable flood forecasting system has major significance. In the case of flash floods, a few hours of lead time can afford disaster managers the opportunity to take steps which may significantly reduce loss of life and damage to property. The engineering challenges in developing and implementing such a system are numerous. In this thesis, the design and implementation of a flash flood forecasting system in South Africa is critically examined. The technical aspects relating to spatio-temporal rainfall estimation and nowcasting are a key area in which new contributions are made. In particular, field and optical flow advection algorithms are adapted and refined to help predict the future paths of storms; fast and pragmatic algorithms for combining rain gauge and remote sensing (radar and satellite) estimates are refined and validated; and a two-dimensional adaptation of Empirical Mode Decomposition is devised to extract the temporally persistent structure embedded in rainfall fields. A second area of significant contribution relates to real-time forecast updates, made in response to the most recent observed information. A number of techniques embedded in the rich Kalman and adaptive filtering literature are adopted for this purpose. The work captures the current "state of play" in the South African context and hopes to provide a blueprint for future development of an essential tool for disaster management. There are a number of natural spin-offs from this work for related fields in water resources management.
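    The advection-based extrapolation underlying rainfall nowcasting can be sketched briefly. The Python snippet below is an illustrative, hypothetical example (not the thesis algorithms): a single constant storm-motion vector is estimated from two consecutive radar fields by cross-correlation and then used to advect the latest field forward in time; the synthetic fields and correlation-based motion estimator are assumptions.

# Minimal sketch (illustrative only, not the thesis algorithms): advection-based
# rainfall nowcasting with a single motion vector estimated by cross-correlation.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.signal import fftconvolve

def estimate_motion(prev_field, curr_field):
    """Estimate a single (dy, dx) displacement via cross-correlation."""
    a = prev_field - prev_field.mean()
    b = curr_field - curr_field.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center

def nowcast(curr_field, motion, n_steps):
    """Extrapolate the current rainfall field n_steps ahead by pure advection."""
    forecasts, field = [], curr_field
    for _ in range(n_steps):
        field = nd_shift(field, motion, order=1, mode="constant", cval=0.0)
        forecasts.append(field)
    return forecasts

# Synthetic example: a rain cell moving one pixel right and one pixel down per step.
rain_t0 = np.zeros((64, 64)); rain_t0[20:26, 20:26] = 5.0
rain_t1 = np.zeros((64, 64)); rain_t1[21:27, 21:27] = 5.0
motion = estimate_motion(rain_t0, rain_t1)
forecast_fields = nowcast(rain_t1, motion, n_steps=3)
print("estimated motion (dy, dx):", motion)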

    Development of a method for inventorying wood fibre quality in Quebec

    The value of the product basket from softwood stands depends on stem dimensions and taper and on internal defects, but also on the physical and mechanical properties of the wood, especially its stiffness, since softwood lumber is used primarily for structural purposes. Lumber stiffness is evaluated in sawmills through visual or mechanical grading. In the forest, this knowledge does not exist: it is currently not possible to locate the regions or stands with a high potential for stiff wood, to evaluate the value of the timber, or to compare the profitability of silvicultural scenarios on the basis of this key quality attribute. This lack of knowledge is mainly due to the considerable effort and cost associated with an inventory of wood fibre quality, which involves sampling thousands of trees and carrying out very demanding laboratory analyses. It is in this context that near-infrared spectroscopy was evaluated as a rapid, non-destructive method for measuring the physical and mechanical properties of wood. Ecogeographic variation in black spruce clearwood properties was first investigated for the two main vegetation types of the managed boreal forest of Quebec (chapter 1). Wood growing in pure black spruce stands had longer mature fibres and significantly denser wood with better mechanical characteristics than wood growing in stands mixed with balsam fir. A scaling-up modelling approach, based on ring data from 3,350 inventory plots, improved the performance of all models, explaining, at the stand level, 47%, 57%, 63% and 63% of the variance in wood density, modulus of elasticity, microfibril angle and mature fibre length, with root mean square errors of 8.9 kg/m³, 0.52 GPa, 0.60° and 0.06 mm respectively. The potential of near-infrared spectroscopy to measure black spruce wood properties and to determine the transition from juvenile to mature wood was then assessed (chapter 2). Good to excellent calibration statistics (R², ratio of performance to deviation) were obtained for basic density (0.85, 1.8), microfibril angle (0.79, 2.2) and modulus of elasticity (0.88, 2.9). Two-segment linear regressions were applied to radial microfibril angle profiles to determine the transition age from juvenile to mature wood. The values obtained from SilviScan data were compared with those obtained from near-infrared spectroscopy predictions. The average transition age (23 ± 7 years) was slightly underestimated by near-infrared spectroscopy, with a mean prediction error (and 95% limits of agreement) of -2.2 ± 6.3 years (-14.6 to 10.1). These results suggest that the transition age from juvenile to mature wood can be predicted by near-infrared spectroscopy. Finally, near-infrared spectroscopy was used to estimate the regional variation in wood density and stiffness for the main boreal species of Quebec (black spruce, balsam fir, jack pine, paper birch and trembling aspen) (chapter 3). An automated near-infrared system was developed for this purpose and calibrated using SilviScan data. Basic density and wood stiffness were estimated on 30,159 increment cores from 10,573 inventory plots. Wood density and stiffness observations were spatially autocorrelated over longer distances in hardwoods than in softwoods. A uniform latitudinal gradient related to climate was observed for paper birch and trembling aspen, whereas the spatial distribution of these properties was not uniform in the softwoods, suggesting a more limited environmental adaptability compared with the hardwood species studied. This thesis represents a major advance in the development of a method for inventorying wood fibre quality in Quebec. The regional variation in wood density and stiffness is now known for the main boreal species of Quebec; future work will focus on estimating these properties at the forest stand level.
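    The two-segment regression used to locate the juvenile-to-mature transition can be sketched as follows. This is a minimal, hypothetical Python illustration (not the thesis code): it fits a piecewise linear model with a single breakpoint to a radial microfibril angle profile by scanning candidate breakpoints and keeping the least-squares best one; the synthetic profile and ring-age range are assumptions.

# Minimal sketch (assumed data and method details): estimate the juvenile-to-
# mature transition age by fitting a two-segment linear regression to a radial
# microfibril angle (MFA) profile and scanning all candidate breakpoints.
import numpy as np

def two_segment_fit(age, mfa):
    """Return the breakpoint age minimizing the total RSS of two linear segments."""
    best_age, best_rss = None, np.inf
    for i in range(3, len(age) - 3):             # keep >= 3 points per segment
        rss = 0.0
        for seg_age, seg_mfa in ((age[:i], mfa[:i]), (age[i:], mfa[i:])):
            coef = np.polyfit(seg_age, seg_mfa, 1)
            rss += np.sum((seg_mfa - np.polyval(coef, seg_age)) ** 2)
        if rss < best_rss:
            best_age, best_rss = age[i], rss
    return best_age, best_rss

# Synthetic MFA profile: steep decline in juvenile wood, plateau in mature wood.
rng = np.random.default_rng(2)
age = np.arange(1, 61).astype(float)                       # cambial age (years)
mfa = np.where(age < 23, 35.0 - 0.9 * age, 35.0 - 0.9 * 23) \
      + rng.normal(scale=1.0, size=age.size)
transition_age, _ = two_segment_fit(age, mfa)
print("estimated transition age:", transition_age)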

    The scale-free and scale-bound properties of land surfaces: fractal analysis and specific geomorphometry from digital terrain models

    The scale-bound view of landsurfaces, as an assemblage of certain landforms occurring within limited scale ranges, has been challenged by the scale-free characteristics of fractal geometry. This thesis assesses the fractal model by examining the irregularity of landsurface form for the self-affine behaviour present in fractional Brownian surfaces. Different methods for detecting self-affine behaviour in surfaces are considered, and of these the variogram technique is shown to be the most effective: it produces the best results of the two methods tested on simulated surfaces with known fractal properties. The algorithm used has been adapted to consider log (altitude variance) over a sample of log (distances) for complete surfaces, for subareas within surfaces, and for separate directions within surfaces. Twenty-seven digital elevation models of landsurfaces are re-examined for self-affine behaviour. The variogram results for complete surfaces show that none of these are self-affine over the scale range considered, because of dominant slope lengths and regular valley spacing within areas. For similar reasons, subarea analysis produces the non-fractal behaviour of markedly different variograms for separate subareas. The linearity of landforms in many areas is detected by the variograms for separate directions; this indicates that the roughness of landsurfaces is anisotropic, unlike that of fractal surfaces. Because of difficulties in extracting particular landforms from their landsurfaces, no clear links between fractal behaviour and landform size distribution could be established. A comparative study shows the geomorphometric parameters of fractal surfaces to vary with fractal dimension, while the geomorphometry of landsurfaces varies with the landforms present. Fractal dimensions estimated from landsurfaces do not correlate with geomorphometric parameters. From the results of this study, real landsurfaces do not appear to be scale-free; a scale-bound approach to landsurfaces therefore seems more appropriate for geomorphology than the fractal alternative.
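    The variogram technique referred to above can be illustrated with a short sketch. The Python snippet below is a hypothetical, simplified example (not the thesis algorithm): it computes the semivariance of elevation differences at a few pixel lags, fits log semivariance against log lag, and converts the slope into a Hurst exponent and fractal dimension via D = 3 - H; the synthetic surface and lag choices are assumptions.

# Minimal sketch (illustrative, not the thesis algorithm): estimate a surface
# fractal dimension from a DEM with the variogram method. For a self-affine
# (fractional Brownian) surface, gamma(h) ~ h^(2H), so the slope of the
# log-log variogram gives the Hurst exponent H and D = 3 - H.
import numpy as np

def directional_semivariance(dem, lag, axis):
    """Half the mean squared elevation difference at a given pixel lag along one axis."""
    diff = dem - np.roll(dem, -lag, axis=axis)
    sl = [slice(None)] * dem.ndim
    sl[axis] = slice(0, dem.shape[axis] - lag)   # discard wrapped-around cells
    return 0.5 * np.mean(diff[tuple(sl)] ** 2)

def fractal_dimension(dem, lags=(1, 2, 4, 8, 16)):
    gamma = [np.mean([directional_semivariance(dem, lag, ax) for ax in (0, 1)])
             for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    hurst = slope / 2.0
    return 3.0 - hurst

# Synthetic rough surface as a stand-in for a DEM (random, not truly fractal).
rng = np.random.default_rng(3)
dem = np.cumsum(np.cumsum(rng.standard_normal((256, 256)), axis=0), axis=1)
print("estimated fractal dimension:", fractal_dimension(dem))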

    Analysis of motion in scale space

    This work includes some new aspects of motion estimation by the optic flow method in scale spaces. The usual techniques for motion estimation are limited to the application of coarse-to-fine strategies, which can succeed only if there is enough information at every scale. In this work we investigate motion estimation in scale space more fundamentally. The choice of wavelet for the scale-space decomposition of image sequences is discussed in the first part of this work; we make use of the continuous wavelet transform with rotationally symmetric wavelets. Bandpass-decomposed sequences allow the structure tensor to be replaced by the phase-invariant energy operator. The structure tensor is computationally more expensive because of its spatial or spatio-temporal averaging, whereas the energy operator in general needs no further averaging. The numerical accuracy of motion estimation with the energy operator is compared to the results of the usual structure-tensor-based techniques; the comparison tests are performed on synthetic and real-life sequences. Another practical contribution is the accuracy measurement for motion estimation by adaptively smoothed tensor fields. The adaptive smoothing relies on nonlinear anisotropic diffusion with discontinuity and curvature preservation, and an accuracy gain is achieved for properly chosen parameters of the diffusion filter. A theoretical contribution, from a mathematical point of view, is a new discontinuity- and curvature-preserving regularization for motion estimation; the convergence of solutions is shown for the isotropic case of the nonlocal partial differential equation. For large displacements between two consecutive frames, the optic flow method is systematically corrupted because the sampling theorem is violated. We developed a new method for motion analysis by scale decomposition, which makes it possible to circumvent this systematic corruption without using the coarse-to-fine strategy. The underlying assumption is that, within a certain neighbourhood, the grey value undergoes the same displacement; if this is fulfilled, the same optic flow should be measured at all scales. If inconsistencies arise at a pixel across the scale space, they can be detected and the scales containing these inconsistencies are not taken into account.
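    For context, the structure-tensor baseline that the thesis compares against can be sketched as follows. This is a hypothetical Python illustration of classical local (Lucas-Kanade style) optic flow from windowed gradient products, not the thesis' energy-operator method; the synthetic frames and window size are assumptions.

# Minimal sketch (illustrative only, not the thesis method): local optic flow
# from the spatio-temporal structure tensor. Gradient products are averaged
# over a window and the flow is the least-squares solution of the
# brightness-constancy constraint.
import numpy as np
from scipy.ndimage import uniform_filter

def structure_tensor_flow(frame0, frame1, window=7, eps=1e-6):
    iy, ix = np.gradient(frame0)          # spatial derivatives (axis 0 = y)
    it = frame1 - frame0                  # temporal derivative

    # Windowed averages of the gradient products: structure tensor entries.
    jxx = uniform_filter(ix * ix, window)
    jxy = uniform_filter(ix * iy, window)
    jyy = uniform_filter(iy * iy, window)
    jxt = uniform_filter(ix * it, window)
    jyt = uniform_filter(iy * it, window)

    det = jxx * jyy - jxy ** 2
    u = -(jyy * jxt - jxy * jyt) / (det + eps)   # flow component in x
    v = -(jxx * jyt - jxy * jxt) / (det + eps)   # flow component in y
    return u, v

# Synthetic test: a smooth blob translated by one pixel to the right.
y, x = np.mgrid[0:64, 0:64]
frame0 = np.exp(-((x - 30) ** 2 + (y - 32) ** 2) / 50.0)
frame1 = np.exp(-((x - 31) ** 2 + (y - 32) ** 2) / 50.0)
u, v = structure_tensor_flow(frame0, frame1)
print("mean flow near the blob:", u[28:36, 26:36].mean(), v[28:36, 26:36].mean())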

    Recent advances in low-cost particulate matter sensors: calibration and application

    Particulate matter (PM) is monitored routinely because of its negative effects on human health and atmospheric visibility. Standard gravimetric measurements and current commercial instruments for field measurements remain expensive and laborious. The high cost of conventional instruments typically limits the number of monitoring sites, which in turn undermines the accuracy of real-time mapping of air pollutant sources and hotspots because of insufficient spatial resolution. The new trends in PM concentration measurement are personalized portable devices for individual consumers and the networking of large numbers of sensors to meet the demands of Big Data. Low-cost PM sensors have therefore been studied extensively due to their price advantage and compact size, and they are considered a good supplement to current monitoring sites for high spatial-temporal-resolution PM mapping. A major concern, however, is the accuracy of these low-cost PM sensors. Multiple types of low-cost PM sensors and monitors were calibrated against reference instruments. All of these units demonstrated high linearity against the reference instruments, with high R² values for different types of aerosols over a wide range of concentration levels. The question of whether low-cost PM monitors can be considered a substitute for conventional instruments is discussed, together with how to qualitatively describe the improvement in data quality due to calibration. A limitation of these sensors and monitors is that their outputs depend strongly on particle composition and size, resulting in differences of up to a factor of ten in the sensor outputs. Optical characterization of low-cost PM sensors (ensemble measurement) was conducted by combining experimental results with Mie scattering theory, and the reasons for their dependence on PM composition and size distribution were studied. To improve the accuracy of mass concentration estimates, an expression for K as a function of the geometric mean diameter, geometric standard deviation, and refractive index is proposed. To remove the influence of the refractive index, a new design of a multi-wavelength sensor with a robust data inversion routine is proposed to estimate the PM size distribution and refractive index simultaneously. The utility of a networked system with improved sensitivity was demonstrated by deploying it in a woodworking shop. Data collected by the networked system were used to construct spatiotemporal PM concentration distributions using an ordinary Kriging method and an Artificial Neural Network model to elucidate particle generation and ventilation processes. Furthermore, for the outdoor environment, data reported by low-cost sensors were compared against satellite data: the remote sensing data could provide a daily calibration of these low-cost sensors, while the low-cost PM sensors could in turn provide better accuracy for characterizing the microenvironment.
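    The ordinary kriging step used to turn scattered sensor readings into concentration maps can be sketched briefly. The snippet below is a minimal, self-contained Python illustration (not the study's processing code): it solves the ordinary kriging system with an exponential variogram for a handful of assumed sensor locations and PM2.5 readings; the variogram parameters and layout are assumptions.

# Minimal sketch (assumed variogram model and data, not the study's code):
# ordinary kriging of PM readings from networked sensors onto a regular grid.
import numpy as np

def exp_variogram(h, sill=1.0, rng_=10.0, nugget=0.05):
    """Exponential variogram model (assumed parameters)."""
    return nugget + sill * (1.0 - np.exp(-h / rng_))

def ordinary_kriging(xy_obs, z_obs, xy_pred):
    n = len(xy_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system: [gamma_ij 1; 1 0] [w; mu] = [gamma_i0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d_obs)
    np.fill_diagonal(A, 0.0)        # gamma(0) = 0 on the diagonal; corner = 0
    preds = np.empty(len(xy_pred))
    for k, p in enumerate(xy_pred):
        d0 = np.linalg.norm(xy_obs - p, axis=1)
        b = np.append(exp_variogram(d0), 1.0)
        w = np.linalg.solve(A, b)[:n]
        preds[k] = w @ z_obs
    return preds

# Assumed sensor layout (metres) and PM2.5 readings (ug/m3) in a workshop.
sensors = np.array([[2.0, 3.0], [8.0, 1.5], [5.0, 7.0], [9.0, 8.0], [1.0, 9.0]])
pm25 = np.array([120.0, 80.0, 150.0, 60.0, 95.0])
gx, gy = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pm_map = ordinary_kriging(sensors, pm25, grid).reshape(gx.shape)
print("interpolated PM2.5 at grid centre:", pm_map[10, 10])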