
    Cone normal stepping

    This dissertation examines several methods of relief mapping, such as parallax mapping and cone step mapping, as well as methods for soft shadowing and ambient occlusion of relief maps. Ambient occlusion is an approximation of global illumination that takes only occlusion into account. New relief mapping methods are introduced to bridge the gap between distance fields and cone maps. The new methods allow approximate distance fields to be calculated from cone maps, and these are used to approximate ambient occlusion and soft shadows. The new methods are compared with linear, binary, and interval search, as well as with variants of cone mapping such as relaxed cone mapping and quad cone mapping. The methods were evaluated with regard to performance and accuracy and were found to be similar in both respects to the existing methods. The new methods did not outperform existing methods on the tested scenes, but they make use of approximate distance fields and remove the maximum cone angle limitation. It was also shown that in most cases linear search with interval mapping performed best, given the error metric used. Dissertation (MSc), University of Pretoria, 2018.
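As a rough illustration of the cone step mapping idea discussed above, the sketch below marches a ray through a heightfield using per-texel cone ratios: each texel stores the tangent of the widest empty cone above it, which gives a safe step length along the ray. The function and variable names, the nearest-texel lookup, and the downward-ray assumption are all illustrative choices, not the dissertation's actual implementation.

```python
import numpy as np

def cone_step_march(height, cone_ratio, origin, direction, max_steps=64):
    """March a ray through a heightfield using per-texel cone ratios.

    height[y, x] in [0, 1]; cone_ratio[y, x] is the tangent (radius per
    unit height) of the widest empty cone above each texel.  Assumes a
    downward-pointing ray (direction[2] < 0).
    """
    pos = np.asarray(origin, dtype=float)        # (x, y, h)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(max_steps):
        ix = int(np.clip(pos[0], 0, height.shape[1] - 1))
        iy = int(np.clip(pos[1], 0, height.shape[0] - 1))
        h = height[iy, ix]
        if pos[2] <= h:                          # ray is at/below the surface
            return pos
        ratio = cone_ratio[iy, ix]
        horiz = np.hypot(d[0], d[1])
        # Safe distance: advance until the ray exits the empty cone
        # whose apex sits on the surface below the current position.
        t = ratio * (pos[2] - h) / (horiz - ratio * d[2] + 1e-9)
        pos = pos + t * d
    return pos
```

With a flat heightfield the iteration converges geometrically onto the surface, which is the usual way cone stepping is sanity-checked.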

    Textural and Rule-based Lithological Classification of Remote Sensing Data, and Geological Mapping in Southwestern Prieska Sub-basin, Transvaal Supergroup, South Africa

    Although remote sensing has been widely used in geological investigations, lithological classification of an area of interest based on medium spatial- and spectral-resolution satellite data is often not successful, because of complicated geological situations and other factors such as inadequate methodology and wrong geological models. The study area of the present thesis is located in the southwest of the Prieska sub-basin, Transvaal Supergroup, South Africa. This area mainly includes Neoarchean and Proterozoic sedimentary rocks, partly unconformably covered by uppermost Paleozoic and lower Mesozoic rocks and by Tertiary to recent soils and sands. The Precambrian rocks include various formations of volcanic and intrusive rocks, quartzites, shales, platform carbonates and Banded Iron Formations (BIF). The younger rocks and soils include dikes and shales, glacial sedimentary rocks, coarser siliciclastic rocks, calcretes, and aeolian and fluvial sands. Prospecting activity for mineral deposits necessitates a detailed geological map (1:100000) of the area. In this research, a new rule-based classification system (RBS) was put forward, integrating spectral characteristics, textural features and ancillary data, such as the general geological map (1:250000) and elevation data, in order to improve the lithological classification accuracy and the subsequent mapping accuracy in the study area. The proposed technique was mainly based on Landsat TM data and ASTER data with medium resolution. Topographic maps and the general geological map were also available as ancillary data sets. Software such as ERDAS©, Matlab© and ArcGIS© supported the classification and mapping procedures. The newly developed classification technique was performed in three steps.
Firstly, geometric and atmospheric correction was performed on the original TM and ASTER data, followed by principal component analysis (PCA) and band ratioing, to enhance the images and to obtain data sets such as principal components (PCs) and ratio bands. Traditional maximum-likelihood supervised classification (MLC) was then performed individually on the enhanced multispectral image and the principal-components image (PCs-image). For TM data, the classification accuracy based on the PCs-image was higher than that based on the multispectral image. For ASTER data, the classification accuracy of the PCs-image was close to, but lower than, that of the multispectral image. Among the encountered Banded Iron Formations, the Griquatown Banded Iron Formation (G-BIF) was recognized well in the TM PCs-image. In the second step, textural features of the different lithological types were analyzed based on TM data. Grey level co-occurrence matrix (GLCM) based textural features were computed individually from band 5 and the first principal component (PC1) of the TM data. Geostatistics-based textural features were computed individually from the 6 TM multispectral bands and 3 principal components (PC1, PC2 and PC3). These two kinds of textural features were individually stacked as extra layers together with the original 6 multispectral bands and the 6 principal components to form several new data sets. Ratio bands were also individually stacked as extra layers with the 6 multispectral bands and 6 principal components to form further new data sets. New data sets were formed in the same way from the ASTER data. All of the new data sets were then individually classified using MLC to produce several classified thematic images. The classification accuracy based on the new data sets is higher than that based solely on the spectral characteristics of the original TM and ASTER data.
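To make the GLCM texture step concrete, the following sketch computes a co-occurrence matrix for one pixel offset and derives the classic contrast feature from it. The quantisation level and the single offset are arbitrary choices here; the thesis's actual window sizes, offsets and full feature set are not reproduced.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey level co-occurrence matrix for one pixel offset (dx, dy).

    `image` must be a 2-D integer array already quantised to values
    in [0, levels).  Returns the joint probability of grey-level pairs.
    """
    m = np.zeros((levels, levels), dtype=float)
    a = image[:image.shape[0] - dy, :image.shape[1] - dx]
    b = image[dy:, dx:]
    for i, j in zip(a.ravel(), b.ravel()):
        m[i, j] += 1                  # count co-occurring grey-level pairs
    m /= m.sum()
    return m

def contrast(m):
    """GLCM contrast: large when neighbouring pixels differ strongly."""
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))
```

A flat patch has zero contrast, while a checkerboard of adjacent grey levels has contrast 1, which matches the intuition that contrast measures local grey-level variation.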
It should be noted that, for a given rock type, the class value in all classified images should correspond to its identifier (ID) value in the digital geological map. The third step was to perform the rule-based system (RBS) classification. In the first part of the RBS, two classified images were analyzed and compared. The analysis was based on the classification results of the first step and on elevation data extracted from the topographic map. In the comparison, pixels with a high probability of being classified correctly (consistent pixels) and pixels with a high probability of being misclassified (inconsistent pixels) were marked separately. In the second part of the RBS, the class values of consistent pixels were kept unchanged, while the class values of inconsistent pixels were replaced by their values in the digital geological map (1:250000). Compared to the results based solely on the spectral characteristics of the TM data (54.3%) and the ASTER data (66.41%), the new RBS classification improved the accuracy (83.2%) significantly. Based on the classification results, a detailed lithological map (1:100000) of the study area was edited. Photo-lineaments were interpreted from multiple data sources (MDS), including enhanced satellite images, slope images, shaded relief images and drainage maps. The interpreted lineaments were compared with those digitized from the general geological map, followed by a simple lineament analysis that was compared with the published literature. The results show the individual merits of lineament detection from MDS and from the general geological map. A final lineament map (1:100000) was obtained by integrating all the information. Ground-check field work was carried out in 2009 to verify the classification and mapping, and the results were subsequently incorporated into the mapping and classification procedures.
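The second part of the RBS rule, keep the class where the classifications agree and fall back to the geological map where they disagree, can be sketched as below. This is a simplification: the elevation-based rules of the first part, and the thesis's actual criteria for marking pixels, are omitted.

```python
import numpy as np

def rbs_merge(cls_a, cls_b, geo_map):
    """Merge two classified rasters per the fallback rule.

    Where the two per-sensor classifications assign the same class
    (consistent pixels), keep that class; where they disagree
    (inconsistent pixels), take the class from the digitised
    geological map raster instead.
    """
    consistent = cls_a == cls_b
    return np.where(consistent, cls_a, geo_map)
```

This only works because, as noted above, class values are aligned with the ID values of the digital geological map, so values from different rasters are directly comparable.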
Finally, a GIS-based detailed geological map (1:100000) of the study area was obtained, compiling the newly gained information from the performed classification and lineament analysis, from the field work, and from published and available unpublished detailed geological maps. The methods developed here are proposed for the generation of new detailed geological maps, or for updates of existing general geological maps, by incorporating the latest satellite images and all available ancillary data sets. Although final ground-check field work cannot be replaced by remote sensing, the research presented here demonstrates its great potential and future prospects in lithological classification and geological mapping for mineral exploration.

    Implementation of the TOPKAPI model in South Africa: Initial results from the Liebenbergsvlei catchment

    Flash floods and droughts are of major concern in Southern Africa. Hydrologists and engineers have to assist decision makers in addressing the issue of forecasting and monitoring extreme events. For these purposes, hydrological models are useful tools to:
    • Identify the dominant hydrological processes which influence the water balance and result in conditions of extreme water excess and/or deficit
    • Assist in generating both short- and long-term hydrological forecasts for use by water resource managers.
    In this study the physically-based and fully distributed hydrological TOPKAPI model (Liu and Todini, 2002), which has already been successfully applied in several countries of the world (Liu and Todini, 2002; Bartholomes and Todini, 2005; Liu et al., 2005; Martina et al., 2006), is applied in Africa for the first time. This paper describes the main theoretical and numerical components that have been integrated by the authors into the model code, and presents details of the application of the model in the Liebenbergsvlei catchment (4 625 km²) in South Africa. The physical basis of the equations, the fine-scale representation of spatial catchment features, the parsimonious parameterisation linked to field/catchment information, the good computation-time performance, the modularity of the processes, the ease of use and, finally, the good results obtained in modelling the river discharges of the Liebenbergsvlei catchment make the TOPKAPI model a promising tool for hydrological modelling of catchments in South Africa.
    Keywords: hydrology, physically distributed hydrological model, TOPKAPI, South Africa
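TOPKAPI represents each cell's soil, surface and channel stores as nonlinear reservoirs. As a hedged illustration of that idea only, here is a single nonlinear reservoir integrated with explicit Euler; the parameter values and the simple Euler scheme are this sketch's assumptions, not the model's actual per-cell formulation or numerics.

```python
def nonlinear_reservoir(inflow, c=0.5, b=1.67, dt=1.0, v0=0.0):
    """One nonlinear-reservoir cell: dV/dt = i(t) - c * V**b.

    `inflow` is a sequence of inflow rates, one per time step of
    length `dt`; returns the outflow series q = c * V**b.
    """
    v, out = v0, []
    for i in inflow:
        q = c * max(v, 0.0) ** b           # outflow from current storage
        v = max(v + dt * (i - q), 0.0)     # explicit-Euler water balance
        out.append(q)
    return out
```

Under constant inflow the storage fills until outflow balances inflow, the steady state that a cascade of such cells relaxes towards.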

    Automatic lineament analysis techniques for remotely sensed imagery

    Imperial Users only

    Effect of 475°C embrittlement on the fatigue behaviour of a duplex stainless steel

    The low cycle fatigue behaviour of a duplex stainless steel was studied in the standard heat treated and embrittled (aged at 475°C for 100 hours) conditions by mechanical testing, scanning and transmission electron microscopy. Fatigue crack initiation behaviour was studied in the embrittled condition by interrupting stress-controlled fatigue tests, identifying the crack initiation sites and relating them to crystallographic parameters obtained from EBSD-OIM scans. The aging treatment at 475°C resulted in the precipitation of α′ in the ferritic phase. The impact energy drops from 260 J in the annealed condition to 8 J in the aged condition. The drop in impact energy was caused by the inability of the ferritic phase to form deformation twins in the embrittled condition, which was confirmed by TEM examination. From the low cycle fatigue experiments conducted at Δε/2 = 4.0×10⁻³, 6.0×10⁻³, 8.0×10⁻³ and 1.0×10⁻², it was established that the deformation curves in the annealed condition have three discernible stages: (i) cyclic hardening, (ii) cyclic softening and (iii) cyclic saturation, whereas in the aged condition they have two discernible stages: (i) cyclic hardening and (ii) cyclic softening until final failure, for all values of strain amplitude. A change in slope is observed in the cyclic stress-strain curve in the aged condition as compared to the standard heat treated condition. In the range of strain amplitudes employed, fatigue life in the aged condition is longer at lower plastic strain amplitudes, decreases and becomes similar at intermediate plastic strain amplitudes, and becomes shorter at higher plastic strain amplitudes in comparison with the standard heat treated condition. The gradual decrease in fatigue life with increasing plastic strain amplitude in the aged condition was attributed to rapid cyclic softening caused by the disappearance of the α′ precipitates.
From the fatigue crack initiation studies conducted at Δσ/2 = 400 MPa and 500 MPa, it was established that in the aged condition the crack initiation sites at Δσ/2 = 400 MPa are slip markings corresponding to {111} plane traces in the austenitic grains, while at Δσ/2 = 500 MPa more cracks were observed to initiate at Σ3 CSL boundaries in the austenitic grains. The major resistance to crack growth came from the α/γ phase boundaries.

    DEVELOPMENT OF A NEW TECHNIQUE TO IDENTIFY AND QUANTIFY COMPLEX AUSTENITE DECOMPOSITION PRODUCTS

    Polycrystalline aggregates are composed of three microstructural features: grain centers, grain boundaries, and regions affected by grain boundaries. It is these features that determine the mechanical properties, and any advanced understanding of microstructure-property relations requires their quantitative description. Traditionally, descriptions of microstructures have been based on visualization, i.e., how grains appear in the optical or scanning electron microscope (SEM). While this may lead to classification systems that permit differentiation, it does not allow for quantification, especially in complex microstructures, and does not lend itself to either developing or applying structure-property relationships. The goal of this study is to present a new approach to the characterization of complex microstructures, especially those found in advanced modern high strength steels. For such steels, the new approach exploits the fact that different austenite decomposition products, formed at different transformation temperatures, have different dislocation or sub-grain boundary densities. Hence, measuring the degree of lattice imperfection of the ferrite grain centers is one way of first identifying, then grouping, and finally quantifying the different types or forms of ferrite. The index chosen in this study to distinguish the degree of lattice imperfection is the image quality (IQ). As part of the new approach, a procedure has been developed to improve the accuracy of IQ measurements. This procedure includes three major features: IQ normalization, Grain Boundary Region (GBR) identification and the Multi-Peak model. These three features make the new approach a unique technique that describes complex microstructures quantitatively and in much more detail. The potential application of this technique and its further development are also discussed at the end of this study
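A toy stand-in for the Multi-Peak idea: normalise the IQ values, then fit a multi-peak (here two-Gaussian) model to their distribution with 1-D expectation-maximisation, so that each peak can be assigned to one austenite decomposition product. The restriction to two peaks, the initial values and the EM details are this sketch's assumptions, not the thesis's procedure.

```python
import numpy as np

def fit_two_peak_iq(iq, iters=200):
    """Fit two Gaussian peaks to normalised image-quality values via EM.

    Returns (means, widths, weights) of the two fitted peaks, in the
    normalised [0, 1] IQ scale.
    """
    x = (iq - iq.min()) / (iq.max() - iq.min())   # IQ normalisation step
    mu = np.array([0.25, 0.75])                   # initial peak positions
    sig = np.array([0.1, 0.1])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each peak for each measurement
        p = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        p /= p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, peak positions and widths
        n = p.sum(axis=0)
        w = n / n.sum()
        mu = (p * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((p * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return mu, sig, w
```

On a bimodal IQ distribution the two fitted means land on the two underlying populations, which is what allows the subsequent grouping of ferrite forms by peak.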

    Modelling the human perception of shape-from-shading

    Shading conveys information on 3-D shape, and the process of recovering this information is called shape-from-shading (SFS). This thesis divides the process of human SFS into two functional sub-units (luminance disambiguation and shape computation) and studies them individually. Based on the results of a series of psychophysical experiments, it is proposed that the interaction between first- and second-order channels plays an important role in disambiguating luminance. Building on this idea, two versions of a biologically plausible model are developed to explain the human performance observed here and elsewhere. An algorithm sharing the same idea is also developed as a solution to the problem of intrinsic image decomposition in the field of image processing. With regard to the shape computation unit, a link between luminance variations and estimated surface normals is identified by testing participants on simple gratings with several different luminance profiles. This methodology is unconventional but can be justified in the light of past studies of human SFS. Finally, a computational algorithm for SFS containing two distinct operating modes is proposed. This algorithm is broadly consistent with the known psychophysics of human SFS
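The shading-to-shape link studied above can be made concrete with the forward model that SFS inverts. Below is a minimal Lambertian renderer mapping a height map to a luminance image, assuming a distant light and orthographic view; both assumptions, and the light direction used, are illustrative choices rather than anything from the thesis.

```python
import numpy as np

def lambertian_shading(height, light=(0.0, 0.0, 1.0)):
    """Render a height map under a distant light, L = max(n . l, 0).

    The surface normal of z = h(x, y) is (-hx, -hy, 1) normalised,
    with hx, hy the partial derivatives of the height map.
    """
    gy, gx = np.gradient(height)                 # finite-difference slopes
    n = np.dstack([-gx, -gy, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, None)             # clamp back-facing points
```

A flat surface under an overhead light renders uniformly bright, and a unit-slope ramp renders at 1/√2, illustrating how luminance encodes slope relative to the light.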

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm engineering methodology, so that both may also be applied to other applications

    Development of Capacitive Imaging Technology for Measuring Skin Hydration and Other Skin Properties

    In this thesis, capacitive imaging systems are assessed for their suitability in skin research studies as multi-purpose, portable laboratory equipment. The water content of human skin, the status of the skin barrier, its permeability to solvents, and the skin texture are crucial pieces of information in the pharmaceutical and cosmetic industries for the development of skin treatment products. Normally, multiple high-end scientific instruments with expensive dedicated analysis software are employed to measure these skin properties. The aim of this work is to demonstrate how fingerprint sensors, originally designed for biometric security, can be exploited to achieve reliable skin hydration readings and to analyse multiple other skin properties while maintaining low cost and portability. To begin with, the anatomy of human skin is summarised alongside the functional properties of each skin layer. Skin hydration instruments study the outermost layer of skin and its appendages, so their thickness, biology, functions, hydration levels and water-holding capabilities are presented in the literature review in order to understand the target measurands. Since capacitive imaging probes, rather than single-sensor probes, are employed in this work, the skin texture and its importance in cosmetic science are also studied as part of the target measurand. In order to understand how this technology fits into the current skin research instrument market, well-established measurement apparatuses are presented. These include opto-thermal transient emission radiometry and confocal Raman microspectroscopy for skin hydration and solvent permeation measurements as well as depth profiling. Then, the electrical hygrometry and dynamic vapour sorption measurement principles are outlined, which focus on water diffusion and sorption measurements respectively. Since skin texture is also studied in this work, dermatoscopy is summarised as well.
A literature review of non-invasive electrical-based measurement methods is presented, alongside the stratum corneum and viable skin capacitance and conductance as functions of sampling frequency. The latter makes it possible to establish criteria for the suitability of electrical-based apparatuses in skin hydration measurements. More specifically, it is concluded that the measurement depth of the instrument should not reach the viable skin, and that the sampling frequency should be constant and below 100 kHz for capacitive measurements. The presentation of existing electrical-based skin hydration probes on the market demonstrates the current development stage of this technology, and it enables the research aim and objectives of this work to be expressed. In order to improve trust in the use of capacitive imaging technology for measuring skin hydration, beyond visualisation, established electrical-based skin hydration probes are examined and compared with a capacitive imaging sensor. The criteria for this comparison derive from the literature review, i.e. the sampling frequency and the penetration depth of the electric field. The sampling frequency is measured directly on the hardware using an oscilloscope, while the measurement depth is estimated using an electrostatic model. The development of this model for different sensor geometries is presented, and it is evaluated against other models as well as experimental results from the literature. It is concluded that low-cost instruments tend to have a high measurement depth that makes them unsuitable for stratum corneum hydration measurements. Higher-end instruments, although they use a high sampling frequency, have a safe penetration depth but low measurement sensitivity. The capacitive imaging sensor showed acceptable penetration depth, at the high end of the expected range, and good measurement sensitivity due to the miniaturisation of the technology.
A common disadvantage of most of these instruments is that the readouts are provided in arbitrary units, so experimental results cannot be compared directly with the literature when different scientific equipment has been used. To overcome this disadvantage, and based on the preceding analysis of the capacitive measurement principle, a system calibration is proposed to convert system capacitance or arbitrary units to dielectric permittivity units, a property of the sample measurand. This allows the calculation of hydration and solvent percentage concentrations within the sample, and thus direct comparison with a wider range of results reported in the literature. Furthermore, image analysis techniques are applied to the dielectric permittivity images to allow targeting and relocating skin regions of interest, as well as excluding pixels with bad sample contact that distort the results. Next, the measurement reliability of the capacitive imaging arrays is examined through in-vivo and in-vitro experiments as well as side-by-side comparative measurements with single-sensor skin hydration probes. The advantages of the developed calibration method and image analysis tools are demonstrated via the introduction of new system applications in skin research, including skin damage characterisation via occlusion, skin solvent penetration, and water desorption in hair sample experiments. It has to be mentioned that a small number of subjects was used in these experiments and the results are compared with the literature, so statistical significance is not rigorously examined. Next, advanced image processing techniques are adapted and applied to the capacitive skin images to expand the application of this technology further.
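The unit-conversion step described above can be sketched as a two-point calibration against reference materials of known permittivity, followed by a simple linear mixing rule for water content. The linear forms, function names and reference values here are illustrative assumptions, not the conversion the thesis derives from the capacitive measurement principle.

```python
import numpy as np

def calibrate(readouts, permittivities):
    """Fit a linear map from raw sensor output (arbitrary units) to
    dielectric permittivity, using reference materials of known
    permittivity (e.g. air ~ 1, water ~ 80 at low frequency)."""
    a, b = np.polyfit(readouts, permittivities, 1)
    return lambda x: a * np.asarray(x, dtype=float) + b

def water_fraction(eps, eps_dry=2.2, eps_water=80.0):
    """Approximate water volume fraction via a linear mixing rule;
    the dry-tissue permittivity value is illustrative."""
    return np.clip((eps - eps_dry) / (eps_water - eps_dry), 0.0, 1.0)
```

Converting every pixel of a capacitive image this way yields a permittivity image, on which the region-of-interest and bad-contact masking steps mentioned above can then operate in physical units.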
More specifically, the skin micro-relief aspects of interest in the cosmetic industry are summarised, and algorithmic approaches for measuring micro-relief orientation and intensity, as well as for automatic counting of skin grids, are reviewed and experimentally evaluated. The main research aim and its objectives have been achieved, with their methodologies clearly presented and their implementations evaluated with experimental results. However, vulnerabilities of this technology have also been exposed, and suggestions for further improvement are provided in the conclusions