52 research outputs found

    Genetic algorithms and their use in geophysical problems

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low mutation rate (about half the inverse of the population size) is crucial for optimal results, but the choice of crossover method and rate does not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection method due to its simplicity and its autoscaling properties. However, if a proportional selection method such as roulette wheel selection is used, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
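
    As a concrete illustration of the standard GA loop described above (tournament selection, crossover, and a per-gene mutation rate of roughly half the inverse of the population size), the following is a minimal sketch in Python. The fitness function, parameter bounds, and default settings are illustrative placeholders and are not taken from the geophysical case studies.

        import random

        def evolve(fitness, n_params, pop_size=40, n_gens=200,
                   crossover_rate=0.7, bounds=(0.0, 1.0)):
            """Minimal real-coded GA: tournament selection, one-point crossover,
            and per-gene mutation at a rate of 0.5 / pop_size, as suggested above."""
            lo, hi = bounds
            mutation_rate = 0.5 / pop_size
            pop = [[random.uniform(lo, hi) for _ in range(n_params)]
                   for _ in range(pop_size)]
            for _ in range(n_gens):
                scores = [fitness(ind) for ind in pop]

                def tournament():
                    i, j = random.randrange(pop_size), random.randrange(pop_size)
                    return pop[i] if scores[i] >= scores[j] else pop[j]

                new_pop = []
                while len(new_pop) < pop_size:
                    a, b = tournament()[:], tournament()[:]
                    if random.random() < crossover_rate:      # one-point crossover
                        cut = random.randrange(1, n_params)
                        a[cut:], b[cut:] = b[cut:], a[cut:]
                    for child in (a, b):                      # per-gene mutation
                        for k in range(n_params):
                            if random.random() < mutation_rate:
                                child[k] = random.uniform(lo, hi)
                        new_pop.append(child)
                pop = new_pop[:pop_size]
            return max(pop, key=fitness)

        # Hypothetical usage: maximize a toy fitness (a negated misfit) over 8 parameters.
        best = evolve(lambda m: -sum((x - 0.3) ** 2 for x in m), n_params=8)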

    Full Wave 2D Modeling of Scattering and Inverse Scattering for Layered Rough Surfaces with Buried Objects.

    Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from the detection of landmines to remote sensing of subsurface soil moisture. In this dissertation, the formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first developed. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade the reflection and transmission matrices of each rough interface and obtain the composite reflection matrix of the overall scattering medium. The method is validated against the Method of Moments (MoM) and the Small Perturbation Method (SPM), and numerical results that investigate the potential of low-frequency radar systems for estimating deep soil moisture are presented. The computational efficiency of the proposed method is also addressed. To demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results showing the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique that incorporates a buried object in layered rough surfaces is proposed by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation. Validation and numerical results are provided. Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF band radar measurements is developed. The top-soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time-delay echoes from VHF/UHF band radar observations. Numerical studies investigating the accuracy of the proposed inversion technique in the presence of errors are shown. Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58459/1/kuoch_1.pd
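
    The recursive cascading step of the SMM can be sketched compactly: the composite reflection of two interfaces is the reflection of the first plus all multiple bounces between the two, summed in closed form with a matrix inverse. The function below is a minimal numpy sketch under assumed matrix names for a single two-interface cascade; it is not the dissertation's implementation and omits the transmission update and the propagation phase factors between interfaces.

        import numpy as np

        def cascade_reflection(R1, T1, R1p, T1p, R2):
            """Composite reflection matrix of two cascaded interfaces.

            R1, T1   : reflection/transmission of interface 1 for downgoing waves
            R1p, T1p : reflection/transmission of interface 1 for upgoing waves
            R2       : reflection of interface 2 for downgoing waves
            The geometric series over repeated bounces between the interfaces is
            summed in closed form via (I - R1p R2)^-1.
            """
            n = R1.shape[0]
            bounce = np.linalg.inv(np.eye(n) - R1p @ R2)
            return R1 + T1p @ R2 @ bounce @ T1

        # Hypothetical usage with small random (non-physical) 4x4 matrices.
        rng = np.random.default_rng(0)
        R1, T1, R1p, T1p, R2 = (0.1 * rng.standard_normal((4, 4)) for _ in range(5))
        R_composite = cascade_reflection(R1, T1, R1p, T1p, R2)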

    Case Studies on Optimizing Algorithms for GPU Architectures

    Modern GPUs are complex, massively multi-threaded, and high-performance. Programmers naturally gravitate towards taking advantage of this high performance for achieving faster results. However, in order to do so successfully, programmers must first understand and then master a new set of skills – writing parallel code, using different types of parallelism, adapting to GPU architectural features, and understanding issues that limit performance. In order to ease this learning process and help GPU programmers become productive more quickly, this dissertation introduces three data access skeletons (DASks) – Block, Column, and Row – and two block access skeletons (BASks) – Block-by-Block and Warp-by-Warp. Each “skeleton” provides a high-performance implementation framework that partitions data arrays into data blocks and then iterates over those blocks. The programmer must still write “body” methods on individual data blocks to solve their specific problem. These skeletons provide efficient machine-dependent data access patterns for use on GPUs. DASks group n data elements into m fixed-size data blocks. These m data blocks are then partitioned across p thread blocks using a 1D or 2D layout pattern. The fixed-size data blocks are parameterized using three C++ template parameters – nWork, WarpSize, and nWarps. Generic programming techniques use these three parameters to enable performance experiments on three different types of parallelism – instruction-level parallelism (ILP), data-level parallelism (DLP), and thread-level parallelism (TLP). These different DASks and BASks are introduced using a simple memory I/O (Copy) case study. A nearest neighbor search case study motivated the development of DASks and BASks but does not itself use these skeletons. Three additional case studies – Reduce/Scan, Histogram, and Radix Sort – demonstrate DASks and BASks in action on parallel primitives and also provide further valuable performance lessons. Doctor of Philosophy
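
    The skeletons themselves are C++/CUDA templates; the Python sketch below only illustrates the index arithmetic implied by the description above, under assumed names and defaults: n elements are grouped into fixed-size data blocks of nWarps * WarpSize * nWork elements, and the resulting m blocks are dealt out to p thread blocks in a strided 1D Block-by-Block layout. The round-robin assignment is an assumption made for illustration, not necessarily the dissertation's exact mapping.

        def block_by_block_indices(n, n_work=4, warp_size=32, n_warps=4, p_thread_blocks=8):
            """Illustrative index arithmetic for a Block-by-Block data access pattern.

            Each fixed-size data block holds n_warps * warp_size * n_work elements;
            the m data blocks are assigned to p thread blocks in a 1D layout, each
            thread block striding over every p-th data block.
            """
            block_elems = n_warps * warp_size * n_work
            m = (n + block_elems - 1) // block_elems        # number of data blocks
            layout = {}
            for tb in range(p_thread_blocks):
                spans = []
                for blk in range(tb, m, p_thread_blocks):   # block-by-block iteration
                    start = blk * block_elems
                    spans.append((start, min(start + block_elems, n)))
                layout[tb] = spans
            return layout

        # Hypothetical usage: element ranges that thread block 0 would iterate over.
        print(block_by_block_indices(10_000)[0])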

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges. The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.

    Surface global irradiance assessed by three different methods

    Three methods are used to evaluate the surface global irradiance: radiative transfer theory, empirical regression, and artificial neural networks (ANN). Radiative transfer is the fundamental theory that describes the propagation of radiation through a medium; empirical regression predicts surface global irradiance with simple parameterizations; artificial neural networks, as an artificial intelligence technique, can also be applied to assess the surface global irradiance. These three approaches are studied in the present work. Data from the station “EL IDEAM” were used in the modeling experiments built upon these approaches to evaluate the daily transparency, also known as the clearness index. We found that the optimal inputs for the artificial neural network are extraterrestrial irradiance, surface relative humidity, and a pollution index based on particulate matter smaller than 10 µm (PM10). Surface relative humidity was suggested by a regression trial under the meteorological conditions of “EL IDEAM”. By means of the DISORT code for the solution of the radiative transfer equation, daily irradiance characteristics were analyzed and a hybrid model was created. Our results showed that the artificial neural network produces higher scores than the other methods; the advantages and drawbacks of each approach are also discussed and compared. Maestría
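
    A minimal sketch of the ANN setup with the optimal inputs named above (extraterrestrial irradiance, surface relative humidity, and PM10) predicting the daily clearness index is shown below. The data are synthetic placeholders and the network size is an arbitrary assumption; this only illustrates the approach, not the study's actual model or results.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Placeholder arrays standing in for one year of daily station records:
        # columns = [extraterrestrial irradiance, surface relative humidity, PM10].
        rng = np.random.default_rng(42)
        X = rng.uniform([20.0, 0.3, 5.0], [45.0, 1.0, 120.0], size=(365, 3))
        k_t = rng.uniform(0.2, 0.8, size=365)     # daily clearness index (target)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 8),
                                           max_iter=2000, random_state=0))
        model.fit(X[:300], k_t[:300])             # train on most of the year
        print("held-out R^2:", model.score(X[300:], k_t[300:]))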

    Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble, using a possible approach to meeting NASF goals, is presented. The computer hardware and software are described, along with the complete design and a performance analysis and evaluation.

    Proteome comparison of Helicobacter pylori isolates associated with four disease groups

    The Gram-negative bacterium Helicobacter pylori is found in human gastric mucosa. H. pylori, one of the most common chronic bacterial infections of humans, is present in almost half of the world population. It is associated with chronic gastritis, non-ulcer dyspepsia, gastric and duodenal ulcers, and malignant neoplasms. The aim of this study was to detect microbial candidate protein markers whose presence might be correlated with the development of four different clinical consequences of H. pylori infection: gastric ulceration [GU], duodenal ulceration [DU], non-ulcer dyspepsia [NUD] and gastritis [GI]. Eleven H. pylori isolates associated with these outcomes were analysed. The total complement of proteins from these H. pylori isolates was resolved by two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) and compared using PDQUEST pattern analysis software. Relationships between the isolates associated with specific disease outcomes were determined by cluster analysis. Fifty-six disease-specific proteins were then characterised by tryptic peptide-mass fingerprinting using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Up to 1165 protein species were resolved from each H. pylori strain. Proteome analysis revealed that only 470 (40%) of the proteins detected were common to all eleven isolates. Twenty-six of the 56 disease-specific proteins selected for identification were spots whose expression is altered in response to stress conditions or that can affect H. pylori cell division and the cell membrane. The remaining 30 proteins had no known function. This study has provided further confirmation of the extensive variation that the bacterium H. pylori exhibits at the proteome level. Most significantly, this study has found, through the application of cluster analysis and protein matching, that isolates do form disease groups. Comparative proteome analysis is a useful method for highlighting the extensive strain variation that H. pylori exhibits and for determining whether any disease-specific proteins exist.
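
    The cluster analysis step can be illustrated with a small, hypothetical sketch: each isolate is represented by a binary vector recording which protein spots it expresses, and isolates are grouped by hierarchical clustering of these profiles. The toy data, the Jaccard distance, and average linkage are illustrative assumptions; the study itself matched spots with PDQUEST.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        # Toy presence/absence matrix: rows = 11 isolates, columns = protein spots.
        rng = np.random.default_rng(1)
        spots = rng.integers(0, 2, size=(11, 1165)).astype(bool)

        # Jaccard distance between spot profiles, then average-linkage clustering.
        dist = pdist(spots, metric="jaccard")
        tree = linkage(dist, method="average")

        # Cut the dendrogram into four groups (candidate GU/DU/NUD/GI clusters).
        groups = fcluster(tree, t=4, criterion="maxclust")
        print(groups)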

    An Unsolicited Soliloquy on Dependency Parsing

    Programa Oficial de Doutoramento en Computación. 5009V01
    [Abstract] This thesis presents work on dependency parsing covering two distinct lines of research. The first aims to develop efficient parsers so that they can be fast enough to parse large amounts of data while still maintaining decent accuracy. We investigate two techniques to achieve this. The first is a cognitively-inspired method and the second uses a model distillation method. The first technique proved to be utterly dismal, while the second was somewhat of a success. The second line of research presented in this thesis evaluates parsers. This is also done in two ways. We aim to evaluate what causes variation in parsing performance for different algorithms and also different treebanks. This evaluation is grounded in dependency displacements (the directed distance between a dependent and its head) and the subsequent distributions associated with algorithms and the distributions found in treebanks. This work sheds some light on the variation in performance for both different algorithms and different treebanks. The second part of this area focuses on the utility of part-of-speech tags when used with parsing systems and questions the standard position of assuming that they might help but they certainly won’t hurt.
    This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150) and from the Centro de Investigación de Galicia (CITIC), which is funded by the Xunta de Galicia and the European Union (ERDF – Galicia 2014-2020 Program) through grant ED431G 2019/01.
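
    As a small illustration of the dependency displacement measure used in the evaluation work (the directed distance between a dependent and its head), the Python sketch below computes a displacement distribution from sentences given as lists of 1-based head indices with 0 marking the root, as in CoNLL-U. The toy treebank is a made-up placeholder.

        from collections import Counter

        def displacement_distribution(sentences):
            """Count directed dependent-to-head distances over a treebank.

            Each sentence is a list of 1-based head indices, one per token, with 0
            marking the root (root arcs are skipped). Displacement is head index
            minus dependent index: negative for leftward arcs, positive for
            rightward arcs.
            """
            counts = Counter()
            for heads in sentences:
                for dep_pos, head in enumerate(heads, start=1):
                    if head == 0:              # skip the artificial root arc
                        continue
                    counts[head - dep_pos] += 1
            return counts

        # Hypothetical two-sentence treebank encoded only as head indices.
        toy = [[2, 0, 2, 3], [0, 1, 2]]
        print(sorted(displacement_distribution(toy).items()))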

    Simulating cosmic structure formation with the GADGET-4 code

    Numerical methods have become a powerful tool for research in astrophysics, but their utility depends critically on the availability of suitable simulation codes. This calls for continuous efforts in code development, which is necessitated also by the rapidly evolving technology underlying today's computing hardware. Here we discuss recent methodological progress in the GADGET code, which has been widely applied in cosmic structure formation over the past two decades. The new version offers improvements in force accuracy, in time-stepping, in adaptivity to a large dynamic range in timescales, in computational efficiency, and in parallel scalability through a special MPI/shared-memory parallelization and communication strategy, and a more sophisticated domain decomposition algorithm. A manifestly momentum-conserving fast multipole method (FMM) can be employed as an alternative to the one-sided TreePM gravity solver introduced in earlier versions. Two different flavours of smoothed particle hydrodynamics, a classic entropy-conserving formulation and a pressure-based approach, are supported for dealing with gaseous flows. The code is able to cope with very large problem sizes, thus allowing accurate predictions for cosmic structure formation in support of future precision tests of cosmology, and at the same time is well adapted to high dynamic range zoom calculations with extreme variability of the particle number density in the simulated volume. The GADGET-4 code is publicly released to the community and contains infrastructure for on-the-fly group and substructure finding and tracking, as well as merger tree building, a simple model for radiative cooling and star formation, a high dynamic range power spectrum estimator, and an initial conditions generator based on second-order Lagrangian perturbation theory.
    Comment: 82 pages, 65 figures, accepted by MNRAS; for the code see https://wwwmpa.mpa-garching.mpg.de/gadget
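
    As a toy illustration of the momentum-conservation point above: in a direct-summation scheme where each pair force is applied equal and opposite, as in the Python sketch below, the total momentum change sums to zero by construction, which is the property a manifestly momentum-conserving FMM retains and a one-sided tree walk does not guarantee. The sketch is purely illustrative and is unrelated to the actual GADGET-4 implementation.

        import numpy as np

        def pairwise_accelerations(pos, mass, G=1.0, softening=0.05):
            """Direct-summation gravity with antisymmetric pair forces.

            The force on j from i is applied as exactly minus the force on i
            from j, so the mass-weighted accelerations sum to zero."""
            acc = np.zeros_like(pos)
            n = len(mass)
            for i in range(n):
                for j in range(i + 1, n):
                    d = pos[j] - pos[i]
                    f = G * d / (d @ d + softening**2) ** 1.5
                    acc[i] += mass[j] * f
                    acc[j] -= mass[i] * f       # equal and opposite
            return acc

        rng = np.random.default_rng(0)
        pos = rng.standard_normal((64, 3))
        mass = np.full(64, 1.0 / 64)
        a = pairwise_accelerations(pos, mass)
        print("max net force component:", np.abs((mass[:, None] * a).sum(axis=0)).max())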

    On Martian Surface Exploration: Development of Automated 3D Reconstruction and Super-Resolution Restoration Techniques for Mars Orbital Images

    Very high spatial resolution imaging and topographic (3D) data play an important role in modern Mars science research and engineering applications. This work describes a set of image processing and machine learning methods to produce the “best possible” high-resolution and high-quality 3D and imaging products from existing Mars orbital imaging datasets. The research work is described in nine chapters, of which seven are based on separate published journal papers. These include a) a hybrid photogrammetric processing chain that combines the advantages of different stereo matching algorithms to compute stereo disparity with optimal completeness, fine-scale details, and minimised matching artefacts; b) image and 3D co-registration methods that correct a target image and/or 3D data to a reference image and/or 3D data to achieve robust cross-instrument multi-resolution 3D and image co-alignment; c) a deep learning network and processing chain to estimate pixel-scale surface topography from single-view imagery that outperforms traditional photogrammetric methods in terms of product quality and processing speed; d) a deep learning-based single-image super-resolution restoration (SRR) method to enhance the quality and effective resolution of Mars orbital imagery; e) a subpixel-scale 3D processing system using a combination of photogrammetric 3D reconstruction, SRR, and photoclinometric 3D refinement; and f) an optimised subpixel-scale 3D processing system using coupled deep learning based single-view SRR and deep learning based 3D estimation to derive the best possible (in terms of visual quality, effective resolution, and accuracy) 3D products out of present-epoch Mars orbital images. The resultant 3D imaging products from the new developments listed above are qualitatively and quantitatively evaluated in comparison with products from the official NASA Planetary Data System (PDS) and/or ESA Planetary Science Archive (PSA) releases, and/or with products generated with different open-source systems. Examples of the scientific application of these novel 3D imaging products are discussed.
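
    One ingredient of the photogrammetric chain in a) can be illustrated with a deliberately simple sketch: brute-force block matching that, for each candidate disparity, computes a windowed sum of absolute differences (SAD) and keeps the lowest-cost shift. Real stereo pipelines of the kind described above are far more sophisticated; the function, window size, and synthetic image pair below are illustrative assumptions only.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sad_disparity(left, right, max_disp=32, win=7):
            """Brute-force stereo matching: for every candidate disparity d, box-filter
            the absolute differences to get windowed SAD costs and keep the best d."""
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            best_cost = np.full(left.shape, np.inf, dtype=np.float32)
            disparity = np.zeros(left.shape, dtype=np.int32)
            for d in range(max_disp):
                shifted = np.roll(right, d, axis=1)          # candidate horizontal shift
                cost = uniform_filter(np.abs(left - shifted), size=win)
                better = cost < best_cost
                best_cost[better] = cost[better]
                disparity[better] = d
            return disparity

        # Hypothetical usage: the right image is the left image shifted by 5 pixels.
        rng = np.random.default_rng(0)
        left = rng.random((128, 128))
        right = np.roll(left, -5, axis=1)
        print("median disparity:", np.median(sad_disparity(left, right, max_disp=16)))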