
    DualSPHysics: from fluid dynamics to multiphysics problems

    DualSPHysics is a weakly compressible smoothed particle hydrodynamics (SPH) Navier–Stokes solver initially conceived to deal with coastal engineering problems, especially those related to wave impact on coastal structures. Since its first release in 2011, DualSPHysics has proven robust and accurate for simulating extreme wave events, while its efficiency has improved continuously thanks to the exploitation of hardware such as graphics processing units for scientific computing and the coupling with wave propagation models such as SWASH and OceanWave3D. Numerous additional functionalities have also been included in the DualSPHysics package over the last few years, allowing the simulation of fluid-driven objects. The use of the discrete element method allows the solver to simulate the interaction among different bodies (sliding rocks, for example), which provides a unique tool to analyse debris flows. In addition, the recent coupling with other solvers such as Project Chrono and MoorDyn has been a milestone in the development of the solver: Project Chrono enables the simulation of articulated structures with joints, hinges, sliders and springs, and MoorDyn enables the simulation of moored structures. Both functionalities make DualSPHysics especially well suited for the simulation of offshore energy harvesting devices. Lately, the solver has matured beyond single-phase simulations, allowing multi-phase gas–liquid simulations and combinations of Newtonian and non-Newtonian models, further expanding the capabilities and range of applications of DualSPHysics. These advances and functionalities make DualSPHysics an advanced meshless solver with an emphasis on free-surface flow modelling.
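
    In weakly compressible SPH solvers of this kind, pressure is typically recovered directly from the particle density through a stiff equation of state rather than by solving a pressure Poisson equation. As a minimal sketch of the standard Tait-type closure used in this family of methods (the exact constants and reference values adopted by DualSPHysics may differ):

        p = B\left[\left(\frac{\rho}{\rho_0}\right)^{\gamma} - 1\right],
        \qquad B = \frac{c_0^{2}\,\rho_0}{\gamma}, \qquad \gamma = 7,

    where \rho_0 is the reference density and c_0 is a numerical speed of sound, usually chosen about ten times larger than the maximum expected flow speed so that relative density fluctuations remain near one percent.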

    Advances in Grid Computing

    This book approaches grid computing from the perspective of the latest achievements in the field, providing insight into current research trends and advances, and presenting a broad range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence, genetic algorithms and quantum encryption are considered in order to address two main aspects of grid computing: resource management and data management. The book also covers aspects of grid computing concerning architecture and development, and includes a diverse range of grid computing applications, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning and complex water systems.

    MATLAB

    This book is the final part of a three-volume series on MATLAB-based applications in almost every branch of science. It consists of 19 insightful articles whose results readers will find useful in their own work. The book is organised into three parts: the first is devoted to mathematical methods in the applied sciences using MATLAB, the second to MATLAB applications of general interest, and the third to MATLAB for educational purposes. This collection of high-quality articles spans a wide range of professional fields and can be used for research as well as for various educational purposes.

    Modeling for inversion in exploration geophysics

    Seismic inversion, and more generally geophysical exploration, aims at a better understanding of the Earth's subsurface, which is one of today's most important challenges. Firstly, the subsurface contains natural resources that are critical to our technologies, such as water, minerals, oil and gas. Secondly, monitoring the subsurface in the context of CO2 sequestration, earthquake detection and global seismology is of major interest with regard to safety and environmental hazards. However, the technologies to monitor the subsurface or find resources are scientifically extremely challenging. Seismic inversion can be formulated as a mathematical optimization problem that minimizes the difference between field-recorded data and numerically modeled synthetic data. Solving this optimization problem requires modeling wave propagation numerically, thousands of times, in large three-dimensional representations of part of the Earth's subsurface. The mathematical and computational complexity of this problem therefore calls for software design that abstracts these requirements and facilitates algorithm and software development. My thesis addresses some of the challenges that arise from these problems, mainly the computational cost and access to the right software for research and development. In the first part, I discuss a performance metric that improves on the current runtime-only benchmarks in exploration geophysics. This metric, the roofline model, first provides insight, at the hardware level, into the performance of a given implementation relative to the maximum achievable performance. Second, this study demonstrates that the choice of numerical discretization has a major impact on the achievable performance depending on the hardware at hand, and shows that a framework that is flexible with respect to the discretization parameters is necessary. In the second part, I introduce and describe Devito, a symbolic finite-difference domain-specific language (DSL) that provides a high-level interface for the definition of partial differential equations (PDEs) such as the wave equation. From the symbolic definition of a PDE, Devito generates and compiles highly optimized C code on the fly to compute the solution of the PDE. The combination of high-level abstractions and a just-in-time compiler enables research in geophysical exploration and PDE-constrained optimization based on the paradigm of separation of concerns. This allows researchers to concentrate on their respective fields of study while having access to computationally performant solvers with a flexible and easy-to-use interface, enabling them to implement complex representations of the physics. The second part of the thesis is split into two sub-parts: the first describes the symbolic application programming interface (API), and the second describes and benchmarks the just-in-time compiler. I end the thesis with concluding remarks, the latest developments and a brief description of projects enabled by Devito.
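
    In its simplest form, the roofline model mentioned above bounds the attainable throughput P of a kernel by the machine's peak floating-point rate and its peak memory bandwidth \beta multiplied by the kernel's operational intensity I (floating-point operations per byte moved):

        P \le \min\left(P_{\mathrm{peak}},\; \beta \cdot I\right).

    As an illustration of the high-level interface described above, a minimal Devito-style sketch of a constant-velocity acoustic wave operator could look as follows; the grid size, velocity, and time-stepping parameters are illustrative, and the exact API may differ between Devito versions:

        # Minimal sketch of a Devito acoustic wave operator (illustrative values).
        from devito import Grid, TimeFunction, Eq, Operator, solve

        grid = Grid(shape=(201, 201), extent=(2000., 2000.))  # 2 km x 2 km domain
        u = TimeFunction(name='u', grid=grid, time_order=2, space_order=8)
        c = 1.5                                                # constant velocity

        pde = u.dt2 - c**2 * u.laplace                         # symbolic wave equation
        update = Eq(u.forward, solve(pde, u.forward))          # explicit update for the next time level

        op = Operator([update])                                # generates and JIT-compiles optimized C code
        op.apply(time=500, dt=0.5)                             # advance the wavefield 500 time steps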

    Automatic tissue characterization from optical coherence tomography images for smart laser osteotomy

    Experiments suggest that in the near future lasers will replace mechanical tools in bone surgery, or osteotomy. Laser osteotomy overcomes the shortcomings of mechanical tools, with less damage to surrounding tissue, lower risk of viral and bacterial infections, and faster wound healing. Furthermore, the current development of artificial intelligence has pushed research toward smart laser osteotomy. This thesis project aimed to advance smart laser osteotomy by introducing an image-based automatic tissue characterization, or feedback, system. Optical coherence tomography (OCT) was selected as the imaging modality because it can provide a high-resolution subsurface image slice over the laser ablation site. Experiments were conducted and published to show the feasibility of the feedback system. In the first part of this thesis project, a deep-learning-based OCT image denoising method was demonstrated; it yielded a faster processing time than classical denoising methods while maintaining image quality comparable to a frame-averaged image. In the next part, it was necessary to find the best deep-learning model for tissue type identification in the absence of laser ablation. The results showed that a DenseNet model is sufficient for detecting tissue types from OCT image patches: the model could differentiate five tissue types (bone, bone marrow, fat, muscle, and skin) with an accuracy of 94.85%. The last part of this thesis project presents the result of applying deep-learning-based OCT-guided laser osteotomy in real time. The first trial experiment took place at the time of writing of this thesis. The feedback system was evaluated on its ability to stop bone cutting when bone marrow was detected. The results show that the deep-learning-based setup successfully stopped the ablation laser when bone marrow was detected, with an average maximum depth of bone marrow perforation of only 216 μm. This thesis project provides a basic framework for OCT-based smart laser osteotomy and shows that deep learning is a robust approach for achieving real-time OCT-guided laser osteotomy. Nevertheless, future research directions, such as combining depth control with the tissue classification setup and optimizing the ablation strategy, would make the use of OCT in laser osteotomy even more practical.
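
    To make the tissue-identification stage concrete, a patch-based classifier of the kind described above could be assembled along the following lines. This is a hypothetical sketch using the DenseNet backbone from torchvision rather than the thesis's actual training code; the class list, single-channel input, and patch handling are assumptions:

        # Hypothetical sketch of a DenseNet-based OCT patch classifier (not the thesis code).
        import torch
        import torch.nn as nn
        from torchvision import models

        TISSUE_CLASSES = ["bone", "bone_marrow", "fat", "muscle", "skin"]  # five types named in the abstract

        model = models.densenet121(weights=None)                  # DenseNet backbone, trained from scratch
        model.features.conv0 = nn.Conv2d(1, 64, kernel_size=7,    # accept single-channel OCT patches
                                         stride=2, padding=3, bias=False)
        model.classifier = nn.Linear(model.classifier.in_features, len(TISSUE_CLASSES))

        def classify_patch(patch: torch.Tensor) -> str:
            """Return the predicted tissue type for a (1, H, W) OCT image patch."""
            model.eval()
            with torch.no_grad():
                logits = model(patch.unsqueeze(0))                # add a batch dimension
            return TISSUE_CLASSES[int(logits.argmax(dim=1))]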

    Software for Exascale Computing - SPPEXA 2016-2019

    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer's series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA's first funding phase, and provides an overview of SPPEXA's contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    Parallel and Distributed Computing

    The 14 chapters presented in this book cover a wide variety of representative works ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (graphics processing units), network topologies, cache coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers with particular interests in any of these topics in parallel and distributed computing.

    The dynamics of circumbinary discs and embedded planets

    Since the first detection of a circumbinary planet with the Kepler space telescope in 2011, nine more circumbinary planets have been discovered. All these circumbinary Kepler systems have two things in common. First, they are very flat, meaning that the orbits of the binary and the planet lie in one plane, suggesting that the planets formed in an accretion disc surrounding both binary components. Second, the orbits of the planets are very close to the calculated stability limit. Two scenarios can explain the formation of these planets: in situ formation at the observed location, or formation further out in the disc followed by radial migration to the currently observed position. Simulations have shown that in situ formation is unlikely, owing to destructive collisions of planetesimals on orbits close to the binary. The second scenario leads to the question of how the migrating planets can be stopped at the observed location. In this thesis, the second scenario is examined through numerical simulations. The gravitational interaction between the binary and the disc leads to the formation of an eccentric inner gap, which precesses slowly in a prograde manner around the binary. This inner gap constitutes a barrier for a migrating planet. The first part of this thesis therefore investigates how binary parameters (eccentricity, mass ratio) as well as disc parameters (pressure, viscosity) influence the size, eccentricity, and precession period of the gap. The binary eccentricity (ebin) is identified as an important parameter: in the precession-period versus gap-size diagram, a bifurcation occurs as ebin is varied. Increasing the binary eccentricity from zero, the precession period and gap size decrease until a critical eccentricity of ebin = 0.18 is reached; from this point onward, both increase again with increasing binary eccentricity. The binary mass ratio changes only the precession period, which decreases with increasing mass ratio, while the gap size remains constant. For increasing viscosity and pressure in the disc, the expected behaviour is observed: the precession period and gap size decrease. The second part of this thesis investigates the migration of planets in circumbinary discs. For five Kepler systems (Kepler-16, -34, -35, -38, and -413), the dependence of the final orbital parameters on the planet-to-disc mass ratio is examined, as well as the change in the disc structure due to the presence of the planet. As expected, the planets migrate in all cases to the edge of the gap. Depending on the planet mass, two migration scenarios are observed: massive planets dominate the disc by shrinking and circularising the inner gap, whereas light planets are influenced by the disc, aligning their orbits with the precessing disc while their eccentricity is excited. In general, the final simulated orbital parameters are larger than the observed ones. Circumbinary planets around systems that create large, eccentric gaps (Kepler-34 and -413) also have the highest simulated eccentricities, in agreement with the observations.