692 research outputs found

    Unsteady wake modelling for tidal current turbines

    Get PDF
    The authors present a numerical model for three-dimensional unsteady wake calculations for tidal turbines. Since wakes are characterised by the shedding of a vortex sheet from the rotor blades, the model is based on the vorticity transport equations. A vortex sheet may be considered a contact discontinuity in tangential velocity with, in inviscid hydrodynamic terms, certain kinematic and dynamic conditions across the sheet. The kinematic condition is that the sheet is a stream surface with zero normal fluid velocity; the dynamic condition is that the pressure is equal on either side of the sheet. The dynamic condition is explicitly satisfied at the trailing edge only, via an approximation of the Kutta condition. The trailed vorticity is the span-wise derivative of bound circulation and the shed vorticity is the time derivative of bound circulation; both are convected downstream from the rotors using a finite-volume solution of the vorticity transport equations, thus satisfying the kinematic condition. Owing to an absence in the literature of pressure data for marine turbines, results from the code are presented for the NREL-UAE Phase IV turbine. Axial flow cases show a close match in pressure coefficients at various spanwise stations; however, yawed flow cases demonstrate the shortcomings of a modelling strategy lacking viscosity.
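The decomposition of wake vorticity into trailed and shed components can be sketched with simple finite differences on a table of bound circulation Γ(r, t); the circulation samples and step sizes below are illustrative numbers, not data from the paper.

```python
# Sketch: trailed vorticity ~ spanwise derivative of bound circulation,
# shed vorticity ~ time derivative, via simple finite differences.
# All circulation values here are made-up illustrative numbers.

def trailed_vorticity(gamma_span, dr):
    """Spanwise derivative dGamma/dr (forward differences)."""
    return [(gamma_span[i + 1] - gamma_span[i]) / dr
            for i in range(len(gamma_span) - 1)]

def shed_vorticity(gamma_old, gamma_new, dt):
    """Time derivative dGamma/dt at each spanwise station."""
    return [(g1 - g0) / dt for g0, g1 in zip(gamma_old, gamma_new)]

# Bound circulation at two time steps, five spanwise stations (m^2/s):
gamma_t0 = [0.0, 1.0, 1.5, 1.2, 0.0]
gamma_t1 = [0.0, 1.1, 1.7, 1.3, 0.0]

trailed = trailed_vorticity(gamma_t1, dr=0.5)
shed = shed_vorticity(gamma_t0, gamma_t1, dt=0.01)
```

In the paper's scheme these two sources feed the finite-volume vorticity transport solve; here they are only evaluated pointwise.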

    Accelerating 3D Non-Rigid Registration using Graphics Hardware

    Get PDF
    There is an increasing need for real-time implementation of 3D image analysis processes, especially in the context of image-guided surgery. Among the various image analysis tasks, non-rigid image registration is particularly needed and is also computationally prohibitive. This paper presents a GPU (Graphics Processing Unit) implementation of the popular Demons algorithm using recursive Gaussian filtering. Acceleration of the classical method is mainly achieved by a new filtering scheme on the GPU, which could be reused in or extended to other applications and represents a significant contribution to the GPU-based image processing domain. This implementation was able to perform a non-rigid registration of 3D MR volumes in less than one minute, which corresponds to an acceleration factor of 10 compared to the corresponding CPU implementation. This demonstrates the usefulness of such a method in an intra-operative context.
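The core of Thirion's Demons algorithm is a per-voxel displacement update driven by the intensity difference and the fixed-image gradient, after which the displacement field is smoothed with a (recursive) Gaussian filter. A minimal 1D sketch of the force computation, on illustrative data rather than real MR volumes:

```python
# One Demons force evaluation in 1D: Thirion's update
#   u = (m - f) * grad_f / (grad_f^2 + (m - f)^2)
# per voxel; in the full algorithm the resulting field is then
# smoothed with a Gaussian filter. Signals below are illustrative.

def demons_force(fixed, moving):
    forces = [0.0] * len(fixed)
    for i in range(1, len(fixed) - 1):
        grad_f = (fixed[i + 1] - fixed[i - 1]) / 2.0  # central difference
        diff = moving[i] - fixed[i]
        denom = grad_f * grad_f + diff * diff
        forces[i] = diff * grad_f / denom if denom > 1e-12 else 0.0
    return forces

fixed  = [0.0, 0.0, 1.0, 2.0, 2.0, 2.0]
moving = [0.0, 0.0, 0.5, 1.5, 2.0, 2.0]
u = demons_force(fixed, moving)
```

The GPU speed-up in the paper comes from the smoothing step, not this force term; the recursive filter is what the authors accelerate.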

    Shape-based invariant features extraction for object recognition

    No full text
    The emergence of new technologies enables generating large quantities of digital information, including images; this leads to an increasing number of generated digital images. There is therefore a need for automatic image retrieval systems. These systems consist of techniques used for query specification and retrieval of images from an image collection. The most frequent and most common means of image retrieval is indexing using textual keywords. But for some specialised application domains, and faced with the huge quantity of images, keywords are no longer sufficient or practical. Moreover, images are rich in content; in order to overcome these difficulties, some approaches have been proposed based on visual features derived directly from the content of the image: these are the content-based image retrieval (CBIR) approaches. They allow users to search for a desired image by specifying image queries: a query can be an example, a sketch or visual features (e.g., colour, texture and shape). Once the features have been defined and extracted, retrieval becomes a task of measuring similarity between image features. An important property of these features is to be invariant under the various deformations that the observed image could undergo. In this chapter, we present a number of existing methods for CBIR applications. We also describe some measures that are usually used for similarity measurement. At the end, as an application example, we present a specific approach that we are developing, illustrating the topic with experimental results.
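One classical family of shape-based invariant features is Hu's moment invariants; the first of them, φ1 = η20 + η02, is invariant to translation and rotation (and to scale in the continuous limit). A minimal sketch on tiny binary images, which are illustrative and not taken from the chapter:

```python
# Sketch: Hu's first moment invariant phi1 = eta20 + eta02 on a binary
# image, built from raw moments -> centroid -> central moments ->
# scale-normalised moments. The tiny images below are illustrative.

def hu_phi1(image):
    # Raw moments and centroid.
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00
    # Second-order central moments (translation invariant).
    mu20 = mu02 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
    # Normalisation eta_pq = mu_pq / m00**((p+q)/2 + 1); exponent 2 here.
    return (mu20 + mu02) / m00 ** 2

square = [[1, 1],
          [1, 1]]
shifted = [[0, 0, 0],
           [0, 1, 1],
           [0, 1, 1]]  # the same square, translated
```

Translation leaves φ1 exactly unchanged on a pixel grid; scale invariance only holds approximately for discrete images because resampling perturbs the moments.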

    Development of a Nanoelectronic 3-D (NEMO 3-D) Simulator for Multimillion Atom Simulations and Its Application to Alloyed Quantum Dots

    Get PDF
    Material layers with a thickness of a few nanometers are commonplace in today’s semiconductor devices. Before long, device fabrication methods will reach a point at which the other two device dimensions are scaled down to a few tens of nanometers. The total atom count in such deca-nano devices is reduced to a few million. Only a small finite number of “free” electrons will operate such nano-scale devices due to quantized electron energies and electron charge. This work demonstrates that the simulation of electronic structure and electron transport on these length scales must not only be fundamentally quantum mechanical, but must also include the atomic granularity of the device. Various elements of the theoretical, numerical, and software foundation of the prototype development of a Nanoelectronic Modeling tool (NEMO 3-D), which enables this class of device simulation on Beowulf cluster computers, are presented. The electronic system is represented in a sparse complex Hamiltonian matrix of the order of hundreds of millions. A custom parallel matrix-vector multiply algorithm that is coupled to a Lanczos and/or Rayleigh-Ritz eigenvalue solver has been developed. Benchmarks of the parallel electronic structure and the parallel strain calculation performed on various Beowulf cluster computers and an SGI Origin 2000 are presented. The Beowulf cluster benchmarks show that the competition for memory access on dual-CPU PC boards effectively negates the utility of one of the CPUs if the memory usage per node is about 1-2 GB. A new strain treatment for the sp3s∗ and sp3d5s∗ tight-binding models is developed and parameterized for bulk material properties of GaAs and InAs. The utility of the new tool is demonstrated by an atomistic analysis of the effects of disorder in alloys. In particular, bulk InxGa1−xAs and In0.6Ga0.4As quantum dots are examined.
    The quantum dot simulations show that the random atom configurations in the alloy, without any size or shape variations, can lead to optical transition energy variations of several meV. The electron and hole wave functions show significant spatial variations due to spatial disorder, indicating variations in electron and hole localization.
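The numerical core the abstract describes, a sparse Hamiltonian matrix-vector product feeding an iterative eigensolver, can be sketched at toy scale. Here power iteration stands in for the Lanczos/Rayleigh-Ritz solver, and the 3x3 matrix is an illustrative symmetric matrix, not a real tight-binding Hamiltonian:

```python
import math

# Sparse symmetric matrix stored as row -> {col: value}; eigenvalues
# of this toy matrix are 1, 2 and 4. Illustrative, not a Hamiltonian.
H = {
    0: {0: 2.0, 1: 1.0},
    1: {0: 1.0, 1: 3.0, 2: 1.0},
    2: {1: 1.0, 2: 2.0},
}

def matvec(H, v):
    """Sparse matrix-vector product: the kernel that dominates
    large-scale electronic-structure runs."""
    return [sum(val * v[j] for j, val in row.items())
            for row in H.values()]

def dominant_eigenvalue(H, n, iters=200):
    """Power iteration: a simple stand-in for the Lanczos /
    Rayleigh-Ritz solvers used in NEMO 3-D."""
    v = [1.0] * n
    for _ in range(iters):
        w = matvec(H, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = matvec(H, v)
    return sum(vi * wi for vi, wi in zip(v, w))  # Rayleigh quotient
```

At NEMO 3-D scale the same mat-vec is distributed across cluster nodes, which is why the memory-bandwidth contention noted in the benchmarks matters so much.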

    Large scale simulations of swirling and particle-laden flows using the Lattice-Boltzmann Method

    Get PDF
    Since the development of high performance computers, numerical simulations have evolved into an important scientific tool, using mathematical modeling to address physical problems that are difficult to handle experimentally. Predicting the behavior of physical systems which are not directly observable helps to design and optimize new technology. Computational fluid dynamics specifically aims to understand natural flow phenomena as well as to design and operate engineering processes in industry. With the continuous increase in computational power every year, the question of how to use computational resources efficiently becomes ever more important. Improving existing practices involves a better understanding of the underlying physical mechanisms as well as optimizing the algorithms used to solve them with robust and rapid numerical methods. The Lattice-Boltzmann method (LBM) is a mesoscopic approach to approximating the macroscopic mass and momentum balance equations of a fluid flow. The objective of this study is to apply this concept to large-scale problems and present its capabilities in terms of physical modelling and computing efficiency. As a validation step, computational models are tested against reference theoretical, numerical and experimental evidence over a wide range of hydrodynamic conditions, from creeping to turbulent flows and granular media. Turbulent flows are multi-scale flows that require fine meshes and long simulation times to converge statistics. Special care is taken to verify the fluid-solid interface for dispersed two-phase flows. Two main setups are examined: the non-reacting, swirling flow inside an injector and a particle-laden flow around a cylinder. Swirling flows are typical of aeronautical combustion chambers. The selected configuration is used to benchmark three different large eddy simulation solvers regarding their accuracy and computational efficiency.
    The obtained numerical results are compared to experimental results in terms of mean and fluctuating velocity profiles and pressure drop. The scaling, that is, the code performance over a large range of processor counts, is characterized. Differences between several algorithmic approaches and different solvers are evaluated and discussed. Next, we focus on particle-laden flows around a cylinder as a generic configuration for the interaction of a dispersed phase with flow hydrodynamic instabilities. It has been shown that the viscosity of a suspension increases with the particle volume fraction, and for a certain range of particle materials and concentrations this is a fairly good model of interphase coupling. This phenomenon only occurs in numerical simulations that are able to describe finite-size effects for rigid bodies. Comparing global flow parameters of suspensions at different particle volume fractions and sizes has shown that these flow features can be obtained for an equivalent single-phase fluid with an effective viscosity. Starting from neutrally buoyant particles, the transition to granular flow is investigated. By increasing the relative density of the particles, the influence of particle inertia on the equivalent-fluid prediction is investigated, and the contribution of particle collisions to the drag coefficient for varying relative densities is discussed. Conclusions are drawn regarding the code performance and the physical representativeness of the results.
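The effective-viscosity idea, a suspension behaving like a single-phase fluid with a higher viscosity, is commonly expressed through closures such as Einstein's dilute-limit relation μ_eff = μ(1 + 2.5 φ). The thesis does not state which closure it uses, so the sketch below is a generic illustration with made-up fluid parameters:

```python
def einstein_effective_viscosity(mu, phi):
    """Einstein's dilute-suspension closure: mu_eff = mu * (1 + 2.5 * phi).
    Valid only for small particle volume fractions phi (roughly phi < 0.05).
    mu is the carrier-fluid dynamic viscosity in Pa.s."""
    return mu * (1.0 + 2.5 * phi)

# Water-like carrier fluid at a few illustrative volume fractions.
mu = 1.0e-3  # Pa.s
effective = {phi: einstein_effective_viscosity(mu, phi)
             for phi in (0.01, 0.02, 0.05)}
```

Resolved simulations like those in the thesis capture this stiffening directly from finite-size particles, which is what allows comparison against such an equivalent single-phase fluid.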

    Technologies for Biomechanically-Informed Image Guidance of Laparoscopic Liver Surgery

    Get PDF
    Laparoscopic surgery for liver resection has a number of medical advantages over open surgery, but also comes with inherent technical challenges. The surgeon only has a very limited field of view through the imaging modalities routinely employed intra-operatively, laparoscopic video and ultrasound, and the pneumoperitoneum required to create the operating space and gain access to the organ can significantly deform and displace the liver from its pre-operative configuration. This can make relating what is visible intra-operatively to the pre-operative plan, and inferring the location of sub-surface anatomy, a very challenging task. Image guidance systems can help overcome these challenges by updating the pre-operative plan to the situation in theatre and visualising it in relation to the position of surgical instruments. In this thesis, I present a series of contributions to a biomechanically-informed image-guidance system made during my PhD. The most recent is work on a pipeline for the estimation of the post-insufflation configuration of the liver by means of an algorithm that uses a database of segmented training images of patient abdomens for which the post-insufflation configuration of the liver is known. The pipeline comprises an algorithm for inter- and intra-subject registration of liver meshes by means of non-rigid spectral point-correspondence finding. My other contributions are more fundamental and less application-specific, and are all contained and made available to the public in the NiftySim open-source finite element modelling package.
    Two of my contributions to NiftySim are of particular interest with regard to image guidance of laparoscopic liver surgery: 1) a novel general-purpose contact modelling algorithm that can be used to simulate contact interactions between, e.g., the liver and the surrounding anatomy; 2) membrane and shell elements that can be used, e.g., to simulate the Glisson capsule, which has been shown to significantly influence the organ’s measured stiffness.
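The mechanical role of a thin capsule like the Glisson capsule is often characterised through the classical thin-plate/shell flexural rigidity D = E t³ / (12 (1 − ν²)), which is what makes dedicated shell elements worthwhile; the material values below are illustrative and are not the thesis's liver parameters:

```python
def bending_stiffness(E, t, nu):
    """Classical thin-plate/shell flexural rigidity:
    D = E * t^3 / (12 * (1 - nu^2)), in N.m for E in Pa, t in m."""
    return E * t ** 3 / (12.0 * (1.0 - nu ** 2))

# Illustrative capsule-like values: E = 1 MPa, thickness 0.1 mm, nu = 0.45.
D = bending_stiffness(E=1.0e6, t=1.0e-4, nu=0.45)
```

The cubic dependence on thickness is the point: a sub-millimetre membrane contributes disproportionately little bending but substantial in-plane stiffness, which solid elements alone resolve poorly.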

    Software for Exascale Computing - SPPEXA 2016-2019

    Get PDF
    This open access book summarizes the research done and the results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer’s series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA’s first funding phase, and provides an overview of SPPEXA’s contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    Application of statistical learning theory to plankton image analysis

    Get PDF
    Submitted to the Joint Program in Applied Ocean Science and Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2006. A fundamental problem in limnology and oceanography is the inability to quickly identify and map distributions of plankton. This thesis addresses the problem by applying statistical machine learning to video images collected by an optical sampler, the Video Plankton Recorder (VPR). The research is focused on the development of a real-time automatic plankton recognition system to estimate plankton abundance. The system includes four major components: pattern representation/feature measurement, feature extraction/selection, classification, and abundance estimation. After an extensive study of a traditional learning vector quantization (LVQ) neural network (NN) classifier built on shape-based features and different pattern representation methods, I developed a classification system combining multi-scale co-occurrence matrix features with a support vector machine (SVM) classifier. This new method outperforms the traditional shape-based NN classifier by 12% in classification accuracy. Subsequent plankton abundance estimates are improved in regions of low relative abundance by more than 50%. Neither the NN nor the SVM classifier has a rejection metric. In this thesis, two rejection metrics were developed. One was based on the Euclidean distance in the feature space for the NN classifier. The other used dual-classifier (NN and SVM) voting as output. Using the dual-classification method alone yields almost as good an abundance estimate as human labeling on a test-bed of real-world data. However, the distance rejection metric for the NN classifier might be more useful when the training samples are not “good”, i.e., representative of the field data.
    In summary, this thesis advances the state of the art in plankton recognition systems by demonstrating that multi-scale texture-based features are more suitable for classifying field-collected images. The system was verified on a very large real-world dataset in a systematic way for the first time. The accomplishments include the development of a multi-scale co-occurrence matrix and support vector machine system, a dual-classification system, automatic correction of abundance estimates, and the ability to obtain accurate abundance estimates from real-time automatic classification. The methods developed are generic and are likely to work on a range of other image classification applications. This work was supported by National Science Foundation Grant OCE-9820099 and the Woods Hole Oceanographic Institution academic program.
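The distance-based rejection idea, refusing to label a sample whose nearest training prototype lies too far away in feature space, can be sketched with a nearest-prototype classifier; the 2-D feature vectors, class names and threshold below are illustrative, not the thesis's VPR features.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_with_rejection(sample, prototypes, threshold):
    """Nearest-prototype classification with a Euclidean-distance
    rejection metric: return None ("reject") when the best match
    is farther than the threshold. prototypes: (features, label)."""
    best_label, best_dist = None, float("inf")
    for features, label in prototypes:
        d = euclidean(sample, features)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

# Illustrative 2-D feature prototypes for two plankton classes.
prototypes = [([0.0, 0.0], "copepod"), ([1.0, 1.0], "diatom")]
near = classify_with_rejection([0.1, 0.1], prototypes, threshold=0.5)
far = classify_with_rejection([5.0, 5.0], prototypes, threshold=0.5)
```

Rejected samples can then be routed to a second classifier or a human, which is the role the dual-classification voting scheme plays in the thesis.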