9 research outputs found

    Simulations of time harmonic blood flow in the Mesenteric artery: comparing finite element and lattice Boltzmann methods

    Get PDF
    Abstract. Background: Systolic blood flow has been simulated in the abdominal aorta and the superior mesenteric artery. The simulations were carried out using two different computational hemodynamics methods: the finite element method, solving the Navier-Stokes equations, and the lattice Boltzmann method. Results: We have validated the lattice Boltzmann method for systolic flows by comparing the velocity and pressure profiles of simulated blood flow between the two methods. We have also analyzed flow-specific characteristics such as vortex formation at curvatures and flow traces. Conclusion: The lattice Boltzmann method is as accurate as a Navier-Stokes solver for computing complex blood flows. As such it is a good alternative for computational hemodynamics, certainly in situations where coupling to other models is required.
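    Unlike a Navier-Stokes solver, the lattice Boltzmann method evolves particle distribution functions on a fixed lattice and recovers the flow fields from their moments. The sketch below is a minimal illustration of that collide-and-stream update for a D2Q9 BGK scheme on a small periodic grid; it is not the solver used in the paper, and the grid size, relaxation time `tau`, and initial velocity are arbitrary placeholders rather than values from the study.

```c
/* Minimal D2Q9 lattice Boltzmann (BGK) sketch: collide-and-stream on a
 * periodic grid. Illustrative only; a hemodynamics code would add walls
 * and inlet/outlet conditions and typically work in 3D. */
#include <stdio.h>

#define NX 64
#define NY 32
#define Q  9

static const int    cx[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
static const int    cy[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };
static const double w[Q]  = { 4.0/9, 1.0/9, 1.0/9, 1.0/9, 1.0/9,
                              1.0/36, 1.0/36, 1.0/36, 1.0/36 };

static double f[NX][NY][Q], ftmp[NX][NY][Q];

/* Equilibrium distribution for density rho and velocity (ux, uy). */
static double feq(int k, double rho, double ux, double uy)
{
    double cu = 3.0 * (cx[k] * ux + cy[k] * uy);
    double u2 = 1.5 * (ux * ux + uy * uy);
    return w[k] * rho * (1.0 + cu + 0.5 * cu * cu - u2);
}

int main(void)
{
    const double tau = 0.8;   /* BGK relaxation time (placeholder) */
    const double u0  = 0.05;  /* small initial x-velocity          */

    /* Initialize at equilibrium with uniform density and velocity. */
    for (int x = 0; x < NX; x++)
        for (int y = 0; y < NY; y++)
            for (int k = 0; k < Q; k++)
                f[x][y][k] = feq(k, 1.0, u0, 0.0);

    for (int step = 0; step < 1000; step++) {
        /* Collision: relax each distribution toward local equilibrium. */
        for (int x = 0; x < NX; x++)
            for (int y = 0; y < NY; y++) {
                double rho = 0.0, ux = 0.0, uy = 0.0;
                for (int k = 0; k < Q; k++) {
                    rho += f[x][y][k];
                    ux  += cx[k] * f[x][y][k];
                    uy  += cy[k] * f[x][y][k];
                }
                ux /= rho;  uy /= rho;
                for (int k = 0; k < Q; k++)
                    ftmp[x][y][k] = f[x][y][k]
                        - (f[x][y][k] - feq(k, rho, ux, uy)) / tau;
            }

        /* Streaming: move distributions along their lattice directions
         * (periodic boundaries for simplicity). */
        for (int x = 0; x < NX; x++)
            for (int y = 0; y < NY; y++)
                for (int k = 0; k < Q; k++) {
                    int xn = (x + cx[k] + NX) % NX;
                    int yn = (y + cy[k] + NY) % NY;
                    f[xn][yn][k] = ftmp[x][y][k];
                }
    }

    printf("done: %d x %d lattice, 1000 steps\n", NX, NY);
    return 0;
}
```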

    Service-oriented visualization applied to medical data analysis

    Get PDF
    In the era of Grid computing, data-driven experiments and simulations have become very advanced and complicated. To allow specialists from various domains to deal with large datasets, it is necessary not only to develop efficient extraction techniques but also to have computational facilities available to visualize and interact with the results of an extraction process. With this in mind, we developed an Interactive Visualization Framework, which supports a service-oriented architecture. This framework allows, on the one hand, visualization experts to construct visualizations to view and interact with large datasets, and on the other hand, end-users (e.g., medical specialists) to explore these visualizations irrespective of their geographical location and available computing resources. The image-based analysis of vascular disorders served as a case study for this project. The paper presents the main research findings and reports on the current implementation status.

    HOW TO MAKE ANALYSIS WORK IN BUSINESS INTELLIGENCE SOFTWARE

    No full text
    Competitive Intelligence (CI) has been defined by many authors. These definitions differ in certain respects, but all of them share one main feature: they put the accent on analysis. The most precise definition is given by the Society of Competitive Intelligence Professionals (SCIP): “A systematic and ethical program for gathering, analyzing, and managing external information that can affect your company’s plans, decisions, and operations”. Business Intelligence (BI) is a much broader concept than CI. It has a rather technical meaning, while CI is more about the managerial perspective of intelligence. BI includes activities such as data mining, market analysis, sales analysis, and the analysis of customer and supplier records and behavior (Bouthillier et al., 2003). However, in some European countries, such as Sweden and Denmark, BI and CI have a similar meaning (Bouthillier et al., 2003). Either way, the main feature of both concepts is the ability to analyze data and information and to derive intelligence from them. Currently, a large number of BI and/or CI software packages are available and being developed worldwide. A simple Google search for the term “Business Intelligence software” gives about 548,000 results. Most of this software is quite advanced and well developed, but only a few packages have a good analysis tool, and even fewer offer their users a choice of analysis tools. Extensive work on BI software evaluation has been done by Amara et al. (2009), who classified the top BI software vendors according to the extent of their analysis using the SSAV (Solberg Söilen, Amara, Vriens) model. A number of analyses for Business Intelligence have also been summarized in Solberg Söilen (2005). The conclusion of both works was the same: BI software needs robust analysis tools. In this research we pursue two goals: first, we investigate the major obstacles to providing a better analysis function in Business Intelligence (BI) software, and second, we examine how those obstacles can be overcome, both technically and from a managerial perspective. The intention of this study is to investigate how the analysis module functions in BI software and how it could be implemented more effectively. This means that the study has two sides: a Competitive Intelligence (CI) side for the managerial approach and a Business Intelligence (BI) side for the more technical approach. First we present a comprehensive literature review and pinpoint the problems and obstacles identified by many authors. Then we propose a method to solve the identified problems, and finally we concentrate on the advantages and disadvantages of the proposed method. The proposed technical solution is under construction in the BI software Subsoft, developed by Dr. Klaus Solberg Söilen. We investigate to what extent the conclusions here can be used to develop the software further. The managerial perspective of the solutions is explored in close collaboration with two other BI companies, Sentient and Crystalloids, both based in Amsterdam, The Netherlands.

    GPU-Acceleration of A High Order Finite Difference Code Using Curvilinear Coordinates

    No full text
    GPU-accelerated computing is becoming a popular technology due to the emergence of techniques such as OpenACC, which makes it easy to port codes in their original form to GPU systems using compiler directives, thereby speeding up computation relatively simply. In this study we have developed an OpenACC implementation of the high order finite difference CFD solver ESSENSE for simulating compressible flows. The solver is based on summation-by-parts difference operators, and the boundary and interface conditions are imposed weakly using simultaneous approximation terms. This case study focuses on porting the most time-consuming parts of the code to GPUs, namely the sparse matrix-vector multiplications and the evaluation of fluxes. The resulting OpenACC implementation is used to simulate the Taylor-Green vortex and achieves a maximum speed-up of 61.3 on a single V100 GPU compared to the serial CPU version.
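    To illustrate the directive-based porting approach described in the abstract, the sketch below shows how a CSR sparse matrix-vector product, one of the hot spots named above, might be offloaded with OpenACC. It is a hedged example under assumed names and a standard CSR layout, not code from ESSENSE.

```c
/* Illustrative sketch (not ESSENSE code): offloading a CSR sparse
 * matrix-vector product y = A*x with OpenACC directives. Assumes a
 * square matrix in the usual CSR arrays; compile with an
 * OpenACC-capable compiler, e.g. nvc -acc. */
void spmv_csr(int nrows, int nnz,
              const int    *row_ptr,   /* size nrows + 1 */
              const int    *col_idx,   /* size nnz       */
              const double *val,       /* size nnz       */
              const double *x,         /* size nrows     */
              double       *y)         /* size nrows     */
{
    /* Rows are independent, so the outer loop maps onto the GPU's
     * gangs and vectors. The data clauses copy the arrays per call;
     * a real time-stepping solver would keep them resident on the
     * device with an enclosing data region instead. */
    #pragma acc parallel loop \
        copyin(row_ptr[0:nrows + 1], col_idx[0:nnz], val[0:nnz], x[0:nrows]) \
        copyout(y[0:nrows])
    for (int i = 0; i < nrows; i++) {
        double sum = 0.0;
        #pragma acc loop reduction(+:sum)
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++)
            sum += val[j] * x[col_idx[j]];
        y[i] = sum;
    }
}
```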

    Comparing Different Approaches for Solving Large Scale Power-Flow Problems With the Newton-Raphson Method

    No full text
    This paper focuses on using the Newton-Raphson method to solve power-flow problems. Since the most computationally demanding part of the Newton-Raphson method is solving the linear equations at each iteration, this study investigates different approaches to solving these linear equations on both the central processing unit (CPU) and the graphics processing unit (GPU). Six approaches have been developed and evaluated in this paper: two run entirely on the CPU, two run entirely on the GPU, and the remaining two are hybrid approaches that run on both CPU and GPU. All six direct linear solvers use either LU or QR factorization to solve the linear equations. Two different hardware platforms have been used to conduct the experiments. The performance results show that the CPU version with LU factorization outperforms the GPU version using the standard cuSOLVER library, even for the larger power-flow problems. Moreover, the best performance is achieved with a hybrid method in which the Jacobian matrix is assembled on the GPU, the preprocessing with the sparse high-performance linear solver KLU is performed on the CPU in the first iteration, and the linear equations are factorized on the GPU and solved on the CPU. The maximum speed-up in this study is obtained on the largest case, with 25,000 buses: the hybrid version shows a speed-up factor of 9.6 with an NVIDIA P100 GPU and 13.1 with an NVIDIA V100 GPU compared to the baseline CPU version on an Intel Xeon Gold 6132 CPU.
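    The iteration structure the abstract refers to is sketched below for the smallest possible case, a slack bus plus one PQ bus: build the power mismatches and Jacobian, then solve J*dx = mismatch and update. This is only an illustration under placeholder line and load values; in the paper's systems the Jacobian is large and sparse, and the 2x2 solve shown here is replaced by the LU/QR factorizations (e.g. KLU on the CPU or cuSOLVER on the GPU) whose CPU/GPU placement the study compares.

```c
/* Minimal Newton-Raphson power-flow sketch: one slack bus (bus 1) and
 * one PQ bus (bus 2) connected by a line with series impedance r + jx.
 * Values are arbitrary placeholders, not cases from the paper. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double r = 0.01, x = 0.1;
    const double g =  r / (r * r + x * x);   /* series conductance */
    const double b = -x / (r * r + x * x);   /* series susceptance */

    /* Specified net injections at bus 2 (a 0.5 + j0.2 pu load). */
    const double P_spec = -0.5, Q_spec = -0.2;

    const double V1 = 1.0;       /* slack bus voltage magnitude           */
    double th = 0.0, V2 = 1.0;   /* unknowns: angle and magnitude at bus 2 */

    for (int it = 0; it < 20; it++) {
        /* Calculated injections at bus 2 (Y22 = g+jb, Y21 = -(g+jb)). */
        double P = V2 * V1 * (-g * cos(th) - b * sin(th)) + V2 * V2 * g;
        double Q = V2 * V1 * (-g * sin(th) + b * cos(th)) - V2 * V2 * b;

        double dP = P_spec - P, dQ = Q_spec - Q;   /* mismatches */
        if (fabs(dP) < 1e-8 && fabs(dQ) < 1e-8) {
            printf("converged in %d iterations: th=%.6f rad, V2=%.6f pu\n",
                   it, th, V2);
            return 0;
        }

        /* Jacobian J = [dP/dth dP/dV; dQ/dth dQ/dV]. */
        double J11 = V2 * V1 * ( g * sin(th) - b * cos(th));
        double J12 = V1 * (-g * cos(th) - b * sin(th)) + 2.0 * V2 * g;
        double J21 = V2 * V1 * (-g * cos(th) - b * sin(th));
        double J22 = V1 * (-g * sin(th) + b * cos(th)) - 2.0 * V2 * b;

        /* Solve the 2x2 system J * [dth; dV] = [dP; dQ]. At realistic
         * sizes this linear solve is the step distributed between CPU
         * and GPU in the paper. */
        double det = J11 * J22 - J12 * J21;
        double dth = (dP * J22 - J12 * dQ) / det;
        double dV  = (J11 * dQ - J21 * dP) / det;

        th += dth;
        V2 += dV;
    }
    printf("did not converge\n");
    return 1;
}
```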

    High Performance Computational Hemodynamics with the Lattice Boltzmann Method

    No full text
    Doctoral thesis for the degree of Doctor at the Universiteit van Amsterdam, under the authority of the Rector Magnificus, prof. dr. D. C. van den Boom, to be defended in public before a committee appointed by the Doctorate Board, in the Agnietenkapel on Tuesday 18 December 2007, at 10:00, by