40 research outputs found

    SciAgents tool: User's Guide

    Get PDF

    Automated Estimation of Relaxation Parameters for Interface Relaxation

    Get PDF
    An adaptive procedure, based on Automatic Differentiation, for estimating "good" values of the relaxation parameters for general multi-dimensional problems is proposed.

    Fine Tuning Interface Relaxation Methods for Elliptic Differential Equations

    Get PDF

    Analysis of an interface relaxation method for composite elliptic differential equations

    Get PDF
    A theoretical analysis, on both the continuous (differential) and the discrete (linear algebra) levels, of an interface relaxation method for solving elliptic differential equations is presented. The convergence of the method for 1-dimensional problems is proved. The region of convergence and the optimal values of the relaxation parameters involved are determined for model problems. Numerical data for 1- and 2-dimensional problems are presented that confirm the theoretical results, exhibit the effectiveness of the method, and elucidate its characteristics. © 2008 Elsevier B.V. All rights reserved.

    Interface Relaxation Methods for Elliptic Differential Equations

    Get PDF
    Two simple interface relaxation techniques for solving elliptic differential equations are considered. A theoretical analysis is carried out at the differential level, and "optimal" relaxation parameters are obtained for model problems. A comprehensive experimental numerical study for 1- and 2-dimensional problems is also presented. We present a complete analysis of convergence and optimal parameters for two 1-dimensional methods applied to Helmholtz equations: the averaging method AVE and the Robin-type method ROB. We then present experimental studies of 1- and 2-dimensional methods and of more general equations. These studies confirm the theoretical results and suggest that they remain valid in these more general cases.
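The interface relaxation idea behind these abstracts can be made concrete with a small sketch. The example below is an illustration only, not the paper's AVE or ROB update: it uses the classical Dirichlet-Neumann interface relaxation (a swapped-in stand-in, with relaxation parameter theta) on the 1-D model problem -u'' = f on (0, 1) with u(0) = u(1) = 0, split at x = 0.5. Each subdomain is solved independently and the shared interface value is relaxed until the two solutions agree; for this symmetric split, theta = 0.5 is the optimal relaxation parameter.

```python
import numpy as np

def solve_tridiag(a, b, c, d):
    """Thomas algorithm: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        den = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / den
        dp[i] = (d[i] - a[i] * dp[i - 1]) / den
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

m = 50                                        # intervals per subdomain
h = 0.5 / m
f = lambda x: np.pi**2 * np.sin(np.pi * x)    # manufactured: u = sin(pi x)
xL = np.linspace(0.0, 0.5, m + 1)
xR = np.linspace(0.5, 1.0, m + 1)
g, theta = 0.0, 0.5                           # interface value, relaxation parameter

for _ in range(10):
    # Left subdomain: Dirichlet data u(0) = 0, u(0.5) = g.
    d = f(xL[1:m]); d[-1] += g / h**2
    uL = solve_tridiag(np.full(m - 1, -1 / h**2), np.full(m - 1, 2 / h**2),
                       np.full(m - 1, -1 / h**2), d)
    # Second-order one-sided flux u'(0.5-) extracted from the left solution.
    lam = (3 * g - 4 * uL[-1] + uL[-2]) / (2 * h)
    # Right subdomain: Neumann u'(0.5) = lam (ghost-node BC), Dirichlet u(1) = 0.
    a = np.full(m, -1 / h**2); b = np.full(m, 2 / h**2); c = np.full(m, -1 / h**2)
    b[0], c[0] = 1 / h**2, -1 / h**2
    d = f(xR[:m]); d[0] = f(0.5) / 2 - lam / h
    uR = solve_tridiag(a, b, c, d)
    # Relaxed interface update: g <- theta * u_right(0.5) + (1 - theta) * g.
    g = theta * uR[0] + (1 - theta) * g

print(f"u(0.5) = {g:.6f}   (exact 1.0)")
```

With theta = 0.5 the interface error is annihilated in a single sweep for this symmetric split, so the converged g matches the exact value 1 up to the O(h^2) discretization error; other theta values trade convergence speed for robustness, which is exactly the tuning question the papers above analyze.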

    Rapid prototype development for studying human activity

    No full text
    In recent years there has been rapidly growing interest in the study of human motion. A large number of scientific research projects deal with problems such as human motion monitoring, gesture and posture recognition, and fall detection. Wearable computers and electronic textiles have been successfully used for the study of human physiology, rehabilitation and ergonomics. We present a platform and a methodology for rapid prototype development of e-textile applications for human activity monitoring. © 2010 International Federation for Medical and Biological Engineering

    Investigating the efficiency of machine learning algorithms on MapReduce clusters with SSDs

    No full text
    In the big data era, the efficient processing of large volumes of data has become a standard requirement for both organizations and enterprises. Since single workstations cannot sustain such tremendous workloads, MapReduce was introduced with the aim of providing a robust, easy, and fault-tolerant parallelization framework for the execution of applications on large clusters. One of the most representative examples of such applications is machine learning algorithms, which dominate the broad research area of data mining. Simultaneously, recent advances in hardware technology have led to the introduction of high-performing alternative devices for secondary storage, known as Solid State Drives (SSDs). In this paper we examine the performance of several parallel data mining algorithms on MapReduce clusters equipped with such modern hardware. More specifically, we investigate standard dataset preprocessing methods, including vectorization and dimensionality reduction, and two supervised classifiers, Naive Bayes and Linear Regression. We compare the execution times of these algorithms on an experimental cluster equipped with both standard magnetic disks and SSDs, employing two different datasets and several different cluster configurations. Our experiments demonstrate that the usage of SSDs can accelerate the execution of machine learning methods by a margin which depends on the cluster setup and the nature of the applied algorithms. © 2018 IEEE
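As a single-node illustration of one classifier this study benchmarks, multinomial Naive Bayes reduces to a few vectorized operations over word counts. The paper runs parallel MapReduce implementations; the toy vocabulary, documents, and labels below are hypothetical.

```python
import numpy as np

# Toy word-count matrix over the vocabulary ["free", "meeting", "money"]
# (hypothetical data; real inputs would come from the vectorization step).
X = np.array([[3, 0, 1],
              [0, 2, 0],
              [2, 0, 2],
              [0, 1, 0]])
y = np.array([1, 0, 1, 0])          # 1 = spam, 0 = ham (toy labels)

classes = np.unique(y)
log_prior = np.log(np.array([(y == c).mean() for c in classes]))
# Per-class word counts with Laplace (add-one) smoothing.
counts = np.array([X[y == c].sum(axis=0) + 1 for c in classes])
log_like = np.log(counts / counts.sum(axis=1, keepdims=True))

def predict(x):
    """Return the class maximizing log P(c) + sum_w x_w * log P(w | c)."""
    return classes[np.argmax(log_prior + x @ log_like.T)]

print(predict(np.array([2, 0, 1])))   # "free"/"money"-heavy -> class 1 (spam)
```

Because training is just per-class counting, the algorithm parallelizes naturally over MapReduce: mappers emit partial count vectors per class and reducers sum them, which is one reason it appears in such benchmarks.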

    Efficient solution of large sparse linear systems in modern hardware

    No full text
    The solution of large-scale sparse linear systems arises in numerous scientific and engineering problems. Typical examples involve the study of many real-world multi-physics problems and the analysis of electric power systems. The latter involves key functions such as contingency analysis, power flow and state estimation, whose analysis amounts to solving linear systems with thousands or millions of equations. As a result, the efficient and accurate solution of such systems is of paramount importance. Methods for solving sparse systems fall into two categories: direct and iterative. Direct methods are robust but require large amounts of memory as the size of the problem grows. On the other hand, iterative methods provide better performance but may exhibit numerical problems. In addition, continuous advances in computer hardware and computational infrastructures impose new challenges and offer new opportunities. GPUs, multi-core CPUs, and recent memory and storage technologies (flash and phase-change memories) introduce new capabilities for optimizing sparse solvers. This work presents a comprehensive study of the performance of several state-of-the-art sparse direct and iterative solvers on modern computer infrastructure, and aims to identify the limits of each method on different computing platforms. We evaluated two direct solvers in different hardware configurations, examining their strengths and weaknesses both in main-memory (in-core) and secondary-memory (out-of-core) execution on a series of representative matrices from multi-physics and electric grid problems. We also provide a comparison with an iterative method, utilizing a general-purpose preconditioner, implemented both on a GPU and on a multi-core processor. Based on the evaluation results, we observe that direct solvers can be as efficient as their iterative counterparts if proper memory optimizations are applied. In addition, we demonstrate that GPUs can be utilized as efficient computational platforms for tackling the analysis of electric power systems. © 2015 IEEE
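The direct-versus-iterative distinction this abstract studies can be sketched with SciPy on a small model problem. This is an illustration only, not the paper's benchmark: its solvers, test matrices, and preconditioner are not specified here, so a 2-D Poisson matrix and a simple Jacobi (diagonal) preconditioner stand in for them.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Model problem: 2-D Poisson operator (5-point stencil) on an n-by-n grid.
n = 50
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Direct: sparse LU factorization -- robust, but memory grows with fill-in.
x_direct = spla.spsolve(A, b)

# Iterative: conjugate gradient with a Jacobi (diagonal) preconditioner,
# standing in for a general-purpose preconditioner.
D_inv = sp.diags(1.0 / A.diagonal())
M = spla.LinearOperator(A.shape, matvec=lambda v: D_inv @ v)
x_iter, info = spla.cg(A, b, M=M)

rel_res = np.linalg.norm(A @ x_iter - b) / np.linalg.norm(b)
print(info, rel_res)   # info == 0 signals convergence
```

The trade-off the paper measures shows up even here: the direct path pays once for the factorization and its fill-in memory, while the iterative path needs only matrix-vector products (cheap on GPUs) but its iteration count depends on the preconditioner quality.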

    Evaluating the Effects of Modern Storage Devices on the Efficiency of Parallel Machine Learning Algorithms

    No full text
    Big Data analytics is presently one of the fastest-emerging areas of research for both organizations and enterprises. The requirement for deployment of efficient machine learning algorithms over huge amounts of data led to the development of parallelization frameworks and of specialized libraries (like Mahout and MLlib) which implement the most important of these algorithms. Moreover, recent advances in storage technology resulted in the introduction of high-performing devices, broadly known as Solid State Drives (SSDs). Compared to traditional Hard Disk Drives (HDDs), SSDs offer considerably higher performance and lower power consumption. Motivated by these appealing features and the growing necessity for efficient large-scale data processing, we compared the performance of several machine learning algorithms on MapReduce clusters whose nodes are equipped with HDDs, SSDs, and devices which implement the latest 3D XPoint technology. In particular, we evaluate several dataset preprocessing methods, such as vectorization and dimensionality reduction, two supervised classifiers, Naive Bayes and Linear Regression, and the popular k-Means clustering algorithm. We use an experimental cluster equipped with the three aforementioned storage devices under different configurations, and two large datasets, Wikipedia and HIGGS. The experiments showed that the benefits derived from the use of SSDs depend on the cluster setup and the nature of the applied algorithms. © 2020 World Scientific Publishing Company
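The k-Means algorithm mentioned in this abstract alternates an assignment step and an update step (Lloyd's algorithm). The paper benchmarks its parallel MapReduce form; below is a deterministic single-node sketch on hypothetical 2-D points, with hand-picked initial centers for reproducibility (production runs would initialize randomly).

```python
import numpy as np

# Two well-separated groups of toy 2-D points (hypothetical data).
pts = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
                [5, 5], [5, 6], [6, 5], [6, 6]], dtype=float)
k = 2
centers = pts[[0, 4]].copy()   # deterministic init: one point from each group

for _ in range(10):
    # Assignment step: label each point with its nearest center.
    dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    # Update step: move each center to the mean of its assigned points.
    centers = np.array([pts[labels == j].mean(axis=0) for j in range(k)])

print(centers)   # converges to the group means [[0.5, 0.5], [5.5, 5.5]]
```

The MapReduce formulation benchmarked in the paper follows the same two steps: mappers perform the assignment and emit (label, point) pairs, and reducers average each label's points to produce the updated centers.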