190,796 research outputs found

    Development of an oceanographic application in HPC

    Get PDF
    High Performance Computing (HPC) is used to run advanced application programs efficiently, reliably, and quickly. In earlier decades, the performance of HPC applications was evaluated in terms of speed, thread scalability, and memory hierarchy; now it is also essential to consider the energy, or power, consumed by the system while executing an application. In fact, high power consumption is one of the biggest problems for the HPC community and one of the major obstacles to the design of exascale systems. New generations of HPC systems aim at exaflop performance and will demand even more energy for processing and cooling; nowadays, the growth of HPC systems is limited by energy issues. Recently, many research centers have focused on the automatic tuning of HPC applications, which requires a broad study of such applications in terms of power efficiency. In this context, this paper studies an oceanographic application named OceanVar, which implements a Domain Decomposition based 4D Variational model (DD-4DVar) and is a widely used HPC application, evaluating not only the classic aspects of performance but also aspects related to power efficiency in different case studies. The work was carried out at BSC (Barcelona Supercomputing Center), Spain, within the Mont-Blanc project, running the tests first on an HCA server with Intel technology and then on the Thunder mini-cluster with ARM technology. The thesis first explains the concept of data assimilation, the context in which it is used, and briefly describes the 4DVAR mathematical model. After this close examination of the problem, the Matlab description of the data-assimilation problem was ported to a sequential version in the C language. Secondly, after identifying the most time-consuming computational kernels, a parallel version of the application was developed in a multiprocessor programming style using the MPI (Message Passing Interface) protocol. In terms of performance, the experimental results show that, when running on the HCA server (an Intel architecture), the efficiency of the two most expensive functions stays at approximately 80% as the number of processes grows. When running on the ARM architecture, specifically on the Thunder mini-cluster, the observed trend is instead a "superlinear speedup", which in our case can be explained by a more efficient use of resources (cache memory accesses) compared with the sequential run. The second part of this paper analyzes the aspects of the application that affect its energy efficiency. After a brief discussion of the energy consumption characteristics of the Thunder chip within the current technological landscape, the energy consumption of the Thunder mini-cluster was measured with a power meter, the Yokogawa Power Meter, in order to obtain an overview of the power-to-solution of this application, to be used as a baseline for later analyses with other parallel styles. Finally, a comprehensive performance evaluation, aimed at assessing the quality of the MPI parallelization, is conducted using a dedicated performance tool named Paraver, developed by BSC.
    Paraver is a performance analysis and visualization tool that can be used to analyze MPI, threaded, or mixed-mode programs, and it is the key to profiling the parallel code and optimizing it for High Performance Computing. A set of graphical representations of its statistics makes it easy for a developer to identify performance problems, such as load-imbalanced decompositions, excessive communication overheads, and a poor average rate of floating-point operations per second. Paraver can also report statistics based on hardware counters provided by the underlying hardware. This project used Paraver configuration files to analyze a set of metrics for this application. To explain the performance trend observed on the Thunder mini-cluster, traces were extracted from the various case studies; the results match what was expected, namely a drastic drop in cache misses when going from ppn (processes per node) = 1 to ppn = 16, which explains the more efficient use of cluster resources as the number of processes increases.
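    The thesis code itself is not reproduced in the abstract, so the following C++ fragment is only a minimal sketch of the MPI pattern described above: each process works on its own subdomain of the decomposed problem, and a global reduction combines the partial results. All names (slice, local_cost, and so on) are invented here for illustration and are not taken from OceanVar.

    // Minimal sketch of domain decomposition with MPI: each rank owns a
    // slice of the global data and contributes to a global reduction.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, nprocs = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        // Toy "subdomain": a slice of a global vector of length global_n.
        const int global_n = 1 << 20;
        const int local_n = global_n / nprocs;   // assumes divisibility
        std::vector<double> slice(local_n, 1.0);

        // Local part of a cost-function-like sum of squares.
        double local_cost = 0.0;
        for (double x : slice) local_cost += x * x;

        // Combine the partial results from all subdomains.
        double global_cost = 0.0;
        MPI_Allreduce(&local_cost, &global_cost, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0) std::printf("global cost = %f\n", global_cost);
        MPI_Finalize();
        return 0;
    }

    Measuring the wall-clock time of such a kernel for a growing number of processes is what yields the efficiency and speedup figures mentioned above.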

    Towards a Theory of Software Development Expertise

    Full text link
    Software development includes diverse tasks such as implementing new features, analyzing requirements, and fixing bugs. Being an expert in those tasks requires a certain set of skills, knowledge, and experience. Several studies have investigated individual aspects of software development expertise, but a comprehensive theory is still missing. We present a first conceptual theory of software development expertise that is grounded in data from a mixed-methods survey with 335 software developers and in the literature on expertise and expert performance. Our theory currently focuses on programming, but already provides valuable insights for researchers, developers, and employers. The theory describes important properties of software development expertise and the factors that foster or hinder its formation, including how developers' performance may decline over time. Moreover, our quantitative results show that developers' expertise self-assessments are context-dependent and that experience is not necessarily related to expertise.
    Comment: 14 pages, 5 figures, 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018), ACM, 2018

    Revisiting Actor Programming in C++

    Full text link
    The actor model of computation has gained significant popularity over the last decade. Its high level of abstraction makes it appealing for concurrent applications in parallel and distributed systems. However, designing a real-world actor framework that combines full scalability, strong reliability, and high resource efficiency requires many conceptual and algorithmic additions to the original model. In this paper, we report on designing and building CAF, the "C++ Actor Framework". CAF aims to provide a concurrent and distributed native environment that scales up to very large, high-performance applications and equally well down to small constrained systems. We present the key specifications and design concepts (in particular a message-transparent architecture, type-safe message interfaces, and pattern matching facilities) that make native actors a viable approach for many robust, elastic, and highly distributed developments. We demonstrate the feasibility of CAF in three scenarios: first for elastic, upscaling environments, second for including heterogeneous hardware like GPGPUs, and third for distributed runtime systems. Extensive performance evaluations indicate ideal runtime behaviour for up to 64 cores at a very low memory footprint, or in the presence of GPUs. In these tests, CAF consistently outperforms the competing actor environments Erlang, Charm++, SalsaLite, Scala, ActorFoundry, and even OpenMPI.
    Comment: 33 pages
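    The abstract does not show CAF's API, so rather than guess at it, the following self-contained C++ sketch illustrates the underlying idea the paper builds on: an actor owns a private mailbox and processes its messages sequentially on its own thread, so actor state never needs external locking. ToyActor and everything in it are invented for illustration; this is not CAF code.

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    // A toy actor: a mailbox plus a worker thread that drains it.
    class ToyActor {
    public:
        ToyActor() : worker_([this] { run(); }) {}
        ~ToyActor() {
            send("");                  // empty message = shutdown signal
            worker_.join();
        }
        void send(std::string msg) {   // asynchronous, non-blocking send
            {
                std::lock_guard<std::mutex> lock(m_);
                mailbox_.push(std::move(msg));
            }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !mailbox_.empty(); });
                std::string msg = std::move(mailbox_.front());
                mailbox_.pop();
                lock.unlock();
                if (msg.empty()) return;
                // Messages are handled one at a time, so any state owned
                // by the actor is accessed without data races.
                std::cout << "got: " << msg << '\n';
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::string> mailbox_;
        std::thread worker_;           // declared last: the thread starts
    };                                 // after mailbox and lock exist

    int main() {
        ToyActor a;
        a.send("hello");
        a.send("world");
    }                                  // destructor joins the worker

    A framework such as CAF layers typed message interfaces, pattern matching, and network transparency on top of this basic mailbox discipline.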

    Optimization of the long-term planning of supply chains with decaying performance

    Get PDF
    This master's thesis addresses the optimization of supply and distribution chains considering the effect that equipment aging may have on the performance of the facilities involved in the process. The decaying performance of the facilities is modeled as an exponential equation and can be either physical or economic, giving rise to a novel mixed-integer non-linear programming (MINLP) formulation. The optimization model has been developed for a typical chemical supply chain. The best long-term investment plan has to be determined given the production nodes, their production capacity and its expected evolution; aggregated consumption nodes (urban or industrial districts) and their lumped demand (and its expected evolution); actual and potential distribution nodes; the distances between the nodes of the network; and a time horizon. The model includes the balances at each node, a general decaying-performance function, and a cost function, as well as the constraints to be satisfied. Hence, the investment plan (the decision variables) consists not only of the start-up and shutdown of alternative distribution facilities, but also of the sizing of the lines carrying the flows. The model has been implemented in the GAMS optimization software, and results for a variety of scenarios are discussed. In addition, different approaches to the starting point of the model have been compared, showing the importance of properly initializing the optimization algorithm. The capabilities of the proposed approach have been tested through its application to two case studies: a natural gas network with physical decaying performance and an electricity distribution network with economic decaying performance, each solved with a different solution procedure. The results demonstrate that overlooking the effect of equipment aging can lead to solutions that are infeasible (for physical decaying performance) or unrealistic (for economic decaying performance) in practice, and show how the proposed model overcomes such limitations, thus becoming a practical tool to support decision-making in the distribution sector.
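    The abstract says the decaying performance is modeled as an exponential equation but does not give its exact form; a typical choice, with all symbols here being illustrative rather than taken from the thesis, is

        \eta(t) = \eta_0 \, e^{-\lambda (t - t_0)}

    where \eta_0 is the nominal performance of a facility at its start-up time t_0 and \lambda is an aging-rate parameter, so that performance decays smoothly from its nominal value as the equipment ages.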

    Detecting semantic groups in MIP models

    Get PDF

    PowerModels.jl: An Open-Source Framework for Exploring Power Flow Formulations

    Full text link
    In recent years, the power system research community has seen an explosion of novel methods for formulating and solving power network optimization problems. These emerging methods range from new power flow approximations, which go beyond the traditional DC power flow by capturing reactive power, to convex relaxations, which provide solution quality and runtime performance guarantees. Unfortunately, the sophistication of these emerging methods often presents a significant barrier to evaluating them on a wide variety of power system optimization applications. To address this issue, this work proposes PowerModels, an open-source platform for comparing power flow formulations. From its inception, PowerModels was designed to streamline the process of evaluating different power flow formulations on shared optimization problem specifications. This work provides a brief introduction to the design of PowerModels, validates its implementation, and demonstrates its effectiveness with a proof-of-concept study analyzing five different formulations of the Optimal Power Flow problem.
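    For context, the traditional DC power flow mentioned above approximates the active power flow on a branch (i, j) as

        p_{ij} = \frac{\theta_i - \theta_j}{x_{ij}}

    where \theta_i and \theta_j are the bus voltage angles and x_{ij} is the branch reactance. Reactive power does not appear at all, which is exactly the limitation that the newer formulations compared in PowerModels (AC formulations and convex relaxations) address.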

    Preparing Black and Latino Young Men for College and Careers: A Description of the Schools and Strategies in NYC's Expanded Success Initiative

    Get PDF
    The Expanded Success Initiative (ESI) provides funding and technical support to 40 relatively successful New York City high schools to help them improve college and career readiness among black and Latino male students. This preliminary report describes the key components and strategies of ESI and begins to look at factors that might influence the potential to apply ESI more broadly.