17 research outputs found

    AMGA tutorial


    The Study of AMGA RAP-based Web Application

    The ARDA Metadata Catalog Grid Application (AMGA) web application has been widely used; however, it has drawbacks: it lacks an easy-to-use interface, offers no direct building of the Virtual Organization Membership Service (VOMS) proxy, and has not been maintained since AMGA server version 1.3. In response, we adopted a new development procedure and toolkit for migrating from a Graphical User Interface (GUI) client, a Client/Server (C/S) program, to a web application, so that both the Eclipse Rich Client Platform (RCP) and Rich Ajax Platform (RAP) code bases can be managed at the same time. The AMGA web application provides many useful features for manipulating collections, metadata schemas, entries, access control, user/group information, federation, and more. Additionally, it includes a powerful SQL query editor that enables users to compose complicated query statements under specific query conditions. In this paper, we describe the implementation of the AMGA web application, focusing on the transformation of the Eclipse RCP-based AMGA Manager into a RAP-based web application.

    Geant4 simulation model of electromagnetic processes in oriented crystals for the accelerator physics

    Electromagnetic processes of charged-particle interaction with oriented crystals provide a wide variety of innovative applications, such as beam steering; crystal-based extraction/collimation of leptons and hadrons in an accelerator; fixed-target experiments on magnetic and electric dipole moment measurement; X-ray and gamma radiation sources for radiotherapy and nuclear physics; positron sources for lepton and muon colliders; compact crystalline calorimeters; and plasma acceleration in crystal media. One of the main challenges is to develop an up-to-date, universal, and fast simulation tool for these applications. We present a new simulation model of electromagnetic processes in oriented crystals implemented in Geant4, a toolkit for simulating the passage of particles through matter. We validate the model against experimental data and discuss the advantages and perspectives of this model for the applications of oriented crystals mentioned above.

    e-Science Activities in KISTI/Korea


    Resource Profiling and Performance Modeling for Distributed Scientific Computing Environments

    Scientific applications often require a substantial amount of computing resources to run challenging jobs potentially consisting of many tasks, from hundreds of thousands to even millions. As a result, many institutions collaborate to solve large-scale problems by creating virtual organizations (VOs) that integrate hundreds of thousands of geographically distributed, heterogeneous computing resources. Over the past decade, VOs have proven to be a powerful research testbed for accessing massive amounts of computing resources shared by several organizations at almost no cost. However, VOs often struggle to provide exact dynamic resource information because of their scale and autonomous resource management policies. Furthermore, shared resources are inconsistent, making it difficult to forecast resource capacity accurately. An effective VO resource profiling and modeling system can address these problems by forecasting resource characteristics and availability. This paper presents effective resource profiling and performance prediction models, Adaptive Filter-based Online Linear Regression (AFOLR) and Adaptive Filter-based Moving Average (AFMV), which are based on a linear difference equation combining past predicted values with recent profiled information and aim to support large-scale applications in distributed scientific computing environments. We performed quantitative analysis and conducted microbenchmark experiments on a real multinational shared computing platform. Our evaluation results demonstrate that the proposed prediction schemes outperform well-known common approaches in terms of accuracy and can help users in a shared resource environment run their large-scale applications by effectively forecasting the capacity and performance of various computing resources.
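
    The abstract describes predictors built on a linear difference equation that combines past predicted values with recently profiled measurements, with adaptively tuned coefficients. As a rough illustration of that idea (not the authors' AFMV/AFOLR implementation — function names, the LMS-style update, and all parameters here are assumptions), a minimal sketch might look like:

    ```python
    # Illustrative sketch of an adaptive moving-average predictor: the next
    # value is a weighted combination of the most recent profiled samples,
    # and the weights are nudged toward the observed value by an LMS-style
    # update. Purely hypothetical; not the paper's actual AFMV scheme.

    def predict(history, weights):
        """Predict the next value from the most recent len(weights) samples."""
        window = history[-len(weights):]
        return sum(w * x for w, x in zip(weights, window))

    def lms_update(weights, history, actual, mu=0.01):
        """Least-mean-squares-style weight update toward the observed value."""
        window = history[-len(weights):]
        error = actual - predict(history, weights)
        return [w + mu * error * x for w, x in zip(weights, window)]

    # Toy usage: track a slowly drifting CPU-availability signal.
    signal = [0.50, 0.52, 0.55, 0.53, 0.56, 0.58, 0.60, 0.59]
    weights = [1 / 3] * 3          # start as a plain 3-sample moving average
    history = list(signal[:3])
    for actual in signal[3:]:
        guess = predict(history, weights)      # forecast before observing
        weights = lms_update(weights, history, actual)
        history.append(actual)
    ```

    The appeal of this family of predictors in a VO setting is that each update is O(window size), so forecasts stay cheap even when profiling thousands of resources.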


    Neural Network-Based Joint Velocity Estimation Method for Improving Robot Control Performance

    Joint velocity estimation is essential for accurate robot motion control. Although conventional approaches such as numerical differentiation of position measurements and model-based observers achieve feasible velocity estimation performance, instability can occur because of phase lag or model inaccuracy. This study proposes a model-free approach that estimates velocity with less phase lag by batch-training a neural network on pre-collected encoder measurements. By learning a weighted moving average, the proposed method estimates velocity with less of the latency that noise attenuation imposes on the conventional methods. Practical experiments on two robot platforms with high degrees of freedom validate the effectiveness of the proposed method.
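
    The trade-off the abstract alludes to can be made concrete: a raw backward difference of encoder positions is lag-free but noisy, while averaging recent differences attenuates noise at the cost of phase lag. The following minimal sketch (illustrative only; the paper's method learns such a weighting with a neural network rather than fixing it by hand, and all names here are assumptions) shows the weighted-moving-average target quantity:

    ```python
    # Two baseline velocity estimates from sampled joint positions.
    # Hypothetical sketch; not the paper's network or its learned weights.

    def finite_difference(positions, dt):
        """Naive backward-difference estimate: lag-free but amplifies noise."""
        return (positions[-1] - positions[-2]) / dt

    def weighted_moving_average_velocity(positions, dt, weights):
        """Velocity as a weighted average of recent backward differences.
        Heavier weights on newer samples reduce phase lag; spreading weight
        over older samples attenuates noise."""
        diffs = [(b - a) / dt for a, b in zip(positions, positions[1:])]
        window = diffs[-len(weights):]
        return sum(w * d for w, d in zip(weights, window)) / sum(weights)

    # Toy usage: a joint moving at a constant 2 rad/s, sampled every 10 ms.
    positions = [0.00, 0.02, 0.04, 0.06, 0.08]
    v = weighted_moving_average_velocity(positions, 0.01, [0.2, 0.3, 0.5])
    ```

    Learning the weights from data, as the paper proposes, lets the estimator pick a noise/lag trade-off suited to the actual encoder noise instead of a hand-tuned filter.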

    Performance Analysis of Loosely Coupled Applications in Heterogeneous Distributed Computing Systems

    Loosely coupled applications composed of a potentially very large number (from tens of thousands to even billions) of tasks are commonly used in the High-Throughput Computing (HTC) and Many-Task Computing (MTC) paradigms. To efficiently execute large-scale computations that can exceed the capability of a single type of computing resource within the expected time, we should be able to effectively integrate resources from Heterogeneous Distributed Computing (HDC) systems such as clusters, grids, and clouds. In this paper, we quantitatively analyze the performance of three different real scientific applications consisting of many tasks on top of HDC systems built from a partnership of distributed computing clusters, grids, and clouds, to show practical issues that ordinary scientific users can face while leveraging these systems. Our experimental results show that the performance of a loosely coupled application can be significantly affected by the characteristics of an HDC system, along with the hardware specification of a node, and that these impacts on performance can vary widely depending on the resource usage pattern of each application. Through our extensive performance study with representative HDC systems and real scientific applications, we aim to give the research community insight into the design and implementation of a next-generation middleware system that can intelligently support large-scale loosely coupled applications by considering both resource and application perspectives.

    Resource Allocation Policies for Loosely Coupled Applications in Heterogeneous Computing Systems

    High-Throughput Computing (HTC) and Many-Task Computing (MTC) paradigms employ loosely coupled applications which consist of a large number, from tens of thousands to even billions, of independent tasks. To support such large-scale applications, a heterogeneous computing system composed of multiple computing platforms of different types, such as supercomputers, grids, and clouds, can be used. In allocating the heterogeneous resources of the system to multiple users, there are three important aspects to consider: fairness among users, efficiency for maximizing the system throughput, and user satisfaction for reducing the average user response time. In this paper, we present three resource allocation policies for multi-user and multi-application workloads in a heterogeneous computing system: a fairness policy, a greedy efficiency policy, and a fair efficiency policy. We evaluate and compare the performance of the three resource allocation policies over various settings of a heterogeneous computing system and loosely coupled applications, using simulation based on traces from real experiments. Our simulation results show that the fair efficiency policy can provide competitive efficiency, with a balanced level of fairness and user satisfaction, compared to the other two resource allocation policies.
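
    To make the efficiency-oriented end of this policy spectrum concrete, here is a minimal sketch of a greedy efficiency policy: each platform is simply assigned to whichever user's application achieves the highest measured task throughput on it, maximizing system throughput with no regard for fairness. This is an illustrative assumption about what "greedy efficiency" means, not the paper's exact algorithm; all names and numbers are made up.

    ```python
    # Hypothetical greedy efficiency allocator: give each platform to the
    # user with the best profiled throughput on it (tasks/sec). A fairness
    # or fair-efficiency policy would instead constrain or balance these
    # assignments across users.

    def greedy_efficiency(throughput):
        """throughput[platform][user] -> tasks/sec; returns platform -> user."""
        return {p: max(users, key=users.get) for p, users in throughput.items()}

    profiled = {
        "cluster": {"alice": 40.0, "bob": 25.0},
        "grid":    {"alice": 10.0, "bob": 30.0},
        "cloud":   {"alice": 15.0, "bob": 12.0},
    }
    allocation = greedy_efficiency(profiled)
    ```

    In this toy example alice receives two of the three platforms, which illustrates why the paper weighs pure efficiency against fairness and user response time.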