IMPLEMENTATION OF WEBIX DYNAMIC SCRIPTING FOR WEB-BASED FORM APP DEVELOPMENT AS AN ALTERNATIVE TO DATA-COLLECTING APPLICATIONS
In the Fourth Industrial Revolution, science and technology have become increasingly advanced, and data has become a new commodity that can be used to accomplish many things. One Indonesian retail company that uses data to improve product quality is PT. XYZ. Oracle Forms is the form application that PT XYZ uses for data collection, but it can no longer meet the company's needs, so an alternative form application is required to replace it. Therefore, a web-based form application was created and combined with dynamic scripting, a programming technique for avoiding writing the same code repeatedly. In this study, dynamic scripting was combined with the Webix framework, resulting in a new framework that shortens the code and increases the time efficiency of building the form application.
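To make the idea of dynamic scripting concrete, the sketch below generates form-field definitions from a declarative schema instead of hand-writing one block of UI code per field, which is the repetition-avoidance technique the abstract describes. It is written in Python rather than Webix's JavaScript, and the field names and output structure are hypothetical illustrations, not the Webix API.

```python
# Minimal sketch of "dynamic scripting": build form definitions from data
# instead of hand-writing one block of UI code per field.
# Field names and the output structure are hypothetical, not the Webix API.
import json

FIELD_SCHEMA = [
    {"name": "item_code", "label": "Item Code", "type": "text"},
    {"name": "quantity",  "label": "Quantity",  "type": "number"},
    {"name": "store_id",  "label": "Store ID",  "type": "text"},
]

def build_form(schema):
    """Return a form definition generated from the schema."""
    widgets = []
    for field in schema:
        widgets.append({
            "view": field["type"],    # widget kind
            "name": field["name"],    # form field key
            "label": field["label"],  # on-screen label
        })
    return {"view": "form", "elements": widgets}

if __name__ == "__main__":
    print(json.dumps(build_form(FIELD_SCHEMA), indent=2))
```

Adding a new field then means adding one schema entry rather than another block of form code, which is where the reported reduction in code lines comes from.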
PyCUDA and PyOpenCL: A Scripting-Based Approach to GPU Run-Time Code Generation
High-performance computing has recently seen a surge of interest in
heterogeneous systems, with an emphasis on modern Graphics Processing Units
(GPUs). These devices offer tremendous potential for performance and efficiency
in important large-scale applications of computational science. However,
exploiting this potential can be challenging, as one must adapt to the
specialized and rapidly evolving computing environment currently exhibited by
GPUs. One way of addressing this challenge is to embrace better techniques and
develop tools tailored to their needs. This article presents one simple
technique, GPU run-time code generation (RTCG), along with PyCUDA and PyOpenCL,
two open-source toolkits that support this technique.
In introducing PyCUDA and PyOpenCL, this article proposes the combination of
a dynamic, high-level scripting language with the massive performance of a GPU
as a compelling two-tiered computing platform, potentially offering significant
performance and productivity advantages over conventional single-tier, static
systems. The concept of RTCG is simple and easily implemented using existing,
robust infrastructure. Nonetheless it is powerful enough to support (and
encourage) the creation of custom application-specific tools by its users. The
premise of the paper is illustrated by a wide range of examples where the
technique has been applied with considerable success.
Comment: Submitted to Parallel Computing, Elsevier
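As a concrete illustration of GPU run-time code generation, the sketch below uses PyCUDA's documented SourceModule interface to compile a kernel from a Python string at run time; the kernel itself and its parameters are illustrative and not taken from the paper.

```python
# Run-time code generation with PyCUDA: the CUDA C kernel is an ordinary
# Python string, so the script can specialize it (here, baking in a constant)
# before it is compiled on the fly. The kernel below is illustrative.
import numpy as np
import pycuda.autoinit             # creates a context on the default device
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

scale = 2.5                        # value baked into the generated source
kernel_src = f"""
__global__ void scale_vec(float *x, int n)
{{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= {scale}f;
}}
"""

mod = SourceModule(kernel_src)     # the CUDA compiler is invoked at run time
scale_vec = mod.get_function("scale_vec")

x = gpuarray.to_gpu(np.arange(1024, dtype=np.float32))
scale_vec(x.gpudata, np.int32(x.size), block=(256, 1, 1), grid=(4, 1))
print(x.get()[:5])                 # [ 0.   2.5  5.   7.5 10. ]
```

Because the kernel source is generated by the host script, loop bounds, data types, or unrolling factors can be chosen per problem instance, which is the RTCG workflow the article advocates.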
The Scalability-Efficiency/Maintainability-Portability Trade-off in Simulation Software Engineering: Examples and a Preliminary Systematic Literature Review
Large-scale simulations play a central role in science and industry.
Several challenges occur when building simulation software, because simulations
require complex software developed in a dynamic construction process. That is
why simulation software engineering (SSE) is emerging lately as a research
focus. The dichotomous trade-off between scalability and efficiency (SE) on the
one hand and maintainability and portability (MP) on the other hand is one of
the core challenges. We report on the SE/MP trade-off in the context of an
ongoing systematic literature review (SLR). After characterizing the issue of
the SE/MP trade-off using two examples from our own research, we (1) review the
33 identified articles that assess the trade-off, (2) summarize the proposed
solutions for the trade-off, and (3) discuss the findings for SSE and future
work. Overall, we see evidence for the SE/MP trade-off and first solution
approaches. However, a strong empirical foundation has yet to be established;
general quantitative metrics and methods supporting software developers in
addressing the trade-off have to be developed. We foresee considerable future
work in SSE across scientific communities.
Comment: 9 pages, 2 figures. Accepted for presentation at the Fourth International Workshop on Software Engineering for High Performance Computing in Computational Science and Engineering (SEHPCCSE 2016)
The OpenModelica integrated environment for modeling, simulation, and model-based development
OpenModelica is a unique large-scale integrated open-source Modelica- and FMI-based modeling, simulation, optimization, model-based analysis and development environment. Moreover, the OpenModelica environment provides a number of facilities such as debugging; optimization; visualization and 3D animation; web-based model editing and simulation; scripting from Modelica, Python, Julia, and Matlab; efficient simulation and co-simulation of FMI-based models; compilation for embedded systems; Modelica-UML integration; requirement verification; and generation of parallel code for multi-core architectures. The environment is based on the equation-based object-oriented Modelica language and currently uses the MetaModelica extended version of Modelica for its model compiler implementation. This overview paper gives an up-to-date description of the capabilities of the system, short overviews of used open source symbolic and numeric algorithms with pointers to published literature, tool integration aspects, some lessons learned, and the main vision behind its development.
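Since the abstract highlights scripting from Python among other languages, the sketch below shows how such scripted access typically looks through the OMPython package's OMCSessionZMQ interface. The model name, file path, and simulation options are placeholders, and the exact API details should be checked against the OpenModelica documentation.

```python
# Scripting an OpenModelica session from Python via OMPython (a sketch;
# the model name "MyPlant" and the options below are placeholders).
from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()                          # start/connect to an omc process
print(omc.sendExpression("getVersion()"))      # e.g. an "OpenModelica 1.x" string

omc.sendExpression("loadModel(Modelica)")      # load the Modelica Standard Library
omc.sendExpression('loadFile("MyPlant.mo")')   # load a user model (placeholder path)

# Translate, compile and simulate the model, then inspect the returned record.
result = omc.sendExpression(
    "simulate(MyPlant, stopTime=10.0, numberOfIntervals=500)"
)
print(result)                                  # result file, messages, timings, ...
```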
On-the-fly tracing for data-centric computing : parallelization, workflow and applications
As data-centric computing becomes the trend in science and engineering, more and more hardware systems, as well as middleware frameworks, are emerging to handle the intensive computations associated with big data. At the programming level, it is crucial to have corresponding programming paradigms for dealing with big data. Although MapReduce is now a well-known programming model for data-centric computing, in which parallelization is replaced by partitioning the computing task through the data, not all programs, particularly those using statistical computing and data mining algorithms with interdependence, can be refactored in such a fashion. On the other hand, many traditional automatic parallelization methods put an emphasis on formalism and may not achieve optimal performance with the given limited computing resources. In this work we propose a cross-platform programming paradigm, called on-the-fly data tracing, to provide source-to-source transformation, where the same framework also provides workflow optimization for larger applications. Using a big-data approximation, computations related to large-scale data input are identified in the code and workflow, and a simplified core dependence graph is built based on the computational load, taking big data into account. The code can then be partitioned into sections for efficient parallelization, and at the workflow level, optimization can be performed by adjusting the scheduling for big-data considerations, including the I/O performance of the machine. Regarding each unit in both source code and workflow as a model, this framework enables model-based parallel programming that matches the available computing resources. The techniques used in model-based parallel programming, the design of the software framework for both parallelization and workflow optimization, and its implementations in multiple programming languages are presented in the dissertation. Two kinds of experiments are then performed to validate the framework: (i) benchmarking of parallelization speed-up using typical examples from data analysis and machine learning (e.g., naive Bayes, k-means), and (ii) three real-world applications in data-centric computing: pattern detection from hurricane and storm surge simulations, road traffic flow prediction, and text mining from social media data. These applications show how to build scalable workflows with the framework and the resulting performance enhancements.
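As a rough illustration of the kind of core dependence graph described above, the sketch below weights code sections by the data volume they touch, records dependences between them, and groups independent sections so they can run in parallel. All section names, weights, and the scheduling heuristic are hypothetical, not the dissertation's actual framework.

```python
# Hypothetical sketch: weight code sections by the big-data volume they touch,
# record dependences between them, and group independent sections into stages
# that can execute in parallel. Illustration only, not the actual framework.

# section -> estimated data volume processed (arbitrary units)
load = {"read_input": 100, "clean": 80, "feature_a": 40,
        "feature_b": 40, "train": 120}

# section -> sections it depends on (a simplified core dependence graph)
deps = {"read_input": [], "clean": ["read_input"],
        "feature_a": ["clean"], "feature_b": ["clean"],
        "train": ["feature_a", "feature_b"]}

def schedule(deps):
    """Topologically group sections into levels; each level can run in parallel."""
    remaining = dict(deps)
    done, levels = set(), []
    while remaining:
        ready = [s for s, d in remaining.items() if all(p in done for p in d)]
        levels.append(ready)
        done.update(ready)
        for s in ready:
            remaining.pop(s)
    return levels

for i, level in enumerate(schedule(deps)):
    work = sum(load[s] for s in level)
    print(f"stage {i}: run in parallel -> {level} (total load {work})")
```

Here feature_a and feature_b land in the same stage because they only depend on clean, which is the kind of partitioning a big-data-aware dependence graph makes visible.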
Neuroimaging study designs, computational analyses and data provenance using the LONI pipeline.
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges: management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large-scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu
Serverification of Molecular Modeling Applications: the Rosetta Online Server that Includes Everyone (ROSIE)
The Rosetta molecular modeling software package provides experimentally
tested and rapidly evolving tools for the 3D structure prediction and
high-resolution design of proteins, nucleic acids, and a growing number of
non-natural polymers. Despite its free availability to academic users and
improving documentation, use of Rosetta has largely remained confined to
developers and their immediate collaborators due to the code's difficulty of
use, the requirement for large computational resources, and the unavailability
of servers for most of the Rosetta applications. Here, we present a unified web
framework for Rosetta applications called ROSIE (Rosetta Online Server that
Includes Everyone). ROSIE provides (a) a common user interface for Rosetta
protocols, (b) a stable application programming interface for developers to add
additional protocols, (c) a flexible back-end to allow leveraging of computer
cluster resources shared by RosettaCommons member institutions, and (d)
centralized administration by the RosettaCommons to ensure continuous
maintenance. This paper describes the ROSIE server infrastructure, a
step-by-step 'serverification' protocol for use by Rosetta developers, and the
deployment of the first nine ROSIE applications by six separate developer
teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance,
Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated
by the number and diversity of these applications, ROSIE offers a general and
speedy paradigm for serverification of Rosetta applications that incurs
negligible cost to developers and lowers barriers to Rosetta use for the
broader biological community. ROSIE is available at
http://rosie.rosettacommons.org
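The general "serverification" pattern the abstract describes, a thin web layer that accepts a job for a named protocol, queues it on shared cluster resources, and reports status, can be sketched as below. This is a generic, hypothetical illustration (here using Flask and a Slurm-style sbatch submission), not ROSIE's actual interface or code.

```python
# Generic, hypothetical sketch of "serverification": expose a batch protocol
# behind a small web API that queues jobs on a shared cluster. This is NOT
# the ROSIE codebase or API, only an illustration of the pattern.
import subprocess
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
JOBS = {}  # job_id -> cluster job handle (in-memory, for illustration only)

@app.route("/submit/<protocol>", methods=["POST"])
def submit(protocol):
    """Accept an input file for a named protocol and queue it on the cluster."""
    job_id = str(uuid.uuid4())
    infile = f"/tmp/{job_id}.in"
    request.files["input"].save(infile)
    # Assumes a Slurm-style scheduler and one wrapper script per protocol.
    out = subprocess.run(
        ["sbatch", f"run_{protocol}.sh", infile],
        capture_output=True, text=True, check=True,
    )
    JOBS[job_id] = out.stdout.strip()   # e.g. "Submitted batch job 12345"
    return jsonify({"job_id": job_id})

@app.route("/status/<job_id>")
def status(job_id):
    return jsonify({"job_id": job_id, "cluster_handle": JOBS.get(job_id)})

if __name__ == "__main__":
    app.run(port=8080)
```

In this pattern, adding a new protocol amounts to registering another wrapper script behind the same submission endpoint, which mirrors the low per-application cost the abstract reports.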