7,263 research outputs found

    Conceptual modeling for requirements of government to citizen service provision


    Cost reduction using process analysis in company PEGRES obuv s.r.o.

    Company PEGRES obuv s.r.o. has long been struggling with stagnation in production planning and control. Some of its internal processes are now obsolete and no longer effective under current conditions. The goal of this paper is to reduce costs using process analysis. To achieve this goal, the current state of the outdated processes is analyzed, followed by a description of selected production-management methods that are relevant to shoe production. The output of the work is a set of recommendations and proposals for changes to the existing processes. Selected proposals are implemented in the company, and the paper includes an evaluation of the results after these changes were introduced.

    ResilientStore: a heuristic-based data format selector for intermediate results

    Large-scale data analysis is an important activity in many organizations and typically requires the deployment of data-intensive workflows. As data is processed, these workflows generate large intermediate results, which are typically pipelined from one operator to the next. However, if materialized, these results become reusable, so subsequent workflows need not recompute them. There are already many solutions that materialize intermediate results, but all of them assume a fixed data format. A fixed format, however, may not be optimal for every situation. For example, it is well known that different data fragmentation strategies (e.g., horizontal and vertical) behave better or worse according to the access patterns of the subsequent operations. In this paper, we present ResilientStore, which assists in selecting the most appropriate data format for materializing intermediate results. Given a workflow and a set of materialization points, it uses rule-based heuristics to choose the best storage data format based on subsequent access patterns. We have implemented ResilientStore for HDFS and three different data formats: SequenceFile, Parquet and Avro. Experimental results show that our solution gives 18% better performance than any solution based on a single fixed format.
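
    The abstract's rule-based selection idea can be pictured with a short sketch. The rules, thresholds, and field names below are illustrative assumptions, not ResilientStore's actual heuristics, which the paper defines over the access patterns of downstream operators.

        # Minimal sketch of a rule-based format selector in the spirit of
        # ResilientStore. Thresholds and rules are assumed for illustration.
        from dataclasses import dataclass

        @dataclass
        class AccessPattern:
            projected_columns: int   # columns read by subsequent operators
            total_columns: int       # columns in the intermediate result
            schema_evolves: bool     # will the schema change between runs?

        def select_format(p: AccessPattern) -> str:
            """Pick a storage format for one materialization point."""
            # Column-pruning readers favor a columnar layout (Parquet).
            if p.projected_columns / p.total_columns < 0.3:
                return "Parquet"
            # Evolving schemas favor Avro's self-describing row format.
            if p.schema_evolves:
                return "Avro"
            # Whole-row sequential scans are served well by SequenceFile.
            return "SequenceFile"

        print(select_format(AccessPattern(2, 20, False)))  # -> Parquet

    A real selector would, as the paper describes, derive these access patterns from the operators downstream of each materialization point rather than take them as given inputs.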

    Dynamic Virtualized Deployment of Particle Physics Environments on a High Performance Computing Cluster

    The NEMO High Performance Computing Cluster at the University of Freiburg has been made available to researchers of the ATLAS and CMS experiments. Users access the cluster from external machines connected to the Worldwide LHC Computing Grid (WLCG). This paper describes how the full software environment of the WLCG is provided in a virtual machine image. The interplay between the schedulers for NEMO and for the external clusters is coordinated through the ROCED service. A cloud computing infrastructure is deployed at NEMO to orchestrate the simultaneous usage by bare-metal and virtualized jobs. Through this setup, resources are provided to users in a transparent, automated, and on-demand way. The performance of the virtualized environment has been evaluated for particle physics applications.
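
    The scheduler interplay the abstract describes can be pictured with a toy reconciliation loop: boot virtualized workers when the external queue has a backlog and release them when idle. All class and parameter names here are hypothetical; this is not the ROCED service's actual API.

        # Toy sketch of demand-driven VM orchestration in the spirit of ROCED.
        # The cloud interface and sizing policy are assumptions for illustration.
        class CloudAdapter:
            """Stand-in for a site-specific API that boots/terminates VM workers."""
            def __init__(self):
                self.running = 0
            def boot(self, n):
                self.running += n
            def terminate(self, n):
                self.running -= min(n, self.running)

        def reconcile(pending_jobs, idle_workers, cloud, jobs_per_vm=4):
            # Boot enough VMs to cover the backlog the idle workers cannot absorb.
            backlog = pending_jobs - idle_workers * jobs_per_vm
            if backlog > 0:
                cloud.boot(-(-backlog // jobs_per_vm))  # ceiling division
            elif pending_jobs == 0 and idle_workers > 0:
                cloud.terminate(idle_workers)  # free resources for bare-metal jobs

        cloud = CloudAdapter()
        reconcile(pending_jobs=10, idle_workers=0, cloud=cloud)
        print(cloud.running)  # -> 3 VMs for a backlog of 10 jobs

    Running such a loop periodically lets virtualized and bare-metal jobs share the cluster, since VM workers exist only while there is demand for them.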

    PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies

    The current landscape of scientific research is widely based on modeling and simulation, typically with complexity in the simulation's flow of execution and parameterization properties. Execution flows are not necessarily straightforward, since they may require multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring traversal of a large parameter space. High-performance computers offer practical resources at the expense of users handling the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on a keyword-value pair syntax, thus sparing the user the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command-line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used in single/multi-node and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiply using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies while increasing resource utilization.
    Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, US
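
    The keyword-value workflow definition can be illustrated with a small expansion sketch: parse a parameter file and enumerate the Cartesian product of all swept values. The file syntax and key names are assumptions for illustration, not PaPaS's actual format.

        # Sketch: expand a keyword-value parameter study into concrete runs.
        # Syntax and keys below are illustrative, not the real PaPaS format.
        import itertools

        param_text = """
        threads = 1, 2, 4, 8
        matrix_size = 512, 1024
        command = ./matmul --threads {threads} --n {matrix_size}
        """

        def parse(text):
            params = {}
            for line in text.strip().splitlines():
                key, _, value = line.partition("=")
                params[key.strip()] = [v.strip() for v in value.split(",")]
            return params

        params = parse(param_text)
        template = params.pop("command")[0]     # command template, not swept
        keys = list(params)
        for combo in itertools.product(*(params[k] for k in keys)):
            print(template.format(**dict(zip(keys, combo))))

    Each printed line is one job in the study (eight here: four thread counts times two matrix sizes), which a framework like PaPaS would then submit via SSH, a batch system, or MPI.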