
    Modeling training content for software engineers in parallel computing

    This study proposes a robust framework for training software engineers who specialize in parallel computing. We first curated essential content for parallel computing education based on international standards and the evolving Computing Curricula recommendations. We then systematically structured the content and designed a well-defined learning pathway for aspiring software engineers. Concurrently, we conducted a comprehensive assessment of the current state of parallel computing training in Ukrainian higher education institutions. We analyzed bachelor's programs in Information Technologies and scrutinized individual course syllabi to identify valuable insights. By merging our findings with the review of educational programs, we formulated a comprehensive model for training in parallel computing. We also examined the pivotal role of the course "Parallel and Distributed Computing" in the developed curriculum and identified essential tools and methodologies for developing parallel and distributed programs. Our research contributes to the advancement of parallel computing education and provides a valuable reference point for curriculum designers and educators.
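    The abstract mentions essential tools and methodologies for developing parallel programs; a minimal sketch of the kind of first exercise such a course might use (the example is ours, not taken from the study) is a data-parallel map over a pool of worker processes using Python's standard library:

    ```python
    # A data-parallel "map" across worker processes: the canonical first
    # exercise in a parallel-programming course. Function and names are
    # illustrative, not from the paper.
    from multiprocessing import Pool

    def square(x):
        return x * x

    def parallel_squares(values, workers=4):
        """Apply `square` to every element of `values` in parallel."""
        with Pool(processes=workers) as pool:
            return pool.map(square, values)

    if __name__ == "__main__":
        print(parallel_squares(range(8)))
    ```

    Each worker receives a chunk of the input, so the speedup for CPU-bound work grows with the number of cores, which makes the ideas of work partitioning and load balance concrete for students.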

    Teaching the Grid: Learning Distributed Computing with the M-grid Framework

    A classic challenge within Computer Science is to distribute data and processes so as to take advantage of multiple computers tackling a single problem in a simultaneous and coordinated way. This situation arises in a number of different scenarios, including Grid computing, a secure, service-based architecture for tackling massively parallel problems and creating virtual organizations. Although the Grid seems destined to be an important part of the future computing landscape, it is very difficult to learn how to use, because real Grid software requires extensive setup and complex security processes. M-grid mimics the core features of the Grid in a much simpler way, enabling the rapid prototyping of distributed applications. We describe m-grid, explore how it may be used to teach foundation Grid computing skills at the Higher Education level, and report some of our experiences of deploying it as an exercise within a programming course.
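    The core pattern a system like m-grid exposes to students is a master distributing independent tasks to workers and collecting results. A hypothetical sketch of that pattern, with class and method names of our own invention (not m-grid's actual API), simulating workers with threads:

    ```python
    # Toy master/worker "grid": a shared task queue serviced by worker
    # threads. All names here are illustrative, not m-grid's API.
    import queue
    import threading

    class MiniGrid:
        def __init__(self, n_workers=3):
            self.tasks = queue.Queue()
            self.results = []
            self.lock = threading.Lock()
            self.workers = [threading.Thread(target=self._work)
                            for _ in range(n_workers)]

        def submit(self, fn, arg):
            """Master side: enqueue one independent unit of work."""
            self.tasks.put((fn, arg))

        def _work(self):
            """Worker side: pull tasks until the queue is drained."""
            while True:
                try:
                    fn, arg = self.tasks.get_nowait()
                except queue.Empty:
                    return
                r = fn(arg)
                with self.lock:
                    self.results.append(r)

        def run(self):
            for w in self.workers:
                w.start()
            for w in self.workers:
                w.join()
            return sorted(self.results)

    grid = MiniGrid()
    for i in range(5):
        grid.submit(lambda x: x * 10, i)
    out = grid.run()
    ```

    Real Grid middleware adds scheduling, security, and remote execution on top of this skeleton; stripping those away is exactly the simplification the paper argues makes the pattern teachable.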

    Implement of a high-performance computing system for parallel processing of scientific applications and the teaching of multicore and parallel programming

    Increasingly complex algorithms for modeling and solving the problems humanity currently faces have created new data-processing requirements and, consequently, the need for high-performance computing systems. Because such equipment carries a high economic cost that an educational institution often cannot afford, it is necessary to develop and implement computing architectures that are economical and scalable to build, such as heterogeneous distributed computing systems composed of several clusters of multicore processing elements with shared and distributed memory. This paper presents the analysis, design, and implementation of a high-performance computing system called Liebres InTELigentes, whose purpose is the design and execution of intrinsically parallel algorithms that require large amounts of storage and long processing times. The proposed system consists of conventional computing equipment (desktop computers, laptops, and servers) linked by a high-speed network. The main objective of this research is to build technology for scientific and educational research. This project is sponsored by Tecnológico Nacional de México (TecNM), 2018-2 110.
    Velarde Martinez, A. (2019). Implement of a high-performance computing system for parallel processing of scientific applications and the teaching of multicore and parallel programming. In INNODOCT/18. International Conference on Innovation, Documentation and Education. Editorial Universitat Politècnica de València. 203-213. https://doi.org/10.4995/INN2018.2018.8908
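    The intrinsically parallel workloads such a cluster targets can be sketched on a single multicore node with the standard library (the cluster-level message passing the paper's system uses is omitted; this example and its names are ours). Here each process computes one slice of a numerical integration of 4/(1+x²) on [0,1], whose sum approximates π:

    ```python
    # Split a midpoint-rule integration of 4/(1+x^2) over [0,1] into
    # independent chunks, one per worker process; the partial sums add
    # up to an approximation of pi. Illustrative example, not the paper's code.
    from concurrent.futures import ProcessPoolExecutor

    def partial_pi(args):
        """Midpoint-rule contribution of intervals [start, end) out of n."""
        start, end, n = args
        h = 1.0 / n
        return sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) * h
                   for i in range(start, end))

    def parallel_pi(n=100_000, chunks=4):
        bounds = [(k * n // chunks, (k + 1) * n // chunks, n)
                  for k in range(chunks)]
        with ProcessPoolExecutor(max_workers=chunks) as ex:
            return sum(ex.map(partial_pi, bounds))

    if __name__ == "__main__":
        print(parallel_pi())
    ```

    Because the chunks share no state, the same decomposition scales from one multicore desktop to a network of heterogeneous machines, which is the design point of the system described above.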

    The framework of P systems applied to solve optimal watermarking problem

    Membrane computing (known as P systems) is a novel class of distributed parallel computing models inspired by the structure and functioning of living cells and organs, and its application to real-world problems has become a hot topic in recent years. This paper discusses an interesting open problem in the digital watermarking domain, the optimal watermarking problem, and proposes a new optimal image watermarking method under the framework of P systems. A special membrane structure is designed, and its cells, acting as parallel computing units, are used to find the optimal watermarking parameters for image blocks. Some cells use the position-velocity model to evolve the watermarking parameters of image blocks, while another cell evaluates the objects in the system. In addition to the evolution rules, communication rules are used to exchange and share information between the cells. Simulation experiments on a large image set compare the proposed framework with other existing watermarking methods and demonstrate its superiority. Supported by the National Natural Science Foundation of China (No. 61170030), the Chunhui Project Foundation of the Education Department of China (Nos. Z2012025 and Z2012031), and the Sichuan Key Technology Research and Development Program (No. 2013GZX015).
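    The "position-velocity model" the cells apply is, in essence, a particle-swarm update. A simplified sketch of that update (our own reduction, not the paper's P-system rules; the quadratic objective stands in for the real embedding-distortion measure):

    ```python
    # Particle-swarm-style position-velocity updates, the evolution rule the
    # paper's cells use on watermarking parameter vectors. Simplified sketch;
    # the toy objective replaces the paper's image-block distortion measure.
    import random

    def pso_step(pos, vel, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
        """One position-velocity update for a single particle."""
        new_vel = [w * v
                   + c1 * rng.random() * (pb - x)
                   + c2 * rng.random() * (gb - x)
                   for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
        new_pos = [x + v for x, v in zip(pos, new_vel)]
        return new_pos, new_vel

    def optimize(objective, dim=2, particles=8, steps=50, seed=1):
        rng = random.Random(seed)
        pos = [[rng.uniform(-1, 1) for _ in range(dim)]
               for _ in range(particles)]
        vel = [[0.0] * dim for _ in range(particles)]
        pbest = [p[:] for p in pos]                 # per-particle best
        gbest = min(pbest, key=objective)           # swarm-wide best
        for _ in range(steps):
            for i in range(particles):
                pos[i], vel[i] = pso_step(pos[i], vel[i], pbest[i], gbest, rng)
                if objective(pos[i]) < objective(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=objective)
        return gbest

    best = optimize(lambda p: sum(x * x for x in p))
    ```

    In the paper's P-system framing, each evolving cell runs updates like these in parallel on its own image block, and communication rules play the role of sharing `gbest` between cells.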

    Supercomputing: An Interview with Henry Neeman

    Introduction: “Dr. Henry Neeman is the Director of the OU Supercomputing Center for Education & Research and an adjunct assistant professor in the School of Computer Science at the University of Oklahoma. . . . In addition to his own teaching and research, Dr. Neeman collaborates with dozens of research groups, applying High Performance Computing techniques in fields such as numerical weather prediction, bioinformatics and genomics, data mining, high energy physics, astronomy, nanotechnology, petroleum reservoir management, river basin modeling and engineering optimization. . . . Dr. Neeman’s research interests include high performance computing, scientific computing, parallel and distributed computing and computer science education” (Oklahoma Supercomputing Symposium 2011).

    Dynamic resource allocation heuristics that manage tradeoff between makespan and robustness

    Final draft post-refereeing. Includes bibliographical references. Heterogeneous parallel and distributed computing systems may operate in an environment where certain system performance features degrade due to unpredictable circumstances. Robustness can be defined as the degree to which a system can function correctly in the presence of parameter values different from those assumed. This work develops a model for quantifying robustness in a dynamic heterogeneous computing environment where task execution time estimates are known to contain errors. This mathematical expression of robustness is then applied to two different problem environments. Several heuristic solutions to both problem variations are presented that utilize this expression of robustness to influence mapping decisions. This research was supported by the DARPA Information Exploitation Office under contract No. NBCHC030137, by the Colorado State University Center for Robustness in Computer Systems (funded by the Colorado Commission on Higher Education Technology Advancement Group through the Colorado Institute of Technology), and by the Colorado State University George T. Abell Endowment.
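    The intuition behind such a robustness metric can be sketched in a few lines: given estimated machine finish times and a tolerated makespan bound τ, a mapping's robustness is the smallest slack any machine has before an estimation error would push the makespan past τ. This is a simplification of our own; the paper's formal metric also models how execution-time errors propagate:

    ```python
    # Simplified robustness-vs-makespan sketch (not the paper's exact metric):
    # with estimated per-machine finish times, robustness is the minimum slack
    # before any machine's error would violate the makespan bound tau.
    def makespan(finish_times):
        """Predicted completion time of the mapping: the latest machine."""
        return max(finish_times)

    def robustness(finish_times, tau):
        """Smallest slack any machine has before the bound tau is violated."""
        return min(tau - f for f in finish_times)

    est = [90.0, 70.0, 85.0]   # estimated finish time of each machine
    tau = 100.0                # tolerated makespan bound
    ```

    The tradeoff the heuristics manage is visible here: a mapping that minimizes `makespan` may load one machine close to τ and leave little slack, while a slightly slower mapping can be far more robust to estimation errors.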

    Lightning talks of EduHPC 2022

    The lightning talks at EduHPC provide an opportunity to share early results and insights on parallel and distributed computing (PDC) education and training efforts. The four lightning talks at EduHPC 2022 cover a range of topics in broadening PDC education: (i) curriculum development efforts for the European Masters in HPC program, (ii) bootcamps for CI professionals who support the running of AI workloads on HPC systems, (iii) a GPU programming course following the Carpentries model, and (iv) peer-review assignments to help students write efficient parallel algorithms within sustainable software libraries. Peer reviewed. Article signed by 26 authors: Apan Qasem (1), Hartwig Anzt (2, 3), Eduard Ayguade (4), Katharine Cahill (5), Ramon Canal (4), Jany Chan (6), Eric Fosler-Lussier (6), Fritz Göbel (2), Arpan Jain (6), Marcel Koch (2), Mateusz Kuzak (7), Josep Llosa (4), Raghu Machiraju (6), Xavier Martorell (4), Pratik Nayak (2), Shameema Oottikkal (5), Marcin Ostasz (8), Dhabaleswar K. Panda (6), Dirk Pleiter (9), Rajiv Ramnath (6), Maria-Ribera Sancho (4), Alessio Sclocco (7), Aamir Shafi (6), Hanno Spreeuw (7), Hari Subramoni (6), Karen Tomko (7). Affiliations: 1 Department of Computer Science, Texas State University, USA; 2 Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany; 3 University of Tennessee (UTK), Knoxville, USA; 4 Barcelona Supercomputing Center and Universitat Politècnica de Catalunya, Spain; 5 Ohio Supercomputer Center, USA; 6 College of Engineering, The Ohio State University, USA; 7 Netherlands eScience Center, The Netherlands; 8 ETP4HPC, The Netherlands; 9 PDC Center for High Performance Computing, KTH Royal Institute of Technology, Sweden. Postprint (author's final draft).

    Learning Parallel Computations with ParaLab

    In this paper, we present the ParaLab teachware system, which can be used for learning parallel computation methods. ParaLab provides tools for simulating multiprocessor computational systems with various network topologies, for carrying out computational experiments in simulation mode, and for evaluating the efficiency of parallel computation methods. The visual presentation of the parallel computations taking place in the computational experiments is the key feature of the system. ParaLab can be used for laboratory training within various teaching courses in the field of parallel, distributed, and supercomputer computations.
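    The efficiency evaluation such a system reports rests on the textbook definitions of speedup and efficiency; a short statement of those definitions (standard material, not ParaLab's code):

    ```python
    # Standard parallel-performance measures a simulator like ParaLab reports:
    # speedup S = T_serial / T_parallel, efficiency E = S / p.
    def speedup(t_serial, t_parallel):
        """How many times faster the parallel run is than the serial one."""
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, processors):
        """Fraction of ideal linear speedup actually achieved (1.0 = ideal)."""
        return speedup(t_serial, t_parallel) / processors

    # e.g. a method that takes 100 s serially and 30 s on 4 simulated
    # processors achieves speedup ~3.33 and efficiency ~0.83.
    s = speedup(100.0, 30.0)
    e = efficiency(100.0, 30.0, 4)
    ```

    Plotting these measures against processor count for different simulated topologies is precisely the kind of laboratory experiment the system is built around.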

    SNAP, Crackle, WebWindows!

    We elaborate the SNAP---Scalable (ATM) Network and (PC) Platforms---view of computing in the year 2000. The World Wide Web will continue its rapid evolution, and in the future, applications will not be written for Windows NT/95 or UNIX, but rather for WebWindows, with interfaces defined by the standards of Web servers and clients. This universal environment will support WebTop productivity tools, such as WebWord, WebLotus123, and WebNotes, built in a modular, dynamic fashion, undermining the business model for large software companies. We define a layered WebWindows software architecture in which applications are built on top of multi-use services. We discuss examples including business enterprise systems (IntraNets), health care, financial services, and education. HPCC is implicit throughout this discussion, for there is no larger parallel system than the World Wide metacomputer. We suggest building the MPP programming environment in terms of pervasive, sustainable WebWindows technologies. In particular, WebFlow will support naturally dataflow, integrating data- and compute-intensive applications on distributed heterogeneous systems.