
    SPEDE: A Simple Programming Environment for Distributed Execution (Users' Manual)

    Traditional single-processor computers are quickly reaching their full computational potential. The quest for ever faster chips has brought technology to the point where the laws of physics are hampering future gains. Significant gains in speed must therefore come from using multiple processors instead of a single processor. This usually takes the form of a parallel computer, such as the Connection Machine Model 5. Recently, however, much interest has focused on software that organizes single-processor computers to behave like a parallel computer. This is desirable for sites with large installations of workstations, since the cost of new parallel systems is prohibitive. SPEDE, a Simple Programming Environment for Distributed Execution, was designed for this purpose. It allows UNIX-based machines of varying hardware types to be organized and utilized by a programmer of parallel applications. SPEDE is a user-level system in that it requires no special privileges to run. Every user keeps a separate copy of the system, so security issues are covered by the normal UNIX operating environment. SPEDE is characterized as a large-grained distributed environment: applications with a large processing-to-I/O ratio will be much more effective than those with a small ratio. SPEDE allows users to coordinate the use of many computers through a straightforward interface. Machines are organized by classes, labels that group them into more manageable units. For example, users might create a class based on the byte ordering of machines, or on their location. Users can then specify more precisely which machines they want to use for a particular session. Sessions are essentially the interaction between objects in the SPEDE environment. A user creates an object to perform a certain task, such as constructing part of a fractal image. Objects can send and receive messages from other objects using a simple interface provided with SPEDE. Objects are machine independent, which means that the same object can run simultaneously on different platforms. This is achieved by translating all messages into standard network byte ordering. However, if user data is passed between objects, it is the user's responsibility to make sure the byte ordering is correct. The SPEDE system involves several major components that help control and manage object interactions. Figure 1 shows a session running with three machines (each surrounded by a rounded rectangle). Three objects are running, two named MandComp and one named Mand. Each object is on a different machine, although it is possible to have multiple objects on a single machine. In the figure, the lines connecting the various entities represent socket connections. UNIX sockets are the transport mechanism used in SPEDE, although one could implement a lower-level protocol for more efficient communication. Sockets can also be a problem, because some machines place strict limits on the number of connections a user can have open at any given time.
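
    The abstract leaves SPEDE's message interface unspecified, but the byte-ordering rule it states is easy to illustrate. Below is a minimal sketch in Python (not SPEDE's actual API; pack_payload and unpack_payload are hypothetical helpers) showing how user data can be packed in network byte order so that objects on little- and big-endian machines agree.

```python
import struct

# SPEDE converts its own message headers to network byte order, but payload
# data is the user's responsibility (per the abstract). Packing with '!'
# (network order, big-endian) makes a payload readable on either endianness.

def pack_payload(values):
    """Pack a list of 32-bit ints in network byte order for sending."""
    return struct.pack("!%di" % len(values), *values)

def unpack_payload(data):
    """Unpack a network-byte-order payload back to host ints."""
    count = len(data) // 4
    return list(struct.unpack("!%di" % count, data))

# The receiver recovers the same values regardless of host endianness.
wire = pack_payload([1, 2, 65536])
assert unpack_payload(wire) == [1, 2, 65536]
```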

    Virtual Machines and Networks - Installation, Performance Study, Advantages and Virtualization Options

    Interest in virtualization has been growing rapidly in the IT industry because of inherent benefits like better resource utilization and ease of system manageability. Experimentation with virtualization, as well as the deployment of virtual software, is increasingly popular in educational institutions for research and teaching. This paper stresses the potential advantages of virtualization and the use of virtual machines for scenarios that cannot easily be implemented and/or studied in a traditional academic network environment, but that students need to explore and experiment with to meet the rising needs and knowledge base demanded by the IT industry. In this context, we discuss various aspects of virtualization: the working principle of virtual machines, the installation procedure for a virtual guest operating system on a physical host operating system, virtualization options, and a performance study measuring the throughput obtained on a network of virtual machines and physical host machines. In addition, the paper extensively evaluates the use of virtual machines and virtual networks in an academic environment and specifically discusses sample projects on network security, which would not be feasible to conduct in a physical network of personal computers but can be conducted using virtual machines.
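
    As one illustration of the kind of throughput study described above, here is a minimal TCP throughput probe. This is a sketch only; the paper's actual measurement tool and parameters are not specified here, and the CHUNK and TOTAL values are arbitrary choices. Run serve on one machine (virtual or physical) and send on the other, then compare the reported rates.

```python
import socket, sys, time

CHUNK = 64 * 1024          # send-buffer size per write (arbitrary choice)
TOTAL = 100 * 1024 * 1024  # bytes transferred per run (arbitrary choice)

def serve(port=5001):
    """Receive TOTAL bytes and report throughput; run on one machine."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        received, start = 0, time.perf_counter()
        while received < TOTAL:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        elapsed = time.perf_counter() - start
        print(f"{received * 8 / elapsed / 1e6:.1f} Mbit/s")

def send(host, port=5001):
    """Stream TOTAL bytes to the server; run on the other machine."""
    with socket.create_connection((host, port)) as conn:
        buf = b"\x00" * CHUNK
        sent = 0
        while sent < TOTAL:
            conn.sendall(buf)
            sent += CHUNK

if __name__ == "__main__":
    serve() if sys.argv[1] == "serve" else send(sys.argv[2])
```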

    Phenomenology Tools on Cloud Infrastructures using OpenStack

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of "real" physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations. Comment: 25 pages, 12 figures; information on memory usage included, as well as minor modifications. Version to appear in EPJ.
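
    The following sketch illustrates the benchmarking approach described above in its simplest form: time an identical CPU-bound kernel on a virtual machine and on the physical host, then compare wall times. The toy Monte Carlo kernel is an assumption standing in for the paper's phenomenology codes, which are not reproduced here.

```python
import random, time

def mc_kernel(samples=2_000_000):
    """Toy Monte Carlo integral standing in for a phenomenology code."""
    acc = 0.0
    for _ in range(samples):
        x = random.random()
        acc += x * x              # integrate f(x) = x^2 over [0, 1]
    return acc / samples          # converges to 1/3

def best_wall_time(repeats=5):
    """Best-of-N timing; run identically on the VM and on bare metal."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        mc_kernel()
        times.append(time.perf_counter() - t0)
    return min(times)

if __name__ == "__main__":
    print(f"best wall time: {best_wall_time():.3f} s")
```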

    Hardware and software status of QCDOC

    QCDOC is a massively parallel supercomputer whose processing nodes are based on an application-specific integrated circuit (ASIC). This ASIC was custom-designed so that crucial lattice QCD kernels achieve an overall sustained performance of 50% of peak on machines with tens of thousands of nodes. This strong scalability, together with low power consumption and a price/performance ratio of $1 per sustained MFlops, enables QCDOC to attack the most demanding lattice QCD problems. The first ASICs became available in June of 2003, and the testing performed so far has shown all systems functioning according to specification. We review the hardware and software status of QCDOC and present performance figures obtained on real hardware as well as in simulation. Comment: Lattice2003(machine), 6 pages, 5 figures
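
    The abstract's headline figures combine into a quick back-of-the-envelope estimate, sketched below. The 1 GFlops peak per node is an assumption; only the 50% sustained fraction, the node-count scale, and the $1 per sustained MFlops figure come from the text.

```python
# Back-of-the-envelope numbers implied by the abstract.
peak_per_node_mflops = 1000    # assumed ASIC peak (1 GFlops per node)
sustained_fraction = 0.50      # from the abstract
nodes = 10_000                 # representative node count

sustained_mflops = nodes * peak_per_node_mflops * sustained_fraction
cost_dollars = sustained_mflops * 1.0   # $1 per sustained MFlops

print(f"sustained: {sustained_mflops / 1e6:.1f} TFlops")    # 5.0 TFlops
print(f"implied machine cost: ${cost_dollars / 1e6:.1f}M")  # $5.0M
```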

    The RAppArmor Package: Enforcing Security Policies in R Using Dynamic Sandboxing on Linux

    The increasing availability of cloud computing and scientific supercomputers brings great potential for making R accessible through public or shared resources. This allows us to efficiently run code requiring many cycles and much memory, or to embed R functionality into, e.g., systems and web services. However, some important security concerns need to be addressed before this can be put into production. The prime use case in the design of R has always been a single statistician running R on the local machine through the interactive console. Therefore the execution environment of R is entirely unrestricted, which could result in malicious behavior or excessive use of hardware resources in a shared environment. Properly securing an R process turns out to be a complex problem. We describe various approaches and illustrate potential issues using some of our personal experiences in hosting public web services. Finally we introduce the RAppArmor package: a Linux-based reference implementation for dynamic sandboxing in R at the level of the operating system.
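
    RAppArmor itself is an R package, but the operating-system mechanisms it builds on are easy to sketch. The following Python stand-in (an analogy, not RAppArmor's API) shows the resource-limit half of dynamic sandboxing on Linux; applying AppArmor profiles additionally requires the AppArmor userland and is omitted here.

```python
import resource, subprocess, sys

def run_limited(cmd, cpu_seconds=5, mem_bytes=256 * 1024 * 1024):
    """Run cmd in a child process with hard CPU-time and memory caps.

    Linux-only sketch: the limits are applied in the child just before
    exec, so the parent process is unaffected.
    """
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=apply_limits)

if __name__ == "__main__":
    # Example: an infinite loop is killed once it exceeds the CPU cap.
    result = run_limited([sys.executable, "-c", "while True: pass"])
    print("exit status:", result.returncode)  # negative: killed by signal
```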

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on the development of new and more efficient approaches to solving Maxwell's equations. Interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be easily addressed thanks to the reliability and flexibility of new technologies, together with increased computational power. This technological evolution opens the possibility of addressing large and complex tasks. Many of these applications aim to simulate electromagnetic behavior, for example in terms of input impedance and radiation pattern in antenna problems, or radar cross section in scattering applications. Problems whose solution requires high accuracy instead need full-wave analysis techniques, e.g., in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Besides, other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves the researcher of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it implies (a) high computational effort, due to the increased number of degrees of freedom, and (b) worsening of the spectral properties of the linear system during complex analyses. These considerations underline the need to identify appropriate information technologies that ease the computation of solutions and speed up the required processing. The authors' analysis and experience suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high-performance computing environments. In this context, hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could be addressed only sequentially or by using supercomputers. Grid Computing is a technique developed to process enormous amounts of data; it enables large-scale resource sharing to solve problems by exploiting distributed scenarios. The main advantage of the Grid comes from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution can be computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses our requirements (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes is composed of physical machines and virtualized ones. This feature enables great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient Grid usage; indeed, the presented architecture can be used by different users running different applications.
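
    A schematic sketch of the execution model described above: the DD step (not shown) yields independent subproblems, and the infrastructure farms them out to Grid nodes. In this minimal Python illustration a local process pool stands in for the Grid's physical and virtual nodes, and solve_block is a hypothetical placeholder for the per-block electromagnetic solve.

```python
from concurrent.futures import ProcessPoolExecutor

def solve_block(block_id):
    """Placeholder per-block computation; returns (block_id, result)."""
    partial = sum(i * i for i in range(10_000))  # stand-in workload
    return block_id, partial

def run_session(num_blocks=8, num_nodes=4):
    """Distribute independent blocks across workers and gather results."""
    with ProcessPoolExecutor(max_workers=num_nodes) as pool:
        return dict(pool.map(solve_block, range(num_blocks)))

if __name__ == "__main__":
    results = run_session()
    print(f"solved {len(results)} blocks")
```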