4,923 research outputs found

    Secure Data Sharing With AdHoc

    Get PDF
    In scientific circles there is a pressing need to form temporary and dynamic collaborations in order to share diverse resources (e.g. data, access to services, applications or various instruments). In theory, traditional grid technologies respond to this need with the abstraction of a Virtual Organization (VO); in practice, its procedures suffer from latency and administrative overhead and are inconvenient for users. We propose the Manifesto for Secure Sharing. Its main postulate is that users should be able to share data and resources by themselves, without any intervention from the system administrator; in addition, operating the intuitive interface should require no IT skills. AdHoc is a resource sharing interface designed for users who want to share data or computational resources within seconds and almost effortlessly. The AdHoc application is built on top of traditional security frameworks such as the PKI X.509 certificate scheme, Globus GSI, gLite VOMS and Shibboleth. It enables rapid and secure collaboration between users
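
    As a rough, hypothetical sketch (not the AdHoc implementation), the core idea of letting users grant access themselves, keyed on VOMS-style attributes carried by an X.509 proxy, might look like the following. All names here (ShareRule, SharingTable, the attribute strings) are invented for illustration.

        # Rough sketch only, not the AdHoc implementation: user-driven sharing rules
        # keyed on VOMS-style attributes that would be asserted by an X.509 proxy.
        # All names (ShareRule, SharingTable, the attribute strings) are hypothetical.
        from dataclasses import dataclass, field

        @dataclass
        class ShareRule:
            resource: str            # e.g. a dataset path or a service endpoint
            required_attribute: str  # e.g. "/myvo/analysis/Role=member"

        @dataclass
        class SharingTable:
            rules: list = field(default_factory=list)

            def grant(self, resource, attribute):
                # A user adds the rule directly; no administrator is involved.
                self.rules.append(ShareRule(resource, attribute))

            def is_allowed(self, resource, voms_attributes):
                # voms_attributes: attributes presented by the requesting user.
                return any(r.resource == resource and r.required_attribute in voms_attributes
                           for r in self.rules)

        # Usage: a user shares a dataset with members of her VO group in seconds.
        table = SharingTable()
        table.grant("lfn:/experiments/run42", "/myvo/analysis/Role=member")
        print(table.is_allowed("lfn:/experiments/run42", ["/myvo/analysis/Role=member"]))  # True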

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    Get PDF
    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on the development of new and more efficient approaches to solving Maxwell's equations. Interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be addressed easily thanks to the reliability and flexibility of new technologies, together with the increased computational power. This technological evolution opens the possibility of addressing large and complex tasks. Many of these applications aim to simulate the electromagnetic behavior, for example in terms of input impedance and radiation pattern in antenna problems, or of Radar Cross Section in scattering applications. Problems whose solution requires high accuracy, instead, call for full-wave analysis techniques, e.g. in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves the researcher of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it implies (a) a high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during the analysis of complex structures. The above considerations underline the need to identify appropriate information technologies that ease the achievement of a solution and speed up the required computations. The authors' analysis and expertise suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high performance computing environments, where hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could only be addressed sequentially or by using supercomputers. Grid Computing is a technique developed to process enormous amounts of data, and it enables large-scale resource sharing to solve problems in distributed scenarios. The main advantage of the Grid comes from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution is computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses our requirements (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes is composed of physical machines and virtualized ones; this feature enables great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient usage of the Grid: the presented architecture can be used by different users running different applications
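
    A minimal illustration of the parallel block execution idea, assuming Python with NumPy: each Domain Decomposition block is treated as an independent sub-problem and dispatched to a worker process standing in for a Grid node (physical or virtual). The block sizes, the toy linear systems and the function names are illustrative, not the chapter's actual code.

        # Illustrative sketch, assuming Python and NumPy: each DD block is an
        # independent sub-problem dispatched to a worker process that stands in
        # for a Grid node (physical or virtual). Block sizes and systems are toy.
        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def solve_block(block_id, size, seed):
            # Stand-in for solving one sub-domain's linear system on a node.
            rng = np.random.default_rng(seed)
            A = size * np.eye(size) + rng.standard_normal((size, size))
            b = rng.standard_normal(size)
            return block_id, np.linalg.solve(A, b)

        if __name__ == "__main__":
            blocks = [(i, 200, i) for i in range(8)]          # 8 independent blocks
            ids, sizes, seeds = zip(*blocks)
            with ProcessPoolExecutor() as pool:               # workers ~ Grid nodes
                solutions = dict(pool.map(solve_block, ids, sizes, seeds))
            # The per-block solutions would then be recombined by the DD scheme.
            print(sorted(solutions))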

    CyberGuarder: a virtualization security assurance architecture for green cloud computing

    Get PDF
    Cloud Computing, Green Computing, Virtualization, Virtual Security Appliance, Security Isolation

    D.2.1.2 First integrated Grid infrastructure

    No full text

    Private Cloud Deployment on Shared Computer Labs

    Get PDF
    A computer laboratory in a school or college is often shared between multiple class and lab sessions, yet the computers in the lab are frequently left idling for extended periods of time. These are potential resources to be harvested for cloud services. This manuscript details the deployment of a private cloud on shared computer labs. Fundamental services, namely an operation manager, a configuration manager, a cloud manager, and a schedule manager, were set up to power computers on and off remotely, specify each computer's OS configuration, manage cloud services (i.e., provision and retire virtual machines), and schedule OS-switching tasks, respectively. OpenStack was employed to manage the computer resources for cloud services. The deployment of a private cloud can improve the utilization of computers in shared computer labs
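
    A hedged sketch of the cloud manager's provision/retire duties using the openstacksdk Python client; this is not the paper's code, and the cloud name, image, flavor and network identifiers are placeholders that would come from the actual OpenStack deployment. In the described setup, the schedule manager would trigger such calls around class and lab sessions.

        # Hedged sketch of the cloud manager's provision/retire duties via the
        # OpenStack SDK; "lab-cloud" and the IMAGE/FLAVOR/NETWORK identifiers are
        # placeholders for values taken from the actual deployment.
        import openstack

        def provision(conn, name, image_id, flavor_id, network_id):
            server = conn.compute.create_server(
                name=name,
                image_id=image_id,
                flavor_id=flavor_id,
                networks=[{"uuid": network_id}],
            )
            return conn.compute.wait_for_server(server)  # block until ACTIVE

        def retire(conn, server):
            conn.compute.delete_server(server)

        if __name__ == "__main__":
            conn = openstack.connect(cloud="lab-cloud")  # entry assumed in clouds.yaml
            vm = provision(conn, "lab-session-vm",
                           image_id="IMAGE_ID", flavor_id="FLAVOR_ID",
                           network_id="NETWORK_ID")
            # ... the scheduled lab/class session runs here ...
            retire(conn, vm)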

    Survey and Analysis of Production Distributed Computing Infrastructures

    Full text link
    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures