
    High-Performance Cloud Computing: A View of Scientific Applications

    Scientific computing often requires a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed with high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible, service-based infrastructure supports multiple programming paradigms that allow Aneka to address a variety of scenarios, from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain-imaging workflow. Comment: 13 pages, 9 figures, conference paper.
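
    The gene-expression case study is essentially a bag-of-tasks workload: many independent classification jobs farmed out to dynamically provisioned workers. The sketch below illustrates that pattern only; it uses Python's standard concurrent.futures as a stand-in (Aneka's actual SDK is .NET-based and its API is not reproduced here), and the classify function is a hypothetical placeholder.

        # Illustrative stand-in, not Aneka code: a bag-of-tasks pattern where
        # independent classification jobs are submitted to a worker pool.
        from concurrent.futures import ProcessPoolExecutor, as_completed

        def classify(sample_id: int) -> tuple[int, str]:
            # Placeholder for a gene-expression classifier run on one sample.
            label = "tumor" if sample_id % 2 else "normal"  # dummy rule
            return sample_id, label

        if __name__ == "__main__":
            # Tasks are independent, so the pool (like a provisioned cloud)
            # can grow or shrink without changing the application logic.
            with ProcessPoolExecutor(max_workers=4) as pool:
                futures = [pool.submit(classify, i) for i in range(16)]
                for f in as_completed(futures):
                    sample, label = f.result()
                    print(f"sample {sample}: {label}")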

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st-century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and a computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud computing initiative, called Cloudbus: (i) Aneka, a Platform-as-a-Service software system containing an SDK (Software Development Kit) for the construction of Cloud applications and their deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for the dynamic creation of federated computing environments for scaling elastic applications; (iii) creation of third-party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on the capabilities of IaaS providers such as Amazon, along with Grid mashups; (iv) CloudSim, which supports modelling and simulation of Clouds for performance studies; (v) energy-efficient resource allocation mechanisms and techniques for the creation and management of Green Clouds; and (vi) pathways for future research. Comment: 21 pages, 6 figures, 2 tables, conference paper.
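
    A core idea behind market-oriented resource management is that a broker chooses resources by price subject to SLA constraints such as deadlines. The toy Python sketch below is not Cloudbus or CloudSim code; the names (Offer, pick_offer) and units are invented for illustration. It shows one minimal form of the decision: pick the cheapest offer that can still finish the job before its deadline.

        # Toy SLA-oriented broker sketch (illustrative assumptions only).
        from dataclasses import dataclass

        @dataclass
        class Offer:
            provider: str
            price_per_hour: float   # hypothetical currency units
            mips: int               # million instructions per second

        def pick_offer(offers: list[Offer], job_mi: int,
                       deadline_hours: float) -> Offer | None:
            """Return the cheapest offer able to finish job_mi (million
            instructions) within the deadline, or None if the SLA fails."""
            # mips * 3600 = million instructions the offer completes per hour.
            feasible = [o for o in offers
                        if job_mi / (o.mips * 3600) <= deadline_hours]
            return min(feasible, key=lambda o: o.price_per_hour, default=None)

        offers = [Offer("private", 0.00, 1000),
                  Offer("public-small", 0.10, 2000),
                  Offer("public-large", 0.40, 8000)]
        # Only the large (and most expensive) offer meets the 2-hour deadline.
        print(pick_offer(offers, job_mi=30_000_000, deadline_hours=2.0))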

    Biometrics-as-a-Service: A Framework to Promote Innovative Biometric Recognition in the Cloud

    Biometric recognition, or simply biometrics, is the use of biological attributes such as the face, fingerprints, or iris to recognize an individual in an automated manner. A key application of biometrics is authentication, i.e., using said biological attributes to provide access by verifying the claimed identity of an individual. This paper presents a framework for Biometrics-as-a-Service (BaaS) that performs biometric matching operations in the cloud while relying on simple and ubiquitous consumer devices such as smartphones. Further, the framework promotes innovation by providing interfaces for a plurality of software developers to upload their matching algorithms to the cloud. When a biometric authentication request is submitted, the system applies a set of criteria to automatically select an appropriate matching algorithm. Every time a particular algorithm is selected, the corresponding developer is rendered a micropayment. This creates an innovative and competitive ecosystem that benefits both software developers and consumers. As a case study, we have implemented the following: (a) an ocular recognition system using a mobile web interface providing user access to a biometric authentication service, and (b) a Linux-based virtual machine environment used by software developers for algorithm development and submission.
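
    A minimal Python sketch of the selection-and-micropayment loop described above. The class and field names, and the selection criterion (highest reported accuracy for the requested modality), are illustrative assumptions rather than the paper's actual policy.

        # Hypothetical BaaS registry: select a matcher, credit its developer.
        from dataclasses import dataclass, field

        @dataclass
        class Matcher:
            developer: str
            modality: str            # e.g. "ocular", "face"
            reported_accuracy: float

        @dataclass
        class BaaSRegistry:
            matchers: list = field(default_factory=list)
            earnings: dict = field(default_factory=dict)  # developer -> total

        def select(reg: BaaSRegistry, modality: str) -> Matcher:
            # Assumed criterion: best reported accuracy for this modality.
            candidates = [m for m in reg.matchers if m.modality == modality]
            return max(candidates, key=lambda m: m.reported_accuracy)

        def authenticate(reg: BaaSRegistry, modality: str, fee: float = 0.01) -> str:
            chosen = select(reg, modality)
            # Each selection credits a micropayment to the algorithm's developer.
            reg.earnings[chosen.developer] = reg.earnings.get(chosen.developer, 0.0) + fee
            return chosen.developer

        reg = BaaSRegistry(matchers=[Matcher("dev_a", "ocular", 0.92),
                                     Matcher("dev_b", "ocular", 0.95)])
        print(authenticate(reg, "ocular"), reg.earnings)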

    Orchestration of a large infrastructure of Remote Desktop Windows Servers

    The CERN Windows Terminal Service infrastructure is an aggregation of multiple virtual servers running Remote Desktop Services, accessed by hundreds of users every day; it has two purposes: to provide external access to the CERN network and to enforce access control for certain parts of the accelerator complex. Currently, the deployment and configuration of these servers and services require some interaction by system administrators, although scripts and tools developed at CERN help alleviate the problem. Scaling the infrastructure up and down (i.e., adding or removing servers) is also an issue, since it is done manually. However, recent changes in the infrastructure and the adoption of new software tools that automate software deployment and configuration open new possibilities to improve and orchestrate the current service. Automation and orchestration will not only reduce the time and effort necessary to deploy new instances, but also simplify operations such as patching and the analysis and rebuilding of compromised nodes, and will provide better performance in response to load increases. The goal of this CERN project, of which we are now a part, is to automate the provisioning (and decommissioning) and the scaling (up and down) of the infrastructure. Given the scope and magnitude of the problems that must be solved, no single solution is capable of addressing them all; therefore, multiple technologies are required. For the deployment and configuration of Windows Server systems we rely on Puppet, while for orchestration tasks Microsoft Service Management Automation will be used.
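
    The scale-up/scale-down decision being automated can be reduced to a simple capacity calculation. The Python sketch below is a hypothetical illustration only: the thresholds and the plan_scaling helper are invented, and the real work is carried out with Puppet and Microsoft Service Management Automation rather than Python.

        # Toy autoscaling decision for a terminal-server farm (assumed numbers).
        import math

        def plan_scaling(servers: int, active_sessions: int,
                         max_per_server: int = 30, min_servers: int = 2) -> int:
            """Return how many servers to add (positive) or remove (negative)."""
            needed = max(min_servers, math.ceil(active_sessions / max_per_server))
            return needed - servers

        # Growing load triggers provisioning; shrinking load decommissions nodes.
        print(plan_scaling(servers=4, active_sessions=150))  # -> 1 (need 5)
        print(plan_scaling(servers=4, active_sessions=40))   # -> -2 (need 2)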

    A study of the security implications involved with the use of executable World Wide Web content

    Malicious executable code is nothing new. While many consider that the concept of malicious code began in the 1980s when the first PC viruses began to emerge, the concept does in fact date back even earlier. Throughout the history of malicious code, methods of hostile code delivery have mirrored prevailing patterns of code distribution. In the 1980s, file-infecting and boot sector viruses were common, mirroring the fact that during this time executable code was commonly transferred via floppy disks. Since the 1990s, email has been a major vector for malicious code attacks. Again, this mirrors the fact that during this period email has been a common means of sharing code and documents. This thesis examines another model of executable code distribution. It considers the security risks involved with the use of executable code embedded in or attached to World Wide Web pages. In particular, two technologies are examined. Sun Microsystems' Java Programming Language and Microsoft's ActiveX Control Architecture are both technologies that can be used to connect executable program code to World Wide Web pages. This thesis examines the architectures on which these technologies are based, as well as the security and trust models that they implement. In doing so, this thesis aims to assess the level of risk posed by such technologies and to highlight similar risks that might occur with similar future technologies.

    MVNC: A Multiview Network Computer Architecture

    In this paper, MVNC, a multiview network computer system for a high-usability thin-client computing environment, is introduced. MVNC uses a revised SBC (server-based computing) model to offer a new framework for thin-client computing. MVNC can be used as a fully functional Windows machine, as a Linux workstation, or as a graphic terminal. Its multiview work style is achieved through GUI seamless-integration technology, device-integration technology, and local video playback support. MVNC is implemented in an embedded Linux environment using a MIPS 4KC microprocessor. Test results for a video application show that the MVNC system uses its client hardware more efficiently and that the load on the MVNC server is reduced.

    Parallel Performance of MPI Sorting Algorithms on Dual-Core Processor Windows-Based Systems

    Message Passing Interface (MPI) is widely used to implement parallel programs. Although Windows-based architectures provide facilities for parallel execution and multi-threading, little attention has been paid to using MPI on these platforms. In this paper we use a dual-core Windows-based platform to study the effect of the number of parallel processes, as well as the number of cores, on the performance of three MPI-based parallel implementations of some sorting algorithms.
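
    For readers unfamiliar with the programming model being benchmarked, the sketch below shows a minimal MPI-style parallel sort (scatter, local sort, gather and merge) using the mpi4py bindings. It is an assumed, simplified pattern for illustration, not one of the three implementations evaluated in the paper.

        # Minimal scatter/sort/gather sketch with mpi4py (run: mpiexec -n 4 python sort_sketch.py).
        from mpi4py import MPI
        import heapq
        import random

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        # The root process generates the data and scatters one chunk per process.
        if rank == 0:
            data = [random.randint(0, 10_000) for _ in range(size * 1000)]
            chunks = [data[i::size] for i in range(size)]
        else:
            chunks = None

        local = comm.scatter(chunks, root=0)
        local.sort()  # each process sorts its chunk independently

        # The root gathers the sorted chunks and merges them into the final list.
        gathered = comm.gather(local, root=0)
        if rank == 0:
            result = list(heapq.merge(*gathered))
            assert result == sorted(data)
            print(f"merged {len(result)} elements across {size} processes")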