
    DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report

    The goal of this 3-year project was to facilitate more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. The project addressed two broad problems. First, there was no Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing, and retrieving user policies and Virtual Organization (VO) policies. Second, there were no tools to resolve and enforce policies in OGSA. To address these problems, our overall approach was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, enabling a Grid user to centralize (where appropriate) and manage his/her policies. The corresponding organizational service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a range of topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based Grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware Grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.
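    The explicit-policy idea above can be illustrated with a small sketch: a request is admitted only when every applicable policy (user, VO, resource provider) allows every field it constrains. All class and field names here are illustrative assumptions, not the project's actual MyPolMan/VOPolMan API.

```python
from dataclasses import dataclass


@dataclass
class Policy:
    """An explicit policy: a map from attribute name to an allowed
    set (for categorical attributes) or a numeric upper bound."""
    constraints: dict


def matches(request: dict, *policies: Policy) -> bool:
    """A request matches when every policy admits every attribute
    it constrains; attributes a policy does not mention are ignored."""
    for policy in policies:
        for key, allowed in policy.constraints.items():
            if key not in request:
                continue
            value = request[key]
            if isinstance(allowed, set):
                if value not in allowed:
                    return False
            elif value > allowed:  # numeric upper bound
                return False
    return True


# Hypothetical example: a user policy (as MyPolMan might hold) and a
# VO policy (as VOPolMan might hold) are matched against one request.
user_policy = Policy({"cpu_hours": 50})
vo_policy = Policy({"vo": {"atlas", "cms"}})
request = {"cpu_hours": 10, "vo": "atlas"}
print(matches(request, user_policy, vo_policy))  # True
```

    Making each policy an explicit, inspectable object is what enables matching and negotiation: a failed match can report exactly which policy and which attribute rejected the request.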

    Auto-scaling to minimize cost and meet application deadlines in cloud workflows

    A goal in cloud computing is to allocate (and thus pay for) only those cloud resources that are truly needed. To date, cloud practitioners have pursued schedule-based (e.g., time-of-day) and rule-based mechanisms to automate this matching between computing requirements and computing resources. However, most of these "auto-scaling" mechanisms support only simple resource-utilization indicators and do not specifically consider both user performance requirements and budget concerns. In this paper, we present an approach whereby the basic computing elements are virtual machines (VMs) of various sizes/costs, jobs are specified as workflows, users specify performance requirements by assigning (soft) deadlines to jobs, and the goal is to ensure all jobs finish within their deadlines at minimum financial cost. We accomplish this by dynamically allocating/deallocating VMs and scheduling tasks on the most cost-efficient instances. We evaluate our approach on four representative cloud workload patterns and show cost savings from 9.8% to 40.4% compared to other approaches.

    Control-theoretic dynamic frequency and voltage scaling for multimedia workloads


    Fault Tolerance and Scaling in e-Science Cloud Applications: Observations from the Continuing Development of MODISAzure

    It is natural to believe that many of the traditional issues of scale have been eliminated, or at least greatly reduced, by cloud computing. That is, if one can create a seemingly well-functioning cloud application that operates correctly on small or moderate-sized problems, then the very nature of cloud programming abstractions suggests that the same application will run as well on significantly larger problems. In this paper, we present our experiences taking MODISAzure, our satellite data processing system built on the Windows Azure cloud computing platform, from the proof-of-concept stage to the point of running on significantly larger problem sizes (e.g., from national-scale to global-scale data sizes). To our knowledge, this is the longest-running eScience application on the nascent Windows Azure platform. We found that while many infrastructure-level issues were thankfully masked from us by the cloud infrastructure, it was valuable to design additional redundancy and fault-tolerance capabilities, such as transparent idempotent task retry and logging, to support debugging of user code encountering unanticipated data issues. Further, we found that using a commercial cloud means anticipating inconsistent performance and black-box behavior of virtualized compute instances, as well as leveraging changing platform capabilities over time. We believe the experiences presented in this paper can help future eScience cloud application developers on Windows Azure and other commercial cloud platforms.
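    The "transparent idempotent task retry and logging" pattern the abstract mentions can be sketched as a small wrapper: because the task is idempotent, re-running it after a transient failure is safe, and logging each attempt supports the after-the-fact debugging of user code the authors describe. The function names and retry policy below are illustrative assumptions, not MODISAzure's implementation.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("retry-sketch")


def run_with_retry(task, args=(), max_attempts=3):
    """Re-run an idempotent task on failure, logging every attempt
    so that unanticipated data issues can be diagnosed later."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task(*args)
        except Exception as exc:
            log.warning("attempt %d/%d of %s failed: %s",
                        attempt, max_attempts, task.__name__, exc)
    raise RuntimeError(
        f"{task.__name__} failed after {max_attempts} attempts")
```

    On a black-box commercial cloud, where individual VM failures and inconsistent performance must be anticipated, this kind of retry-with-logging layer in the application is a pragmatic complement to whatever recovery the platform itself provides.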

    First-Step Mutations for Adaptation at Elevated Temperature Increase Capsid Stability in a Virus

    The relationship between mutation, protein stability and protein function plays a central role in molecular evolution. Mutations tend to be destabilizing, including those that would confer novel functions such as host-switching or antibiotic resistance. Elevated temperature may play an important role in preadapting a protein for such novel functions by selecting for stabilizing mutations. In this study, we test the stability change conferred by single mutations that arise in a G4-like bacteriophage adapting to elevated temperature. The vast majority of these mutations map to interfaces between viral coat proteins, suggesting they affect protein-protein interactions. We assess their effects by estimating thermodynamic stability using molecular dynamics simulations and measuring kinetic stability using experimental decay assays. The results indicate that most, though not all, of the observed mutations are stabilizing.

    A new class of hybrid secretion system is employed in Pseudomonas amyloid biogenesis

    Gram-negative bacteria possess specialised biogenesis machineries that facilitate the export of amyloid subunits for construction of a biofilm matrix. The secretion of bacterial functional amyloid requires a bespoke outer-membrane protein channel through which unfolded amyloid substrates are translocated. Here, we combine X-ray crystallography, native mass spectrometry, single-channel electrical recording, molecular simulations and circular dichroism measurements to provide high-resolution structural insight into the functional amyloid transporter from Pseudomonas, FapF. FapF forms a trimer of gated β-barrel channels in which opening is regulated by a helical plug connected to an extended coiled-coil platform spanning the bacterial periplasm. Although FapF represents a unique type of secretion system, it shares mechanistic features with a diverse range of peptide translocation systems. Our findings highlight alternative strategies for handling and export of amyloid protein sequences.

    Materials in particulate form for tissue engineering. 1. Basic concepts

    For biomedical applications, materials small in size are growing in importance. In an era where ‘nano’ is the new trend, micro- and nano-materials are at the forefront of development. ‘Materials in particulate form’ designates systems of reduced size, such as micro- and nanoparticles. These systems can be produced from a diversity of materials, of which polymers are the most widely used. Similarly, a multitude of methods are used to produce particulate systems, and both materials and methods are critically reviewed here. Among the varied applications of materials in particulate form, drug delivery systems are probably the most prominent, as these have been at the forefront of interest for biomedical applications. The basic concepts pertaining to drug delivery are summarized, and a discussion of the role of polymers as drug delivery systems concludes this review.