A new class of hybrid secretion system is employed in Pseudomonas amyloid biogenesis
Gram-negative bacteria possess specialised biogenesis machineries that facilitate the export of amyloid subunits for construction of a biofilm matrix. The secretion of bacterial functional amyloid requires a bespoke outer-membrane protein channel through which unfolded amyloid substrates are translocated. Here, we combine X-ray crystallography, native mass spectrometry, single-channel electrical recording, molecular simulations and circular dichroism measurements to provide high-resolution structural insight into the functional amyloid transporter from Pseudomonas, FapF. FapF forms a trimer of gated β-barrel channels in which opening is regulated by a helical plug connected to an extended coiled-coil platform spanning the bacterial periplasm. Although FapF represents a unique type of secretion system, it shares mechanistic features with a diverse range of peptide translocation systems. Our findings highlight alternative strategies for handling and export of amyloid protein sequences.
First-Step Mutations for Adaptation at Elevated Temperature Increase Capsid Stability in a Virus
The relationship between mutation, protein stability and protein function plays a central role in molecular evolution. Mutations tend to be destabilizing, including those that would confer novel functions such as host-switching or antibiotic resistance. Elevated temperature may play an important role in preadapting a protein for such novel functions by selecting for stabilizing mutations. In this study, we test the stability change conferred by single mutations that arise in a G4-like bacteriophage adapting to elevated temperature. The vast majority of these mutations map to interfaces between viral coat proteins, suggesting they affect protein-protein interactions. We assess their effects by estimating thermodynamic stability using molecular dynamics simulations and measuring kinetic stability using experimental decay assays. The results indicate that most, though not all, of the observed mutations are stabilizing.
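As a concrete illustration of how a decay assay can quantify kinetic stability, the sketch below fits a first-order exponential decay to surviving phage titer over time; a stabilizing mutation would show a smaller rate constant (longer half-life) than wild type. The data values and variable names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: estimating a first-order decay rate constant from a
# phage decay assay, one way to quantify kinetic stability.
import numpy as np

# Illustrative (made-up) data: time in minutes, surviving titer (PFU/mL).
time_min = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 90.0])
titer = np.array([1.0e8, 6.2e7, 3.9e7, 1.5e7, 6.0e6, 1.4e6])

# Fit log(N) = log(N0) - k*t by linear least squares (robust for decay data).
slope, intercept = np.polyfit(time_min, np.log(titer), 1)
k = -slope                      # first-order decay constant (1/min)
half_life = np.log(2) / k       # time for the titer to halve

print(f"decay constant k = {k:.4f} / min, half-life = {half_life:.1f} min")
# Comparing k between mutant and wild type indicates relative kinetic stability.
```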
The FLUXNET2015 dataset and the ONEFlux processing pipeline for eddy covariance data
The FLUXNET2015 dataset provides ecosystem-scale data on CO2, water, and energy exchange between the biosphere and the atmosphere, as well as other meteorological and biological measurements, from 212 sites around the globe (over 1500 site-years, up to and including year 2014). These sites, independently managed and operated, voluntarily contributed their data to create global datasets. Data were quality controlled and processed using uniform methods to improve consistency and intercomparability across sites. The dataset is already being used in a number of applications, including ecophysiology studies, remote sensing studies, and development of ecosystem and Earth system models. FLUXNET2015 includes derived-data products, such as gap-filled time series, ecosystem respiration and photosynthetic uptake estimates, estimation of uncertainties, and metadata about the measurements, presented for the first time in this paper. In addition, 206 of these sites are for the first time distributed under a Creative Commons (CC-BY 4.0) license. This paper details this enhanced dataset and the processing methods, now made available as open-source code, making the dataset more accessible, transparent, and reproducible.
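For readers who want to work with the dataset, the sketch below shows one way to read a FLUXNET2015 site file with pandas. The file name and column names (TIMESTAMP_START, NEE_VUT_REF) are assumptions based on the FULLSET naming convention and should be adjusted to the product actually downloaded.

```python
# Minimal sketch of loading a FLUXNET2015 half-hourly file with pandas.
import pandas as pd

# File and column names assumed from the FULLSET convention; adjust as needed.
df = pd.read_csv("FLX_US-Ha1_FLUXNET2015_FULLSET_HH_1991-2012.csv",
                 na_values=[-9999])  # -9999 marks missing values in FLUXNET files

# Timestamps are encoded as YYYYMMDDHHMM integers.
df["time"] = pd.to_datetime(df["TIMESTAMP_START"].astype(str),
                            format="%Y%m%d%H%M")
df = df.set_index("time")

# Aggregate half-hourly net ecosystem exchange to daily means.
daily_nee = df["NEE_VUT_REF"].resample("D").mean()
print(daily_nee.head())
```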
Calibration of SWAT models using the cloud
This paper evaluates a recently created Soil and Water Assessment Tool (SWAT) calibration tool built using the Windows Azure Cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed for three watersheds of increasing size, each for 2-year and 10-year simulation durations. Results show significant speedup in calibration time and, for up to 64 cores, minimal losses in speedup for all watershed sizes and simulation durations. An empirical relationship is presented for estimating the time needed to calibrate a SWAT model using the cloud calibration tool as a function of the number of Hydrologic Response Units (HRUs), time steps, and cores used for the calibration.
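For context, the sketch below outlines a serial version of the DDS algorithm that the tool parallelizes: at each iteration a shrinking random subset of parameters is perturbed around the current best solution, and improvements are accepted greedily. This is a hedged sketch under simple assumptions (clipping at bounds rather than reflection), not the paper's Azure implementation; `objective` stands in for any error metric such as one computed from a SWAT run.

```python
# Hedged sketch of serial Dynamically Dimensioned Search (DDS) minimization.
import numpy as np

def dds(objective, lo, hi, max_iter=1000, r=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best = lo + rng.random(lo.size) * (hi - lo)    # random initial candidate
    best_val = objective(best)
    for i in range(1, max_iter + 1):
        # Probability of perturbing each dimension shrinks as the search matures.
        p = 1.0 - np.log(i) / np.log(max_iter)
        mask = rng.random(lo.size) < p
        if not mask.any():                         # always perturb at least one
            mask[rng.integers(lo.size)] = True
        cand = best.copy()
        step = r * (hi - lo) * rng.standard_normal(lo.size)
        cand[mask] += step[mask]
        cand = np.clip(cand, lo, hi)               # simple bound handling
        val = objective(cand)
        if val < best_val:                         # greedy acceptance
            best, best_val = cand, val
    return best, best_val

# Usage: minimize a toy 3-parameter objective within bounds.
x, fx = dds(lambda v: float(np.sum(v**2)), lo=[-5, -5, -5], hi=[5, 5, 5])
print(x, fx)
```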
WDCloud: An end to end system for large-scale watershed delineation on cloud
Watershed delineation is the process of computing the drainage area for a point on the land surface, a critical step in hydrologic and water resources analysis. However, existing watershed delineation tools are still insufficient to support hydrologists and watershed researchers because they lack essential capabilities, such as fully leveraging scalable, high-performance computing infrastructure (the public cloud) and providing predictable performance for delineation tasks. To solve these problems, this paper reports on WDCloud, a system for large-scale watershed delineation on the public cloud. For the design and implementation of WDCloud, we employ three main approaches: 1) an automated catchment search mechanism for a public data set; 2) three performance improvement strategies (data reuse, parallel union, and MapReduce); and 3) a local linear regression-based execution time estimator for watershed delineation. Moreover, WDCloud extensively utilizes several compute and storage capabilities from Amazon Web Services to maximize the performance, scalability, and elasticity of the watershed delineation system. Our evaluations focus on two main aspects of WDCloud: the performance improvement for watershed delineation via the three strategies, and the accuracy of the local linear regression estimates of watershed delineation time. The evaluation results show that WDCloud achieves speed-ups of 18x-111x for delineating watersheds of any scale in the contiguous United States compared to commodity laptop environments, and predicts execution time for watershed delineation with 85.6% accuracy, 13%-23% higher than other state-of-the-art approaches.
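To make the execution-time estimator concrete, the sketch below shows a generic local linear regression predictor: for each query it fits a weighted least-squares line, weighting nearby historical runs more heavily via a Gaussian kernel. The feature (watershed size), data values, and bandwidth are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of a local linear regression execution-time predictor.
import numpy as np

def local_linear_predict(x_train, y_train, x_query, bandwidth):
    """Predict y at x_query by kernel-weighted linear least squares."""
    w = np.exp(-0.5 * ((x_train - x_query) / bandwidth) ** 2)  # Gaussian weights
    X = np.column_stack([np.ones_like(x_train), x_train])      # intercept + slope
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    return beta[0] + beta[1] * x_query

# Illustrative history: watershed size (sq km) vs. delineation time (seconds).
sizes = np.array([50.0, 120.0, 300.0, 800.0, 2000.0, 5000.0])
times = np.array([4.1, 6.8, 12.5, 24.0, 51.0, 118.0])

print(local_linear_predict(sizes, times, x_query=1000.0, bandwidth=800.0))
```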
Support for Extensibility and Site Autonomy in the Legion Grid System Object Model
Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. The Legion system is an implementation of a software architecture for grid computing. The basic philosophy underlying this architecture is the presentation of all grid resources as components of a single, seamless, virtual machine. Legion's architecture was designed to address the challenges of using and managing wide-area resources. Features of the architecture include: global, shared namespaces; support for heterogeneity; security; wide-area data sharing; wide-area parallel processing; application-adjustable fault-tolerance; efficient scheduling; and comprehensive resource management. We present the core design of the Legion architecture, with a focus on the critical issues of extensibility and site autonomy. Grid systems software must be extensible because no static set of system-level decisions can meet all of the diverse, often conflicting, requirements of present and future user communities, nor take best advantage of unanticipated future hardware advances. Grid systems software must also support complete site autonomy, as resource owners will not turn control of their resources over to a dictatorial system.
From Legion to Avaki: The Persistence of Vision
Grids have metamorphosed from academic projects to commercial ventures. Avaki, a leading commercial vendor of Grids, has its roots in Legion, a Grid project at the University of Virginia begun in 1993. In this chapter, we present fundamental challenges and requirements for Grid architectures that we believe are universal, our architectural philosophy in addressing those requirements, an overview of Legion as used in production systems, and a synopsis of the Legion architecture and implementation. We also describe the history of the transformation from Legion, an academic research project, to Avaki, a commercially supported and marketed product. Several of the design principles, as well as the vision underlying Legion, have continued to be employed in Avaki. As a product sold to customers, Avaki has been made more robust, more easily manageable, and easier to configure than Legion, at the expense of eliminating some features and tools that are of less immediate use to customers. Finally, we place Legion in the context of OGSI, a standards effort underway in the Global Grid Forum.