
    First characterization of a class F sortase and establishment of a microreactor-based assay for its directed evolution

    Sortases are a family of enzymes responsible for the covalent anchoring of proteins to the cell wall of Gram-positive bacteria via a transpeptidation reaction. These cysteine transpeptidases specifically recognize and cleave a five-amino-acid sorting motif on their target proteins and then catalyze the formation of a new peptide bond between the C-terminus of the cleaved sorting motif and the free amino group of a cell wall component. The transpeptidation activity of the well-characterized class A sortase from Staphylococcus aureus (SaSrtA) and evolved variants thereof continues to see increasing use in a wide range of biotechnological applications (sortagging). Due to their low activity, sortases from classes other than class A are not currently used for this purpose and, with the exception of SaSrtA, sortases have not been subjected to laboratory evolution. In the first part of this work, we report on the exploration of the natural diversity of sortases and describe the in-depth characterization of an enzyme belonging to the not-yet-investigated class F, sortase F from Propionibacterium acnes (PaSrtF). We showed that PaSrtF exhibits behaviour similar to that of wild-type SaSrtA in terms of catalytic activity and sequence specificity and demonstrated its usefulness for protein engineering applications. In the second part of the work, we describe the development of a novel assay for screening sortase variants with improved properties. Hydrogel bead-based microreactors, suitable for high-throughput screening using a large-particle flow cytometer, were prepared and evaluated for their capability to act as individual evolutionary units that link sortase activity to a fluorescent readout. The microreactor-based assay, developed and optimized with wild-type SaSrtA, was successfully validated with the newly characterized PaSrtF and can therefore be used for its directed evolution.

    Effects of ambient temperature, humidity, and other meteorological variables on hospital admissions for angina pectoris.

    BACKGROUND: Seasonal peaks in cardiovascular disease incidence have been widely reported, suggesting that weather plays a role. DESIGN: The aim of our study was to determine the influence of climatic variables on hospital admissions for angina pectoris. METHODS: We correlated, on a day-to-day basis, the daily number of angina cases admitted to a western Sicilian hospital over a period of 12 years with local weather conditions (temperature, humidity, wind force and direction, precipitation, sunny hours, and atmospheric pressure). A total of 2459 consecutive patients were admitted over the period 1987-1998 (1562 men, 867 women; M/F = 1.8). RESULTS: A seasonal variation was found, with a noticeable winter peak. Multivariate Poisson analysis showed a significant association between the daily number of angina hospital admissions and both temperature and humidity. Significant incidence relative ratios (95% confidence intervals per measurement unit) in males were 0.988 (0.980-0.996) (p = 0.004) for minimal temperature, 0.990 (0.984-0.996) (p = 0.001) for maximal humidity, and 1.002 (1.000-1.004) (p = 0.045) for minimal humidity. The corresponding values in females were 0.973 (0.951-0.995) (p = 0.017) for maximal temperature and 1.024 (1.001-1.048) (p = 0.037) for minimal temperature. CONCLUSIONS: Environmental temperature and humidity may play an important role in the pathogenesis of angina, although the effect appears to differ between sexes. These data may help clarify the mechanisms that trigger ischemic events and allow hospital assistance to be better organized throughout the year.
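As a worked example of how such incidence relative ratios scale: an IRR is expressed per unit of the variable, so it compounds multiplicatively over larger differences. The 10 °C difference below is a hypothetical illustration, not a figure from the study.

```python
# Taking the reported IRR of 0.988 per 1 degree C of minimal temperature
# (males): the effect over a larger temperature difference compounds
# multiplicatively, IRR_total = IRR_per_unit ** delta.
irr_per_degree = 0.988
delta_celsius = 10                       # hypothetical 10 degree C warmer minimum
irr_10c = irr_per_degree ** delta_celsius
print(round(irr_10c, 3))                 # ~0.886: about 11% fewer expected admissions
```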

    Skyline on sliding window data stream: a parallel approach

    In this thesis we apply high-performance parallel data stream processing methodologies to the problem of computing the skyline over a stream of d-dimensional points. Since the stream is possibly unbounded, we adopt the sliding-window model in order to maintain the skyline over the most recently received points. We propose a parallel implementation of a module that, given a stream of points as input, produces skyline updates.
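A minimal sequential illustration of the problem being parallelized (not the thesis's implementation): the skyline keeps the points not dominated by any other, and a bounded window evicts the oldest point as new ones arrive. The naive O(n²) recomputation and all names here are illustrative only.

```python
from collections import deque

def dominates(p, q):
    """p dominates q if p is <= q in every dimension and strictly
    better in at least one (assuming smaller is better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline: keep each point not dominated by any other."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

class SlidingSkyline:
    """Maintain the skyline over the most recent `size` points of a stream."""
    def __init__(self, size):
        self.window = deque(maxlen=size)

    def update(self, point):
        self.window.append(point)   # oldest point evicted automatically
        return skyline(list(self.window))
```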

    SimFS: A Simulation Data Virtualizing File System Interface

    Nowadays simulations can produce petabytes of data to be stored in parallel filesystems or large-scale databases. This data is accessed over the course of decades, often by thousands of analysts and scientists. However, storing these volumes of data for long periods of time is not cost-effective and, in some cases, practically impossible. We propose to transparently virtualize the simulation data, relaxing the storage requirements by not storing the full output and re-simulating the missing data on demand. We develop SimFS, a file system interface that exposes a virtualized view of the simulation output to the analysis applications and manages the re-simulations. SimFS monitors the access patterns of the analysis applications in order to (1) decide which data to keep stored for faster accesses and (2) employ prefetching strategies to reduce the access time of missing data. Virtualizing simulation data allows us to trade storage for computation: depending on the storage resources assigned to SimFS, this paradigm approaches either traditional on-disk analysis (all data is stored) or in situ analysis (no data is stored). Overall, by exploiting the growing computing power and relaxing the storage capacity requirements, SimFS offers a viable path towards exascale simulations.
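The storage-for-computation trade can be sketched as a read path that serves stored data when available and re-simulates on a miss. The class below, its `resimulate` callback, and the crude eviction policy are illustrative assumptions, not the SimFS API.

```python
class VirtualizedStore:
    """Sketch of the SimFS idea: expose the full simulation output while
    storing only part of it and recomputing missing pieces on demand.
    `resimulate` stands in for re-running the simulator for one timestep."""
    def __init__(self, capacity, resimulate):
        self.capacity = capacity
        self.resimulate = resimulate
        self.stored = {}                       # timestep -> data kept on "disk"

    def read(self, step):
        if step in self.stored:                # fast path: data is stored
            return self.stored[step]
        data = self.resimulate(step)           # miss: trade computation for storage
        if len(self.stored) >= self.capacity:  # evict the oldest stored step
            self.stored.pop(next(iter(self.stored)))
        self.stored[step] = data
        return data
```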

    Flare: Flexible In-Network Allreduce

    The allreduce operation is one of the most commonly used communication routines in distributed applications. To improve its bandwidth and reduce network traffic, this operation can be accelerated by offloading it to network switches, which aggregate the data received from the hosts and send them back the aggregated result. However, existing solutions provide limited customization opportunities and might deliver suboptimal performance when dealing with custom operators and data types, with sparse data, or when reproducibility of the aggregation is a concern. To address these problems, in this work we design a flexible programmable switch, using PsPIN, a RISC-V architecture implementing the sPIN programming model, as a building block. We then design, model, and analyze different algorithms for executing the aggregation on this architecture, showing performance improvements over state-of-the-art approaches.
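One concrete reason why reproducibility of in-network aggregation is a concern: floating-point addition is not associative, so the order in which a switch sums contributions changes the result in the last bits.

```python
# IEEE 754 addition is not associative: the aggregation order a switch
# happens to use determines the exact result of a floating-point reduce.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c      # 0.6000000000000001
right = a + (b + c)     # 0.6
print(left == right)    # False
```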

    An In-Depth Analysis of the Slingshot Interconnect

    The interconnect is one of the most critical components in large-scale computing systems, and its impact on application performance will grow with system size. In this paper we describe Slingshot, an interconnection network for large-scale computing systems. Slingshot is based on high-radix switches, which allow building exascale and hyperscale datacenter networks with at most three switch-to-switch hops. Moreover, Slingshot provides efficient adaptive routing and congestion control algorithms, and highly tunable traffic classes. Slingshot uses an optimized Ethernet protocol, which allows it to be interoperable with standard Ethernet devices while providing high performance to HPC applications. We analyze the extent to which Slingshot provides these features, evaluating it on microbenchmarks and on several applications from the datacenter and AI worlds, as well as on HPC applications. We find that applications running on Slingshot are less affected by congestion than on previous-generation networks. To be published in Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC '20), 2020.

    Correlation between Primary Myelofibrosis and the Association of Portal Thrombosis with Portal-Biliary Cavernoma: US, MDCT, and MRI Features

    Abstract Objective Myelofibrosis is a rare chronic myelolymphoproliferative disease and is associated with an increased risk of venous thromboembolism. The objective of this study is to retrospectively evaluate patients with primary myelofibrosis who underwent abdominal US, MDCT, and MRI, in order to identify the development of portal thrombosis and its correlation with portal-biliary cavernoma. Methods We evaluated 125 patients with an initial diagnosis of primary myelofibrosis and nonspecific abdominal pain who had undergone US with color Doppler. In 13 patients (8 men, 5 women; age: 45–85), US detected portal thrombosis with associated portal-biliary cavernoma. All patients subsequently underwent contrast-enhanced MDCT and MRI, and 4 patients MR cholangiography. The correlation between primary myelofibrosis and portal thrombosis and cavernoma, respectively, was calculated using the χ2 test. Results About 10% of the patients with primary myelofibrosis initially evaluated with US had partial (8 patients) or complete (5 patients) portal thrombosis associated with portal-biliary cavernoma, with a χ2 = 0. In all patients, US detected a concentric thickening of the main bile duct (MBD) wall (mean value: 7 mm); color Doppler always showed dilated venous vessels within the thickened wall of the biliary tract. Contrast-enhanced CT and MRI confirmed the thickening of the MBD walls with their progressive enhancement and allowed better assessment of the extent of the portal system thrombosis. MR cholangiography showed a thin appearance of the MBD lumen with evidence of extrinsic compression. Conclusions The evidence of portal thrombosis and portal-biliary cavernoma in 10% of the patients with primary myelofibrosis indicates a close correlation between the two diseases. In the detection of portal thrombosis and portal-biliary cavernoma, US with color Doppler is the most reliable and economical diagnostic technique, while contrast-enhanced MDCT and MRI allow better assessment of the extent of the portal vein thrombosis and of the complications of myelofibrosis.

    Canary: Congestion-Aware In-Network Allreduce Using Dynamic Trees

    The allreduce operation is an essential building block for many distributed applications, ranging from the training of deep learning models to scientific computing. In an allreduce operation, data from multiple hosts is aggregated together and then broadcast to each host participating in the operation. Allreduce performance can be improved by a factor of two by aggregating the data directly in the network: switches aggregate data coming from multiple ports before forwarding the partially aggregated result to the next hop. In all existing solutions, each switch needs to know the ports from which it will receive the data to aggregate. However, this forces packets to traverse a predefined set of switches, making these solutions prone to congestion. For this reason, we design Canary, the first congestion-aware in-network allreduce algorithm. Canary uses load balancing algorithms to forward packets on the least congested paths. Because switches do not know from which ports they will receive the data to aggregate, they use timeouts to aggregate the data in a best-effort way. We develop a P4 Canary prototype and evaluate it on a Tofino switch. We then validate Canary through simulations on large networks, showing performance improvements of up to 40% over the state-of-the-art.
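A toy model of the best-effort, timeout-based aggregation idea (all names are illustrative assumptions, not from the Canary paper or its P4 code): the switch sums whatever contributions arrive before a deadline and ships the partial aggregate once the timeout expires, without knowing in advance which ports will contribute.

```python
class BestEffortAggregator:
    """Toy model of timeout-based in-network aggregation: the switch does
    not know how many contributions to expect, so it sums whatever arrives
    before a timeout and then forwards the partial result."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.partial = 0
        self.deadline = None            # logical time at which to flush

    def receive(self, value, now):
        """Fold one contribution in; return a flushed partial aggregate
        if the previous deadline expired, else None."""
        flushed = None
        if self.deadline is not None and now >= self.deadline:
            flushed = self.flush()      # timeout: ship the partial aggregate
        if self.deadline is None:
            self.deadline = now + self.timeout
        self.partial += value
        return flushed

    def flush(self):
        result, self.partial, self.deadline = self.partial, 0, None
        return result
```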