
    Heart failure is associated with exaggerated endothelial ischaemia-reperfusion injury and attenuated effect of ischaemic preconditioning

    Background: Reperfusion is mandatory after ischaemia, but it also triggers ischaemia–reperfusion (IR)-injury. It is currently unknown whether heart failure alters the magnitude of IR-injury. Ischaemic preconditioning can limit IR-injury. Since ischaemic preconditioning is typically applied in subjects at risk for cardiovascular complications, it is of clinical importance to understand its efficacy in heart failure patients.
    Objective: To examine the magnitude of endothelial IR-injury, and the ability of ischaemic preconditioning to protect against endothelial IR-injury, in heart failure.
    Methods: We included 15 subjects with heart failure (67 ± 10 years, New York Heart Association class II/III) and 15 healthy, age- and sex-matched controls (65 ± 9 years). We examined brachial artery endothelial function using flow-mediated dilation before and after arm IR (induced by 5 min of ischaemic handgrip exercise followed by 15 min of reperfusion). IR was preceded by ischaemic preconditioning (consisting of three cycles of 5-min upper arm cuff inflation to 220 mmHg) or by no inflation.
    Results: A significant interaction effect was found for the change in flow-mediated dilation after IR between groups (two-way ANOVA interaction effect: p = 0.01). Whilst post-hoc analysis revealed a significant decline in flow-mediated dilation in both groups (p < 0.05), the decline in heart failure patients (6.2 ± 3.6% to 3.3 ± 1.8%) was significantly larger than that observed in controls (4.9 ± 2.1% to 4.1 ± 2.0%). Neither in heart failure patients nor in controls was the decrease in flow-mediated dilation after IR altered by ischaemic preconditioning (three-way ANOVA interaction: p = 0.87).
    Conclusion: Patients with heart failure show exaggerated endothelial IR-injury compared with age- and sex-matched healthy controls, which may contribute to the poor clinical prognosis in heart failure. Furthermore, we found no protective effect of ischaemic preconditioning (3 × 5-min forearm ischaemia) against endothelial IR-injury in heart failure patients.
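
    As a purely illustrative aside, the snippet below simulates pre/post flow-mediated dilation values around the reported group means and standard deviations and compares the IR-induced decline between groups with an unpaired t-test on per-subject deltas. This is a simplified analogue of the group × time interaction tested in the study, not its actual repeated-measures analysis; all data here are synthetic.

```python
# Simplified analogue of the interaction test: compare the IR-induced drop in
# flow-mediated dilation (FMD, %) between heart-failure patients and controls.
# Synthetic data only, drawn around the reported group means/SDs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 15  # subjects per group, as in the study

# Pre- and post-IR FMD (%) per group, simulated around the reported values.
hf_pre, hf_post = rng.normal(6.2, 3.6, n), rng.normal(3.3, 1.8, n)
ctrl_pre, ctrl_post = rng.normal(4.9, 2.1, n), rng.normal(4.1, 2.0, n)

# Per-subject change in FMD after ischaemia-reperfusion (post minus pre).
hf_delta = hf_post - hf_pre
ctrl_delta = ctrl_post - ctrl_pre

# Unpaired t-test on the deltas: a crude stand-in for the reported
# group x time interaction effect.
t, p = stats.ttest_ind(hf_delta, ctrl_delta)
print(f"mean drop HF: {hf_delta.mean():.2f}%, controls: {ctrl_delta.mean():.2f}%")
print(f"t = {t:.2f}, p = {p:.3f}")
```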

    Linear Depth Integer-Wise Homomorphic Division

    We propose a secure integer-wise homomorphic division algorithm for fully homomorphic encryption (FHE) schemes. In integer-wise algorithms, plaintexts are encrypted as integers without encoding them into bit values, whereas in bit-wise algorithms plaintexts are encoded into binary and the bit values are encrypted one by one. All publicly available division algorithms are constructed in the bit-wise style, and to the best of our knowledge there is no known integer-wise algorithm for secure division. We derive empirical results on the FHE library HElib and show that our algorithm is 2.45x faster than the fastest bit-wise algorithm. We also show that the multiplicative depth of our algorithm is O(l), where l is the integer bit length, while that of existing division algorithms is O(l^2). Furthermore, we generalise our secure division algorithm and propose a method for the secure calculation of a general 2-variable function. The order of multiplicative depth of this algorithm, which is the main factor in the complexity of an FHE algorithm, is exactly the same as that of our secure division algorithm.
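
    The construction details are in the paper; as a rough plaintext analogue of integer-wise (as opposed to bit-wise) evaluation, the sketch below interpolates an arbitrary 2-variable function such as division over Z_p using Fermat-little-theorem equality indicators. Every operation has a homomorphic counterpart, and the dominant multiplicative depth comes from the (p-1)-th power, i.e. O(log p) = O(l) levels. This is a generic interpolation approach, not necessarily the paper's algorithm; p, eq and eval_integerwise are illustrative names.

```python
# Plaintext analogue of integer-wise evaluation of a 2-variable function such
# as division over Z_p. Additions, multiplications and square-and-multiply
# exponentiation all have homomorphic counterparts.
p = 17  # prime plaintext modulus; messages are integers in [0, p)

def eq(x, c):
    # Fermat indicator: (x - c)^(p-1) mod p is 0 when x == c and 1 otherwise,
    # so 1 minus it is an equality test costing O(log p) multiplications.
    return (1 - pow(x - c, p - 1, p)) % p

def f(a, b):
    # The cleartext 2-variable function to evaluate integer-wise:
    # floor division, with the convention f(a, 0) = 0.
    return a // b if b != 0 else 0

def eval_integerwise(x, y):
    # Interpolate f over Z_p x Z_p: sum_{i,j} f(i,j) * eq(x,i) * eq(y,j).
    # Homomorphically, each term is a constant times a product of two indicators.
    return sum(f(i, j) * eq(x, i) * eq(y, j)
               for i in range(p) for j in range(p)) % p

# Sanity check of the plaintext analogue over the whole domain.
assert all(eval_integerwise(a, b) == f(a, b) for a in range(p) for b in range(p))
print("integer-wise division via interpolation agrees with a // b for all inputs mod", p)
```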

    From Nonspecific DNA–Protein Encounter Complexes to the Prediction of DNA–Protein Interactions

    DNA–protein interactions are involved in many essential biological activities. Because there is no simple mapping code between DNA base pairs and protein amino acids, the prediction of DNA–protein interactions is a challenging problem. Here, we present a novel computational approach for predicting DNA-binding protein residues and DNA–protein interaction modes without knowledge of the protein's specific DNA target sequence. Given the structure of a DNA-binding protein, the method first generates an ensemble of complex structures obtained by rigid-body docking with a nonspecific canonical B-DNA. Representative models are subsequently selected through clustering and ranking by their DNA–protein interfacial energy. Analysis of these encounter-complex models suggests that the recognition sites for specific DNA binding are usually favorable interaction sites for the nonspecific DNA probe, and that nonspecific DNA–protein interaction modes exhibit some similarity to specific DNA–protein binding modes. Although the method requires as input the knowledge that the protein binds DNA, in benchmark tests it achieves better performance in identifying DNA-binding sites than three previously established methods based on sophisticated machine-learning techniques. We further apply our method to protein structures predicted through modeling and demonstrate that it performs satisfactorily on protein models whose root-mean-square Cα deviation from the native structure is up to 5 Å. This study provides valuable structural insights into how a specific DNA-binding protein interacts with a nonspecific DNA sequence. The similarity between specific and nonspecific DNA–protein interaction modes may reflect an important sampling step in a DNA-binding protein's search for its specific DNA targets.
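
    As a rough illustration of the model-selection stage described above (clustering docked poses, ranking clusters by interfacial energy, and reading the predicted binding residues off the top-ranked interface), the sketch below operates on synthetic poses. Pose, cluster_poses and predict_binding_residues are hypothetical names; the docking step and the energy function themselves are not reproduced.

```python
# Cluster rigid-body docking poses by interface overlap, rank clusters by their
# best (lowest) interfacial energy, and report the protein residues at the
# top-ranked interface as predicted DNA-binding residues. Synthetic data only.
from dataclasses import dataclass, field

@dataclass
class Pose:
    energy: float                                            # lower is better
    contacts: frozenset = field(default_factory=frozenset)   # interface residue indices

def cluster_poses(poses, min_overlap=0.5):
    """Greedy clustering: a pose joins a cluster if its interface overlaps the
    cluster seed's interface by at least `min_overlap` (Jaccard index)."""
    clusters = []
    for pose in sorted(poses, key=lambda p: p.energy):
        for seed, members in clusters:
            inter = len(pose.contacts & seed.contacts)
            union = len(pose.contacts | seed.contacts) or 1
            if inter / union >= min_overlap:
                members.append(pose)
                break
        else:
            clusters.append((pose, [pose]))
    return clusters

def predict_binding_residues(poses):
    clusters = cluster_poses(poses)
    # Rank clusters by the energy of their best member; the top cluster's
    # seed interface is the predicted binding site.
    best_seed, _ = min(clusters, key=lambda c: c[0].energy)
    return sorted(best_seed.contacts)

# Synthetic example: three poses sharing one interface plus one outlier.
poses = [
    Pose(-42.0, frozenset({10, 11, 14, 52, 53})),
    Pose(-40.5, frozenset({10, 11, 15, 52, 53})),
    Pose(-39.0, frozenset({11, 14, 52, 53, 54})),
    Pose(-20.0, frozenset({80, 81, 82})),
]
print("predicted DNA-binding residues:", predict_binding_residues(poses))
```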

    cuHE: A Homomorphic Encryption Accelerator Library

    We introduce a CUDA GPU library to accelerate evaluations with homomorphic encryption schemes defined over polynomial rings. The library is enabled with a number of optimizations, including algebraic techniques for efficient evaluation, memory minimization techniques, memory and thread scheduling, and low-level hand-tuned CUDA assembly, to take full advantage of the massive parallelism and high memory bandwidth that GPUs offer. The arithmetic functions, constructed to handle very large polynomial operands using number-theoretic transform (NTT) and Chinese remainder theorem (CRT) based methods, are then extended to implement the primitives of the leveled homomorphic encryption scheme proposed by López-Alt, Tromer and Vaikuntanathan. To assess the performance of the proposed CUDA library we implemented two applications, the Prince block cipher and homomorphic sorting algorithms, on two GPU platforms in single-GPU and multi-GPU configurations. We observed speedups of 25 times and 51 times over the best previous GPU implementation of Prince with single and triple GPUs, respectively. Similarly, for homomorphic sorting we obtained a 12-41 times speedup depending on the number and size of the sorted elements.
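
    The library itself is CUDA C++; as a language-agnostic illustration of the CRT technique it relies on, the Python sketch below splits a polynomial with a large coefficient modulus into residue polynomials modulo small primes, multiplies each residue independently (a schoolbook negacyclic convolution stands in for the GPU NTT), and recombines the coefficients with the CRT. Names such as negacyclic_mul and crt_recombine are illustrative and do not correspond to cuHE's API.

```python
# CRT trick for large-coefficient polynomial arithmetic in Z_Q[x]/(x^N + 1):
# work independently modulo small coprime primes, then lift back with the CRT.
from math import prod

PRIMES = [97, 193, 257]          # small, pairwise coprime "machine-word" moduli
Q = prod(PRIMES)                 # the large coefficient modulus they represent
N = 8                            # ring dimension: we work in Z_Q[x]/(x^N + 1)

def negacyclic_mul(a, b, q):
    """Schoolbook product of two degree-<N polynomials modulo (x^N + 1, q)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                c[k] = (c[k] + ai * bj) % q
            else:                       # x^N = -1 wraps with a sign flip
                c[k - N] = (c[k - N] - ai * bj) % q
    return c

def crt_recombine(residues):
    """Lift per-prime coefficients back to Z_Q via the Chinese remainder theorem."""
    coeffs = []
    for parts in zip(*residues):
        x = 0
        for p, r in zip(PRIMES, parts):
            m = Q // p
            x = (x + r * m * pow(m, -1, p)) % Q
        coeffs.append(x)
    return coeffs

def big_modulus_mul(a, b):
    residues = [negacyclic_mul([x % p for x in a], [x % p for x in b], p)
                for p in PRIMES]
    return crt_recombine(residues)

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
assert big_modulus_mul(a, b) == negacyclic_mul(a, b, Q)  # CRT path matches direct path
print("CRT-based negacyclic multiplication mod", Q, "OK")
```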

    Efficient Evaluation of Low Degree Multivariate Polynomials in Ring-LWE Homomorphic Encryption Schemes

    Homomorphic encryption schemes make it possible to perform computations over encrypted data. In schemes based on the RLWE assumption, the plaintext data is a ring polynomial. In many use cases of homomorphic encryption only the degree-0 coefficient of this polynomial is used to encrypt data; in that setting any computation on encrypted data can be performed. It is trickier to perform generic computations when more than one coefficient per ciphertext is used. In this paper we introduce a method to efficiently evaluate low-degree multivariate polynomials over encrypted data. The main idea is to encode several messages in the coefficients of a plaintext-space polynomial. Using ring homomorphism operations and multiplications between ciphertexts, we compute multivariate monomials up to a given degree; afterwards, using ciphertext additions, we evaluate the input multivariate polynomial. We perform extensive experiments with the proposed evaluation method. As an example, evaluating an arbitrary multivariate degree-3 polynomial with 100 variables over the Boolean space takes under 13 seconds.
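
    A minimal plaintext illustration of the coefficient-packing idea: storing several messages in the coefficients of one polynomial of Z_t[x]/(x^n + 1) means that a single polynomial multiplication already produces the pairwise products m_i*m_j in predictable coefficients. The paper additionally uses ring homomorphisms to rearrange coefficients, which this sketch does not reproduce; t, n and the helper names are illustrative.

```python
# Pack messages into polynomial coefficients and observe where degree-2
# monomials land after one multiplication in Z_t[x]/(x^n + 1).
t = 257          # plaintext coefficient modulus
n = 8            # ring dimension, x^n = -1

def poly_mul(a, b):
    """Negacyclic product in Z_t[x]/(x^n + 1)."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                c[i + j] = (c[i + j] + ai * bj) % t
            else:
                c[i + j - n] = (c[i + j - n] - ai * bj) % t
    return c

def encode(messages):
    """Pack messages m_0..m_{k-1} into the low-order coefficients."""
    return [m % t for m in messages] + [0] * (n - len(messages))

m = [5, 7, 11]                   # three packed messages
packed = encode(m)
square = poly_mul(packed, packed)

# Degree-2 monomials appear at coefficient index i + j:
# coeff[2] collects 2*m_0*m_2 plus m_1^2, coeff[3] holds 2*m_1*m_2,
# coeff[4] holds m_2^2, and so on.
print("packed polynomial:", packed)
print("its square:      ", square)
assert square[0] == (m[0] * m[0]) % t          # x^0: m_0^2
assert square[3] == (2 * m[1] * m[2]) % t      # x^3: 2*m_1*m_2
assert square[4] == (m[2] * m[2]) % t          # x^4: m_2^2
```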

    Three-dimensional echocardiography for left ventricular quantification: fundamental validation and clinical applications

    One of the earliest applications of clinical echocardiography is the evaluation of left ventricular (LV) function and size. Accurate, reproducible and quantitative evaluation of LV function and size is vital for the diagnosis, treatment and prediction of prognosis of heart disease. Early three-dimensional (3D) echocardiographic techniques showed better reproducibility than two-dimensional (2D) echocardiography and narrower limits of agreement for assessment of LV function and size in comparison with reference methods, mostly cardiac magnetic resonance (CMR) imaging, but acquisition methods were cumbersome and a lack of user-friendly analysis software initially precluded widespread use. Through the advent of matrix transducers enabling real-time three-dimensional echocardiography (3DE) and improvements in analysis software featuring semi-automated volumetric analysis, 3D echocardiography has evolved into a simple and fast imaging modality for everyday clinical use. 3DE makes it possible to evaluate the entire LV in three spatial dimensions during the complete cardiac cycle, offering a more accurate and complete quantitative evaluation of the LV. Improved efficiency in acquisition and analysis may provide clinicians with important diagnostic information within minutes. The current article reviews the methodology and application of 3DE for quantitative evaluation of the LV, provides the scientific evidence for its current clinical use, and discusses its current limitations and potential future directions.
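
    As a back-of-the-envelope illustration of the volumetric quantification 3DE enables, the sketch below counts the voxels of a segmented LV cavity at end-diastole and end-systole, converts them to millilitres, and derives stroke volume and ejection fraction. Clinical 3DE packages fit semi-automated endocardial surface models rather than counting raw voxels; the masks and spacing here are synthetic placeholders.

```python
# LV volume, stroke volume and ejection fraction from segmented 3D masks.
import numpy as np

def lv_volume_ml(mask, spacing_mm):
    """Cavity volume in ml from a boolean 3D mask and (dz, dy, dx) spacing in mm."""
    voxel_ml = np.prod(spacing_mm) / 1000.0   # 1 ml = 1000 mm^3
    return mask.sum() * voxel_ml

# Synthetic end-diastolic and end-systolic masks (spherical blobs as stand-ins).
spacing = (1.0, 1.0, 1.0)                     # isotropic 1 mm voxels
grid = np.indices((80, 80, 80)) - 40
edv_mask = (grid ** 2).sum(axis=0) <= 30 ** 2  # ~113 ml cavity
esv_mask = (grid ** 2).sum(axis=0) <= 22 ** 2  # ~45 ml cavity

edv = lv_volume_ml(edv_mask, spacing)          # end-diastolic volume
esv = lv_volume_ml(esv_mask, spacing)          # end-systolic volume
sv = edv - esv                                 # stroke volume
ef = 100.0 * sv / edv                          # ejection fraction (%)
print(f"EDV {edv:.0f} ml, ESV {esv:.0f} ml, SV {sv:.0f} ml, EF {ef:.0f}%")
```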

    Generalized Connective Tissue Disease in Crtap-/- Mouse

    Mutations in CRTAP (coding for cartilage-associated protein), LEPRE1 (coding for prolyl 3-hydroxylase 1 [P3H1]) or PPIB (coding for Cyclophilin B [CYPB]) cause recessive forms of osteogenesis imperfecta (OI) and loss or decrease of type I collagen prolyl 3-hydroxylation. A comprehensive analysis of the phenotype of Crtap-/- mice revealed multiple abnormalities of connective tissue, including in the lungs, kidneys, and skin, consistent with systemic dysregulation of collagen homeostasis within the extracellular matrix. Both Crtap-/- lung and kidney glomeruli showed increased cellular proliferation. Histologically, the lungs showed increased alveolar spacing, while the kidneys showed evidence of segmental glomerulosclerosis, with abnormal collagen deposition. The Crtap-/- skin had decreased mechanical integrity. In addition to the expected loss of proline 986 3-hydroxylation in α1(I) and α1(II) chains, there was also loss of 3Hyp at proline 986 in α2(V) chains. In contrast, at two of the known 3Hyp sites in α1(IV) chains from Crtap-/- kidneys, there were normal levels of 3-hydroxylation. At the cellular level, loss of CRTAP in human OI fibroblasts led to a secondary loss of P3H1, and vice versa. These data suggest that both CRTAP and P3H1 are required to maintain a stable complex that 3-hydroxylates canonical proline sites within clade A (types I, II, and V) collagen chains. Loss of this activity leads to a multi-systemic connective tissue disease that affects bone, cartilage, lung, kidney, and skin.

    Unsupervised machine learning on encrypted data

    In the context of fully homomorphic encryption (FHE), which allows computations on encrypted data, machine learning has been one of the most popular applications in the recent past. Previous works, however, have focused on supervised learning, where a labeled training set is used to configure the model. In this work, we take a first step into the realm of unsupervised learning, which is an important area of machine learning with many real-world applications, by addressing the clustering problem. To this end, we show how to implement the K-means algorithm. This algorithm poses several challenges in the FHE context, including a division, which we tackle by using a natural encoding that allows division and may be of independent interest. While this theoretically solves the problem, performance in practice is not optimal, so we then propose some changes to the clustering algorithm to make it executable under more conventional encodings. We show that our new algorithm achieves a clustering accuracy comparable to the original K-means algorithm, but with less than 5% of its runtime.
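
    To make the division issue concrete, the sketch below shows one generic division-free reformulation of K-means: each centroid is kept as an exact pair (coordinate sum, member count) and nearest-centroid decisions cross-multiply the denominators, so the loop needs only additions, multiplications and comparisons. This illustrates the kind of rewriting FHE-friendly algorithms require; it is not necessarily the encoding or the modified algorithm proposed in the paper.

```python
# Division-free K-means sketch: centroids are exact (coordinate sum S, member
# count c) pairs instead of means S/c, and nearest-centroid decisions compare
# ||c_i*x - S_i||^2 * c_j^2 with ||c_j*x - S_j||^2 * c_i^2.

def nearest(point, centroids):
    """Index of the nearest centroid, comparing scaled squared distances."""
    scaled = []
    for S, c in centroids:
        d2 = sum((c * x - s) ** 2 for x, s in zip(point, S))
        scaled.append((d2, c * c))
    best = 0
    for i in range(1, len(scaled)):
        di, ci2 = scaled[i]
        db, cb2 = scaled[best]
        if di * cb2 < db * ci2:   # cross-multiply instead of dividing by counts
            best = i
    return best

def kmeans_division_free(points, k, iters=10):
    dim = len(points[0])
    centroids = [(list(p), 1) for p in points[:k]]   # seed with the first k points
    for _ in range(iters):
        sums = [[0] * dim for _ in range(k)]
        counts = [0] * k
        for p in points:
            i = nearest(p, centroids)
            counts[i] += 1
            for d, x in enumerate(p):
                sums[i][d] += x
        # The mean is never materialised: a centroid stays a (sum, count) pair.
        centroids = [(sums[i], counts[i]) if counts[i] else centroids[i]
                     for i in range(k)]
    return centroids

pts = [(1, 1), (2, 1), (1, 2), (10, 10), (11, 10), (10, 11)]
for S, c in kmeans_division_free(pts, k=2):
    print("centroid:", [s / c for s in S], f"({c} members)")  # division only for display
```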