
    Chronic Fibrotic Changes in Experimental Pulmonary Embolization in the Rat Model

    Comparative Medicine - OneHealth and Comparative Medicine Poster Session
    Introduction: Fat embolism, a subclinical event, occurs in many clinical settings, such as long-bone fractures, liposuction and cardiopulmonary bypass. Some cases, especially with trauma, result in fat embolism syndrome (FES), a serious manifestation of fat embolism. FES is reported to occur in 5-10% of major trauma cases and can produce profound respiratory problems that may culminate in adult respiratory distress syndrome (ARDS). Embolized fat is hydrolyzed by lipase into free fatty acids, which previous histological studies have shown to be toxic to the lung. An animal model of fat embolism has been developed using triolein given intravenously (i.v.) to rats. We hypothesized that i.v. triolein produces histological changes in the rat lung similar to those seen in human FES.
    Methods: Following university animal care approval, unanesthetized Sprague Dawley rats (study n=13, control n=12) were injected with either 0.2 mL triolein (study) or 0.2 mL saline (control). Weights were recorded until necropsy at 3 weeks (n=13) or 6 weeks (n=12). Morphometric measurements were made on both H&E- and fat-stained tissues from the lungs, heart, kidneys and spleen. All vessels were examined in high-magnification fields. Arterial wall thickness (lumen patency) was calculated from the vessel luminal and external diameters; the medial-adventitial ratio was calculated as the outer medial diameter divided by the outer adventitial diameter. The values were entered into statistical software, and the effects of time and treatment were assessed with t-tests, with significance set at p<0.05.
    Results: Gross pathological changes were seen in the lung, heart, kidneys, liver and spleen of the triolein group. Pulmonary histological examination revealed diffuse intra-alveolar hemorrhages and edema with peri-bronchial inflammation. Vasculitis was also more prominent in the peri-bronchial areas. Pulmonary arteries showed significant medial thickening compared with the control groups (lumen patency, p=0.004). The adventitia/media ratio showed large variability in the triolein group and was not statistically significant.
    Conclusions: Our data show that injected triolein remains in the rat lung after 3 and 6 weeks, with associated vascular and septal damage in the lung tissue compared to controls.
    Discussion: This study continues our previous work showing severe pulmonary damage within 3-6 hours of triolein-induced fat embolism in the rat, peaking at 96 hours post-injection. Despite recovery of general condition and body weight without treatment, and reopening of the pulmonary arteries and arterioles, collagen deposition and vasculitis persisted up to 6 weeks. Further studies are needed to determine whether the organs eventually recover or progress toward chronic fibrosis.

    Optimal Merging in Quantum k-xor and k-sum Algorithms

    The k-xor, or Generalized Birthday Problem, asks, given k lists of bit-strings, for a k-tuple among them XORing to 0. If the lists are unbounded, the best classical (exponential) time complexity has stood unbeaten since Wagner's CRYPTO 2002 paper. If the lists are bounded (of the same size) and there is a single solution, the dissection algorithms of Dinur et al. (CRYPTO 2012) improve the memory usage over a simple meet-in-the-middle. In this paper, we study quantum algorithms for the k-xor problem. With unbounded lists and quantum access, we improve previous work by Grassi et al. (ASIACRYPT 2018) for almost all k. Next, we extend our study to lists of any size and with classical access only. We define a set of "merging trees" which represent the best known strategies for quantum and classical merging in k-xor algorithms, and prove that our method is optimal among these. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given k-xor problem. All our algorithms also apply when modular additions replace bitwise XORs. This framework enables us to give new improved quantum k-xor algorithms for all k and list sizes. Applications include the subset-sum problem, LPN with limited memory and the multiple-encryption problem.
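    As a concrete illustration of the classical merging that these "merging trees" generalize, here is a minimal sketch of Wagner-style two-level merging for k=4 (a toy implementation; the parameter names are ours, not the paper's):

```python
from collections import defaultdict

def merge_on_low_bits(L1, L2, bits):
    """Join two lists on their low `bits` bits; each match contributes
    the XOR of the pair, which is zero on those low bits."""
    mask = (1 << bits) - 1
    index = defaultdict(list)
    for x in L1:
        index[x & mask].append(x)
    return [x ^ y for y in L2 for x in index[y & mask]]

def four_xor(lists, low):
    """Wagner-style 4-xor: merge pairs on `low` bits, then look for
    values that also collide on all remaining bits."""
    L12 = merge_on_low_bits(lists[0], lists[1], low)
    L34 = merge_on_low_bits(lists[2], lists[3], low)
    seen = set(L12)
    return [z for z in L34 if z in seen]  # z appears in both partial lists
```

    With lists of size about 2^(n/3) and low = n/3, this finds a 4-xor in roughly 2^(n/3) classical time, the kind of exponent the paper's quantum algorithms improve.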

    Improved Low-Memory Subset Sum and LPN Algorithms via Multiple Collisions

    Enabling post-quantum cryptanalytic experiments on a meaningful scale calls for low-memory algorithms. We show that combining techniques from representations, multiple collision finding, and the Schroeppel-Shamir algorithm leads to improved low-memory algorithms. For random subset sum instances $(a_1, \ldots, a_n, t)$ defined modulo $2^n$, our algorithms improve over the Dissection technique for small memory $M < 2^{0.02n}$ and in the mid-memory regime $2^{0.13n} < M < 2^{0.2n}$. An application of our technique to LPN of dimension $k$ and constant error $p$ yields significant time complexity improvements over the Dissection-BKW algorithm from CRYPTO 2018 for all memory parameters $M < 2^{0.35 k/\log k}$.
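    For orientation, the simple meet-in-the-middle baseline that such low-memory techniques compete with can be sketched as follows (a toy implementation over tiny instances, not the paper's algorithm):

```python
from itertools import combinations

def subset_sum_mitm(a, t, n_bits):
    """Classic meet-in-the-middle for subset sum modulo 2**n_bits:
    tabulate all left-half subset sums, then look up the complement
    of each right-half subset sum.  Time and memory ~2^(n/2)."""
    mod = 1 << n_bits
    half = len(a) // 2
    left, right = a[:half], a[half:]
    left_sums = {}
    for r in range(half + 1):
        for combo in combinations(range(half), r):
            s = sum(left[i] for i in combo) % mod
            left_sums.setdefault(s, combo)
    for r in range(len(right) + 1):
        for combo in combinations(range(len(right)), r):
            s = sum(right[i] for i in combo) % mod
            need = (t - s) % mod
            if need in left_sums:
                # return the indices of a subset summing to t mod 2**n_bits
                return tuple(left_sums[need]) + tuple(half + i for i in combo)
    return None
```

    Representations, multiple collisions, and Schroeppel-Shamir all aim to beat the 2^(n/2) memory of this table.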

    Improved Quantum Information Set Decoding

    No full text
    In this paper we present quantum information set decoding (ISD) algorithms for binary linear codes. First, we give an alternative view on the quantum walk based algorithms proposed by Kachigar and Tillich (PQCrypto'17). It is more general and allows us to consider any ISD algorithm with certain properties; the algorithms of May-Meurer-Thomae and Becker-Joux-May-Meurer satisfy them. Second, we translate the May-Ozerov Near Neighbour technique (Eurocrypt'15) into an `update-and-query' language better suited to the quantum walk framework. This re-interpretation makes it possible to analyse a broader class of algorithms, and it allows us to combine Near Neighbour search with the quantum walk framework, using both techniques to give a quantum version of Dumer's ISD with Near Neighbour.
    Comment: This is a full and corrected version of the paper that appeared at PQCrypto 2017.
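    For context, the simplest classical ISD algorithm, Prange's, from which these quantum-walk variants ultimately descend, can be sketched as follows (toy GF(2) implementation with bitmask rows; parameter choices are ours):

```python
import random

def prange_isd(H, s, n, w, iters=2000, seed=1):
    """Plain Prange information-set decoding: repeatedly guess that the
    error avoids a random information set, solve the remaining square
    system by Gaussian elimination over GF(2), and accept the candidate
    if its Hamming weight is at most w.
    H: list of r parity-check rows as n-bit masks; s: syndrome bitmask."""
    rng = random.Random(seed)
    r = len(H)
    for _ in range(iters):
        perm = list(range(n))
        rng.shuffle(perm)
        # augmented rows: permuted parity-check row | syndrome bit at position n
        rows = []
        for i in range(r):
            m = 0
            for j in range(n):
                if (H[i] >> perm[j]) & 1:
                    m |= 1 << j
            rows.append(m | (((s >> i) & 1) << n))
        # Gaussian elimination on the first r permuted columns
        ok = True
        for col in range(r):
            piv = next((i for i in range(col, r) if (rows[i] >> col) & 1), None)
            if piv is None:
                ok = False     # singular choice of columns; re-randomize
                break
            rows[col], rows[piv] = rows[piv], rows[col]
            for i in range(r):
                if i != col and (rows[i] >> col) & 1:
                    rows[i] ^= rows[col]
        if not ok:
            continue
        # candidate error is supported on the chosen r columns only
        e = 0
        for i in range(r):
            if (rows[i] >> n) & 1:
                e |= 1 << perm[i]
        if bin(e).count("1") <= w:
            return e
    return None
```

    The quantum algorithms in the paper replace the outer random-guessing loop with amplitude amplification or quantum walks over partial information sets.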

    Experimental Study of the Cooling of Electrical Components Using Water Film Evaporation

    No full text
    The heat and mass transfer that occur during the evaporation of a falling water film are studied experimentally. This evaporation dissipates the heat flux produced by twelve resistors, which simulate electrical components, on the back side of an aluminium plate. On the front side of the plate, a falling water film flows under gravity. An inverse heat conduction model, combined with spatial regularisation, was developed to compute the local heat fluxes on the plate from the measured temperatures. The efficiency of this evaporative process was studied with respect to several parameters: imposed heat flux, inlet mass flow rate, and geometry. The latent and sensible fluxes used to dissipate the imposed heat flux were compared in the configuration with a plexiglass sheet placed in front of the falling film at different distances from the aluminium plate.
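    The latent/sensible split mentioned above is simple energy bookkeeping; a hedged sketch with illustrative numbers (the property values and flow rates are ours, not the paper's measurements):

```python
# Approximate property values for water near typical film temperatures.
H_FG = 2.4e6   # latent heat of vaporisation, J/kg
CP = 4180.0    # specific heat of liquid water, J/(kg K)

def film_fluxes(m_dot_in, m_dot_out, t_in, t_out):
    """Split the heat carried away by a falling film into a latent part
    (evaporated mass flow times h_fg) and a sensible part (heating of
    the liquid that leaves the plate).  Flow rates in kg/s, temperatures
    in degrees C; returns (q_latent, q_sensible) in watts."""
    m_evap = m_dot_in - m_dot_out                  # kg/s evaporated
    q_latent = m_evap * H_FG                       # W
    q_sensible = m_dot_out * CP * (t_out - t_in)   # W
    return q_latent, q_sensible
```

    Even a small evaporated fraction dominates: evaporating 0.1 g/s removes about as much heat as warming 1.9 g/s of liquid by 30 K.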

    Efficient Dissection of Composite Problems, with Applications to Cryptanalysis, Knapsacks, and Combinatorial Search Problems

    No full text
    In this paper we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with r independent n-bit keys. All the previous error-free attacks required time T and memory M satisfying $TM = 2^{rn}$, and even if "false negatives" are allowed, no attack could achieve $TM < 2^{3rn/4}$. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product TM, such as $T = 2^{4n}$ time and $M = 2^{n}$ memory for breaking the sequential execution of r=7 block ciphers. The improvement ratio we obtain increases in an unbounded way as r increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way to attack hash functions with a rebound attack, to solve hard knapsack problems, and to find the shortest solution to a generalized version of Rubik's cube with better time complexities (for small memory complexities) than the best previously known algorithms.
    Keywords: Cryptanalysis, TM-tradeoff, multi-encryption, knapsacks, bicomposite, dissection, rebound
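    To make the $TM = 2^{rn}$ baseline concrete, here is a toy meet-in-the-middle attack on double encryption (r=2), using an 8-bit "cipher" of our own invention; dissection generalizes this tabulate-then-match idea to longer cascades:

```python
def enc(k, x):
    """Toy invertible 8-bit 'cipher' -- for illustration only."""
    return ((x ^ k) * 5) % 256

def dec(k, y):
    return ((y * 205) % 256) ^ k   # 205 is the inverse of 5 mod 256

def mitm_double(p, c, keybits=8):
    """Meet in the middle on E_k2(E_k1(p)) = c: tabulate the forward
    half under every k1, then match the backward half under every k2.
    Time and memory are both ~2^keybits, versus 2^(2*keybits) time
    for brute force -- the TM tradeoff that dissection improves."""
    mid = {enc(k1, p): k1 for k1 in range(1 << keybits)}
    for k2 in range(1 << keybits):
        m = dec(k2, c)
        if m in mid:
            return mid[m], k2
    return None
```

    With a single plaintext/ciphertext pair many key pairs are consistent; a real attack filters the candidates with further pairs.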

    Consistent subset sampling

    No full text
    Consistent sampling is a technique for specifying, in small space, a subset $S$ of a potentially large universe $U$ such that the elements in $S$ satisfy a suitably chosen sampling condition. Given a subset $\mathcal{I} \subseteq U$ it should be possible to quickly compute $\mathcal{I} \cap S$, i.e., the elements in $\mathcal{I}$ satisfying the sampling condition. Consistent sampling has important applications in similarity estimation and in estimating the number of distinct items in a data stream. In this paper we generalize consistent sampling to the setting where we are interested in sampling size-$k$ subsets occurring in some set in a collection of sets of bounded size $b$, where $k$ is a small integer. This can be done by applying standard consistent sampling to the $k$-subsets of each set, but that approach requires time $\Theta(b^k)$. Using a carefully designed hash function, for a given sampling probability $p \in (0,1]$, we show how to improve the time complexity to $\Theta(b^{\lceil k/2\rceil}\log \log b + pb^k)$ in expectation, while maintaining strong concentration bounds for the sample. The space usage of our method is $\Theta(b^{\lceil k/4\rceil})$. We demonstrate the utility of our technique by applying it to several well-studied data mining problems. We show how to efficiently estimate the number of frequent $k$-itemsets in a stream of transactions and the number of bipartite cliques in a graph given as an incidence stream. Further, building upon recent work by Campagna et al., we show that our approach can be applied to frequent itemset mining in a parallel or distributed setting. We also present applications in graph stream mining.
    Comment: To appear in SWAT 201
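    The basic consistent-sampling primitive that the paper generalizes can be sketched in a few lines (the choice of hash function and salt is ours):

```python
import hashlib

def h(x, salt=b"sample-v1"):
    """Map an element to a pseudo-random point in [0, 1) via a fixed hash,
    so the same element always lands at the same point."""
    d = hashlib.sha256(salt + repr(x).encode()).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def consistent_sample(items, p):
    """Keep exactly those elements whose hash falls below p.
    The decision depends only on the element itself, so any two parties,
    or any two passes over overlapping data, agree on the sample."""
    return {x for x in items if h(x) < p}
```

    Because membership in the sample is a pure function of the element, computing the sample of an intersection and intersecting the samples give the same result, which is what makes the technique useful for similarity and distinct-count estimation.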