1,269 research outputs found

    Machine Learning Techniques and Testing

    In this paper, we discuss various machine learning algorithms. These algorithms are used in applications such as image recognition, automated medical diagnostics, online advertising, robot locomotion, etc.

    An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices

    In this paper, we study the Learning With Errors (LWE) problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman (BKW) algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general, this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving within half a day an LWE instance with dimension n = 128, modulus q = n^2, Gaussian noise α = 1/(√(n/π) log^2 n) and binary secret, using 2^28 samples, while the previous best result based on BKW claims a time complexity of 2^74 with 2^60 samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time 2^((ln 2/2 + o(1)) n / log log n). This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density o(1), which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010. Comment: CRYPTO 201
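The modulus switching that the quantization step generalizes can be illustrated with a small sketch (the parameters n, q, p and the Gaussian width below are illustrative toy values, not the paper's): rescaling an LWE sample from modulus q down to a much smaller modulus p keeps the same secret valid, at the cost of a small extra rounding error.

```python
import random

random.seed(0)
n, q, p = 16, 16411, 257                      # assumed toy sizes, with p << q
s = [random.randrange(2) for _ in range(n)]   # binary secret

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = round(random.gauss(0, 2.0))           # small Gaussian error
    b = (sum(x * y for x, y in zip(a, s)) + e) % q
    return a, b

def mod_switch(a, b):
    # quantize each coordinate from Z_q down to Z_p
    return [round(x * p / q) % p for x in a], round(b * p / q) % p

def centered(x, m):
    # representative of x mod m in (-m/2, m/2]
    x %= m
    return x - m if x > m // 2 else x

errors = []
for _ in range(200):
    a, b = lwe_sample()
    a2, b2 = mod_switch(a, b)
    errors.append(abs(centered(b2 - sum(x * y for x, y in zip(a2, s)), p)))
print(max(errors))   # stays far below p/2: the secret still fits modulo p
```

The residual error after switching is roughly the per-coordinate rounding error times the Hamming weight of the secret, which is exactly why small (e.g. binary) secrets interact so well with this step.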

    Colored Non-Crossing Euclidean Steiner Forest

    Given a set of k-colored points in the plane, we consider the problem of finding k trees such that each tree connects all points of one color class, no two trees cross, and the total edge length of the trees is minimized. For k = 1, this is the well-known Euclidean Steiner tree problem. For general k, a kρ-approximation algorithm is known, where ρ ≤ 1.21 is the Steiner ratio. We present a PTAS for k = 2, a (5/3 + ε)-approximation algorithm for k = 3, and two approximation algorithms for general k, with ratios O(√n log k) and k + ε
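For a single color class (k = 1), the classic way to approximate the Euclidean Steiner tree within the Steiner ratio ρ is to take the minimum spanning tree of the terminals. A minimal sketch, using Prim's algorithm on the complete Euclidean graph (the point set is an illustrative example, not from the paper):

```python
import math

def mst_length(points):
    """Prim's algorithm on the complete Euclidean graph over `points`;
    the MST length is at most rho times the optimal Steiner tree length."""
    n = len(points)
    in_tree = [False] * n
    dist = [math.inf] * n
    dist[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=dist.__getitem__)
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < dist[v]:
                    dist[v] = d
    return total

# unit-square corners: MST length 3.0, while the optimal Steiner tree
# (with two Steiner points) has length 1 + sqrt(3) ≈ 2.732
print(mst_length([(0, 0), (1, 0), (1, 1), (0, 1)]))
```

The multicolor problem is harder precisely because the k trees must additionally avoid crossing each other, which independent per-color MSTs do not guarantee.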

    Simple Encrypted Arithmetic Library - SEAL v2.1

    Achieving fully homomorphic encryption was a longstanding open problem in cryptography until it was resolved by Gentry in 2009. Soon after, several homomorphic encryption schemes were proposed. The early homomorphic encryption schemes were extremely impractical, but recently new implementations, new data encoding techniques, and a better understanding of the applications have started to change the situation. In this paper we introduce the most recent version (v2.1) of Simple Encrypted Arithmetic Library - SEAL, a homomorphic encryption library developed by Microsoft Research, and describe some of its core functionality
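SEAL itself implements RLWE-based schemes, and the sketch below is emphatically not SEAL's scheme or API. As a much simpler, scheme-agnostic illustration of what "homomorphic" means, here is a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The primes are insecure toy values.

```python
import math
import random

# toy Paillier parameters (insecure sizes, for illustration only)
p, q = 2357, 2551
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1

def enc(m):
    """Encrypt m in Z_n with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Standard Paillier decryption via the L function."""
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(20), enc(22)
print(dec((c1 * c2) % n2))   # 42: the addition happened under encryption
```

Fully homomorphic schemes like those in SEAL support both addition and multiplication on ciphertexts, which is what makes arbitrary encrypted computation possible.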

    Conscious monitoring and control (reinvestment) in surgical performance under pressure.

    Research on intraoperative stressors has focused on external factors without considering individual differences in the ability to cope with stress. One individual difference that is implicated in adverse effects of stress on performance is "reinvestment," the propensity for conscious monitoring and control of movements. The aim of this study was to examine the impact of reinvestment on laparoscopic performance under time pressure

    Socioeconomic inequalities in dental caries and their determinants in adolescents in New Delhi, India.

    To determine whether socioeconomic inequalities are correlated to dental caries experience and decayed teeth of Indian adolescents, and assess whether behavioural and psychosocial factors mediate this association

    Ordering a sparse graph to minimize the sum of right ends of edges

    Motivated by a warehouse logistics problem we study mappings of the vertices of a graph onto prescribed points on the real line that minimize the sum (or equivalently, the average) of the coordinates of the right ends of all edges. We focus on graphs whose edge numbers do not exceed the vertex numbers too much, that is, graphs with few cycles. Intuitively, dense subgraphs should be placed early in the ordering, in order to finish many edges soon. However, our main “calculation trick” is to compare the objective function with the case when (almost) every vertex is the right end of exactly one edge. The deviations from this case are described by “charges” that can form “dipoles”. This reformulation enables us to derive polynomial algorithms and NP-completeness results for relevant special cases, and FPT results
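The objective can be stated concretely: fix an ordering of the vertices (here the prescribed points on the line are assumed to be 0, 1, ..., n−1 for simplicity), and charge each edge the coordinate of its rightmost endpoint. A brute-force sketch on a small hypothetical instance:

```python
from itertools import permutations

def right_end_cost(order, edges):
    """Sum of the coordinates of the right ends of all edges, where the
    vertex at position i of `order` is placed at coordinate i."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(max(pos[u], pos[v]) for u, v in edges)

def best_order(n, edges):
    # exponential brute force, fine only for tiny n
    return min(permutations(range(n)), key=lambda o: right_end_cost(o, edges))

# hypothetical instance: a triangle (a dense subgraph) plus a pendant vertex
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
order = best_order(4, edges)
print(order, right_end_cost(order, edges))
```

The optimum places the triangle on the first three coordinates, matching the intuition from the abstract that dense subgraphs should come early so that many edges finish soon.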

    Travelling on Graphs with Small Highway Dimension

    We study the Travelling Salesperson (TSP) and the Steiner Tree problem (STP) in graphs of low highway dimension. This graph parameter was introduced by Abraham et al. [SODA 2010] as a model for transportation networks, on which TSP and STP naturally occur for various applications in logistics. It was previously shown [Feldmann et al. ICALP 2015] that these problems admit a quasi-polynomial time approximation scheme (QPTAS) on graphs of constant highway dimension. We demonstrate that a significant improvement is possible in the special case when the highway dimension is 1, for which we present a fully-polynomial time approximation scheme (FPTAS). We also prove that STP is weakly NP-hard for these restricted graphs. For TSP we show NP-hardness for graphs of highway dimension 6, which answers an open problem posed in [Feldmann et al. ICALP 2015]

    Clarifying Assumptions about Intraoperative Stress during Surgical Performance: More Than a Stab in the Dark: Reply

    © The Author(s) 2011. This article is published with open access at Springerlink.com. We thank Dr. Ali for his concise annotation of our efforts to validate a tool that evaluates mental workload in surgery [1, 2]. Unlike other safety critical domains, the field of surgery has been slow to acknowledge the impact of intraoperative stress on surgical performance, but recently a sea change has been triggered by authorities in the field of surgical education [3]. We agree with Ali that stress is not by default detrimental to performance. Our aim was to develop a diagnostic tool that identifies the factors that contribute to disrupted performance, should it occur. Indeed, studies of the effects of acute stress on operating performance have shown considerable variability, ranging from no effect to either facilitative or debilitative effects [3–5]. The Yerkes-Dodson law emerged from the earliest attempts to explain the relationship between physiological arousal and performance, but it has been criticized for treating stress as a unitary construct, influenced solely by physiological factors [6]. More recently, Catastrophe Theory has been invoked to model the relationship, using both physiological and psychological (cognitive anxiety) components of stress [7]. The model proposes that physiological arousal displays a mild inverted-U relationship with performance when cognitive anxiety is low, but that catastrophic declines in performance can occur if both physiological arousal and cognitive anxiety are high. Recent surgical literature has elucidated the complexity of

    Revisiting the Hardness of Binary Error LWE

    Binary error LWE is the particular case of the learning with errors (LWE) problem in which errors are chosen in {0,1}. It has various cryptographic applications, and in particular, has been used to construct efficient encryption schemes for use in constrained devices. Arora and Ge showed that the problem can be solved in polynomial time given a number of samples quadratic in the dimension n. On the other hand, the problem is known to be as hard as standard LWE given only slightly more than n samples. In this paper, we first examine more generally how the hardness of the problem varies with the number of available samples. Under standard heuristics on the Arora–Ge polynomial system, we show that, for any ε > 0, binary error LWE can be solved in polynomial time n^O(1/ε) given ε·n^2 samples. Similarly, it can be solved in subexponential time 2^(Õ(n^(1−α))) given n^(1+α) samples, for 0 < α < 1. As a second contribution, we also generalize the binary error LWE problem to the case of a non-uniform error probability, and analyze the hardness of non-uniform binary error LWE with respect to the error rate and the number of available samples. We show that, for any error rate 0 < p < 1, non-uniform binary error LWE is also as hard as worst-case lattice problems provided that the number of samples is suitably restricted. This is a generalization of Micciancio and Peikert's hardness proof for uniform binary error LWE. Furthermore, we also discuss attacks on the problem when the number of available samples is linear but significantly larger than n, and show that for sufficiently low error rates, subexponential or even polynomial time attacks are possible
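The Arora–Ge approach mentioned above exploits the fact that a binary error e satisfies e(e − 1) = 0, so every sample (a, b) with b = ⟨a, s⟩ + e yields a quadratic equation in the secret s; with roughly n² samples the quadratics can be linearized over the monomials s_i and s_i·s_j and solved as a linear system. A toy sketch over small, assumed parameters (the heuristic that the linearized system has full rank is the same one referenced in the abstract):

```python
import random

def gauss_solve_mod(A, b, q):
    """Solve A x = b over Z_q (q prime) via Gauss-Jordan elimination;
    returns one solution, with free variables set to 0."""
    m, n = len(A), len(A[0])
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] % q != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, q)
        M[r] = [v * inv % q for v in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(v - f * w) % q for v, w in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = M[i][-1]
    return x

random.seed(1)
q, n = 97, 4                                   # toy parameters (assumed)
s = [random.randrange(q) for _ in range(n)]    # secret in Z_q^n
monomials = [(i,) for i in range(n)] + \
            [(i, j) for i in range(n) for j in range(i, n)]
idx = {mo: k for k, mo in enumerate(monomials)}

rows, rhs = [], []
for _ in range(3 * len(monomials)):            # O(n^2) samples, as in Arora-Ge
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(2)                    # binary error
    b = (sum(x * y for x, y in zip(a, s)) + e) % q
    # e = b - <a,s> satisfies e^2 - e = 0; expand into the monomials
    row = [0] * len(monomials)
    for i in range(n):
        row[idx[(i,)]] = (1 - 2 * b) * a[i] % q
        for j in range(i, n):
            row[idx[(i, j)]] = (a[i] * a[j] if i == j else 2 * a[i] * a[j]) % q
    rows.append(row)
    rhs.append(-(b * b - b) % q)

x = gauss_solve_mod(rows, rhs, q)
recovered = [x[idx[(i,)]] for i in range(n)]
print(recovered == s)
```

With fewer samples the linearized system is underdetermined, which is exactly the regime the paper analyzes: the trade-off between sample count and solving time.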