Machine Learning Techniques and Testing
In this paper, we discuss various machine learning algorithms. These algorithms are used in applications such as image recognition, automated medical diagnostics, online advertising, and robot locomotion.
An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices
In this paper, we study the Learning With Errors (LWE) problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman (BKW) algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general, this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving, within half a day, an LWE instance with dimension n = 128, modulus , Gaussian noise and binary secret, using samples, while the previous best result based on BKW claims a time complexity of with samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density , which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010.
Comment: CRYPTO 201
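The modulus switching that the quantization step above generalizes can be illustrated in a few lines. The sketch below is a minimal toy, not the paper's algorithm: all parameter values and function names are my own assumptions. It rounds an LWE sample from modulus q down to a smaller modulus p; with a binary secret, the extra rounding noise stays small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters; all names and values here are assumptions for illustration.
n, q, p = 16, 4093, 251
s = rng.integers(0, 2, n)          # binary secret

def lwe_sample(s, q, sigma=1.0):
    """One LWE sample (a, b) with b = <a, s> + e mod q."""
    a = rng.integers(0, q, len(s))
    e = int(np.rint(rng.normal(0, sigma)))   # small Gaussian error
    b = (int(a @ s) + e) % q
    return a, b

def modulus_switch(a, b, q, p):
    """Round a sample from modulus q down to the smaller modulus p."""
    a2 = np.rint(a * p / q).astype(int) % p
    b2 = int(np.rint(b * p / q)) % p
    return a2, b2

a, b = lwe_sample(s, q)
a2, b2 = modulus_switch(a, b, q, p)

# The switched pair is still a valid, slightly noisier LWE sample mod p:
# each coordinate rounding contributes at most 1/2, weighted by the binary s.
err = (b2 - int(a2 @ s)) % p
err = min(err, p - err)            # centered noise magnitude
```

Because the secret is binary, the accumulated rounding error is bounded by roughly half the Hamming weight of s, which is why switching to a much smaller modulus is cheap in this regime.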
Colored Non-Crossing Euclidean Steiner Forest
Given a set of -colored points in the plane, we consider the problem of
finding trees such that each tree connects all points of one color class,
no two trees cross, and the total edge length of the trees is minimized. For
, this is the well-known Euclidean Steiner tree problem. For general ,
a -approximation algorithm is known, where is the
Steiner ratio.
We present a PTAS for , a -approximation algorithm
for , and two approximation algorithms for general , with ratios and .
Simple Encrypted Arithmetic Library - SEAL v2.1
Achieving fully homomorphic encryption was a longstanding open problem in cryptography until it was resolved by Gentry in 2009. Soon after, several homomorphic encryption schemes were proposed. The early homomorphic encryption schemes were extremely impractical, but recently new implementations, new data encoding techniques, and a better understanding of the applications have started to change the situation. In this paper we introduce the most recent version (v2.1) of Simple Encrypted Arithmetic Library - SEAL, a homomorphic encryption library developed by Microsoft Research, and describe some of its core functionality.
Conscious monitoring and control (reinvestment) in surgical performance under pressure.
Research on intraoperative stressors has focused on external factors without considering individual differences in the ability to cope with stress. One individual difference that is implicated in adverse effects of stress on performance is "reinvestment," the propensity for conscious monitoring and control of movements. The aim of this study was to examine the impact of reinvestment on laparoscopic performance under time pressure.
Socioeconomic inequalities in dental caries and their determinants in adolescents in New Delhi, India.
To determine whether socioeconomic inequalities are correlated with dental caries experience and decayed teeth among Indian adolescents, and to assess whether behavioural and psychosocial factors mediate this association.
Ordering a sparse graph to minimize the sum of right ends of edges
Motivated by a warehouse logistics problem, we study mappings of the vertices of a graph onto prescribed points on the real line that minimize the sum (or equivalently, the average) of the coordinates of the right ends of all edges. We focus on graphs whose edge numbers do not exceed the vertex numbers too much, that is, graphs with few cycles. Intuitively, dense subgraphs should be placed early in the ordering, in order to finish many edges soon. However, our main “calculation trick” is to compare the objective function with the case when (almost) every vertex is the right end of exactly one edge. The deviations from this case are described by “charges” that can form “dipoles”. This reformulation enables us to derive polynomial algorithms and NP-completeness results for relevant special cases, and FPT results.
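The objective function described above is easy to state in code. The toy sketch below is my own illustration, not taken from the paper: it brute-forces all placements of a small assumed instance (a triangle with one pendant vertex) onto prescribed points, and confirms the intuition that the dense part should be placed early and the pendant vertex last.

```python
from itertools import permutations

# Assumed toy instance: a triangle (0, 1, 2) plus a pendant vertex 3.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
points = [1, 2, 3, 4]              # prescribed positions on the real line

def cost(order):
    # order[v] is the index of the point assigned to vertex v; each edge
    # contributes the coordinate of its right end (the larger coordinate).
    return sum(max(points[order[u]], points[order[v]]) for u, v in edges)

# Brute force over all vertex-to-point assignments.
best = min(permutations(range(n)), key=cost)
```

In every optimal placement of this instance, the triangle occupies the three leftmost points (cost 2 + 3 + 3 = 8 for its edges) and the pendant vertex takes the rightmost point, so only its single edge pays the largest coordinate, for a total of 12.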
Travelling on Graphs with Small Highway Dimension
We study the Travelling Salesperson (TSP) and the Steiner Tree problem (STP)
in graphs of low highway dimension. This graph parameter was introduced by
Abraham et al. [SODA 2010] as a model for transportation networks, on which TSP
and STP naturally occur for various applications in logistics. It was
previously shown [Feldmann et al. ICALP 2015] that these problems admit a
quasi-polynomial time approximation scheme (QPTAS) on graphs of constant
highway dimension. We demonstrate that a significant improvement is possible in
the special case when the highway dimension is 1, for which we present a
fully-polynomial time approximation scheme (FPTAS). We also prove that STP is
weakly NP-hard for these restricted graphs. For TSP we show NP-hardness for
graphs of highway dimension 6, which answers an open problem posed in [Feldmann
et al. ICALP 2015].
Clarifying Assumptions about Intraoperative Stress during Surgical Performance: More Than a Stab in the Dark: Reply
© The Author(s) 2011. This article is published with open access at Springerlink.com. We thank Dr. Ali for his concise annotation of our efforts to validate a tool that evaluates mental workload in surgery [1, 2]. Unlike other safety-critical domains, the field of surgery has been slow to acknowledge the impact of intraoperative stress on surgical performance, but recently a sea change has been triggered by authorities in the field of surgical education [3]. We agree with Ali that stress is not by default detrimental to performance. Our aim was to develop a diagnostic tool that identifies the factors that contribute to disrupted performance, should it occur. Indeed, studies of the effects of acute stress on operating performance have shown considerable variability, ranging from no effect to either facilitative or debilitative effects [3–5]. The Yerkes-Dodson law emerged from the earliest attempts to explain the relationship between physiological arousal and performance, but it has been criticized for treating stress as a unitary construct, influenced solely by physiological factors [6]. More recently, Catastrophe Theory has been invoked to model the relationship, using both physiological and psychological (cognitive anxiety) components of stress [7]. The model proposes that physiological arousal displays a mild inverted-U relationship with performance when cognitive anxiety is low, but that catastrophic declines in performance can occur if both physiological arousal and cognitive anxiety are high.
Revisiting the Hardness of Binary Error LWE
Binary error LWE is the particular case of the learning with errors
(LWE)
problem in which errors are chosen in . It has various
cryptographic applications, and in particular, has been used to construct
efficient encryption schemes for use in constrained devices.
Arora and Ge showed that the problem can be solved in polynomial time given a number
of samples quadratic in the dimension . On the other hand, the
problem is known to be as hard as standard LWE given only slightly more
than samples.
In this paper, we first examine more generally how the
hardness of the problem varies with the number of available samples.
Under standard heuristics on
the Arora--Ge polynomial system, we show that, for any ,
binary error LWE can be solved in polynomial time
given samples. Similarly,
it can be solved in subexponential time given
samples, for .
As a second contribution, we also generalize the binary error LWE
problem to the case of a non-uniform error probability, and
analyze the hardness of the non-uniform
binary error LWE with respect to the error rate and the number of available samples.
We show that, for any error rate , non-uniform binary error LWE is also as hard as
worst-case lattice problems provided that the number of samples is
suitably restricted. This is a generalization of Micciancio and Peikert's hardness proof for uniform binary error LWE.
Furthermore, we also discuss attacks on the problem when the number
of available samples is linear but significantly larger than , and
show that for sufficiently low error rates, subexponential or even
polynomial time attacks are possible.
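The Arora–Ge attack mentioned above, which solves binary error LWE in polynomial time given quadratically many samples, can be sketched concretely. With errors in {0, 1}, every sample (a, b) satisfies (b - <a,s>)(b - <a,s> - 1) = 0 mod q; introducing one variable per monomial s_i and s_i*s_j turns this quadratic relation into a linear system. The toy code below is my own illustration under assumed parameters, not the paper's implementation.

```python
import random

random.seed(1)

n, q = 4, 101                                  # assumed toy parameters
s = [random.randrange(q) for _ in range(n)]    # secret in Z_q

def sample():
    """One binary error LWE sample (a, b) with b = <a, s> + e, e in {0, 1}."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(2)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

# Variable order: s_0..s_{n-1}, then the monomials s_i*s_j for i <= j.
pairs = [(i, j) for i in range(n) for j in range(i, n)]
N = n + len(pairs)

def linearized_row(a, b):
    # (b - <a,s>)(b - <a,s> - 1) = 0 expands to
    #   <a,s>^2 - (2b - 1)<a,s> + b^2 - b = 0  (mod q),
    # which is linear in the monomial variables.
    row = [(-(2 * b - 1) * a[i]) % q for i in range(n)]
    for (i, j) in pairs:
        coef = a[i] * a[j] if i == j else 2 * a[i] * a[j]
        row.append(coef % q)
    rhs = (b - b * b) % q                      # constant moved to the right
    return row, rhs

def solve_mod(rows, rhs, q):
    """Gaussian elimination over Z_q (q prime); returns one solution."""
    m, ncols = len(rows), len(rows[0])
    aug = [rows[r] + [rhs[r]] for r in range(m)]
    piv, where = 0, []
    for col in range(ncols):
        r = next((i for i in range(piv, m) if aug[i][col]), None)
        if r is None:
            continue
        aug[piv], aug[r] = aug[r], aug[piv]
        inv = pow(aug[piv][col], q - 2, q)     # modular inverse, q prime
        aug[piv] = [(x * inv) % q for x in aug[piv]]
        for i in range(m):
            if i != piv and aug[i][col]:
                f = aug[i][col]
                aug[i] = [(x - f * y) % q for x, y in zip(aug[i], aug[piv])]
        where.append(col)
        piv += 1
    sol = [0] * ncols
    for r, col in enumerate(where):
        sol[col] = aug[r][ncols]
    return sol

samples = [sample() for _ in range(3 * N)]     # quadratically many in n
rows, rhs = zip(*(linearized_row(a, b) for a, b in samples))
sol = solve_mod(list(rows), list(rhs), q)
recovered = sol[:n]                            # the linear part is the secret
```

With enough samples the linearized system is full rank with overwhelming probability, so the unique solution contains the secret in its linear coordinates; this is exactly why restricting the number of samples, as the abstract discusses, is essential to the problem's hardness.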