GraphLab: A New Framework for Parallel Machine Learning
Designing and implementing efficient, provably correct parallel machine
learning (ML) algorithms is challenging. Existing high-level parallel
abstractions like MapReduce are insufficiently expressive, while low-level tools
like MPI and Pthreads leave ML experts repeatedly solving the same design
challenges. By targeting common patterns in ML, we developed GraphLab, which
improves upon abstractions like MapReduce by compactly expressing asynchronous
iterative algorithms with sparse computational dependencies while ensuring data
consistency and achieving a high degree of parallel performance. We demonstrate
the expressiveness of the GraphLab framework by designing and implementing
parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and
Compressed Sensing. We show that using GraphLab we can achieve excellent
parallel performance on large-scale, real-world problems.
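To make the abstraction concrete, here is a minimal single-threaded sketch of a GraphLab-style update function plus scheduler. This is hypothetical Python with invented names; GraphLab itself is a C++ framework, and it runs updates in parallel under a chosen consistency model rather than sequentially.

```python
# Hypothetical sketch of a GraphLab-style vertex update loop (invented
# names; illustrative only, not the framework's actual API).
from collections import deque

def run(graph, data, update, initial_vertices):
    """Apply `update` asynchronously until the schedule drains.

    graph  -- dict: vertex -> list of neighboring vertices
    data   -- dict: vertex -> mutable per-vertex state
    update -- update(v, data, neighbors) -> vertices to reschedule
    """
    schedule = deque(initial_vertices)
    pending = set(initial_vertices)
    while schedule:
        v = schedule.popleft()
        pending.discard(v)
        # An update touches only v and its neighbors (the sparse
        # dependency structure) and reschedules work adaptively,
        # instead of proceeding in bulk-synchronous rounds.
        for u in update(v, data, graph[v]):
            if u not in pending:
                pending.add(u)
                schedule.append(u)

def pagerank_update(v, data, neighbors, damping=0.85, tol=1e-4):
    """Asynchronous PageRank-style update; assumes each vertex stores
    its current rank and its (precomputed, nonzero) degree."""
    new_rank = (1 - damping) + damping * sum(
        data[u]["rank"] / data[u]["degree"] for u in neighbors)
    changed = abs(new_rank - data[v]["rank"]) > tol
    data[v]["rank"] = new_rank
    return neighbors if changed else []
```

The key point the abstract makes is visible here: updates follow sparse computational dependencies and fire asynchronously as residuals warrant, which MapReduce's bulk-synchronous model cannot express compactly.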
5S,15S-Dihydroperoxyeicosatetraenoic Acid (5,15-diHpETE) as a Lipoxin Intermediate: Reactivity and Kinetics with Human Leukocyte 5-Lipoxygenase, Platelet 12-Lipoxygenase, and Reticulocyte 15-Lipoxygenase-1.
The reaction of 5S,15S-dihydroperoxyeicosatetraenoic acid (5,15-diHpETE) with human 5-lipoxygenase (LOX), human platelet 12-LOX, and human reticulocyte 15-LOX-1 was investigated to determine the reactivity and relative rates of producing lipoxins (LXs). 5-LOX does not react with 5,15-diHpETE, although it can produce LXA4 when 15-HpETE is the substrate. In contrast, both 12-LOX and 15-LOX-1 react with 5,15-diHpETE, forming specifically LXB4. For 12-LOX and 5,15-diHpETE, the kinetic parameters are kcat = 0.17 s⁻¹ and kcat/KM = 0.011 μM⁻¹ s⁻¹ [106- and 1600-fold lower than those for 12-LOX oxygenation of arachidonic acid (AA), respectively]. On the other hand, for 15-LOX-1 the equivalent parameters are kcat = 4.6 s⁻¹ and kcat/KM = 0.21 μM⁻¹ s⁻¹ (3-fold higher and similar to those for 12-HpETE formation by 15-LOX-1 from AA, respectively). This contrasts with the complete lack of reaction of 15-LOX-2 with 5,15-diHpETE [Green, A. R., et al. (2016) Biochemistry 55, 2832-2840]. Our data indicate that 12-LOX is markedly inferior to 15-LOX-1 in catalyzing the production of LXB4 from 5,15-diHpETE. Platelet aggregation was inhibited by the addition of 5,15-diHpETE, with an IC50 of 1.3 μM; however, LXB4 did not significantly inhibit collagen-mediated platelet activation up to 10 μM. In summary, LXB4 is the primary product of 12-LOX and 15-LOX-1 catalysis if 5,15-diHpETE is the substrate, with 15-LOX-1 being 20-fold more efficient than 12-LOX. LXA4 is the primary product with 5-LOX, but only if 15-HpETE is the substrate. Approximately equal proportions of LXA4 and LXB4 are produced by 12-LOX, but only if LTA4 is the substrate, as described previously [Sheppard, K. A., et al. (1992) Biochim. Biophys. Acta 1133, 223-234].
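The quoted 20-fold efficiency difference follows directly from the catalytic efficiencies given above; as a quick consistency check:

$$\frac{(k_{\mathrm{cat}}/K_M)_{\text{15-LOX-1}}}{(k_{\mathrm{cat}}/K_M)_{\text{12-LOX}}} = \frac{0.21\ \mu\mathrm{M}^{-1}\,\mathrm{s}^{-1}}{0.011\ \mu\mathrm{M}^{-1}\,\mathrm{s}^{-1}} \approx 19,$$

i.e., roughly 20-fold, matching the summary statement.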
Robustness Verification of Support Vector Machines
We study the problem of formally verifying the robustness to adversarial
examples of support vector machines (SVMs), a major machine learning model for
classification and regression tasks. Following a recent line of work on formal robustness verification of (deep) neural networks, our approach relies on a sound abstraction of a given SVM classifier, which is then used to check its robustness. This methodology is parametric in a given numerical abstraction of real values and, analogously to the case of neural networks, needs neither abstract least upper bounds nor widening operators on this abstraction. The standard interval domain provides a simple instantiation of our abstraction technique, which we enhance with the domain of reduced affine forms, an efficient abstraction of the zonotope abstract domain. This robustness
verification technique has been fully implemented and experimentally evaluated
on SVMs based on linear and nonlinear (polynomial and radial basis function)
kernels, which have been trained on the popular MNIST dataset of images and on
the recent and more challenging Fashion-MNIST dataset. The experimental results of our prototype SVM robustness verifier are encouraging: the automated verification is fast and scalable, and it proves robustness for a high percentage of the MNIST test set, in particular when compared with the analogous provable robustness reported for neural networks.
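As a concrete illustration of the interval-domain instantiation, here is a minimal sketch (hypothetical code, not the authors' verifier) that certifies an L∞ robustness region for a linear SVM by propagating interval bounds through the decision function:

```python
# Minimal sketch: interval-domain robustness check for a linear binary
# SVM. Hypothetical illustration only; for nonlinear (polynomial/RBF)
# kernels, tighter domains such as reduced affine forms are needed.
import numpy as np

def certify_linear_svm(w, b, x, eps):
    """Return True if sign(w.x + b) is provably constant on the L-inf
    ball of radius eps around x (sound but incomplete check)."""
    center = float(np.dot(w, x) + b)
    # Over the box [x - eps, x + eps], each term w_i * x_i varies by
    # at most |w_i| * eps, so the score lies in [lo, hi]:
    slack = eps * float(np.abs(w).sum())
    lo, hi = center - slack, center + slack
    # Robust iff the score interval does not straddle the boundary.
    return lo > 0.0 or hi < 0.0

# Toy usage: a 2-feature classifier and a point near the boundary.
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([1.0, 0.2])
print(certify_linear_svm(w, b, x, eps=0.05))  # True: provably robust
print(certify_linear_svm(w, b, x, eps=0.60))  # False: cannot certify
```

Reduced affine forms refine this by representing each value as a center plus shared noise symbols, tracking correlations between variables that plain intervals lose; that is what the paper uses to keep the nonlinear-kernel case precise.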
ATP allosterically activates the human 5-lipoxygenase molecular mechanism of arachidonic acid and 5(S)-hydroperoxy-6(E),8(Z),11(Z),14(Z)-eicosatetraenoic acid.
5-Lipoxygenase (5-LOX) reacts with arachidonic acid (AA) first to generate 5(S)-hydroperoxy-6(E),8(Z),11(Z),14(Z)-eicosatetraenoic acid [5(S)-HpETE] and then to convert 5(S)-HpETE into an epoxide, leukotriene A4, so both products arise from a single polyunsaturated fatty acid. This work investigates the kinetic mechanism of these two processes and the role of ATP in their activation. Specifically, it was determined that epoxidation of 5(S)-HpETE (dehydration of the hydroperoxide) has a rate of substrate capture (Vmax/Km) significantly lower than that of AA hydroperoxidation (oxidation of AA to form the hydroperoxide); however, hyperbolic kinetic parameters for ATP activation indicate similar activation for AA and 5(S)-HpETE. Solvent isotope effect results for both hydroperoxidation and epoxidation indicate that a specific step in the molecular mechanism changes, possibly because the rate-limiting step depends less on hydrogen-atom abstraction and more on hydrogen-bond rearrangement. Therefore, changes in cellular ATP concentration could affect the production of 5-LOX products, such as leukotrienes and lipoxins, and thus have wide implications for the regulation of cellular inflammation.
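For readers outside enzyme kinetics, the two quantities this abstract leans on are the standard Michaelis-Menten forms, written here with generic symbols (our notation, not the paper's fitted values):

$$v = \frac{V_{\max}[S]}{K_M + [S]}, \qquad \frac{V_{\max}}{K_M}\ \text{(rate of substrate capture, governing turnover when } [S] \ll K_M\text{)},$$

and "hyperbolic activation" by ATP means the rate rises saturably with activator concentration,

$$v([\mathrm{ATP}]) = v_0 + \frac{(v_{\max} - v_0)\,[\mathrm{ATP}]}{K_{\mathrm{act}} + [\mathrm{ATP}]},$$

where $v_0$, $v_{\max}$, and $K_{\mathrm{act}}$ are generic placeholders.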
Beyond Good and Evil: Formalizing the Security Guarantees of Compartmentalizing Compilation
Compartmentalization is good security-engineering practice. By breaking a
large software system into mutually distrustful components that run with
minimal privileges and interact only through well-defined interfaces, we can
limit the damage caused by low-level attacks such as control-flow hijacking.
When used to defend against such attacks,
compartmentalization is often implemented cooperatively by a compiler and a
low-level compartmentalization mechanism. However, the formal guarantees
provided by such compartmentalizing compilation have seen surprisingly little
investigation.
We propose a new security property, secure compartmentalizing compilation
(SCC), that formally characterizes the guarantees provided by
compartmentalizing compilation and clarifies its attacker model. We reconstruct
our property by starting from the well-established notion of fully abstract
compilation, then identifying and lifting three important limitations that make
standard full abstraction unsuitable for compartmentalization. The connection
to full abstraction allows us to prove SCC by adapting established proof
techniques; we illustrate this with a compiler from a simple unsafe imperative
language with procedures to a compartmentalized abstract machine.
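For reference, the well-established starting point mentioned above can be stated compactly (the standard definition, in our notation; SCC itself refines it): a compiler $\downarrow$ from a source to a target language is fully abstract when it preserves and reflects contextual equivalence,

$$p_1 \simeq_{\mathrm{src}} p_2 \iff p_1{\downarrow}\ \simeq_{\mathrm{tgt}}\ p_2{\downarrow},$$

where $p \simeq q$ means no program context of the respective language can distinguish $p$ from $q$. The three lifted limitations adapt this shape to mutually distrustful components and a low-level attacker model.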
Adequacy of compositional translations for observational semantics
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
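Made explicit (standard definitions; the notation is ours, not the paper's): write $C[p]{\downarrow}$ for may-convergence and $C[p]{\Downarrow}$ for must-convergence of $p$ in context $C$, and let

$$p \simeq q \ \iff\ \forall C.\ \big(C[p]{\downarrow} \Leftrightarrow C[q]{\downarrow}\big) \ \wedge\ \big(C[p]{\Downarrow} \Leftrightarrow C[q]{\Downarrow}\big).$$

A translation $\tau$ is then adequate when it reflects this equivalence:

$$\tau(p) \simeq_{\mathrm{tgt}} \tau(q) \ \Longrightarrow\ p \simeq_{\mathrm{src}} q.$$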
A Cost-based Optimizer for Gradient Descent Optimization
As the use of machine learning (ML) permeates into diverse application
domains, there is an urgent need to support a declarative framework for ML.
Ideally, a user will specify an ML task in a high-level and easy-to-use
language and the framework will invoke the appropriate algorithms and system
configurations to execute it. An important observation towards designing such a
framework is that many ML tasks can be expressed as mathematical optimization
problems, which take a specific form. Furthermore, these optimization problems
can be efficiently solved using variations of the gradient descent (GD)
algorithm. Thus, to decouple a user specification of an ML task from its
execution, a key component is a GD optimizer. We propose a cost-based GD
optimizer that selects the best GD plan for a given ML task. To build our
optimizer, we introduce a set of abstract operators for expressing GD
algorithms and propose a novel approach to estimate the number of iterations a
GD algorithm requires to converge. Extensive experiments on real and synthetic
datasets show that our optimizer not only chooses the best GD plan but also enables optimizations that achieve orders-of-magnitude performance speed-ups.
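To make the idea concrete, here is a hypothetical sketch (invented names and cost model, not the paper's abstract operators or its iteration estimator) of a cost-based chooser that picks among GD variants by weighing estimated iterations-to-convergence against per-iteration cost:

```python
# Hypothetical sketch of a cost-based chooser over gradient-descent
# plans. The cost model and names are illustrative only.
import math

def estimated_iterations(plan, eps=1e-3):
    """Coarse iterations-to-convergence estimates (convex objective)."""
    if plan == "batch":       # geometric convergence: O(log(1/eps))
        return int(100 * math.log(1.0 / eps))
    if plan == "stochastic":  # noisy steps: O(1/eps), but each is cheap
        return int(1.0 / eps)
    if plan == "mini-batch":  # between the two extremes
        return int(1.0 / math.sqrt(eps))
    raise ValueError(plan)

def cost_per_iteration(plan, n, d, batch=128):
    """Work per iteration, in example-feature touches."""
    rows = {"batch": n, "stochastic": 1, "mini-batch": batch}[plan]
    return rows * d

def choose_plan(n, d, eps=1e-3):
    """Pick the plan minimizing estimated total cost."""
    plans = ("batch", "stochastic", "mini-batch")
    return min(plans,
               key=lambda p: estimated_iterations(p, eps)
                             * cost_per_iteration(p, n, d))

# On a large dataset, cheap iterations win despite needing more of them.
print(choose_plan(n=1_000_000, d=100))  # -> 'stochastic'
```

The design point mirrors a database query optimizer: the user states the ML task declaratively, and total cost (iterations × work per iteration) decides the physical GD plan.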