Lockdown: Dynamic Control-Flow Integrity
Applications written in low-level languages without type or memory safety are
especially prone to memory corruption. Attackers exploit memory corruption
vulnerabilities in such applications to gain code-execution capabilities
despite all currently deployed defenses. Control-Flow Integrity (CFI)
is a promising defense mechanism that restricts indirect control-flow transfers to
a static set of well-known locations. We present Lockdown, an approach to
dynamic CFI that protects legacy, binary-only executables and libraries.
Lockdown adaptively learns the control-flow graph of a running process using
information from a trusted dynamic loader. The sandbox component of Lockdown
restricts interactions between different shared objects to imported and
exported functions by enforcing fine-grained CFI checks. Our prototype
implementation shows that dynamic CFI results in low performance overhead.
Comment: ETH Technical Report
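The mechanism described above can be illustrated with a small sketch. This is a hypothetical Python model, not Lockdown's actual implementation: a trusted loader registers the exported functions of each shared object, and every cross-module control transfer is checked against that allow-set before it is taken. The class and method names (`CFISandbox`, `learn_export`, `check_transfer`) are illustrative only.

```python
# Hypothetical sketch of dynamic CFI enforcement: the trusted dynamic loader
# registers exported functions per shared object, and the sandbox restricts
# inter-module transfers to that learned set.

class CFISandbox:
    def __init__(self):
        # allowed[module] = set of valid entry points; a real enforcement
        # layer would track verified target addresses, not symbol names.
        self.allowed = {}

    def learn_export(self, module, symbol):
        # Called by the trusted loader while resolving symbols at runtime.
        self.allowed.setdefault(module, set()).add(symbol)

    def check_transfer(self, module, symbol):
        # Fine-grained CFI check on an inter-module call or jump.
        if symbol not in self.allowed.get(module, set()):
            raise RuntimeError(f"CFI violation: {module}!{symbol}")
        return True

sandbox = CFISandbox()
sandbox.learn_export("libc.so", "malloc")
sandbox.check_transfer("libc.so", "malloc")  # permitted: exported function
```

A transfer to a symbol that was never exported (e.g. an internal helper) raises a violation, mirroring how the sandbox confines cross-object interactions to imported and exported functions.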
Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks
We present a technique for efficiently synthesizing images of atmospheric
clouds using a combination of Monte Carlo integration and neural networks. The
intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming
aerosols make rendering of clouds---e.g. the characteristic silver lining and
the "whiteness" of the inner body---challenging for methods based solely on
Monte Carlo integration or diffusion theory. We approach the problem
differently. Instead of simulating all light transport during rendering, we
pre-learn the spatial and directional distribution of radiant flux from tens of
cloud exemplars. To render a new scene, we sample visible points of the cloud
and, for each, extract a hierarchical 3D descriptor of the cloud geometry with
respect to the shading location and the light source. The descriptor is input
to a deep neural network that predicts the radiance function for each shading
configuration. We make the key observation that progressively feeding the
hierarchical descriptor into the network enhances the network's ability to
learn faster and predict with high accuracy while using few coefficients. We
also employ a block design with residual connections to further improve
performance. A GPU implementation of our method synthesizes images of clouds
that are nearly indistinguishable from the reference solution within seconds,
at interactive rates. Our method thus represents a viable solution for applications
such as cloud design and, thanks to its temporal stability, also for
high-quality production of animated content.
Comment: ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2017)
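The key observation above, progressively feeding a hierarchical descriptor into a network with residual blocks, can be sketched in a few lines. This is a hypothetical toy model, not the paper's architecture: the level count, feature width, hidden width, and random weights are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical sizes: 4 descriptor levels of 8 features each, hidden width 16.
levels, feat, hidden = 4, 8, 16
descriptor = [rng.normal(size=feat) for _ in range(levels)]

# One weight matrix per block, mapping (hidden state + level features) -> hidden.
W = [rng.normal(scale=0.1, size=(hidden, hidden + feat)) for _ in range(levels)]
W_out = rng.normal(scale=0.1, size=hidden)

h = np.zeros(hidden)
for k in range(levels):
    # Progressively feed level k of the hierarchical descriptor into block k,
    # with a residual connection around each block.
    x = np.concatenate([h, descriptor[k]])
    h = h + relu(W[k] @ x)

radiance = W_out @ h  # scalar radiance prediction for this shading configuration
```

Each block sees only one level of the geometry descriptor plus the running hidden state, so coarse-to-fine structure enters the network gradually rather than as one flat input vector.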
The LHC Discovery Potential of a Leptophilic Higgs
In this work, we examine a two-Higgs-doublet extension of the Standard Model
in which one Higgs doublet is responsible for giving mass to both up- and
down-type quarks, while a separate doublet is responsible for giving mass to
leptons. We examine both the theoretical and experimental constraints on the
model and show that large regions of parameter space are allowed by these
constraints in which the effective couplings between the lightest neutral Higgs
scalar and the Standard-Model leptons are substantially enhanced. We
investigate the collider phenomenology of such a "leptophilic"
two-Higgs-doublet model and show that in cases where the low-energy spectrum
contains only one light, CP-even scalar, a variety of collider processes
essentially irrelevant for the discovery of a Standard Model Higgs boson
(specifically those in which the Higgs boson decays directly into a
charged-lepton pair) can contribute significantly to the discovery potential of
a light-to-intermediate-mass (m_h < 140 GeV) Higgs boson at the LHC.
Comment: 25 pages, LaTeX, 11 figures, 1 table
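The coupling enhancement described above can be made concrete with the standard tree-level coupling modifiers of a lepton-specific ("Type-X") two-Higgs-doublet model. This is a sketch in one common convention, not an expression taken from the paper:

\[
g_{h q\bar q} = \frac{\cos\alpha}{\sin\beta}\,\frac{m_q}{v},
\qquad
g_{h\ell^+\ell^-} = -\frac{\sin\alpha}{\cos\beta}\,\frac{m_\ell}{v},
\]

where $\alpha$ is the CP-even mixing angle and $\tan\beta$ the ratio of vacuum expectation values. For large $\tan\beta$ the factor $1/\cos\beta$ grows roughly like $\tan\beta$, so the lightest CP-even scalar's couplings to charged leptons can be substantially enhanced relative to the Standard Model while its quark couplings remain near their SM values, which is what makes direct $h \to \ell^+\ell^-$ channels relevant for discovery.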