Inhomogeneous Boundary Value Problem for Hartree Type Equation
In this paper, we settle the inhomogeneous boundary value problem for the
time-dependent Hartree equation in a bounded Lipschitz domain in
. A global existence result is derived.
Comment: 10 pages
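For context, a Hartree-type equation couples the linear Schrödinger evolution to a nonlocal convolution nonlinearity; a commonly studied form (given here for orientation only, not taken from this abstract) is

$$ i\,\partial_t u + \Delta u = \bigl(|x|^{-\gamma} * |u|^2\bigr)\,u, \qquad 0 < \gamma < n, $$

where $*$ denotes spatial convolution, so the potential felt by $u$ is generated by its own density $|u|^2$. The inhomogeneous boundary value problem prescribes nonzero data for $u$ on the boundary of the domain.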
Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems
Due to the increasing usage of machine learning (ML) techniques in security-
and safety-critical domains, such as autonomous systems and medical diagnosis,
ensuring correct behavior of ML systems, especially for different corner cases,
is of growing importance. In this paper, we propose a generic framework for
evaluating security and robustness of ML systems using different real-world
safety properties. We further design, implement and evaluate VeriVis, a
scalable methodology that can verify a diverse set of safety properties for
state-of-the-art computer vision systems with only blackbox access. VeriVis
leverages different input-space reduction techniques for efficient verification
of different safety properties. VeriVis is able to find thousands of safety
violations in fifteen state-of-the-art computer vision systems including ten
Deep Neural Networks (DNNs) such as Inception-v3 and Nvidia's Dave self-driving
system with thousands of neurons as well as five commercial third-party vision
APIs including Google vision and Clarifai for twelve different safety
properties. Furthermore, VeriVis can successfully verify local safety
properties, on average, for around 31.7% of the test images. VeriVis finds up
to 64.8x more violations than existing gradient-based methods that, unlike
VeriVis, cannot ensure non-existence of any violations. Finally, we show that
retraining using the safety violations detected by VeriVis can reduce the
average number of violations by up to 60.2%.
Comment: 16 pages, 11 tables, 11 figures
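The blackbox verification idea described above can be sketched in miniature: enumerate a discrete, exhaustively coverable slice of the input space (here, 90-degree rotations of an image) and check that the classifier's prediction is invariant, recording every transformation that flips it. The function names and the toy transformation below are illustrative, not VeriVis's actual API or reduction techniques.

```python
# Minimal sketch of blackbox safety checking by exhaustive enumeration
# of a reduced input space (illustrative names; not the VeriVis API).

def rotate90(image, k):
    """Rotate a 2D list-of-lists image by k * 90 degrees counterclockwise."""
    for _ in range(k % 4):
        image = [list(row) for row in zip(*image)][::-1]
    return image

def verify_rotation_safety(classify, image, rotations):
    """Check that classify() gives the same label for every listed rotation.

    classify is treated as a pure blackbox: we only observe its outputs.
    Returns the rotations (in units of 90 degrees) that change the prediction;
    an empty list means the local safety property holds for this input.
    """
    baseline = classify(image)
    violations = []
    for k in rotations:
        if classify(rotate90(image, k)) != baseline:
            violations.append(k)
    return violations
```

Because the reduced input space is enumerated completely, an empty result certifies the absence of violations for that property and input, which is what gradient-based attack methods cannot guarantee.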
Multi-Adversarial Domain Adaptation
Recent advances in deep domain adaptation reveal that adversarial learning
can be embedded into deep networks to learn transferable features that reduce
distribution discrepancy between the source and target domains. Existing domain
adversarial adaptation methods based on single domain discriminator only align
the source and target data distributions without exploiting the complex
multimode structures. In this paper, we present a multi-adversarial domain
adaptation (MADA) approach, which captures multimode structures to enable
fine-grained alignment of different data distributions based on multiple domain
discriminators. The adaptation can be achieved by stochastic gradient descent,
with the gradients computed by back-propagation in linear time. Empirical
evidence demonstrates that the proposed model outperforms state-of-the-art
methods on standard domain adaptation datasets.
Comment: AAAI 2018 Oral. arXiv admin note: substantial text overlap with
arXiv:1705.10667, arXiv:1707.0790
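The multi-discriminator objective can be sketched as follows. A plausible reading (an assumption on my part; the abstract does not spell out the mechanism) is that each of the K domain discriminators is associated with one class, and each sample's feature is softly gated by the label predictor's probability for that class before being scored, so alignment is fine-grained per mode rather than global.

```python
import numpy as np

def multi_adversarial_loss(features, class_probs, discriminators, domain_labels):
    """Sketch of a multi-discriminator domain loss (illustrative, not MADA's code).

    features:       (n, d) array of deep features
    class_probs:    (n, K) softmax outputs of the label predictor
    discriminators: list of K callables mapping (n, d) -> (n,) scores in (0, 1),
                    each interpreted as the predicted P(domain = source)
    domain_labels:  (n,) array with 1 = source sample, 0 = target sample
    """
    n, K = class_probs.shape
    total = 0.0
    for k, disc in enumerate(discriminators):
        # Soft class-wise gating: discriminator k mostly sees samples
        # the label predictor assigns to class k.
        weighted = class_probs[:, k:k + 1] * features
        p = np.clip(disc(weighted), 1e-7, 1 - 1e-7)
        # Binary cross-entropy against the true domain label; the feature
        # extractor would be trained to *maximize* this via a gradient
        # reversal layer, which keeps the cost of back-propagation linear.
        total += -np.mean(domain_labels * np.log(p)
                          + (1 - domain_labels) * np.log(1 - p))
    return total / K
```

In a full training loop the discriminators minimize this loss while the feature extractor maximizes it, so each class-conditional mode of the source and target distributions is aligned separately.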