Advanced Mid-Water Tools for 4D Marine Data Fusion and Visualization
Mapping and charting of the seafloor underwent a revolution approximately 20 years ago with the introduction of multibeam sonars -- sonars that provide complete, high-resolution coverage of the seafloor rather than sparse measurements. The initial focus of these sonar systems was the charting of depths in support of safety of navigation and offshore exploration; more recently, innovations in processing software have led to approaches for characterizing seafloor type and mapping seafloor habitat in support of fisheries research. In recent years, a new generation of multibeam sonars has been developed that, for the first time, can map the water column along with the seafloor. This ability will potentially allow multibeam sonars to address a number of critical ocean problems, including the direct mapping of fish and marine mammals, the location of mid-water targets and, if water column properties are appropriate, a wide range of physical oceanographic processes. Realizing this potential requires suitable software that makes use of all of the newly available data. Currently, the users of these sonars have a limited view of the mid-water data in real time and limited capacity to store it, replay it, or run further analysis. These data must also be integrated with other sensor products such as bathymetry, backscatter, sub-bottom profiles, and seafloor characterizations so that a “complete” picture of the marine environment under analysis can be realized. Software tools developed for this type of data integration should support a wide range of sonars through a unified format for the wide variety of mid-water sonar types. This paper describes the evolution and result of an effort to create a software tool that meets these needs, and details case studies using the new tools in the areas of fisheries research, static target search, wreck surveys and physical oceanographic processes.
Predicting Properties of Quantum Systems with Conditional Generative Models
Machine learning has emerged recently as a powerful tool for predicting
properties of quantum many-body systems. For many ground states of gapped
Hamiltonians, generative models can learn from measurements of a single quantum
state to reconstruct the state accurately enough to predict local observables.
Alternatively, classification and regression models can predict local
observables by learning from measurements on different but related states. In
this work, we combine the benefits of both approaches and propose the use of
conditional generative models to simultaneously represent a family of states,
learning shared structures of different quantum states from measurements. The
trained model enables us to predict arbitrary local properties of ground
states, even for states not included in the training data, without
necessitating further training for new observables. We first numerically
validate our approach on 2D random Heisenberg models using simulations of up to
45 qubits. Furthermore, we conduct quantum simulations on a neutral-atom
quantum computer and demonstrate that our method can accurately predict the
quantum phases of square lattices of 13 × 13 Rydberg atoms.
Comment: 10 pages, 14 figures, 5 pages appendix. Open-source code is available
at https://github.com/PennyLaneAI/generative-quantum-state
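The idea of learning shared structure across a family of states can be illustrated with a deliberately tiny stand-in for the paper's conditional generative model: below, single-qubit states |ψ(θ)⟩ = cos θ|0⟩ + sin θ|1⟩ play the role of the state family, a least-squares fit on measurement frequencies plays the role of the trained model, and the model then predicts ⟨Z⟩ for a θ not seen in training. All names and the model form are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_z(theta, shots=2000):
    """Empirical P(outcome 0) from Z-basis shots on
    |psi(theta)> = cos(theta)|0> + sin(theta)|1>."""
    p0 = np.cos(theta) ** 2
    return rng.binomial(shots, p0) / shots

# "Training data": measurements on different but related states.
train_thetas = np.linspace(0.0, np.pi / 2, 9)
train_p0 = np.array([measure_z(t) for t in train_thetas])

# Fit a shared model of the whole family: P(0 | theta) ~ a*cos^2(theta) + b.
X = np.stack([np.cos(train_thetas) ** 2, np.ones_like(train_thetas)], axis=1)
coef, *_ = np.linalg.lstsq(X, train_p0, rcond=None)

# Predict <Z> = 2*P(0) - 1 for a state NOT in the training set.
theta_new = 0.3
p0_pred = coef @ np.array([np.cos(theta_new) ** 2, 1.0])
z_pred = 2 * p0_pred - 1
z_true = 2 * np.cos(theta_new) ** 2 - 1
print(abs(z_pred - z_true) < 0.05)
```

The point mirrored from the abstract: once a model of the family is trained, local observables of unseen member states come for free, with no retraining per observable.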
RAB: Provable Robustness Against Backdoor Attacks
Recent studies have shown that deep neural networks (DNNs) are vulnerable to
adversarial attacks, including evasion and backdoor (poisoning) attacks. On the
defense side, there have been intensive efforts on improving both empirical and
provable robustness against evasion attacks; however, provable robustness
against backdoor attacks still remains largely unexplored. In this paper, we
focus on certifying the machine learning model robustness against general
threat models, especially backdoor attacks. We first provide a unified
framework via randomized smoothing techniques and show how it can be
instantiated to certify the robustness against both evasion and backdoor
attacks. We then propose the first robust training process, RAB, to smooth the
trained model and certify its robustness against backdoor attacks. We derive
the robustness bound for machine learning models trained with RAB, and prove
that our robustness bound is tight. In addition, we show that it is possible to
train the robust smoothed models efficiently for simple models such as
K-nearest neighbor classifiers, and we propose an exact smooth-training
algorithm which eliminates the need to sample from a noise distribution for
such models. Empirically, we conduct comprehensive experiments for different
machine learning (ML) models such as DNNs, differentially private DNNs, and
K-NN models on MNIST, CIFAR-10 and ImageNet datasets, and provide the first
benchmark for certified robustness against backdoor attacks. In addition, we
evaluate K-NN models on a spambase tabular dataset to demonstrate the
advantages of the proposed exact algorithm. Both the theoretic analysis and the
comprehensive evaluation on diverse ML models and datasets shed lights on
further robust learning strategies against general training-time attacks.
Comment: 31 pages, 5 figures, 7 tables
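RAB instantiates its certificates via randomized smoothing. As a hedged sketch of the underlying mechanism, the snippet below computes the classical Cohen-style L2 certified radius of a smoothed classifier from its top-class probability; RAB's actual backdoor bound differs (it smooths over the training set), so treat this as the generic smoothing certificate only.

```python
from statistics import NormalDist

def certified_radius(p_a: float, sigma: float) -> float:
    """Generic randomized-smoothing certificate: if the smoothed
    classifier assigns probability p_a > 1/2 to the top class under
    Gaussian noise N(0, sigma^2), its prediction is unchanged for any
    perturbation with L2 norm below sigma * Phi^{-1}(p_a)."""
    if p_a <= 0.5:
        return 0.0  # majority not established: no certificate
    return sigma * NormalDist().inv_cdf(p_a)

# Example: top-class probability 0.9 under noise with sigma = 0.5.
r = certified_radius(0.9, 0.5)
print(round(r, 3))  # → 0.641
```

Higher confidence in the smoothed majority vote (larger p_a) or stronger smoothing noise (larger sigma) both enlarge the certified region, which is the trade-off RAB navigates at training time.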
Certifying Out-of-Domain Generalization for Blackbox Functions
Certifying the robustness of model performance under bounded data
distribution drifts has recently attracted intensive interest under the
umbrella of distributional robustness. However, existing techniques either make
strong assumptions on the model class and loss functions that can be certified,
such as smoothness expressed via Lipschitz continuity of gradients, or require
to solve complex optimization problems. As a result, the wider application of
these techniques is currently limited by their scalability and flexibility --
they often do not scale to large-scale datasets with modern deep neural
networks or cannot handle non-smooth loss functions such as the 0-1 loss. In
this paper, we focus on the problem of certifying
distributional robustness for blackbox models and bounded loss functions, and
propose a novel certification framework based on the Hellinger distance. Our
certification technique scales to ImageNet-scale datasets, complex models, and
a diverse set of loss functions. We then focus on one specific application
enabled by such scalability and flexibility, i.e., certifying out-of-domain
generalization for large neural networks and loss functions such as accuracy
and AUC. We experimentally validate our certification method on a number of
datasets, ranging from ImageNet, where we provide the first non-vacuous
certified out-of-domain generalization, to smaller classification tasks where
we are able to compare with the state-of-the-art and show that our method
performs considerably better.
Comment: 39th International Conference on Machine Learning (ICML) 2022
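To make the Hellinger-distance setting concrete, the sketch below computes the Hellinger distance between two discrete distributions and a crude certified upper bound on the shifted expected loss, using only the classical inequality TV(P, Q) ≤ √2·H(P, Q) for bounded losses. This is not the paper's tighter blackbox certificate; it is a simpler bound of the same shape, with all distributions and losses invented for illustration.

```python
import math
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def loss_shift_bound(exp_loss_p, h, lo=0.0, hi=1.0):
    """Crude upper bound on E_Q[loss] for a loss in [lo, hi] when
    H(P, Q) <= h, via total variation: TV <= sqrt(2) * H."""
    tv = min(1.0, math.sqrt(2) * h)
    return min(hi, exp_loss_p + (hi - lo) * tv)

p = [0.5, 0.3, 0.2]          # source distribution P
q = [0.4, 0.4, 0.2]          # shifted distribution Q
h = hellinger(p, q)

loss = np.array([0.1, 0.3, 0.8])  # bounded per-outcome loss in [0, 1]
e_p = float(np.dot(p, loss))
e_q = float(np.dot(q, loss))
bound = loss_shift_bound(e_p, h)
print(e_q <= bound)  # the certified upper bound holds
```

Because the bound needs only the source-domain loss, the distance budget, and the loss range, it treats the model entirely as a blackbox, which is the property the abstract emphasizes.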
European institutions?
© 2016 The British Society for Phenomenology. The aim of this article is to sketch a phenomenological theory of political institutions and to apply it to some objections and questions raised by Pierre Manent about the project of the European Union and more specifically the question of “European Construction”, i.e. what is the aim of the European Project. Such a theory of political institutions is nested within a broader phenomenological account of institutions, dimensions of which I have tried to elaborate elsewhere. As a working conceptual delineation, we can describe institutions as (relatively) stable meaning structures. As such, the definition encompasses phenomena like the European Commission, Belgium, marriage, the Dollar, the Labour Party, but also political subjects themselves. In order to develop said theory of institutions, I will draw primarily upon resources in the work of Maurice Merleau-Ponty and John Searle
TSS: Transformation-Specific Smoothing for Robustness Certification
As machine learning (ML) systems become pervasive, safeguarding their
security is critical. However, recently it has been demonstrated that motivated
adversaries are able to mislead ML systems by perturbing test data using
semantic transformations. While there exists a rich body of research providing
provable robustness guarantees for ML models against norm-bounded
adversarial perturbations, guarantees against semantic perturbations remain
largely underexplored. In this paper, we provide TSS -- a unified framework for
certifying ML robustness against general adversarial semantic transformations.
First, depending on the properties of each transformation, we divide common
transformations into two categories, namely resolvable (e.g., Gaussian blur)
and differentially resolvable (e.g., rotation) transformations. For the former,
we propose transformation-specific randomized smoothing strategies and obtain
strong robustness certification. The latter category covers transformations
that involve interpolation errors, and we propose a novel approach based on
stratified sampling to certify the robustness. Our framework TSS leverages
these certification strategies and combines with consistency-enhanced training
to provide rigorous certification of robustness. We conduct extensive
experiments on over ten types of challenging semantic transformations and show
that TSS significantly outperforms the state of the art. Moreover, to the best
of our knowledge, TSS is the first approach that achieves nontrivial certified
robustness on the large-scale ImageNet dataset. For instance, our framework
achieves 30.4% certified robust accuracy against rotation attacks within a bounded angle on ImageNet. Moreover, to consider a broader range of
transformations, we show TSS is also robust against adaptive attacks and
unforeseen image corruptions such as those in CIFAR-10-C and ImageNet-C.
Comment: 2021 ACM SIGSAC Conference on Computer and Communications Security
(CCS '21)
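For a resolvable transformation in the abstract's sense, smoothing means classifying many randomly transformed copies of the input and taking a majority vote. The toy below does exactly that for an additive brightness shift on a scalar "image" with a threshold classifier; the classifier, sigma, and sample count are all illustrative assumptions, not TSS's certified pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def base_classifier(x):
    """Toy base classifier: class 1 if mean intensity is positive."""
    return int(np.mean(x) > 0.0)

def smoothed_predict(x, sigma=0.5, n=1000):
    """Transformation-specific smoothing for a resolvable transformation
    (additive brightness shift): classify n randomly shifted copies and
    return the majority class with its empirical vote share."""
    votes = np.array([base_classifier(x + rng.normal(0.0, sigma))
                      for _ in range(n)])
    p1 = votes.mean()
    return (1, p1) if p1 >= 0.5 else (0, 1.0 - p1)

x = np.array([0.8, 1.2, 1.0])  # mean 1.0, well inside class 1
cls, p = smoothed_predict(x)
print(cls, p > 0.9)
```

The vote share p is what a certification procedure would then lower-bound statistically to obtain a guaranteed range of transformation parameters; differentially resolvable transformations such as rotation additionally require handling interpolation error, which the abstract addresses via stratified sampling.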
Toward reliability in the NISQ era: robust interval guarantee for quantum measurements on approximate states
Near-term quantum computation holds potential across multiple application domains. However, imperfect preparation and evolution of states due to algorithmic and experimental shortcomings, characteristic of near-term implementations, would typically result in measurement outcomes deviating from the ideal setting. It is thus crucial for any near-term application to quantify and bound these output errors. We address this need by deriving robustness intervals which are guaranteed to contain the output in the ideal setting. The first type of interval is based on formulating robustness bounds as semidefinite programs, and uses only the first moment and the fidelity to the ideal state. Furthermore, we consider higher statistical moments of the observable and generalize bounds for pure states based on the non-negativity of Gram matrices to mixed states, thus enabling their applicability in the NISQ era where noisy scenarios are prevalent. Finally, we demonstrate our results in the context of the variational quantum eigensolver (VQE) on noisy and noiseless simulations.
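The notion of an interval guaranteed to contain the ideal expectation value can be demonstrated by brute force on one qubit: enumerate pure states whose fidelity with the noisy state exceeds a threshold, and record the min/max expectation of the observable over that feasible set. This is a numerical illustration of the concept, not the paper's SDP or Gram-matrix construction; the depolarizing-noise example is an assumption.

```python
import numpy as np

def robustness_interval(rho, obs, fid_min, n_grid=120):
    """Brute-force interval: min/max <psi|obs|psi> over pure qubit
    states psi with fidelity <psi|rho|psi> >= fid_min. Any ideal state
    consistent with the fidelity bound has its expectation inside."""
    lo, hi = np.inf, -np.inf
    for theta in np.linspace(0, np.pi, n_grid):
        for phi in np.linspace(0, 2 * np.pi, n_grid, endpoint=False):
            psi = np.array([np.cos(theta / 2),
                            np.exp(1j * phi) * np.sin(theta / 2)])
            if np.real(np.conj(psi) @ rho @ psi) >= fid_min:
                val = np.real(np.conj(psi) @ obs @ psi)
                lo, hi = min(lo, val), max(hi, val)
    return lo, hi

Z = np.diag([1.0, -1.0])
ideal = np.array([1.0, 0.0])                 # ideal state |0>, <Z> = +1
noisy = 0.9 * np.outer(ideal, ideal) + 0.1 * np.eye(2) / 2  # depolarized
lo, hi = robustness_interval(noisy, Z, fid_min=0.9)
print(lo <= 1.0 <= hi)  # interval contains the ideal expectation
```

Tighter intervals are exactly what the abstract's semidefinite-program and higher-moment bounds deliver without this exhaustive search, which would be infeasible beyond a few qubits.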