Adversarial Network Bottleneck Features for Noise Robust Speaker Verification
In this paper, we propose a noise-robust bottleneck feature representation generated by an adversarial network (AN). The AN consists of two cascade-connected networks, an encoding network (EN) and a discriminative network (DN). Mel-frequency cepstral coefficients (MFCCs) of clean and noisy speech are used as input to the EN, and the output of the EN is used as the noise-robust feature. The EN and DN are trained alternately: when training the DN, noise types are used as the training labels, and when training the EN, all labels are set to the same value, i.e., the clean speech label, which aims to make the AN features invariant to noise and thus achieve noise robustness. We evaluate the performance of the proposed feature on a Gaussian Mixture Model-Universal Background Model based speaker verification system, and compare it to MFCC features of speech enhanced by the short-time spectral amplitude minimum mean square error (STSA-MMSE) and deep neural network-based speech enhancement (DNN-SE) methods. Experimental results on the RSR2015 database show that the proposed AN bottleneck feature (AN-BN) dramatically outperforms the STSA-MMSE and DNN-SE based MFCCs for different noise types and signal-to-noise ratios. Furthermore, the AN-BN feature is able to improve speaker verification performance under the clean condition.
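The alternating training scheme described above can be sketched with a toy linear EN and a softmax DN. All dimensions, data, and the learning rate below are illustrative assumptions, not the paper's configuration; the real EN and DN are deep networks trained on MFCC frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "MFCC" frames from 3 conditions: label 0 = clean, 1-2 = noise types.
n_per, d_in, d_bn, n_cond = 100, 10, 4, 3
X = np.concatenate([rng.normal(c, 1.0, (n_per, d_in)) for c in range(n_cond)])
y = np.repeat(np.arange(n_cond), n_per)

W_en = rng.normal(0, 0.1, (d_in, d_bn))    # encoding network (EN), linear sketch
W_dn = rng.normal(0, 0.1, (d_bn, n_cond))  # discriminative network (DN)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for step in range(200):
    # Phase 1: train the DN to recognise the true noise type.
    H = X @ W_en                     # bottleneck (AN-BN) features
    P = softmax(H @ W_dn)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0   # softmax cross-entropy gradient
    W_dn -= lr * H.T @ G / len(y)
    # Phase 2: train the EN with every label set to "clean" (label 0),
    # pushing the bottleneck features toward noise invariance.
    P = softmax((X @ W_en) @ W_dn)
    G = P.copy()
    G[:, 0] -= 1.0
    W_en -= lr * X.T @ (G @ W_dn.T) / len(y)

P = softmax((X @ W_en) @ W_dn)
clean_conf = P[:, 0].mean()
print(f"mean 'clean' posterior after alternating training: {clean_conf:.2f}")
```

In the paper's setup the EN output (not the DN posterior) is then fed to the GMM-UBM verification backend; the sketch only shows how the two label schemes drive the two training phases.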
Compiling symbolic attacks to protocol implementation tests
Recently, efficient model-checking tools have been developed to find flaws in security protocol specifications. These flaws can be interpreted as potential attack scenarios, but the feasibility of these scenarios needs to be confirmed at the implementation level. However, bridging the gap between an abstract attack scenario derived from a specification and a penetration test on real implementations of a protocol is still an open issue. This work investigates an architecture for automatically generating abstract attacks and converting them to concrete tests on protocol implementations. In particular, we aim to improve previously proposed black-box testing methods in order to automatically discover new attacks and vulnerabilities. As a proof of concept, we have applied our proposed architecture to detect a renegotiation vulnerability in some implementations of SSL/TLS, a protocol widely used for securing electronic transactions. Comment: In Proceedings SCSS 2012, arXiv:1307.802
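The compilation step from an abstract attack scenario to a concrete test might be sketched as follows. The message symbols, the payload bytes, and the `compile_trace` helper are hypothetical stand-ins for illustration, not the paper's actual toolchain; real payloads would be built by a protocol library.

```python
from dataclasses import dataclass

@dataclass
class Step:
    sender: str
    receiver: str
    msg: str  # abstract message symbol as produced by a model checker

# Illustrative renegotiation-style abstract trace (not a real TLS trace).
abstract_trace = [
    Step("client", "server", "client_hello"),
    Step("server", "client", "server_hello"),
    Step("attacker", "server", "renegotiate"),
]

# Concretisation table: abstract symbol -> bytes sent on the wire.
# Stand-in payloads; a real test driver would emit well-formed records.
concretize = {
    "client_hello": b"CH",
    "server_hello": b"SH",
    "renegotiate": b"RN",
}

def compile_trace(trace):
    """Turn an abstract attack scenario into concrete send actions."""
    return [(s.sender, s.receiver, concretize[s.msg]) for s in trace]

concrete = compile_trace(abstract_trace)
print(concrete)
```

The point of the sketch is the separation of concerns: the model checker works over abstract symbols, and a fixed mapping turns the resulting scenario into bytes that can be replayed against a live implementation.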
Verification of the linear matching method for limit and shakedown analysis by comparison with experiments
The Linear Matching Method (LMM), a direct numerical method for determining shakedown and ratchet limits of components, has seen significant development in recent years. Previous verifications of these developments against cyclic nonlinear finite element analysis have shown favourable results, and now this verification process is being extended to include comparisons with experimental results. This paper presents a comparison of LMM analysis with experimental tests for limit loads and shakedown limits available in the literature. The limit loads and shakedown limits were determined for pipe intersections and nozzle-sphere intersections respectively, thus testing the accuracy of the LMM when analysing real plant components. Details of the component geometries, materials and test procedures used in the experiments are given. Following this, the LMM analysis is described, including how these features have been interpreted for numerical analysis. A comparison of the results shows that the LMM is capable of predicting accurate yet conservative limit loads and shakedown limits.
Verification of the Tree-Based Hierarchical Read-Copy Update in the Linux Kernel
Read-Copy Update (RCU) is a scalable, high-performance Linux-kernel
synchronization mechanism that runs low-overhead readers concurrently with
updaters. Production-quality RCU implementations for multi-core systems are
decidedly non-trivial. Given the ubiquity of Linux, a rare "million-year" bug
can occur several times per day across the installed base. Stringent validation
of RCU's complex behaviors is thus critically important. Exhaustive testing is
infeasible due to the exponential number of possible executions, which suggests
use of formal verification.
Previous verification efforts on RCU either focus on simple implementations
or use modeling languages, the latter requiring error-prone manual translation
that must be repeated frequently due to regular changes in the Linux kernel's
RCU implementation. In this paper, we first describe the implementation of Tree
RCU in the Linux kernel. We then discuss how to construct a model directly from
Tree RCU's source code in C, and use the CBMC model checker to verify its
safety and liveness properties. To the best of our knowledge, this is the first
verification of a significant part of RCU's source code, and is an important
step towards integration of formal verification into the Linux kernel's
regression test suite. Comment: This is a long version of a conference paper published in the 2018
Design, Automation and Test in Europe Conference (DATE
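The claim that exhaustive testing is infeasible follows from simple counting: the number of distinct schedules of several concurrent threads is a multinomial coefficient that grows explosively with the number of atomic steps. A quick illustration, not tied to RCU's actual code:

```python
from math import comb

def interleavings(steps_per_thread, threads=2):
    """Number of distinct schedules for `threads` threads, each executing
    `steps_per_thread` atomic steps: a multinomial coefficient
    (n1+...+nk)! / (n1! * ... * nk!), built up incrementally."""
    total, count = 0, 1
    for _ in range(threads):
        total += steps_per_thread
        count *= comb(total, steps_per_thread)
    return count

# Even tiny two-thread programs have huge schedule spaces:
for n in (5, 10, 20):
    print(f"{n:2d} steps/thread: {interleavings(n):,} schedules")
```

With only 20 steps per thread there are over 10^11 schedules; this is why the paper turns to bounded model checking (CBMC), which explores the state space symbolically rather than one execution at a time.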
On Diagnosis of Forwarding Plane via Static Forwarding Rules in Software Defined Networks
Software Defined Networks (SDN) decouple the forwarding and control planes
from each other. The control plane is assumed to have a global knowledge of the
underlying physical and/or logical network topology so that it can monitor,
abstract and control the forwarding plane. In our paper, we present solutions
that install an optimal or near-optimal (i.e., within 14% of the optimal)
number of static forwarding rules on switches/routers so that any controller
can verify the topology connectivity and detect/locate link failures at data
plane speeds without relying on state updates from other controllers. Our upper
bounds on performance indicate that sub-second link failure localization is
possible even in data-center-scale networks. For networks with hundreds or a few thousand links, tens of milliseconds of latency is achievable. Comment: Submitted to Infocom'14, 9 page
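The idea of localizing a link failure from static probe rules can be illustrated on a toy topology. The links, probe paths, and the `localize` helper below are invented for illustration (the paper's rule-placement algorithm is more involved): the key property is that each link breaks a distinct set of probes, so the failure signature maps back to exactly one link.

```python
# Toy topology: links identified by their endpoint pair.
links = {"AB", "BC", "CD", "DA", "AC"}

# Static probe paths installed as forwarding rules; each probe
# traverses a fixed set of links at data-plane speed.
probes = {
    "p1": {"AB", "BC"},
    "p2": {"BC", "CD"},
    "p3": {"CD", "DA"},
    "p4": {"AC"},
    "p5": {"AB", "AC"},
}

def signature(failed_link):
    """The set of probes that stop delivering when `failed_link` goes down."""
    return frozenset(p for p, path in probes.items() if failed_link in path)

# Precomputed table a controller would use for localization.
table = {signature(link): link for link in links}

def localize(failed_probes):
    """Map an observed set of failed probes back to the broken link."""
    return table.get(frozenset(failed_probes))

# Every single-link failure produces a distinct, non-empty signature,
# so localization needs no state updates from other controllers.
assert len(table) == len(links)
print(localize({"p2", "p3"}))
```

Localization here is a single dictionary lookup on the observed failure signature, which is what makes sub-second detection plausible even at scale.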
Generating feasible transition paths for testing from an extended finite state machine (EFSM)
The problem of testing from an extended finite state machine (EFSM) can be expressed in terms of finding suitable paths through the EFSM and then deriving test data to follow the paths. A chosen path may be infeasible, and so it is desirable to have methods that can direct the search for appropriate paths through the EFSM towards those that are likely to be feasible. However, generating feasible transition paths (FTPs) for model-based testing is a challenging task and is an open research problem. This paper introduces a novel fitness metric that analyzes data-flow dependence among the actions and conditions of the transitions in order to estimate the feasibility of a transition path. The proposed fitness metric is evaluated by being used in a genetic algorithm to guide the search for FTPs.
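A drastically simplified feasibility-oriented fitness metric might look like the sketch below. The `Transition` representation and the constant-conflict penalty are illustrative assumptions; the paper's metric analyzes full data-flow dependence among transition actions and guard conditions.

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    name: str
    guard: dict                                   # var -> required value (simplified guard)
    assigns: dict = field(default_factory=dict)   # var -> value written by the action

# Illustrative EFSM fragment.
t1 = Transition("t1", guard={}, assigns={"x": 1})
t2 = Transition("t2", guard={"x": 1}, assigns={"x": 2})
t3 = Transition("t3", guard={"x": 1})

def fitness(path):
    """Lower is better: count guard conditions contradicted by the most
    recent assignment to the same variable earlier on the path."""
    env, penalty = {}, 0
    for t in path:
        for var, want in t.guard.items():
            if var in env and env[var] != want:
                penalty += 1
        env.update(t.assigns)
    return penalty

print(fitness([t1, t2]), fitness([t1, t2, t3]))
```

A genetic algorithm can then minimize this penalty over candidate paths, steering the search toward paths whose guards are consistent with the data flowing into them.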