Blind Restoration of Real-World Audio by 1D Operational GANs
Objective: Despite the numerous studies on audio restoration in the
literature, most focus on an isolated restoration problem, such as denoising
or dereverberation, and ignore other artifacts. Moreover, it is common
practice to assume a noisy or reverberant environment with a limited number of
fixed signal-to-distortion ratio (SDR) levels. However, real-world audio is
often corrupted by a blend of artifacts such as reverberation, sensor noise,
and background audio mixtures with varying types, severities, and durations.
In this study, we propose a novel approach for the blind restoration of
real-world audio signals by Operational Generative Adversarial Networks
(Op-GANs) with temporal and spectral objective metrics to enhance the quality
of the restored audio signal regardless of the type and severity of each
artifact corrupting it. Methods: 1D Operational GANs with a generative neuron
model are optimized for the blind restoration of any corrupted audio signal.
Results: The proposed approach has been evaluated extensively over the
benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets, corrupted
with a random blend of artifacts, each with a random severity, to mimic
real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB
are achieved, respectively, which are substantial compared with the baseline
methods.
Significance: This is a pioneering study in blind audio restoration with the
unique capability of direct (time-domain) restoration of real-world audio,
achieving an unprecedented level of performance across a wide range of SDR
levels and artifact types. Conclusion: 1D Op-GANs can achieve robust and
computationally efficient real-world audio restoration with significantly
improved performance. The source codes and the generated real-world audio
datasets are shared publicly with the research community in a dedicated GitHub
repository.
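The reported gains are expressed as SDR improvements. A minimal sketch of how an SDR improvement can be computed from a clean reference, its corrupted version, and a restored estimate (the signals and function names below are illustrative, not taken from the paper):

```python
import numpy as np

def sdr_db(reference, estimate):
    # Signal-to-distortion ratio in dB: energy of the clean reference
    # relative to the energy of the residual (estimate - reference).
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    residual = estimate - reference
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(residual**2))

# Synthetic example: restoration reduces additive noise from 0.5 to 0.1 std.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
corrupted = clean + 0.5 * rng.standard_normal(16000)
restored = clean + 0.1 * rng.standard_normal(16000)
improvement = sdr_db(clean, restored) - sdr_db(clean, corrupted)
```

With these noise levels the improvement is roughly 20·log10(0.5/0.1) ≈ 14 dB, illustrating the scale on which the paper's 7.2 dB and 4.9 dB averages are reported.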
Addressing Dataset Bias in Deep Neural Networks
Deep Learning has achieved tremendous success in recent years in several areas, such as image classification, text translation, and autonomous agents. Deep Neural Networks are able to learn non-linear features in a data-driven fashion from complex, large-scale datasets to solve tasks. However, some fundamental issues remain unresolved: the kind of data provided to a neural network directly influences its capability to generalize. This is especially true when training and test data come from different distributions (the so-called domain gap, or domain shift, problem): in this case, the neural network may learn a data representation that is representative of the training data but not of the test data, thus performing poorly when deployed in actual scenarios. The domain gap problem is addressed by so-called Domain Adaptation, for which a large literature has recently developed.
In this thesis, we first present a novel method to perform Unsupervised Domain Adaptation. Starting from the typical scenario in which we have labeled source distributions and an unlabeled target distribution, we pursue a pseudo-labeling approach to assign a label to the target data, and then, in an iterative way, we refine these labels using Generative Adversarial Networks.
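The pseudo-labeling step described above can be sketched as follows: a source-trained classifier labels target samples only where it is confident, and the rest are left unlabeled for later refinement. The threshold and function names are illustrative assumptions, not the thesis's actual procedure:

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    # Assign each unlabeled target sample the class with the highest
    # predicted probability, but only when that probability exceeds the
    # confidence threshold; return -1 for uncertain samples.
    probs = np.asarray(probs, dtype=float)
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, labels, -1)

# Example: softmax outputs of a source-trained classifier on three
# target samples (two confident, one ambiguous).
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
labels = pseudo_label(probs)  # → [0, -1, 1]
```

In an iterative scheme, the model would be retrained on the confidently labeled subset and the thresholding repeated as predictions improve.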
Subsequently, we faced the debiasing problem. Simply put, bias occurs when there are factors in the data that are spuriously correlated with the task label, e.g., the background, which might be a strong clue to guessing which class is depicted in an image. When this happens, neural networks may erroneously learn such spurious correlations as predictive factors, and may therefore fail when deployed in different scenarios. Learning a debiased model can be done using supervision regarding the type of bias affecting the data, or without any annotation of the spurious correlations.
We tackled the problem of supervised debiasing -- where a ground-truth annotation for the bias is given -- through the lens of information theory.
We designed a neural network architecture that learns to solve the task while, at the same time, achieving statistical independence of the data embedding with respect to the bias label.
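One simple way to see what "statistical independence of the embedding from the bias label" asks for is a cross-covariance penalty: if the embedding and the bias are independent, their cross-covariance is zero. The sketch below is an illustrative proxy only; the thesis's actual objective is information-theoretic, and the names here are assumptions:

```python
import numpy as np

def cross_covariance_penalty(embeddings, bias_onehot):
    # Squared Frobenius norm of the cross-covariance between the data
    # embedding and the (one-hot) bias label. Zero cross-covariance is a
    # necessary (though not sufficient) condition for independence, so
    # minimizing this term pushes the embedding away from the bias.
    z = embeddings - embeddings.mean(axis=0)
    b = bias_onehot - bias_onehot.mean(axis=0)
    cov = z.T @ b / len(z)
    return float(np.sum(cov**2))

# Toy check: an embedding that copies the bias is penalized; a constant
# embedding (carrying no bias signal) is not.
b = np.eye(2)[[0, 0, 1, 1]]      # bias labels for four samples
z_biased = b.copy()              # embedding fully determined by the bias
z_neutral = np.ones((4, 2))      # embedding with no bias information
```

Such a penalty would be added to the task loss during training, trading off task accuracy against bias independence.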
We finally addressed the unsupervised debiasing problem, in which no bias annotation is available. We address this challenging problem with a two-stage approach: we first coarsely split the training dataset into two subsets, samples that exhibit spurious correlations and those that do not. Second, we learn a feature representation that can accommodate both subsets and an augmented version of them.
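The coarse first-stage split above can be sketched with a common heuristic: samples that a bias-prone reference model fits easily (low loss) are treated as bias-aligned, the rest as bias-conflicting. The quantile cutoff and names are illustrative assumptions, not the thesis's criterion:

```python
import numpy as np

def split_by_loss(losses, quantile=0.5):
    # Coarse split of the training set: low-loss samples under a
    # bias-prone reference model are flagged as bias-aligned (likely to
    # exhibit the spurious correlation), high-loss ones as
    # bias-conflicting.
    losses = np.asarray(losses, dtype=float)
    cut = np.quantile(losses, quantile)
    aligned = losses <= cut
    return aligned, ~aligned

# Per-sample losses for four training examples.
losses = np.array([0.1, 0.2, 1.5, 2.0])
aligned, conflicting = split_by_loss(losses)
```

The second stage would then train a representation on both subsets (and augmented versions of them) so that neither group dominates the learned features.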
Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability
A trustworthy reinforcement learning algorithm should be competent in solving
challenging real-world problems, including {robustly} handling uncertainties,
satisfying {safety} constraints to avoid catastrophic failures, and
{generalizing} to unseen scenarios during deployment. This study aims to
survey these main perspectives of trustworthy reinforcement learning,
considering its intrinsic vulnerabilities in robustness, safety, and
generalizability. In particular, we give rigorous formulations, categorize
corresponding methodologies, and discuss benchmarks for each perspective.
Moreover, we provide an outlook section to spur promising future directions
with a brief discussion on extrinsic vulnerabilities considering human
feedback. We hope this survey brings separate threads of study together in a
unified framework and promotes the trustworthiness of reinforcement learning.
Improving the Cybersecurity of Cyber-Physical Systems Through Behavioral Game Theory and Model Checking in Practice and in Education
This dissertation presents automated methods based on behavioral game theory and model checking to improve the cybersecurity of cyber-physical systems (CPSs), and advocates teaching certain foundational principles of these methods to cybersecurity students. First, it encodes behavioral game theory's concept of level-k reasoning into an integer linear program that models a newly defined security Colonel Blotto game. This approach is designed to achieve an efficient allocation of scarce protection resources by anticipating attack allocations. A human subjects experiment based on a CPS infrastructure demonstrates its effectiveness. Next, it rigorously defines the term adversarial thinking, one of cybersecurity education's most important yet elusive learning objectives, for which no proper definition exists. It spells out what it means to think like a hacker by examining the characteristic thought processes of hackers through the lens of Sternberg's triarchic theory of intelligence. Next, a classroom experiment demonstrates that teaching basic game theory concepts to cybersecurity students significantly improves their strategic reasoning abilities. Finally, this dissertation applies the SPIN model checker to an electric power protection system and demonstrates a straightforward and effective technique for rigorously characterizing the degree of fault tolerance of complex CPSs, a key step in improving their defensive posture.
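Level-k reasoning in a Colonel Blotto game can be illustrated with a tiny brute-force sketch: a level-0 player allocates uniformly, and a level-k player best-responds to the anticipated level-(k-1) allocation. This toy enumeration is an illustrative assumption, not the dissertation's integer-linear-program encoding:

```python
def allocations(budget, fields):
    # Enumerate all integer splits of `budget` across `fields` battlefields.
    if fields == 1:
        yield (budget,)
        return
    for first in range(budget + 1):
        for rest in allocations(budget - first, fields - 1):
            yield (first,) + rest

def payoff(attacker, defender):
    # A battlefield is won by allocating strictly more resources to it.
    return sum(1 for a, d in zip(attacker, defender) if a > d)

def best_response(opponent, budget):
    # Level-k play: best-respond to the anticipated level-(k-1) allocation.
    return max(allocations(budget, len(opponent)),
               key=lambda alloc: payoff(alloc, opponent))

level0 = (2, 2, 2)                 # level-0 defender spreads uniformly
level1 = best_response(level0, 6)  # level-1 attacker concentrates forces
```

With a budget of 6 against a uniform (2, 2, 2) defense, the best response abandons one battlefield to win the other two, which is the kind of anticipatory allocation the ILP formulation scales up to realistic protection problems.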