Incorporating ancestors' influence in genetic algorithms
A new criterion of fitness evaluation for Genetic Algorithms is introduced, where the fitness value of an individual is determined by considering its own fitness as well as those of its ancestors. Guidelines are provided for selecting the weighting coefficients that quantify the importance given to the fitness of the individual and of its ancestors; this is done both heuristically and automatically, under fixed and adaptive frameworks. The Schema Theorem corresponding to the proposed concept is derived. The effectiveness of the new methodology is demonstrated extensively on problems of optimizing complex functions, including a noisy one, and of selecting optimal neural network parameters.
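A minimal sketch of the ancestor-weighted fitness idea described above (the function name, the weight normalization, and the example values are illustrative assumptions, not the paper's exact formulation):

```python
# Hypothetical sketch: an individual's effective fitness is a weighted
# combination of its own raw fitness and those of its ancestors.
# weights[0] applies to the individual, weights[1:] to the parent,
# grandparent, and so on; weights are normalized to sum to one.

def effective_fitness(raw_fitness, ancestor_fitnesses, weights):
    """Combine an individual's fitness with its ancestors' fitnesses."""
    values = [raw_fitness] + list(ancestor_fitnesses)
    total = sum(weights[:len(values)])
    return sum(w * v for w, v in zip(weights, values)) / total

# Example: individual fitness 10, parent 8, grandparent 4,
# with decaying weights 0.6, 0.3, 0.1.
print(effective_fitness(10.0, [8.0, 4.0], [0.6, 0.3, 0.1]))
```

Decaying weights of this kind give recent generations the most influence while still rewarding lineages that were consistently fit, which is the intuition behind the paper's criterion.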
SOTER: A Runtime Assurance Framework for Programming Safe Robotics Systems
The recent drive towards achieving greater autonomy and intelligence in
robotics has led to high levels of complexity. Autonomous robots increasingly
depend on third-party off-the-shelf components and complex machine-learning
techniques. This trend makes it challenging to provide strong design-time
certification of correct operation.
To address these challenges, we present SOTER, a robotics programming
framework with two key components: (1) a programming language for implementing
and testing high-level reactive robotics software and (2) an integrated runtime
assurance (RTA) system that helps enable the use of uncertified components,
while still providing safety guarantees. SOTER provides language primitives to
declaratively construct an RTA module consisting of an advanced,
high-performance controller (uncertified), a safe, lower-performance controller
(certified), and the desired safety specification. The framework provides a
formal guarantee that a well-formed RTA module always satisfies the safety
specification without sacrificing performance, since the higher-performance
uncertified component is used whenever it is safe to do so. SOTER allows a complex
robotics software stack to be constructed as a composition of RTA modules,
where each uncertified component is protected by an RTA module.
To demonstrate the efficacy of our framework, we consider a real-world
case study of building a safe drone surveillance system. Our experiments both
in simulation and on actual drones show that the SOTER-enabled RTA ensures the
safety of the system, including when untrusted third-party components have bugs
or deviate from the desired behavior.
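The switching logic at the heart of a runtime-assurance module can be sketched roughly as follows (a toy illustration under assumed names; this is not SOTER's actual API or language):

```python
# Minimal sketch of an RTA step: apply the uncertified advanced
# controller as long as a model predicts the resulting state stays
# safe; otherwise fall back to the certified safe controller.

def rta_step(state, advanced, safe, is_safe, predict):
    """Return the action to apply at this control step.

    advanced / safe : state -> action  (uncertified / certified)
    is_safe         : state -> bool    (safety specification)
    predict         : (state, action) -> next state (dynamics model)
    """
    action = advanced(state)
    if is_safe(predict(state, action)):
        return action          # performance mode: uncertified action is safe
    return safe(state)         # fallback: certified controller takes over

# Toy 1-D example: keep the position strictly below 10.
advanced = lambda s: s + 5          # aggressive, uncertified
safe = lambda s: s                  # hold position, certified
is_safe = lambda s: s < 10
predict = lambda s, a: a            # the action is the next position
print(rta_step(3, advanced, safe, is_safe, predict))  # prints 8 (advanced used)
print(rta_step(7, advanced, safe, is_safe, predict))  # prints 7 (fallback)
```

The safety guarantee comes from the fallback branch: whatever the uncertified controller proposes, only actions whose predicted successor satisfies the specification are ever applied.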
CFD Analysis of Turbo Expander for Cryogenic Refrigeration and Liquefaction Cycles
Computational Fluid Dynamics (CFD) analysis has emerged as a necessary tool for the design of turbomachinery. It helps to understand the various sources of inefficiency through investigation of the flow physics of the turbine. In this paper, a 3D turbulent flow analysis of a cryogenic turboexpander for small-scale air separation was performed using Ansys CFX®. The turboexpander was designed following assumptions based on the meanline blade generation procedure available in the open literature and good engineering judgement. Through analysis of the flow field, modifications and the further analyses required to evolve a more robust design procedure have been suggested.
SLICER: Learning universal audio representations using low-resource self-supervised pre-training
We present a new Self-Supervised Learning (SSL) approach for pre-training
encoders on unlabeled audio data that reduces the need for large amounts of
labeled data for audio and speech classification. Our primary aim is to learn
audio representations that can generalize across a large variety of speech and
non-speech tasks in a low-resource unlabeled audio pre-training setting.
Inspired by the recent success of clustering and contrastive learning paradigms
for SSL-based speech representation learning, we propose SLICER (Symmetrical
Learning of Instance and Cluster-level Efficient Representations), which brings
together the best of both the clustering and the contrastive learning paradigms. We use
a symmetric loss between latent representations from student and teacher
encoders and simultaneously solve instance and cluster-level contrastive
learning tasks. We obtain cluster representations online by just projecting the
input spectrogram into an output subspace with dimensions equal to the number
of clusters. In addition, we propose a novel mel-spectrogram augmentation
procedure, k-mix, based on mixup, which does not require labels and aids
unsupervised representation learning for audio. Overall, SLICER achieves
state-of-the-art results on the LAPE Benchmark \cite{9868132}, significantly
outperforming DeLoRes-M and other prior approaches that are pre-trained on
larger amounts of unsupervised data. We will make all our code available on
GitHub.
Comment: Submitted to ICASSP 202
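The label-free mixup-style augmentation described above can be sketched as follows (a hedged illustration: the Beta mixing distribution and random-partner selection are common mixup conventions and are assumptions here, not necessarily the paper's exact k-mix procedure):

```python
import numpy as np

# Sketch of a label-free mixup for spectrograms: mix each item in a
# batch with a randomly chosen partner from the same batch. No labels
# are needed because the SSL objective operates on representations,
# not class targets. The parameter `alpha` is an assumption.

def kmix_batch(spectrograms, alpha=0.4, rng=None):
    """Mix each item in a batch of shape (B, F, T) with a random partner."""
    rng = rng or np.random.default_rng()
    batch = np.asarray(spectrograms, dtype=np.float64)
    # one mixing coefficient per example, broadcast over (F, T)
    lam = rng.beta(alpha, alpha, size=(len(batch), 1, 1))
    partner = batch[rng.permutation(len(batch))]
    return lam * batch + (1.0 - lam) * partner

batch = np.random.rand(8, 64, 100)  # 8 mel-spectrograms, 64 bins, 100 frames
mixed = kmix_batch(batch)
print(mixed.shape)  # prints (8, 64, 100)
```

Because each output is a convex combination of two inputs, the augmented spectrograms stay within the value range of the original batch, which keeps the augmentation mild enough for representation learning.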
UNFUSED: UNsupervised Finetuning Using SElf supervised Distillation
In this paper, we introduce UnFuSeD, a novel approach to leverage
self-supervised learning and reduce the need for large amounts of labeled data
for audio classification. Unlike prior works, which directly fine-tune a
self-supervised pre-trained encoder on a target dataset, we use the encoder to
generate pseudo-labels for unsupervised fine-tuning before the actual
fine-tuning step. We first train an encoder using a novel self-supervised
learning (SSL) algorithm on an unlabeled audio dataset. Then, we use that
encoder to generate pseudo-labels on our target task dataset via clustering the
extracted representations. These pseudo-labels are then used to guide
self-distillation on a randomly initialized model, which we call unsupervised
fine-tuning. Finally, the resultant encoder is then fine-tuned on our target
task dataset. Through UnFuSeD, we propose the first system that moves away from
the generic SSL paradigm in the literature, which pre-trains and fine-tunes the same
encoder, and present a novel self-distillation-based system to leverage SSL
pre-training for low-resource audio classification. In practice, UnFuSeD
achieves state-of-the-art results on the LAPE Benchmark, significantly
outperforming all our baselines. Additionally, UnFuSeD allows us to achieve
this at a 40% reduction in the number of parameters over the previous
state-of-the-art system. We make all our code publicly available.
Comment: Under review at the ICASSP 2023 SASB Workshop
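The pseudo-labeling step described above can be illustrated as follows (a simplified sketch in which a tiny k-means stands in for whatever clustering the authors actually use; all names and the initialization scheme are assumptions):

```python
import numpy as np

# Sketch: embed the target dataset with the SSL-pre-trained encoder,
# cluster the embeddings, and use the cluster indices as pseudo-labels
# for the unsupervised fine-tuning (self-distillation) stage.

def kmeans_pseudo_labels(embeddings, k, iters=20):
    """Assign each embedding a cluster index via a tiny Lloyd's k-means."""
    x = np.asarray(embeddings, dtype=np.float64)
    # farthest-point initialization: deterministic and well spread out
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min([((x - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(x[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each embedding to its nearest center
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # recompute centers (keep the old center if a cluster empties)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels

# Two well-separated blobs -> two internally consistent pseudo-label groups.
emb = np.vstack([np.zeros((5, 3)), np.ones((5, 3)) * 10])
labels = kmeans_pseudo_labels(emb, k=2)
print(labels[:5], labels[5:])  # each half shares one label
```

The resulting integer labels can then supervise a standard cross-entropy loss on a randomly initialized model, which is what makes the fine-tuning "unsupervised" despite having the form of classification training.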
MAST: Multiscale Audio Spectrogram Transformers
We present Multiscale Audio Spectrogram Transformer (MAST) for audio
classification, which brings the concept of multiscale feature hierarchies to
the Audio Spectrogram Transformer (AST). Given an input audio spectrogram, we
first patchify and project it into an initial temporal resolution and embedding
dimension; the multiple stages in MAST then progressively expand the
embedding dimension while reducing the temporal resolution of the input. We use
a pyramid structure that allows early layers of MAST, operating at high
temporal resolution but low embedding dimension, to model simple low-level
acoustic information, and deeper, temporally coarse layers to model high-level acoustic
information with high-dimensional embeddings. We also extend our approach to
present a new Self-Supervised Learning (SSL) method called SS-MAST, which
calculates a symmetric contrastive loss between latent representations from a
student and a teacher encoder. In practice, MAST significantly outperforms AST
by an average accuracy of 3.4% across 8 speech and non-speech tasks from the
LAPE Benchmark. Moreover, SS-MAST achieves an absolute average improvement of
2.6% over SSAST for both AST and MAST encoders. We will make all our code
available on GitHub at the time of publication.
Comment: Submitted to ICASSP 202
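The multiscale schedule described above can be sketched back-of-envelope (the halving/doubling factors, token count, and stage count below are illustrative assumptions, not MAST's actual configuration):

```python
# Sketch of a pyramid schedule: each stage halves the temporal
# resolution and doubles the embedding dimension, so early stages see
# fine time detail with small embeddings and late stages see coarse
# time detail with large embeddings.

def pyramid_schedule(tokens, dim, stages):
    """Return the (temporal_tokens, embedding_dim) pair for each stage."""
    schedule = []
    for _ in range(stages):
        schedule.append((tokens, dim))
        tokens = max(1, tokens // 2)  # reduce temporal resolution
        dim = dim * 2                 # expand embedding dimension
    return schedule

print(pyramid_schedule(tokens=256, dim=96, stages=4))
# prints [(256, 96), (128, 192), (64, 384), (32, 768)]
```

Note that the per-token cost stays roughly balanced across stages: halving the token count while doubling the width trades sequence length for representational capacity, which is the usual motivation for such hierarchies.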