887 research outputs found
Multistable setups combining magnetic shape memory alloys with reluctance counterforces
Systems with the ability to hold a given position without consuming energy, i.e. multistability, can be employed in a variety of applications. Apart from the most common friction-based systems, smart materials are an option for creating multistability. Here, the ability to create a multistable system from a magnetic shape memory (MSM) alloy in a magnetic field, combined with a reluctance counterforce, is discussed. The necessary design process for this approach is described, as well as the experimental characterization of a demonstrator system. With a multistable stroke range of 0.82 mm and an average resistance to disturbance of ±10 N, two key parameters of the multistable properties are determined. As an outlook, potential applications in the design of adaptable interfaces are discussed.
OpenML Benchmarking Suites
Machine learning research depends on objectively interpretable, comparable,
and reproducible algorithm benchmarks. Therefore, we advocate the use of
curated, comprehensive suites of machine learning tasks to standardize the
setup, execution, and reporting of benchmarks. We enable this through software
tools that help to create and leverage these benchmarking suites. These are
seamlessly integrated into the OpenML platform, and accessible through
interfaces in Python, Java, and R. OpenML benchmarking suites are (a) easy to
use through standardized data formats, APIs, and client libraries; (b)
machine-readable, with extensive meta-information on the included datasets; and
(c) shareable and reusable in future studies. We also present a first,
carefully curated and practical benchmarking suite for classification: the
OpenML Curated Classification benchmarking suite 2018 (OpenML-CC18).
Efficient Automated Deep Learning for Time Series Forecasting
Recent years have witnessed tremendously improved efficiency of Automated
Machine Learning (AutoML), especially Automated Deep Learning (AutoDL) systems,
but recent work focuses on tabular, image, or NLP tasks. So far, little
attention has been paid to general AutoDL frameworks for time series
forecasting, despite the enormous success in applying different novel
architectures to such tasks. In this paper, we propose an efficient approach
for the joint optimization of neural architecture and hyperparameters of the
entire data processing pipeline for time series forecasting. In contrast to
common NAS search spaces, we designed a novel neural architecture search space
covering various state-of-the-art architectures, allowing for an efficient
macro-search over different DL approaches. To efficiently search in such a
large configuration space, we use Bayesian optimization with multi-fidelity
optimization. We empirically study several different budget types enabling
efficient multi-fidelity optimization on different forecasting datasets.
Furthermore, we compare our resulting system, dubbed \system, against several
established baselines and show that it significantly outperforms all of them
across several datasets.
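The multi-fidelity idea described above can be illustrated with a minimal successive-halving sketch: many configurations are evaluated cheaply, and only the most promising survivors receive larger budgets. The objective and configurations below are a made-up toy example, not the paper's actual system or search space:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Allocate a small budget to every config, keep the best 1/eta
    fraction, and re-evaluate the survivors with an eta-times larger
    budget; repeat for the given number of rounds."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[:max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Hypothetical objective: validation loss improves as the budget grows;
# each "config" is just a candidate learning rate.
def toy_loss(cfg, budget):
    return abs(cfg - 0.1) + 1.0 / budget

best = successive_halving([0.001, 0.01, 0.1, 0.5, 1.0], toy_loss)  # -> 0.1
```

In a real system, `budget` would map to one of the budget types the paper studies (e.g. training epochs or data subset size), and the survivor ranking would come from actual validation losses.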
ASlib: A Benchmark Library for Algorithm Selection
The task of algorithm selection involves choosing an algorithm from a set of
algorithms on a per-instance basis in order to exploit the varying performance
of algorithms over a set of instances. The algorithm selection problem is
attracting increasing attention from researchers and practitioners in AI. Years
of fruitful applications in a number of domains have resulted in a large amount
of data, but the community lacks a standard format or repository for this data.
This situation makes it difficult to share and compare different approaches
effectively, as is done in other, more established fields. It also
unnecessarily hinders new researchers who want to work in this area. To address
this problem, we introduce a standardized format for representing algorithm
selection scenarios and a repository that contains a growing number of data
sets from the literature. Our format has been designed to be able to express a
wide variety of different scenarios. Demonstrating the breadth and power of our
platform, we describe a set of example experiments that build and evaluate
algorithm selection models through a common interface. The results display the
potential of algorithm selection to achieve significant performance
improvements across a broad range of problems and algorithms.
Comment: Accepted for publication in the Artificial Intelligence Journal.
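The per-instance selection setting that ASlib scenarios capture can be sketched with the two standard baselines: the single best solver (SBS), which ignores instance features, and the virtual best solver (VBS), an oracle that picks the best algorithm for every instance. The performance matrix below is a hypothetical toy example, not ASlib data:

```python
def single_best(perf):
    """The one algorithm with the lowest mean runtime across all instances."""
    algos = perf[0].keys()
    return min(algos, key=lambda a: sum(row[a] for row in perf) / len(perf))

def virtual_best_cost(perf):
    """Oracle cost: sum of the per-instance best runtimes."""
    return sum(min(row.values()) for row in perf)

# Toy scenario: runtimes (seconds) of two solvers on three instances.
perf = [
    {"solver_a": 10.0, "solver_b": 1.0},
    {"solver_a": 2.0,  "solver_b": 8.0},
    {"solver_a": 3.0,  "solver_b": 9.0},
]
sbs = single_best(perf)        # "solver_a": mean 5.0 s vs 6.0 s
vbs = virtual_best_cost(perf)  # 1.0 + 2.0 + 3.0 = 6.0 s
```

The gap between the SBS cost (15.0 s here) and the VBS cost (6.0 s) is exactly the improvement an algorithm selector can hope to recover by exploiting per-instance performance differences.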
Mind the Gap: Measuring Generalization Performance Across Multiple Objectives
Modern machine learning models are often constructed taking into account
multiple objectives, e.g., minimizing inference time while also maximizing
accuracy. Multi-objective hyperparameter optimization (MHPO) algorithms return
such candidate models, and the approximation of the Pareto front is used to
assess their performance. In practice, we also want to measure generalization
when moving from the validation to the test set. However, some of the models
might no longer be Pareto-optimal which makes it unclear how to quantify the
performance of the MHPO method when evaluated on the test set. To resolve this,
we provide a novel evaluation protocol that allows measuring the generalization
performance of MHPO methods and studying the protocol's capabilities for
comparing two optimization experiments.
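The validation-to-test issue can be sketched with a minimal bi-objective example (hypothetical data, not the paper's protocol): models that are Pareto-optimal on validation objectives may drop off the front when re-evaluated on the test set.

```python
def pareto_front(points):
    """Non-dominated points when minimizing both objectives."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]

# Candidate models as (error, inference_time), measured on validation data,
# and the same models re-measured on the test set (toy numbers).
val = [(0.10, 5.0), (0.20, 2.0), (0.30, 1.0), (0.25, 4.0)]
test = {(0.10, 5.0): (0.12, 5.0), (0.20, 2.0): (0.35, 2.0),
        (0.30, 1.0): (0.31, 1.0), (0.25, 4.0): (0.22, 4.0)}

val_front = pareto_front(val)               # 3 models returned by the optimizer
test_points = [test[p] for p in val_front]  # their test-set objectives
test_front = pareto_front(test_points)      # only 2 remain non-dominated
```

Here the model with validation objectives (0.20, 2.0) generalizes poorly (test error 0.35) and is dominated on the test set, which is precisely why naive front-quality metrics become ambiguous and a dedicated evaluation protocol is needed.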
Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of an ML practitioner. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work.
Probing Teichoic Acid Genetics with Bioactive Molecules Reveals New Interactions among Diverse Processes in Bacterial Cell Wall Biogenesis
The bacterial cell wall has been a celebrated target for antibiotics and holds real promise for the discovery of new antibacterial chemical matter. In addition to peptidoglycan, the walls of Gram-positive bacteria contain large amounts of the polymer teichoic acid, covalently attached to peptidoglycan. Recently, wall teichoic acid was shown to be essential to the proper morphology of Bacillus subtilis and an important virulence factor for Staphylococcus aureus. Additionally, recent studies have shown that the dispensability of genes encoding teichoic acid biosynthetic enzymes is paradoxical and complex. Here, we report on the discovery of a promoter (PywaC), which is sensitive to lesions in teichoic acid synthesis. Exploiting this promoter through a chemical-genetic approach, we revealed surprising interactions among undecaprenol, peptidoglycan, and teichoic acid biosynthesis that help explain the complexity of teichoic acid gene dispensability. Furthermore, the new reporter assay represents an exciting avenue for the discovery of antibacterial molecules.
Pan-Cancer Analysis of lncRNA Regulation Supports Their Targeting of Cancer Genes in Each Tumor Context
Long noncoding RNAs (lncRNAs) are commonly dysregulated in tumors, but only a handful are known to play pathophysiological roles in cancer. We inferred lncRNAs that dysregulate cancer pathways, oncogenes, and tumor suppressors (cancer genes) by modeling their effects on the activity of transcription factors, RNA-binding proteins, and microRNAs in 5,185 TCGA tumors and 1,019 ENCODE assays. Our predictions included hundreds of candidate onco- and tumor-suppressor lncRNAs (cancer lncRNAs) whose somatic alterations account for the dysregulation of dozens of cancer genes and pathways in each of 14 tumor contexts. To demonstrate proof of concept, we showed that perturbations targeting OIP5-AS1 (an inferred tumor suppressor) and TUG1 and WT1-AS (inferred onco-lncRNAs) dysregulated cancer genes and altered proliferation of breast and gynecologic cancer cells. Our analysis indicates that, although most lncRNAs are dysregulated in a tumor-specific manner, some, including OIP5-AS1, TUG1, NEAT1, MEG3, and TSIX, synergistically dysregulate cancer pathways in multiple tumor contexts.