Random Access Protocols with Collision Resolution in a Noncoherent Setting
Wireless systems are increasingly used for Machine-Type Communication (MTC),
where the users sporadically send very short messages. In such a setting, the
overhead imposed by channel estimation is substantial, thereby demanding
noncoherent communication. In this paper we consider a noncoherent setup in
which users randomly access the medium to send short messages to a common
receiver. We propose a transmission scheme based on Gabor frames, where each
user has a dedicated codebook of M possible codewords, while the codebook
simultaneously serves as an ID for the user. The scheme is used as a basis for
a simple protocol for collision resolution.
Comment: 5 pages, 3 figures; EDIT: A version of this work has been submitted
for publication in the IEEE Wireless Communication Letters Journal
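As an illustrative sketch of the codebook construction described above (assuming the common definition of a Gabor frame as the set of all time-frequency shifts of a seed vector, which may differ in detail from the paper's exact scheme), each user's dedicated codebook can be taken as an M-codeword slice of the resulting frame:

```python
import numpy as np

def gabor_frame(g):
    """All n^2 time-frequency shifts of a length-n seed vector g."""
    n = len(g)
    atoms = []
    for k in range(n):           # cyclic time shift
        for l in range(n):       # modulation (frequency shift)
            mod = np.exp(2j * np.pi * l * np.arange(n) / n)
            atoms.append(mod * np.roll(g, k))
    return np.array(atoms)

rng = np.random.default_rng(0)
n, M = 8, 4
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g /= np.linalg.norm(g)           # unit-norm seed
frame = gabor_frame(g)           # n^2 = 64 unit-norm codewords
codebook_user0 = frame[:M]       # hypothetical per-user slice of M codewords
```

Since the codebooks are disjoint subsets of one frame, the receiver can identify the transmitting user from which codebook the detected codeword belongs to.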
A Pre-log Region for the Non-coherent MIMO Two-Way Relaying Channel
We study the two-user MIMO block fading two-way relay channel in
the non-coherent setting, where neither the terminals nor the
relay have knowledge of the channel realizations. We analyze the
achievable sum-rate when the users employ independent,
isotropically distributed, unitary input signals, with
amplify-and-forward (AF) strategy at the relay node. As a
byproduct, we present an achievable pre-log region of the AF
scheme, defined as the limiting ratio of the rate region to the
logarithm of the signal-to-noise ratio (SNR) as the SNR tends to
infinity. We compare the performance with
time-division-multiple-access (TDMA) schemes, both coherent and
non-coherent. The analysis is supported by a geometric
interpretation, based on the paradigm of subspace-based
communication.
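The isotropically distributed unitary inputs mentioned above can be sampled with the standard QR-based construction of Haar-distributed unitary matrices (a generic sketch, not code from the paper):

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample an n x n unitary matrix from the Haar (isotropic) measure
    via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the column phases using diag(R) so the result is exactly Haar,
    # not dependent on the QR sign convention.
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(1)
U = haar_unitary(4, rng)   # isotropically distributed unitary input block
```

Averaged over such inputs, the achievable rates depend only on the subspace spanned by the transmitted signal, which is the geometric picture behind subspace-based communication.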
TransOpt: Transformer-based Representation Learning for Optimization Problem Classification
We propose a representation of optimization problem instances using a
transformer-based neural network architecture trained for the task of problem
classification of the 24 problem classes from the Black-box Optimization
Benchmarking (BBOB) benchmark. We show that transformer-based methods can be
trained to recognize problem classes with accuracies in the range of 70%-80%
for different problem dimensions, suggesting the possible application of
transformer architectures in acquiring representations for black-box
optimization problems.
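A minimal sketch of the idea, assuming the instance is represented as a set of evaluated points (x, f(x)) fed through a self-attention layer and pooled into logits over the 24 BBOB classes; the weights and dimensions here are placeholders, not the paper's trained architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool_classify(samples, Wq, Wk, Wv, Wout):
    """One self-attention layer over a set of sample rows,
    mean-pooled (permutation-invariant) and mapped to class logits."""
    Q, K, V = samples @ Wq, samples @ Wk, samples @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # token-to-token attention
    pooled = (A @ V).mean(axis=0)                # pool over the sample set
    return pooled @ Wout                         # logits over 24 BBOB classes

rng = np.random.default_rng(0)
d, dk, n_cls = 6, 16, 24                  # d = dim(x) + 1 column for f(x)
samples = rng.standard_normal((32, d))    # 32 evaluated points of one instance
Wq, Wk, Wv = (rng.standard_normal((d, dk)) for _ in range(3))
Wout = rng.standard_normal((dk, n_cls))
logits = attention_pool_classify(samples, Wq, Wk, Wv, Wout)
```

The permutation-invariant pooling matters because the evaluated points of a black-box problem instance carry no natural ordering.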
DynamoRep: Trajectory-Based Population Dynamics for Classification of Black-box Optimization Problems
The application of machine learning (ML) models to the analysis of
optimization algorithms requires the representation of optimization problems
using numerical features. These features can be used as input for ML models
that are trained to select or to configure a suitable algorithm for the problem
at hand. Since in pure black-box optimization information about the problem
instance can only be obtained through function evaluation, a common approach is
to dedicate some function evaluations for feature extraction, e.g., using
random sampling. This approach has two key downsides: (1) It reduces the budget
left for the actual optimization phase, and (2) it neglects valuable
information that could be obtained from a problem-solver interaction.
In this paper, we propose a feature extraction method that describes the
trajectories of optimization algorithms using simple descriptive statistics. We
evaluate the generated features for the task of classifying problem classes
from the Black Box Optimization Benchmarking (BBOB) suite. We demonstrate that
the proposed DynamoRep features capture enough information to identify the
problem class on which the optimization algorithm is running, achieving a mean
classification accuracy of 95% across all experiments.
Comment: 9 pages, 5 figures
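The trajectory-based features described above can be sketched as simple descriptive statistics (here min/max/mean/std, one plausible choice) of each coordinate of the population at every iteration, concatenated into one vector:

```python
import numpy as np

def dynamorep_features(trajectory):
    """trajectory: list of per-iteration populations, each of shape
    (pop_size, dim + 1) -- candidate coordinates plus a fitness column.
    Returns min/max/mean/std of every column at every iteration,
    concatenated into one flat feature vector."""
    stats = []
    for pop in trajectory:
        stats.append(np.concatenate([
            pop.min(axis=0), pop.max(axis=0),
            pop.mean(axis=0), pop.std(axis=0),
        ]))
    return np.concatenate(stats)

rng = np.random.default_rng(0)
traj = [rng.standard_normal((20, 6)) for _ in range(30)]  # 30 iterations
feats = dynamorep_features(traj)   # 30 iterations * 4 stats * 6 columns
```

Because the statistics are computed from the optimizer's own population, no extra function evaluations are spent on feature extraction, which addresses both downsides noted above.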
Explainable Model-specific Algorithm Selection for Multi-Label Classification
Multi-label classification (MLC) is an ML task of predictive modeling in
which a data instance can simultaneously belong to multiple classes. MLC is
increasingly gaining interest in different application domains such as text
mining, computer vision, and bioinformatics. Several MLC algorithms have been
proposed in the literature, resulting in a meta-optimization problem that the
user needs to address: which MLC approach to select for a given dataset? To
address this algorithm selection problem, we investigate in this work the
quality of an automated approach that uses characteristics of the datasets -
so-called features - and a trained algorithm selector to choose which algorithm
to apply for a given task. For our empirical evaluation, we use a portfolio of
38 datasets. We consider eight MLC algorithms, whose quality we evaluate using
six different performance metrics. We show that our automated algorithm
selector outperforms any of the single MLC algorithms, and this is for all
evaluated performance measures. Our selection approach is explainable, a
characteristic that we exploit to investigate which meta-features have the
largest influence on the decisions made by the algorithm selector. Finally, we
also quantify the importance of the most significant meta-features for various
domains.
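The selection step can be illustrated with a deliberately simple stand-in for the trained selector, a 1-nearest-neighbour rule over dataset meta-features (the paper's actual selector model is not specified here, so this is only a sketch of the interface):

```python
import numpy as np

def select_algorithm(meta_features, train_meta, train_best):
    """1-NN algorithm selector: return the algorithm that performed best
    on the training dataset whose meta-features are closest to the
    new dataset's meta-features."""
    dists = np.linalg.norm(train_meta - meta_features, axis=1)
    return train_best[int(np.argmin(dists))]

rng = np.random.default_rng(0)
train_meta = rng.standard_normal((38, 10))   # 38 datasets, 10 meta-features
train_best = rng.integers(0, 8, size=38)     # index of best of 8 MLC algorithms
new_meta = train_meta[5] + 0.01 * rng.standard_normal(10)
choice = select_algorithm(new_meta, train_meta, train_best)
```

With a model-specific, interpretable selector one can additionally inspect which meta-features drive the distance (or, in the paper's setting, the model's decision), which is the explainability aspect discussed above.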
OPTION: OPTImization Algorithm Benchmarking ONtology
Many optimization algorithm benchmarking platforms allow users to share their
experimental data to promote reproducible and reusable research. However,
different platforms use different data models and formats, which drastically
complicates the identification of relevant datasets, their interpretation, and
their interoperability. Therefore, a semantically rich, ontology-based,
machine-readable data model that can be used by different platforms is highly
desirable. In this paper, we report on the development of such an ontology,
which we call OPTION (OPTImization algorithm benchmarking ONtology). Our
ontology provides the vocabulary needed for semantic annotation of the core
entities involved in the benchmarking process, such as algorithms, problems,
and evaluation measures. It also provides means for automatic data integration,
improved interoperability, and powerful querying capabilities, thereby
increasing the value of the benchmarking data. We demonstrate the utility of
OPTION, by annotating and querying a corpus of benchmark performance data from
the BBOB collection of the COCO framework and from the Yet Another Black-Box
Optimization Benchmark (YABBOB) family of the Nevergrad environment. In
addition, we integrate features of the BBOB functional performance landscape
into the OPTION knowledge base using publicly available datasets with
exploratory landscape analysis. Finally, we integrate the OPTION knowledge base
into the IOHprofiler environment and provide users with the ability to perform
meta-analysis of performance data.
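The kind of semantic annotation the ontology enables can be sketched as subject-predicate-object triples; the namespace and property names below are hypothetical placeholders, not OPTION's actual IRIs:

```python
# Hypothetical namespace and terms -- illustrative only, not OPTION's real vocabulary.
OPT = "http://example.org/option#"

triples = [
    (OPT + "run_001", OPT + "appliesAlgorithm", OPT + "CMA-ES"),
    (OPT + "run_001", OPT + "solvesProblem",    OPT + "BBOB_f1_dim5"),
    (OPT + "run_001", OPT + "hasMeasure",       OPT + "targetPrecision"),
]

def query(triples, predicate):
    """Return all (subject, object) pairs matching a predicate --
    a toy stand-in for a SPARQL query over the knowledge base."""
    return [(s, o) for s, p, o in triples if p == predicate]

runs = query(triples, OPT + "appliesAlgorithm")
```

Once benchmarking runs from different platforms are expressed in one shared vocabulary like this, queries such as "all runs of a given algorithm on a given problem class" work across platforms without format-specific parsing.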