Case-based reasoning for meta-heuristics self-parameterization in a multi-agent scheduling system
A novel agent-based approach to Meta-Heuristics self-configuration is proposed in this work. Meta-heuristics are examples of algorithms whose parameters need to be set as efficiently as possible in order to ensure their performance. This paper presents a learning module for the self-parameterization of Meta-heuristics (MHs) in a Multi-Agent System (MAS) for the resolution of scheduling problems. The learning is based on Case-based Reasoning (CBR), and two different integration approaches are proposed. A computational study is carried out to compare the two CBR integration perspectives. In the end, some conclusions are reached and future work is outlined.
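The core CBR cycle the abstract describes (retrieve the most similar past scheduling case, reuse its meta-heuristic parameters, retain the new case) can be sketched as follows. All feature names, parameter names, and values are illustrative assumptions, not taken from the paper:

```python
import math

# Hypothetical case base: each case pairs scheduling-problem features
# (number of jobs, number of machines) with meta-heuristic parameters
# that worked well on that instance. Names and values are illustrative.
case_base = [
    {"features": (20, 5),   "params": {"pop_size": 50,  "mutation": 0.10}},
    {"features": (100, 10), "params": {"pop_size": 200, "mutation": 0.05}},
    {"features": (500, 20), "params": {"pop_size": 400, "mutation": 0.02}},
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_params(features):
    """Retrieve step of CBR: reuse the parameters of the nearest past case."""
    nearest = min(case_base, key=lambda c: distance(c["features"], features))
    return nearest["params"]

def retain(features, params):
    """Retain step of CBR: store a newly solved case for future reuse."""
    case_base.append({"features": features, "params": params})

# A new 90-job, 12-machine instance is closest to the (100, 10) case.
print(retrieve_params((90, 12)))
```

In a multi-agent setting, each scheduling agent could consult a shared case base like this one before launching its meta-heuristic, which is one plausible reading of the two integration perspectives the paper compares.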
Incorporating Memory and Learning Mechanisms Into Meta-RaPS
Due to the rapid increase of dimensions and complexity of real life problems, it has become more difficult to find optimal solutions using only exact mathematical methods. The need to find near-optimal solutions in an acceptable amount of time is a challenge when developing more sophisticated approaches. A proper answer to this challenge can be through the implementation of metaheuristic approaches. However, a more powerful answer might be reached by incorporating intelligence into metaheuristics.
Meta-RaPS (Metaheuristic for Randomized Priority Search) is a metaheuristic that creates high quality solutions for discrete optimization problems. It is proposed that incorporating memory and learning mechanisms into Meta-RaPS, which is currently classified as a memoryless metaheuristic, can help the algorithm produce higher quality results.
The proposed Meta-RaPS versions were created by taking different perspectives of learning. The first approach taken is Estimation of Distribution Algorithms (EDA), a stochastic learning technique that creates a probability distribution for each decision variable to generate new solutions. The second Meta-RaPS version was developed by utilizing a machine learning algorithm, Q-Learning, which has been successfully applied to optimization problems whose output is a sequence of actions. In the third Meta-RaPS version, Path Relinking (PR) was implemented as a post-optimization method in which the new algorithm learns good attributes by memorizing the best solutions and follows them to reach better solutions. The fourth proposed version of Meta-RaPS presented another form of learning with its ability to adaptively tune parameters. The efficiency of these approaches motivated us to redesign Meta-RaPS by removing the improvement phase and adding a more sophisticated Path Relinking method. The new Meta-RaPS could solve even the largest problems in much less time while maintaining the quality of its solutions.
To evaluate their performance, all introduced versions were tested using the 0-1 Multidimensional Knapsack Problem (MKP). After comparing the proposed algorithms, Meta-RaPS PR and Meta-RaPS Q-Learning appeared to be the algorithms with the best and worst performance, respectively. Nevertheless, all of them showed superior performance compared to other approaches to the 0-1 MKP in the literature.
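The EDA idea mentioned above (maintain a marginal probability per decision variable, sample a population, and re-estimate the probabilities from the elite) can be illustrated on a toy knapsack instance. This is a generic UMDA-style sketch, not the paper's Meta-RaPS EDA, and the instance below uses a single capacity constraint rather than the multidimensional variant for brevity:

```python
import random

random.seed(0)

# Toy 0-1 knapsack instance; values and weights are illustrative.
values   = [10, 7, 4, 9, 6]
weights  = [ 5, 4, 2, 6, 3]
capacity = 12

def fitness(x):
    """Total value of the selection, or 0 if it exceeds capacity."""
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    if w > capacity:
        return 0
    return sum(vi for vi, xi in zip(values, x) if xi)

# EDA loop: one Bernoulli marginal per decision variable.
probs = [0.5] * len(values)
for _ in range(30):
    pop = [[1 if random.random() < p else 0 for p in probs]
           for _ in range(40)]
    elite = sorted(pop, key=fitness, reverse=True)[:10]
    probs = [sum(x[i] for x in elite) / len(elite)
             for i in range(len(values))]

solution = [1 if p >= 0.5 else 0 for p in probs]
print(solution, fitness(solution))
```

The marginals concentrate on variables that appear often in elite solutions, which is the "probability distribution for each decision variable" the abstract refers to.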
Artificial intelligence for MRI diagnosis of joints: a scoping review of the current state-of-the-art of deep learning-based approaches
Deep learning-based MRI diagnosis of internal joint derangement is an emerging field of artificial intelligence, which offers many exciting possibilities for musculoskeletal radiology. A variety of investigational deep learning algorithms have been developed to detect anterior cruciate ligament tears, meniscus tears, and rotator cuff disorders. Additional deep learning-based MRI algorithms have been investigated to detect Achilles tendon tears, predict recurrence of musculoskeletal neoplasms, and perform complex segmentation of nerves, bones, and muscles. Proof-of-concept studies suggest that deep learning algorithms may achieve diagnostic performance similar to human readers in meta-analyses; however, musculoskeletal radiologists outperformed most deep learning algorithms in studies including a direct comparison. Earlier investigations and developments of deep learning algorithms focused on the binary classification of the presence or absence of an abnormality, whereas more advanced deep learning algorithms have started to include features for characterization and severity grading. While many studies have focused on comparing deep learning algorithms against human readers, there is a paucity of data on the performance differences of radiologists interpreting musculoskeletal MRI studies without and with artificial intelligence support. Similarly, studies demonstrating the generalizability and clinical applicability of deep learning algorithms in realistic clinical settings with workflow-integrated deep learning algorithms are sparse. Contingent upon future studies showing the clinical utility of deep learning algorithms, artificial intelligence may eventually translate into clinical practice to assist in the detection and characterization of various conditions on musculoskeletal MRI exams.
The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and Their Empirical Equivalence
Recently, it has been observed that a transfer learning solution might be all
we need to solve many few-shot learning benchmarks -- thus raising important
questions about when and how meta-learning algorithms should be deployed. In
this paper, we seek to clarify these questions by 1. proposing a novel metric
-- the diversity coefficient -- to measure the diversity of tasks in a few-shot
learning benchmark and 2. comparing Model-Agnostic Meta-Learning (MAML) and
transfer learning under fair conditions (same architecture, same optimizer, and
all models trained to convergence). Using the diversity coefficient, we show
that the popular MiniImageNet and CIFAR-FS few-shot learning benchmarks have
low diversity. This novel insight contextualizes claims that transfer learning
solutions are better than meta-learned solutions in the regime of low diversity
under a fair comparison. Specifically, we empirically find that a low diversity
coefficient correlates with a high similarity between transfer learning and
MAML learned solutions in terms of accuracy at meta-test time and
classification layer similarity (using feature based distance metrics like
SVCCA, PWCCA, CKA, and OPD). To further support our claim, we find that this
meta-test accuracy equivalence holds even as the model size changes. Therefore, we conclude
that in the low diversity regime, MAML and transfer learning have equivalent
meta-test performance when both are compared fairly. We also hope our work
inspires more thoughtful constructions and quantitative evaluations of
meta-learning benchmarks in the future.
Comment: arXiv admin note: substantial text overlap with arXiv:2112.1312
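One simple way to realize a diversity coefficient of the kind described above is to embed each task (e.g. with a fixed feature extractor) and average a pairwise distance over all task pairs. The sketch below is an illustrative stand-in, not the paper's exact metric, and the embeddings are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def diversity_coefficient(task_embeddings):
    """Mean pairwise (1 - cosine similarity) across distinct task pairs."""
    n = len(task_embeddings)
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = task_embeddings[i], task_embeddings[j]
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            dists.append(1.0 - cos)
    return float(np.mean(dists))

# Random embeddings are nearly orthogonal, so diversity is close to 1;
# identical embeddings give a diversity of exactly 0.
tasks = [rng.normal(size=64) for _ in range(10)]
identical = [np.ones(64) for _ in range(10)]
print(diversity_coefficient(tasks))
print(diversity_coefficient(identical))
```

A benchmark whose episodes all look alike (as the paper argues for MiniImageNet and CIFAR-FS) would score near the low end of such a metric.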
Comparing and Tuning Machine Learning Algorithms to Predict Type 2 Diabetes Mellitus
The main goal of this work is to study and compare machine learning algorithms for predicting the development of type 2 diabetes mellitus.
Four classification algorithms have been considered, studying and comparing the accuracy of each one in predicting the incidence of type 2 diabetes mellitus seven years in advance. Specifically, the techniques studied are: Decision Tree, Random Forest, kNN (k-Nearest Neighbors) and Neural Networks.
The study not only involves the comparison among these techniques, but also, the tuning of the meta-parameters in each algorithm.
The algorithms have been implemented using the language R.
The database used is obtained from the nation-wide cohort [email protected] study.
The conclusions will include the accuracy of each algorithm and, therefore, the best technique for this problem. The best meta-parameters for each algorithm will also be provided.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tec
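The compare-and-tune workflow described above (several classifiers, each with its meta-parameters tuned by cross-validation) can be sketched in a few lines. The study was implemented in R; this is an equivalent scikit-learn sketch on synthetic data, with illustrative parameter grids that are not the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the cohort data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two of the four techniques, each with a small meta-parameter grid.
candidates = {
    "kNN": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 11]}),
    "DecisionTree": (DecisionTreeClassifier(random_state=0),
                     {"max_depth": [3, 5, None]}),
}

results = {}
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5)  # tune the meta-parameters
    search.fit(X_tr, y_tr)
    results[name] = (search.best_params_, search.score(X_te, y_te))

for name, (params, acc) in results.items():
    print(f"{name}: best={params}, test accuracy={acc:.3f}")
```

Reporting both the held-out accuracy and the winning parameter grid per algorithm mirrors the two deliverables the abstract promises.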
Learning models for semantic classification of insufficient plantar pressure images
Establishing a reliable and stable model to predict a target from insufficient labeled samples is feasible and effective, particularly for a sensor-generated data-set. This paper is inspired by learning algorithms for insufficient data-sets, such as metric-based methods, prototype networks and meta-learning, and therefore we propose an insufficient data-set transfer model learning method. Firstly, two basic models for transfer learning are introduced, followed by a classification system and calculation criteria. Secondly, a data-set of plantar pressure for comfort shoe design is acquired and preprocessed through a foot scan system; by using a pre-trained convolutional neural network employing AlexNet and convolutional neural network (CNN)-based transfer modeling, the classification accuracy of the plantar pressure images is over 93.5%. Finally, the proposed method is compared to the current classifiers VGG, ResNet, AlexNet and the pre-trained CNN. Our work is also compared with the known-scaling and shifting (SS) and unknown-plain slot (PS) partition methods on the public test databases SUN, CUB, AWA1, AWA2, and aPY, with indices of precision (tr, ts, H) and time (training and evaluation). The proposed method shows high performance on most indices for the plantar pressure classification task when compared with other methods. The transfer learning-based method can be applied to other insufficient data-sets in sensor imaging fields.
On the Generalizability and Predictability of Recommender Systems
While other areas of machine learning have seen more and more automation,
designing a high-performing recommender system still requires a high level of
human effort. Furthermore, recent work has shown that modern recommender system
algorithms do not always improve over well-tuned baselines. A natural follow-up
question is, "how do we choose the right algorithm for a new dataset and
performance metric?" In this work, we start by giving the first large-scale
study of recommender system approaches by comparing 18 algorithms and 100 sets
of hyperparameters across 85 datasets and 315 metrics. We find that the best
algorithms and hyperparameters are highly dependent on the dataset and
performance metric; however, there are also strong correlations between the
performance of each algorithm and various meta-features of the datasets.
Motivated by these findings, we create RecZilla, a meta-learning approach to
recommender systems that uses a model to predict the best algorithm and
hyperparameters for new, unseen datasets. By using far more meta-training data
than prior work, RecZilla is able to substantially reduce the level of human
involvement when faced with a new recommender system application. We not only
release our code and pretrained RecZilla models, but also all of our raw
experimental results, so that practitioners can train a RecZilla model for
their desired performance metric: https://github.com/naszilla/reczilla.
Comment: NeurIPS 202
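The meta-learning idea behind an algorithm selector of this kind is to fit a model that maps dataset meta-features to the algorithm observed to perform best on similar datasets. The following sketch uses synthetic meta-features and a made-up winning rule; it illustrates the pattern, not RecZilla's actual model or features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic meta-training data: one row of meta-features per dataset,
# labeled with the algorithm that won on that dataset.
n_datasets, n_meta_features = 200, 6
meta_X = rng.normal(size=(n_datasets, n_meta_features))
# Pretend algorithm 1 wins whenever the first meta-feature is positive.
best_algo = (meta_X[:, 0] > 0).astype(int)

selector = RandomForestClassifier(random_state=0)
selector.fit(meta_X, best_algo)               # meta-training

# For a new, unseen dataset, compute its meta-features and predict.
new_dataset = rng.normal(size=(1, n_meta_features))
print("recommended algorithm:", selector.predict(new_dataset)[0])
```

The practical payoff is the one the abstract claims: given enough meta-training runs, choosing an algorithm for a new dataset reduces to computing its meta-features and querying the selector.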