5,442 research outputs found
Investigating Machine Learning Techniques for Gesture Recognition with Low-Cost Capacitive Sensing Arrays
Machine learning has proven to be an effective tool for forming models that make predictions based on sample data. Supervised learning, a subset of machine learning, can be used to map input data to output labels based on pre-existing paired data. Datasets for machine learning can be created from many different sources and vary in complexity, with popular examples including the MNIST handwritten digit dataset and the CIFAR-10 image dataset. The focus of this thesis is to test and validate multiple machine learning models for accurately classifying gestures performed on a low-cost capacitive sensing array. Multiple neural networks are trained and compared using gesture datasets obtained from the capacitance board, and learning hyperparameters are adjusted to improve results. Two datasets are used for training: one containing simple gestures and another containing more complicated gestures. Accuracy and loss are calculated and compared to determine which models excel at recognizing the performed gestures.
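The supervised-learning pipeline this abstract describes (paired input/label data, a trained model, accuracy as the comparison metric) can be sketched minimally. This is a toy stand-in, not the thesis's method: the synthetic two-class data merely imitates gesture feature vectors from a capacitive array, and the classifier and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for gesture feature vectors: two classes (e.g. "swipe"
# vs. "tap"), each drawn from a different Gaussian cluster. The thesis's
# real data comes from a capacitive sensing array; this is illustrative.
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)),
               rng.normal(3.0, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# A minimal logistic-regression classifier trained by gradient descent.
w = np.zeros(8)
b = 0.0
lr = 0.1  # learning rate: one of the hyperparameters a study would tune
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(class 1)
    grad_w = X.T @ (p - y) / len(y)          # gradient of mean log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Accuracy: the metric used in the abstract to compare models.
accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

Comparing models, as the thesis does, amounts to repeating this loop per architecture and per hyperparameter setting and ranking the resulting accuracy and loss values.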
Machine Analysis of Facial Expressions
No abstract
Dataset Condensation with Gradient Matching
As the state-of-the-art machine learning methods in many fields rely on
larger datasets, storing datasets and training models on them become
significantly more expensive. This paper proposes a training set synthesis
technique for data-efficient learning, called Dataset Condensation, that learns
to condense a large dataset into a small set of informative synthetic samples for
training deep neural networks from scratch. We formulate this goal as a
gradient matching problem between the gradients of deep neural network weights
that are trained on the original and our synthetic data. We rigorously evaluate
its performance in several computer vision benchmarks and demonstrate that it
significantly outperforms the state-of-the-art methods. Finally, we explore the
use of our method in continual learning and neural architecture search and
report promising gains when limited memory and computation are available.
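The core idea, matching the gradients that real and synthetic data induce on the model's weights, can be illustrated on a much smaller problem than the paper's. This sketch is not the paper's implementation: it substitutes a linear model with squared loss for a deep network, and finite-difference gradient descent with backtracking for the paper's optimization; all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real" training set: a toy regression task standing in for a large
# dataset (the paper condenses vision benchmarks for deep networks).
X_real = rng.normal(size=(200, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y_real = X_real @ w_true

# Small synthetic set to be learned: 2 samples instead of 200.
y_syn = rng.normal(size=2)
w = rng.normal(size=4)  # current model weights, held fixed here

def grad_wrt_weights(X, y, w):
    """Gradient of mean squared error with respect to model weights."""
    return X.T @ (X @ w - y) / len(y)

g_real = grad_wrt_weights(X_real, y_real, w)  # target gradient

def matching_loss(x_flat):
    """Squared distance between real-data and synthetic-data gradients."""
    g_syn = grad_wrt_weights(x_flat.reshape(2, 4), y_syn, w)
    return float(np.sum((g_syn - g_real) ** 2))

# Optimize the synthetic samples themselves to minimize the matching
# loss, using finite-difference gradients and a backtracking step size.
x = rng.normal(size=8)
eps = 1e-5
loss_before = matching_loss(x)
for _ in range(100):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (matching_loss(x + d) - matching_loss(x - d)) / (2 * eps)
    step = 0.5
    while step > 1e-8 and matching_loss(x - step * g) >= matching_loss(x):
        step *= 0.5  # shrink the step until the loss decreases
    x -= step * g
loss_after = matching_loss(x)
print(f"matching loss: {loss_before:.4f} -> {loss_after:.6f}")
```

In the paper's setting the same objective is minimized over synthetic images by backpropagation, with the network weights also being trained, rather than held fixed as in this toy version.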
Data Distillation: A Survey
The popularity of deep learning has led to the curation of a vast number of
massive and multifarious datasets. Despite having close-to-human performance on
individual tasks, training parameter-hungry models on large datasets poses
multi-faceted problems such as (a) high model-training time; (b) slow research
iteration; and (c) poor eco-sustainability. As an alternative, data
distillation approaches aim to synthesize terse data summaries, which can serve
as effective drop-in replacements of the original dataset for scenarios like
model training, inference, architecture search, etc. In this survey, we present
a formal framework for data distillation, along with providing a detailed
taxonomy of existing approaches. Additionally, we cover data distillation
approaches for different data modalities, namely images, graphs, and user-item
interactions (recommender systems), while also identifying current challenges
and future research directions.
Comment: Accepted at TMLR '23. 21 pages, 4 figures.
Towards Data-centric Graph Machine Learning: Review and Outlook
Data-centric AI, with its primary focus on the collection, management, and
utilization of data to drive AI models and applications, has attracted
increasing attention in recent years. In this article, we conduct an in-depth
and comprehensive review, offering a forward-looking outlook on the current
efforts in data-centric AI pertaining to graph data, the fundamental data
structure for representing and capturing intricate dependencies among massive
and diverse real-life entities. We introduce a systematic framework,
Data-centric Graph Machine Learning (DC-GML), that encompasses all stages of
the graph data lifecycle, including graph data collection, exploration,
improvement, exploitation, and maintenance. A thorough taxonomy of each stage
is presented to answer three critical graph-centric questions: (1) how to
enhance graph data availability and quality; (2) how to learn from graph data
of limited availability and low quality; (3) how to build graph MLOps systems
from the graph data-centric view. Lastly, we pinpoint the future prospects of
the DC-GML domain, providing insights to navigate its advancements and
applications.
Comment: 42 pages, 9 figures.