Rank-based linkage I: triplet comparisons and oriented simplicial complexes
Rank-based linkage is a new tool for summarizing a collection of objects
according to their relationships. These objects are not mapped to vectors, and
``similarity'' between objects need be neither numerical nor symmetric. All
an object needs to do is rank nearby objects by similarity to itself, using a
Comparator which is transitive but need not be consistent with any metric on
the whole set. Call this a ranking system on the set of objects. Rank-based
linkage is applied to the K-nearest neighbor digraph derived from a ranking
system. Computations occur on a 2-dimensional abstract oriented simplicial
complex whose faces are among the points, edges, and triangles of the line
graph of the undirected K-nearest neighbor graph. The algorithm builds an
edge-weighted linkage graph in which the weight of a link is called the
in-sway between the two objects it joins. Take the links whose in-sway is at
least some threshold, and partition the objects into connected components of
the resulting graph, for varying threshold. Rank-based linkage is a
functor from a category of out-ordered digraphs to a category of partitioned
sets, with the practical consequence that augmenting the set of objects in a
rank-respectful way gives a fresh clustering which does not ``rip apart'' the
previous one. The same holds for single linkage clustering in the metric space
context, but not for typical optimization-based methods. Open combinatorial
problems are presented in the last section.
Comment: 37 pages, 12 figures
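As a rough illustration of the final clustering step described above (keep the links whose in-sway is at least a threshold, then take connected components), here is a minimal union-find sketch. The objects and in-sway weights are hypothetical; the paper's actual in-sway computation on the simplicial complex is not reproduced here.

```python
from collections import defaultdict

def components_at_threshold(points, weighted_edges, theta):
    """Partition `points` into connected components of the graph whose
    links have in-sway weight >= theta (union-find with path halving)."""
    parent = {p: p for p in points}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v, w in weighted_edges:
        if w >= theta:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    groups = defaultdict(set)
    for p in points:
        groups[find(p)].add(p)
    return {frozenset(g) for g in groups.values()}

# Hypothetical in-sway weights on four objects.
edges = [("a", "b", 3), ("b", "c", 2), ("c", "d", 1)]
parts = components_at_threshold("abcd", edges, theta=2)
```

Raising the threshold can only refine the partition, which is the behaviour the functoriality result above relies on.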
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using the virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, and it
allows users, scholars, and entrepreneurs to gain an in-depth understanding of
the Metaverse ecosystem and to identify their opportunities for contribution.
Audio-Visual Automatic Speech Recognition Towards Education for Disabilities
Education is a fundamental right that enriches everyone’s life. However, physically challenged people are often excluded from general and advanced education systems. An Audio-Visual Automatic Speech Recognition (AV-ASR) based system can improve the education of physically challenged people by providing hands-free computing: they communicate with the learning system through AV-ASR. However, tracking the lips correctly for the visual modality is challenging. This paper therefore addresses appearance-based visual features together with a co-occurrence statistical measure for visual speech recognition: Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) and the Grey-Level Co-occurrence Matrix (GLCM) are proposed for extracting visual speech information. Experimental results show that the proposed system achieves 76.60% accuracy for visual speech recognition and 96.00% accuracy for audio speech recognition.
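As a rough sketch of one of the two descriptors named above, the following computes a grey-level co-occurrence matrix directly in NumPy; the image, offset, and number of grey levels are illustrative choices, not those of the paper.

```python
import numpy as np

def glcm(image, dr=0, dc=1, levels=4):
    """Count co-occurrences of grey levels (i, j) at pixel offset (dr, dc)."""
    m = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r, c], image[r2, c2]] += 1
    return m

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [3, 3, 3]])
M = glcm(img)  # horizontal neighbour pairs

# Texture statistics such as contrast follow from the normalised matrix.
P = M / M.sum()
i, j = np.indices(M.shape)
contrast = ((i - j) ** 2 * P).sum()
```

Statistics like this, pooled over mouth-region frames, are the kind of co-occurrence features the abstract refers to.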
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning and has already outpaced
the best human-designed architectures on many tasks. In the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.
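To make the taxonomy concrete, the sketch below runs random search, one of the simplest NAS algorithms covered by such surveys. The search space and the scoring function (a cheap stand-in for an actual training-and-validation run) are hypothetical.

```python
import random

# Hypothetical architecture search space.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
    "op": ["conv3x3", "conv5x5", "skip"],
}

def mock_score(arch):
    """Stand-in for validation accuracy obtained by training `arch`."""
    return (0.01 * arch["depth"] + 0.0001 * arch["width"]
            + 0.05 * (arch["op"] == "conv3x3"))

def random_search(n_trials, rng):
    """Sample architectures uniformly and keep the best-scoring one."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        s = mock_score(arch)
        if s > best_score:
            best, best_score = arch, s
    return best, best_score

best, score = random_search(50, random.Random(0))
```

Random search is a standard NAS baseline; the speedup techniques surveyed (weight sharing, performance prediction, early stopping) mostly aim to make the `mock_score` step cheaper.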
Qluster: An easy-to-implement generic workflow for robust clustering of health data
Exploring health data with clustering algorithms makes it possible to better describe the populations of interest by seeking the sub-profiles that compose them. This reinforces medical knowledge, whether about a disease or a targeted real-life population. Nevertheless, contrary to so-called conventional biostatistical methods, for which numerous guidelines exist, the standardization of data-science approaches in clinical research remains a little-discussed subject. The result is significant variability in how data-science projects are executed, whether in the algorithms used or in the reliability and credibility of the designed approach. Taking the path of a parsimonious and judicious choice of both algorithms and implementations at each stage, this article proposes Qluster, a practical workflow for performing clustering tasks. The workflow strikes a compromise between (1) generality of application (e.g., usable on small or big data, on continuous, categorical, or mixed variables, and on high-dimensional databases or not), (2) ease of implementation (few packages, few algorithms, few parameters, ...), and (3) robustness (e.g., use of proven algorithms and robust packages, evaluation of cluster stability, and management of noise and multicollinearity). The workflow can easily be automated and/or applied routinely across a wide range of clustering projects. It can be useful both for data scientists with little experience in the field, by making data clustering easier and more robust, and for more experienced data scientists looking for a straightforward, reliable solution for routine preliminary data mining. A synthesis of the literature on data clustering, together with the scientific rationale supporting the proposed workflow, is also provided. Finally, a detailed application of the workflow to a concrete use case is given, along with a practical discussion for data scientists.
An implementation on the Dataiku platform is available upon request to the authors.
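As an illustrative, much-simplified analogue of one stage of such a workflow (not the Qluster implementation itself), the sketch below standardizes synthetic data and selects the number of clusters by silhouette score with scikit-learn; the data and parameter ranges are made up.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two synthetic, well-separated "patient sub-profiles".
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
               rng.normal(5.0, 1.0, (50, 3))])
Xs = StandardScaler().fit_transform(X)

# Pick the number of clusters that maximises the silhouette score.
scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    scores[k] = silhouette_score(Xs, labels)
best_k = max(scores, key=scores.get)
```

A production workflow would add the robustness checks the abstract lists, such as re-running the selection on bootstrap resamples to assess cluster stability.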
Open Set Classification of GAN-based Image Manipulations via a ViT-based Hybrid Architecture
Classification of AI-manipulated content is receiving great attention, for
distinguishing different types of manipulations. Most of the methods developed
so far fail in the open-set scenario, that is when the algorithm used for the
manipulation is not represented by the training set. In this paper, we focus on
the classification of synthetic face generation and manipulation in open-set
scenarios, and propose a method for classification with a rejection option. The
proposed method combines the use of Vision Transformers (ViT) with a hybrid
approach for simultaneous classification and localization. Feature map
correlation is exploited by the ViT module, while a localization branch is
employed as an attention mechanism to force the model to learn per-class
discriminative features associated with the forgery when the manipulation is
performed locally in the image. Rejection is performed by considering several
strategies and analyzing the model output layers. The effectiveness of the
proposed method is assessed for the task of classification of facial attribute
editing and GAN attribution.
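One of the simplest rejection strategies of the kind analysed above is thresholding the maximum softmax probability of the output layer; the sketch below illustrates the idea with hypothetical logits, not the paper's ViT outputs.

```python
import numpy as np

def classify_with_rejection(logits, tau=0.8):
    """Return the arg-max class, or -1 (reject) when max softmax prob < tau."""
    z = logits - logits.max()            # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    return int(probs.argmax()) if probs.max() >= tau else -1

confident = np.array([8.0, 0.5, 0.2])    # peaked distribution: accept class 0
uncertain = np.array([1.0, 0.9, 0.8])    # near-uniform: reject as open-set
```

In the open-set setting, samples produced by a manipulation algorithm absent from the training set should land in the rejected class rather than be forced into a known one.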
Leveraging Hidden Positives for Unsupervised Semantic Segmentation
Dramatic demand for manpower to label pixel-level annotations triggered the
advent of unsupervised semantic segmentation. Although the recent work
employing the vision transformer (ViT) backbone shows exceptional performance,
there is still a lack of consideration for task-specific training guidance and
local semantic consistency. To tackle these issues, we leverage contrastive
learning by excavating hidden positives to learn rich semantic relationships
and ensure semantic consistency in local regions. Specifically, we first
discover two types of global hidden positives, task-agnostic and task-specific
ones for each anchor based on the feature similarities defined by a fixed
pre-trained backbone and a segmentation head-in-training, respectively. A
gradual increase in the contribution of the latter induces the model to capture
task-specific semantic features. In addition, we introduce a gradient
propagation strategy to learn semantic consistency between adjacent patches,
under the inherent premise that nearby patches are highly likely to possess the
same semantics. Specifically, we add the loss propagating to local hidden
positives, semantically similar nearby patches, in proportion to the predefined
similarity scores. With these training schemes, our proposed method achieves
new state-of-the-art (SOTA) results on the COCO-Stuff, Cityscapes, and
Potsdam-3 datasets. Our code is available at: https://github.com/hynnsk/HP.
Comment: Accepted to CVPR 202
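As an illustrative sketch of the positive-mining idea (not the authors' implementation), the snippet below selects, for an anchor feature, the most cosine-similar other features as its hidden positives; the feature vectors and top-k value are made up, and the paper's scheduled mixing of task-agnostic and task-specific similarities is omitted.

```python
import numpy as np

def hidden_positives(features, anchor_idx, top_k=2):
    """Indices of the top_k features most cosine-similar to the anchor."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f[anchor_idx]
    sims[anchor_idx] = -np.inf           # never pick the anchor itself
    return np.argsort(sims)[::-1][:top_k]

feats = np.array([[1.0, 0.0],
                  [0.9, 0.1],
                  [0.0, 1.0],
                  [0.8, 0.3]])
pos = hidden_positives(feats, anchor_idx=0)
```

Such mined indices then serve as the positive set in a contrastive loss, pulling semantically similar patches together without any pixel labels.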
Procedure-Aware Pretraining for Instructional Video Understanding
Our goal is to learn a video representation that is useful for downstream
procedure understanding tasks in instructional videos. Due to the small amount
of available annotations, a key challenge in procedure understanding is to be
able to extract from unlabeled videos the procedural knowledge such as the
identity of the task (e.g., 'make latte'), its steps (e.g., 'pour milk'), or
the potential next steps given partial progress in its execution. Our main
insight is that instructional videos depict sequences of steps that repeat
between instances of the same or different tasks, and that this structure can
be well represented by a Procedural Knowledge Graph (PKG), where nodes are
discrete steps and edges connect steps that occur sequentially in the
instructional activities. This graph can then be used to generate pseudo labels
to train a video representation that encodes the procedural knowledge in a more
accessible form to generalize to multiple procedure understanding tasks. We
build a PKG by combining information from a text-based procedural knowledge
database and an unlabeled instructional video corpus and then use it to
generate training pseudo labels with four novel pre-training objectives. We
call this PKG-based pre-training procedure and the resulting model Paprika,
Procedure-Aware PRe-training for Instructional Knowledge Acquisition. We
evaluate Paprika on COIN and CrossTask for procedure understanding tasks such
as task recognition, step recognition, and step forecasting. Paprika yields a
video representation that improves over the state of the art: up to 11.23%
gains in accuracy in 12 evaluation settings. Implementation is available at
https://github.com/salesforce/paprika.
Comment: CVPR 202
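As a toy illustration of the PKG idea (not the Paprika implementation), the sketch below builds a step graph from made-up step sequences and reads next-step pseudo labels off its out-edges.

```python
from collections import defaultdict

def build_pkg(step_sequences):
    """Nodes are steps; an edge a -> b means b was observed right after a."""
    graph = defaultdict(set)
    for seq in step_sequences:
        for a, b in zip(seq, seq[1:]):
            graph[a].add(b)
    return graph

def next_step_pseudo_labels(graph, step):
    """Possible next steps for `step`, usable as pre-training pseudo labels."""
    return sorted(graph.get(step, set()))

# Made-up step sequences standing in for parsed instructional videos.
videos = [["grind beans", "pour water", "press plunger"],
          ["grind beans", "pour water", "add milk"]]
pkg = build_pkg(videos)
```

Because steps repeat across tasks, even this tiny graph already encodes that "pour water" admits several continuations, which is exactly the kind of structure the pre-training objectives exploit.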
The cosmic waltz of Coma Berenices and Latyshev 2 (Group X). Membership, phase-space structure, mass, and energy distributions
Context. Open clusters (OCs) are fundamental benchmarks where theories of
star formation and stellar evolution can be tested and validated. Coma Ber and
Latyshev 2 (Group X) are the second and third OCs closest to the Sun, making
them excellent targets to search for low-mass stars and ultra-cool dwarfs. In
addition, this pair will experience a flyby in 10-16 Myr, which makes it a
benchmark for testing interactions between OC pairs. Aims. We aim to analyse the
membership, luminosity, mass, phase-space (i.e., positions and velocities), and
energy distributions for Coma Ber and Latyshev 2 and test the hypothesis of the
mixing of their populations at the encounter time. Methods. We develop a new
phase-space membership methodology and apply it to Gaia data. With the
recovered members we infer the phase-space, luminosity and mass distributions
using publicly available Bayesian inference codes. Then, with a publicly
available orbit integration code and members' positions and velocities, we
integrate their orbits 20 Myr into the future. Results. In Coma Ber, we
identify 302 candidate members distributed in the core and tidal tails. The
tails are dynamically cold and asymmetrically populated. The stellar system
called Group X is made of two structures: the disrupted OC Latyshev 2 (186
candidate members) and a loose stellar association called Mecayotl 1 (146
candidate members); both of these structures will fly by Coma Ber, and by each
other, within the next 20 Myr. Conclusions.
We study the dynamical properties of the core and tails of Coma Ber and also
confirm the existence of the OC Latyshev 2 and its neighbour stellar
association Mecayotl 1. Although these three systems will experience encounters
we find no evidence supporting the mixing of their populations.Comment: 25 pages, 19 figures, accepted for publication in Astronomy &
Astrophysic
Machine Learning Research Trends in Africa: A 30 Years Overview with Bibliometric Analysis Review
In this paper, a critical bibliometric analysis study is conducted, coupled
with an extensive literature survey on recent developments and associated
applications in machine learning research with a perspective on Africa. The
presented bibliometric analysis study consists of 2761 machine learning-related
documents, of which 98% were articles with at least 482 citations published in
903 journals during the past 30 years. Furthermore, the collated documents were
retrieved from the Science Citation Index EXPANDED, comprising research
publications from 54 African countries between 1993 and 2021. The bibliometric
study visualizes the current landscape and future trends in machine learning
research and its applications, so as to facilitate future collaborative
research and knowledge exchange among authors from different research
institutions scattered across the African continent.