Biblio-Analysis of Cohort Intelligence (CI) Algorithm and its allied applications from Scopus and Web of Science Perspective
Cohort Intelligence (CI) is a novel optimization algorithm. Since its
inception, it has been applied successfully in various domains within a short
span of time, and its results have proved effective in contrast to algorithms
of its kind. To date, no bibliometric analysis has been carried out on CI and
its related applications, so this paper serves as an ice breaker for those who
want to take CI to a new level. In this paper, CI publications indexed in
Scopus are analyzed through graphs and network diagrams covering authors,
source titles, keywords over the years, and journals over time. The paper thus
showcases CI and its applications and details a systematic review of its
bibliometric particulars.
MOOCs Meet Measurement Theory: A Topic-Modelling Approach
This paper adapts topic models to the psychometric testing of MOOC students
based on their online forum postings. Measurement theory from education and
psychology provides statistical models for quantifying a person's attainment of
intangible attributes such as attitudes, abilities or intelligence. Such models
infer latent skill levels by relating them to individuals' observed responses
on a series of items such as quiz questions. The set of items can be used to
measure a latent skill if individuals' responses on them conform to a Guttman
scale. Such well-scaled items differentiate between individuals, and inferred
levels span the entire range from the most basic to the most advanced. In practice,
education researchers manually devise items (quiz questions) while optimising
well-scaled conformance. Due to the costly nature and expert requirements of
this process, psychometric testing has found limited use in everyday teaching.
We aim to develop usable measurement models for highly-instrumented MOOC
delivery platforms, by using participation in automatically-extracted online
forum topics as items. The challenge is to formalise the Guttman scale
educational constraint and incorporate it into topic models. To favour topics
that automatically conform to a Guttman scale, we introduce a novel
regularisation into non-negative matrix factorisation-based topic modelling. We
demonstrate the suitability of our approach with both quantitative experiments
on three Coursera MOOCs, and with a qualitative survey of topic
interpretability on two MOOCs by domain expert interviews. Comment: 12 pages, 9 figures; accepted into AAAI'201
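The abstract builds its measurement model on top of non-negative matrix factorisation (NMF). Below is a minimal sketch of plain NMF via multiplicative updates; the paper's Guttman-scale regulariser is paper-specific and is not reproduced here, and the toy matrix, function name, and parameters are illustrative assumptions only.

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, seed=0):
    """Unregularised NMF via Lee-Seung multiplicative updates,
    minimising ||X - WH||_F^2. The paper adds a Guttman-scale
    regulariser on top of an objective like this one; that term
    is NOT reproduced here -- this is only the baseline."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-9  # avoids division by zero in the update rules
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity of W and H
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "students x forum terms" matrix standing in for real MOOC data
X = np.abs(np.random.default_rng(1).random((20, 12)))
W, H = nmf_multiplicative(X, rank=3)
err = np.linalg.norm(X - W @ H)
```

Rows of `H` play the role of extracted forum topics; the per-student loadings in `W` are what a Guttman-style constraint would shape into an ordered scale.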
Chaotic Quantum Double Delta Swarm Algorithm using Chebyshev Maps: Theoretical Foundations, Performance Analyses and Convergence Issues
Quantum Double Delta Swarm (QDDS) Algorithm is a new metaheuristic algorithm
inspired by the convergence mechanism to the center of potential generated
within a single well of a spatially co-located double-delta well setup. It
mimics the wave nature of candidate positions in solution spaces and draws upon
quantum mechanical interpretations much like other quantum-inspired
computational intelligence paradigms. In this work, we introduce a Chebyshev
map driven chaotic perturbation in the optimization phase of the algorithm to
diversify weights placed on contemporary and historical, socially-optimal
agents' solutions. We follow this up with a characterization of solution
quality on a suite of 23 single-objective functions and carry out a comparative
analysis with eight other related nature-inspired approaches. By comparing
solution quality and successful runs over dynamic solution ranges, insights
about the nature of convergence are obtained. A two-tailed t-test establishes
the statistical significance of the solution data, whereas Cohen's d and Hedges'
g values provide a measure of effect sizes. We trace the trajectory of the
fittest pseudo-agent over all function evaluations to comment on the dynamics
of the system and prove that the proposed algorithm is theoretically globally
convergent under the assumptions adopted for proofs of other closely-related
random search algorithms. Comment: 27 pages, 4 figures, 19 tables
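The Chebyshev map the abstract refers to is a standard chaotic map, x_{k+1} = cos(k·arccos(x_k)), which is chaotic on [-1, 1] for integer degree k ≥ 2. The sketch below generates such a sequence and rescales it to perturbation weights; the weighting scheme and all names are illustrative assumptions, not the paper's exact update rule.

```python
import math

def chebyshev_map_sequence(x0, degree=4, n=100):
    """Iterate the Chebyshev chaotic map x_{k+1} = cos(degree * acos(x_k)).
    For integer degree >= 2 the orbit is chaotic on [-1, 1]; such
    sequences are a common source of chaotic perturbations in
    metaheuristics like the one described in the abstract."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(math.cos(degree * math.acos(xs[-1])))
    return xs

seq = chebyshev_map_sequence(0.7, degree=4, n=50)
# Rescale chaotic values from [-1, 1] into weights in [0, 1],
# e.g. to diversify the mix of current vs. historical best solutions
weights = [(x + 1.0) / 2.0 for x in seq]
```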
Toward Open-Set Face Recognition
Much research has been conducted on both face identification and face
verification, with greater focus on the latter. Research on face identification
has mostly focused on using closed-set protocols, which assume that all probe
images used in evaluation contain identities of subjects that are enrolled in
the gallery. Real systems, however, where only a fraction of probe sample
identities are enrolled in the gallery, cannot make this closed-set assumption.
Instead, they must assume an open set of probe samples and be able to
reject/ignore those that correspond to unknown identities. In this paper, we
address the widespread misconception that thresholding verification-like scores
is a good way to solve the open-set face identification problem, by formulating
an open-set face identification protocol and evaluating different strategies
for assessing similarity. Our open-set identification protocol is based on the
canonical Labeled Faces in the Wild (LFW) dataset. In addition to the known
identities, we introduce to the biometric community the concepts of known
unknowns (known but uninteresting persons) and unknown unknowns (people never
seen before). We compare three algorithms for assessing similarity in a
deep feature space under an open-set protocol: thresholded verification-like
scores, linear discriminant analysis (LDA) scores, and extreme value machine
(EVM) probabilities. Our findings suggest that thresholding EVM probabilities,
which are open-set by design, outperforms thresholding verification-like
scores. Comment: Accepted for publication in the CVPR 2017 Biometrics Workshop
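The baseline the abstract argues against, thresholding verification-like scores, can be sketched in a few lines: accept the best gallery match only if its similarity clears a threshold, otherwise reject the probe as unknown. The feature vectors, labels, and threshold below are made-up illustrations of the protocol, not the paper's data or method.

```python
import numpy as np

def identify_open_set(probe, gallery, labels, threshold=0.5):
    """Thresholded open-set identification with cosine similarity.
    Returns the best-matching gallery label if its similarity clears
    the threshold, otherwise "unknown". This is the thresholded
    verification-like-score baseline, shown only to illustrate the
    open-set protocol."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    sims = g @ p                      # cosine similarity to each enrollee
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return "unknown"              # reject: probe matches no one well
    return labels[best]

# Toy 3-d "deep features" for a two-person gallery
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
labels = ["alice", "bob"]
known_probe = np.array([0.9, 0.1, 0.0])    # close to "alice"
unknown_probe = np.array([0.0, 0.0, 1.0])  # matches nobody well
```

The paper's point is that replacing these raw similarity scores with calibrated, open-set-by-design probabilities (such as the EVM's) rejects unknowns more reliably than tuning this single threshold.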