7,638 research outputs found
SIMCO: SIMilarity-based object COunting
We present SIMCO, the first agnostic multi-class object counting approach.
SIMCO starts by detecting foreground objects through a novel Mask RCNN-based
architecture trained beforehand (just once) on a brand-new synthetic 2D shape
dataset, InShape; the idea is to highlight every object resembling a primitive
2D shape (circle, square, rectangle, etc.). Each object detected is described
by a low-dimensional embedding, obtained from a novel similarity-based head
branch; the latter implements a triplet loss, encouraging similar objects
(same 2D shape, color, and scale) to map close together. Subsequently, SIMCO uses this
embedding for clustering, so that different types of objects can emerge and be
counted, making SIMCO the very first multi-class unsupervised counter.
Experiments show that SIMCO achieves state-of-the-art scores on counting
benchmarks and that it can also help in many challenging image-understanding
tasks.
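The training objective of the similarity head can be illustrated with a minimal, self-contained sketch (plain NumPy, not the paper's actual Mask RCNN branch); the margin value and the toy 4-d embeddings are illustrative assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the positive (same shape/color/scale)
    toward the anchor and push the negative at least `margin` away."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: two similar objects and one dissimilar object.
a = np.array([1.0, 0.0, 0.0, 0.0])
p = np.array([0.9, 0.1, 0.0, 0.0])  # same class -> small distance
n = np.array([0.0, 1.0, 0.0, 0.0])  # different class -> large distance
print(triplet_loss(a, p, n))  # 0.0: this triplet already satisfies the margin
```

Embeddings trained this way place same-type objects in tight groups, which is what makes the subsequent clustering-and-counting step possible.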
EC3: Combining Clustering and Classification for Ensemble Learning
Classification and clustering algorithms have individually proved successful
in different contexts. Both have their own advantages and
limitations. For instance, although classification algorithms are more powerful
than clustering methods in predicting class labels of objects, they do not
perform well when there is a lack of sufficient manually labeled reliable data.
On the other hand, although clustering algorithms do not produce label
information for objects, they provide supplementary constraints (e.g., if two
objects are clustered together, it is more likely that the same label is
assigned to both of them) that one can leverage for label prediction of a set
of unknown objects. Therefore, systematic utilization of both these types of
algorithms together can lead to better prediction performance. In this paper,
we propose a novel algorithm, called EC3, that merges classification and
clustering in order to support both binary and multi-class
classification. EC3 is based on a principled combination of multiple
classification and multiple clustering methods using an optimization function.
We theoretically show the convexity and optimality of the problem and solve it
by a block coordinate descent method. We additionally propose iEC3, a variant of
EC3 that handles imbalanced training data. We perform an extensive experimental
analysis by comparing EC3 and iEC3 with 14 baseline methods (7 well-known
standalone classifiers, 5 ensemble classifiers, and 2 existing methods that
merge classification and clustering) on 13 standard benchmark datasets. We show
that our methods outperform the other baselines on every single dataset, achieving
up to 10% higher AUC. Moreover, our methods are faster (1.21 times faster than
the best baseline) and more resilient to noise and class imbalance than the best
baseline method.
Comment: 14 pages, 7 figures, 11 tables
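As a purely illustrative sketch (not EC3's actual convex objective or its block coordinate descent solver), the constraint "co-clustered objects likely share a label" can be emulated by smoothing averaged classifier probabilities within each cluster; the `alpha` weight and the toy data are assumptions:

```python
import numpy as np

def combine(classifier_probs, cluster_labels, alpha=0.5):
    """Hypothetical fusion in the spirit of EC3: average the base
    classifiers' class probabilities, then pull each object's
    distribution toward the mean distribution of its cluster."""
    avg = np.mean(classifier_probs, axis=0)  # (n_objects, n_classes)
    smoothed = avg.copy()
    for c in np.unique(cluster_labels):
        idx = cluster_labels == c
        smoothed[idx] = alpha * avg[idx] + (1 - alpha) * avg[idx].mean(axis=0)
    return smoothed.argmax(axis=1)

# Two weak classifiers on 4 objects, 2 classes;
# objects {0,1} and {2,3} are clustered together.
probs = np.array([
    [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.3, 0.7]],  # classifier 1
    [[0.8, 0.2], [0.6, 0.4], [0.1, 0.9], [0.6, 0.4]],  # classifier 2
])
clusters = np.array([0, 0, 1, 1])
print(combine(probs, clusters))  # [0 0 1 1]
```

Note how object 1, on which the classifiers disagree, is resolved by its cluster companions, which is exactly the supplementary constraint the abstract describes.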
Magic Sets for Disjunctive Datalog Programs
In this paper, a new technique for the optimization of (partially) bound
queries over disjunctive Datalog programs with stratified negation is
presented. The technique exploits the propagation of query bindings and extends
the Magic Set (MS) optimization technique.
An important feature of disjunctive Datalog is nonmonotonicity, which calls
for nondeterministic implementations, such as backtracking search. A
distinguishing characteristic of the new method is that the optimization can be
exploited also during the nondeterministic phase. In particular, after some
assumptions have been made during the computation, parts of the program may
become irrelevant to a query under these assumptions. This allows for dynamic
pruning of the search space. In contrast, the effect of the previously defined
MS methods for disjunctive Datalog is limited to the deterministic portion of
the process. In this way, the potential performance gain from using the proposed
method can be exponential, as observed empirically.
The correctness of MS is established via a strong relationship between
MS and unfounded sets that had not previously been studied in the literature.
This insight allows the method to be extended naturally to programs with
stratified negation.
The proposed method has been implemented in DLV and various experiments have
been conducted. Experimental results on synthetic data confirm the utility of
MS for disjunctive Datalog, and they highlight the computational gain that may
be obtained by the new method w.r.t. the previously proposed MS methods for
disjunctive Datalog programs. Further experiments on real-world data show the
benefits of MS within an application scenario that has received considerable
attention in recent years, the problem of answering user queries over possibly
inconsistent databases originating from integration of autonomous sources of
information.
Comment: 67 pages, 19 figures, preprint submitted to Artificial Intelligence
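The binding-propagation idea behind Magic Sets can be sketched in miniature (an illustrative analogue in Python, not the DLV implementation; the `parent`/`ancestor` program and constants are assumptions): for a query ancestor(a, X) whose first argument is bound, evaluation can be restricted to facts reachable from the binding instead of materializing the whole transitive closure:

```python
from collections import deque

# Toy Datalog EDB: parent facts forming two disjoint chains.
parent = {("a", "b"), ("b", "c"), ("x", "y"), ("y", "z")}

def ancestor_full():
    """Naive bottom-up evaluation: the full transitive closure of `parent`."""
    anc = set(parent)
    changed = True
    while changed:
        changed = False
        for (u, v) in list(anc):
            for (v2, w) in parent:
                if v2 == v and (u, w) not in anc:
                    anc.add((u, w))
                    changed = True
    return anc

def ancestor_bound(x):
    """Magic-set-style evaluation of the bound query ancestor(x, ?):
    the binding x is propagated so only facts relevant to x are touched."""
    reached, todo = set(), deque([x])
    while todo:
        u = todo.popleft()
        for (p, c) in parent:
            if p == u and c not in reached:
                reached.add(c)
                todo.append(c)
    return {(x, w) for w in reached}

print(sorted(ancestor_full()))       # whole closure, including the x/y/z chain
print(sorted(ancestor_bound("a")))   # only [('a', 'b'), ('a', 'c')]
```

The bound evaluation never touches the x/y/z chain; the paper's contribution is to keep this kind of pruning available even during the nondeterministic phase of disjunctive evaluation.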
MIMIC Models for Uniform and Nonuniform DIF as Moderated Mediation Models.
In this article, the authors describe how multiple indicators, multiple causes (MIMIC) models for studying uniform and nonuniform differential item functioning (DIF) can be conceptualized as mediation and moderated mediation models. Conceptualizing DIF within the context of a moderated mediation model helps to understand DIF as the effect of some variable on measurements that is not accounted for by the latent variable of interest. In addition, useful concepts and ideas from the mediation and moderation literature can be applied to DIF analysis: (a) improving the understanding of uniform and nonuniform DIF as direct effects and interactions, (b) understanding the implication of indirect effects in DIF analysis, (c) clarifying the interpretation of the "uniform DIF parameter" in the presence of nonuniform DIF, and (d) probing interactions and using the concept of "conditional effects" to better understand the patterns of DIF across the range of the latent variable.
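A hedged illustration of the regression view described above (hypothetical simulated data, not the authors' models): uniform DIF appears as a direct group effect on the item score, nonuniform DIF as a group-by-latent-variable interaction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
theta = rng.normal(size=n)              # latent variable
group = rng.integers(0, 2, size=n)      # grouping covariate (e.g. gender)

# Assumed true effects: 0.5 direct group effect (uniform DIF) and
# 0.3 group x latent interaction (nonuniform DIF).
y = 1.0 * theta + 0.5 * group + 0.3 * group * theta \
    + rng.normal(scale=0.1, size=n)

# Recover the effects by least squares on the moderated regression.
X = np.column_stack([np.ones(n), theta, group, group * theta])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))  # [intercept, latent effect, uniform DIF, nonuniform DIF]
```

The third and fourth coefficients are precisely the "direct effect" and "interaction" through which the article reinterprets uniform and nonuniform DIF.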
Microscopic Aspects of Stretched Exponential Relaxation (SER) in Homogeneous Molecular and Network Glasses and Polymers
Because the theory of SER is still a work in progress, the phenomenon itself
can be said to be the oldest unsolved problem in science, as it started with
Kohlrausch in 1847. Many electrical and optical phenomena exhibit SER, with
probe relaxation I(t) ~ exp[-(t/τ)^β], where 0 < β < 1. Here
τ is a material-sensitive parameter, useful for discussing chemical
trends. The "shape" parameter β is dimensionless and plays the role of a
non-equilibrium scaling exponent; its value, especially in glasses, is both
practically useful and theoretically significant. The mathematical complexity
of SER is such that rigorous derivations of this peculiar function were not
achieved until the 1970s. The focus of much of the pioneering 1970s work was
spatial relaxation of electronic charge, but SER is a universal phenomenon, and
today atomic and molecular relaxation of glasses and deeply supercooled liquids
provide the most reliable data. As the data base grew, the need for a
quantitative theory increased; this need was finally met by the
diffusion-to-traps topological model, which yields a remarkably simple
expression for the shape parameter β, given by d*/(d* + 2). At first
sight this expression appears to be identical to d/(d + 2), where d is the
actual spatial dimensionality, as originally derived. The original model,
however, failed to explain much of the data base. Here the theme of earlier
reviews (based on the observation that in the presence of short-range forces
only, d* = d = 3 is the actual spatial dimensionality, while for mixed short-
and long-range forces d* = fd = d/2) is applied to four new spectacular
examples, where it turns out that SER is useful not only for purposes of
quality control, but also for defining what is meant by a glass in novel
contexts. (Please see the full abstract in the main text.)
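The quantities named in the abstract can be written down directly (the values follow from the stated formula β = d*/(d* + 2); the sample times and τ below are arbitrary):

```python
import numpy as np

def ser(t, tau, beta):
    """Stretched exponential (Kohlrausch) relaxation I(t) = exp[-(t/tau)^beta]."""
    return np.exp(-(t / tau) ** beta)

def beta_traps(d_star):
    """Shape parameter from the diffusion-to-traps model: beta = d*/(d* + 2)."""
    return d_star / (d_star + 2)

print(beta_traps(3))      # short-range forces, d* = d = 3   -> 0.6
print(beta_traps(3 / 2))  # mixed forces, d* = d/2 = 3/2     -> 3/7 ≈ 0.4286
print(ser(np.array([0.5, 1.0, 2.0]), tau=1.0, beta=0.6))
```

The two β values, 3/5 and 3/7, are the "magic" exponents that the diffusion-to-traps model predicts for the short-range and mixed-force cases, respectively.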
Automatic identification of the number of clusters in hierarchical clustering
Hierarchical clustering is one of the most suitable tools for discovering the underlying true structure of a dataset in unsupervised learning, where the ground truth is unknown and classical machine learning classifiers are not applicable. In many real applications it provides a perspective on the inner data structure and is preferred to partitional methods. However, determining the resulting number of clusters in hierarchical clustering requires human expertise to deduce it from the dendrogram, and this represents a major challenge in building a fully automatic system such as those required for decision support in Industry 4.0. This research proposes a general criterion to cut a dendrogram automatically, by comparing six original criteria based on the Calinski-Harabasz index. The performance of each criterion on 95 real-life dendrograms of different topologies is evaluated against the number of classes proposed by experts, and a winning criterion is determined. This research is framed within a larger project to build an Intelligent Decision Support System to assess the performance of 3D printers based on sensor data in real time, although the proposed criteria can be used in other real applications of hierarchical clustering. The methodology is applied to a real-life dataset from the 3D printers, and the large reduction in CPU time is shown by comparing the CPU time before and after this modification of the entire clustering method. It also reduces the dependence on a human expert to provide the number of clusters by inspecting the dendrogram. Further, such a process allows hierarchical clustering to be applied in an automatic mode in real-life industrial applications, allows the continuous monitoring of real 3D printers in production, and helps in building an Intelligent Decision Support System to detect operational modes, anomalies, and other behavioral patterns.
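As a hedged sketch of the underlying idea (NumPy only; the paper compares six criteria on 95 real dendrograms, whereas here the candidate cuts are hard-coded label vectors on toy data), the Calinski-Harabasz index can select the number of clusters at which to cut:

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Calinski-Harabasz index: between-cluster dispersion over
    within-cluster dispersion, each normalised by its degrees of freedom."""
    n, k = len(X), len(np.unique(labels))
    overall = X.mean(axis=0)
    B = W = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        B += len(Xc) * np.sum((Xc.mean(axis=0) - overall) ** 2)
        W += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return (B / (k - 1)) / (W / (n - k))

# Toy data with three well-separated groups; candidate dendrogram cuts
# are represented simply as flat label vectors for k = 2, 3, 4.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(m, 0.2, size=(30, 2)) for m in (0.0, 5.0, 10.0)])
cuts = {
    2: np.repeat([0, 0, 1], 30),
    3: np.repeat([0, 1, 2], 30),
    4: np.concatenate([np.repeat([0, 1, 2], 30)[:-15], np.full(15, 3)]),
}
best_k = max(cuts, key=lambda k: calinski_harabasz(X, cuts[k]))
print(best_k)  # 3: the criterion picks the cut matching the true structure
```

Merging two true clusters (k = 2) inflates the within-cluster dispersion, while splitting one arbitrarily (k = 4) pays the degrees-of-freedom penalty without reducing it, so the index peaks at the correct cut.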
Change blindness: eradication of gestalt strategies
Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in and retrieved from a pre-attentional store during this task, giving further weight to that argument.
The hArtes Tool Chain
This chapter describes the different design steps needed to go from legacy code to a transformed application that can be efficiently mapped onto the hArtes platform.