Weakly supervised semantic segmentation with a multi-image model
We propose a novel method for weakly supervised semantic segmentation. Training images are labeled only by the classes they contain, not by the location of those classes in the image; at test time, however, the method predicts a class label for every pixel. Our main innovation is a multi-image model (MIM), a graphical model for recovering the pixel labels of the training images. The model connects superpixels from all training images in a data-driven fashion, based on their appearance similarity. To generalize to new test images, we integrate them into the MIM using a learned multiple-kernel metric, instead of learning conventional classifiers on the recovered pixel labels. We also introduce an “objectness” potential that helps separate objects (e.g. car, dog, human) from background classes (e.g. grass, sky, road). In experiments on the MSRC 21 dataset and the LabelMe subset of [18], our technique outperforms previous weakly supervised methods and achieves accuracy comparable with fully supervised methods.
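To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of the kind of cross-image superpixel graph the MIM is built on. SLIC superpixels, mean Lab colour features, and k-nearest-neighbour appearance edges are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): building a cross-image superpixel
# graph of the kind the multi-image model (MIM) relies on. Feature choice
# (mean Lab colour) and the k-NN connectivity rule are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab
from sklearn.neighbors import NearestNeighbors

def superpixel_features(image, n_segments=200):
    """Segment one RGB image into superpixels, describing each by its mean Lab colour."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    lab = rgb2lab(image)
    feats = np.array([lab[labels == s].mean(axis=0) for s in np.unique(labels)])
    return labels, feats

def cross_image_edges(images, k=5):
    """Connect every superpixel to its k most similar superpixels from *all* training images."""
    owner, feats = [], []
    for i, img in enumerate(images):
        _, f = superpixel_features(img)
        feats.append(f)
        owner.extend([i] * len(f))            # remember which image each superpixel came from
    feats = np.vstack(feats)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(feats)
    _, idx = nn.kneighbors(feats)             # first neighbour of each node is the node itself
    edges = [(u, v) for u, row in enumerate(idx) for v in row[1:]]
    return np.array(owner), edges             # image ownership + appearance-similarity edges
```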
BGrowth: an efficient approach for the segmentation of vertebral compression fractures in magnetic resonance imaging
Segmentation of medical images is a critical issue: several analysis and classification processes rely on it. With the growing number of people presenting back pain and related problems, the automatic or semi-automatic segmentation of fractured vertebral bodies has become a challenging task. In general, these fractures present several regions with non-homogeneous intensities, and the dark regions are quite similar to nearby structures. To overcome this challenge, in this paper we present a semi-automatic segmentation method called Balanced Growth (BGrowth). Experimental results on a dataset with 102 crushed and 89 normal vertebrae show that our approach significantly outperforms well-known methods from the literature. We achieve an accuracy of up to 95% while keeping processing time comparable to state-of-the-art methods. Moreover, BGrowth gives the best results even with a rough (sloppy) manual annotation (seed points).
Comment: This is a pre-print of an article published in Symposium on Applied Computing. The final authenticated version is available online at https://doi.org/10.1145/3297280.329972
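As a rough illustration of the seeded region-growing family that BGrowth belongs to, the sketch below implements a plain intensity-based growth from user-supplied seed points. The actual BGrowth balancing rule is not reproduced here, and the tolerance threshold is an assumption.

```python
# Illustrative sketch only: a plain seeded region-growing pass, to show the kind
# of seed-point interaction BGrowth builds on. The intensity-difference
# threshold (`tol`) and 4-connectivity are assumptions, not the paper's rule.
from collections import deque
import numpy as np

def grow_from_seeds(image, seeds, tol=0.1):
    """Flood outwards from seed points on a 2D intensity image (values in [0, 1]),
    accepting 4-connected pixels whose intensity stays within `tol` of the running
    region mean."""
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque(seeds)
    region_sum, region_n = sum(image[s] for s in seeds), len(seeds)
    for s in seeds:
        mask[s] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] and not mask[nr, nc]:
                if abs(image[nr, nc] - region_sum / region_n) <= tol:
                    mask[nr, nc] = True
                    region_sum += image[nr, nc]
                    region_n += 1
                    queue.append((nr, nc))
    return mask  # boolean segmentation mask grown from the seeds
```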
Melting Pot 2.0
Multi-agent artificial intelligence research promises a path to develop intelligent technologies that are more human-like and more human-compatible than those produced by "solipsistic" approaches, which do not consider interactions between agents. Melting Pot is a research tool developed to facilitate work on multi-agent artificial intelligence, and provides an evaluation protocol that measures generalization to novel social partners in a set of canonical test scenarios. Each scenario pairs a physical environment (a "substrate") with a reference set of co-players (a "background population") to create a social situation with substantial interdependence between the individuals involved. For instance, some scenarios were inspired by institutional-economics-based accounts of natural resource management and public-good-provision dilemmas. Others were inspired by considerations from evolutionary biology, game theory, and artificial life. Melting Pot aims to cover a maximally diverse set of interdependencies and incentives. It includes the commonly studied extreme cases of perfectly competitive (zero-sum) motivations and perfectly cooperative (shared-reward) motivations, but does not stop with them. As in real life, a clear majority of scenarios in Melting Pot have mixed incentives. They are neither purely competitive nor purely cooperative, and thus demand that successful agents be able to navigate the resulting ambiguity. Here we describe Melting Pot 2.0, which revises and expands on Melting Pot. We also introduce support for scenarios with asymmetric roles and explain how to integrate them into the evaluation protocol. This report also contains: (1) details of all substrates and scenarios; and (2) a complete description of all baseline algorithms and results. Our intention is for it to serve as a reference for researchers using Melting Pot 2.0.
Comment: 59 pages, 54 figures. arXiv admin note: text overlap with arXiv:2107.0685
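The evaluation protocol can be summarised with a small conceptual sketch. This is not the Melting Pot 2.0 API: `make_substrate`, the agent objects, and the dm_env-style timestep interface are hypothetical stand-ins, used only to show how focal agents are paired with a background population and scored on their own per-capita return.

```python
# Conceptual sketch of the evaluation protocol described above, NOT the actual
# Melting Pot 2.0 API. `make_substrate`, the agents' `.step()` method and the
# dm_env-style timestep are hypothetical stand-ins.
def evaluate_scenario(make_substrate, focal_agents, background_population, episodes=10):
    """Pair the focal agents under test with held-out background co-players and
    report the mean per-capita return of the focal players only."""
    focal_returns = []
    for _ in range(episodes):
        env = make_substrate()                            # the physical environment ("substrate")
        players = focal_agents + background_population    # assume focal players fill the first slots
        timestep = env.reset()
        totals = [0.0] * len(focal_agents)
        while not timestep.last():
            actions = [p.step(obs) for p, obs in zip(players, timestep.observation)]
            timestep = env.step(actions)
            for i in range(len(focal_agents)):            # score only the focal population
                totals[i] += timestep.reward[i]
        focal_returns.append(sum(totals) / len(totals))   # per-capita focal return this episode
    return sum(focal_returns) / len(focal_returns)
```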
A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique
Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a color space that yields good skin and face classification performance while coping with illumination variations, varying camera characteristics, and diversity in skin color tones remains an open issue. This research proposes a new three-dimensional hybrid color space, termed SKN, by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic searches for the color component combination that maximizes skin detection accuracy, while Principal Component Analysis projects the resulting solution onto a lower-dimensional space. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, Random Forest, Naïve Bayes, Support Vector Machine, and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared with existing color spaces and shows superior pixel-wise skin detection accuracy. Experimental results show that, with the Random Forest classifier, the proposed SKN color space obtains an average F-score and true positive rate of 0.953 and a false positive rate of 0.0482, outperforming the existing color spaces. The results also indicate that, among the classifiers used in this study, Random Forest is the most suitable for pixel-wise skin detection applications.
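A hedged sketch of the pipeline the abstract describes: score a candidate subset of colour components (a GA "chromosome") by skin-classification accuracy, then project the winning subset with PCA. The candidate component pool (four colour spaces rather than seventeen) and the simple random search standing in for the full Genetic Algorithm are assumptions.

```python
# Sketch of the GA + PCA + Random Forest pipeline, under stated assumptions:
# a small colour-component pool and a random search in place of a full GA.
import numpy as np
from skimage.color import rgb2hsv, rgb2ycbcr, rgb2lab
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def component_pool(rgb_pixels):
    """Stack colour components from several colour spaces, one column per component.
    `rgb_pixels` is an (N, 3) array of RGB values in [0, 1]."""
    rgb = rgb_pixels.reshape(-1, 1, 3)                       # pseudo-image for the converters
    spaces = [rgb, rgb2hsv(rgb), rgb2ycbcr(rgb), rgb2lab(rgb)]
    return np.hstack([s.reshape(-1, 3) for s in spaces])     # (N, 12) here; 17+ spaces in the paper

def fitness(pixels, labels, chromosome):
    """Cross-validated skin-detection accuracy of one component subset."""
    X = component_pool(pixels)[:, chromosome]
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X, labels, cv=3).mean()

def search_and_project(pixels, labels, n_trials=20, subset_size=6,
                       rng=np.random.default_rng(0)):
    """Random-search stand-in for the GA, followed by PCA down to a 3-D hybrid space."""
    n_components = component_pool(pixels).shape[1]
    best = max((tuple(rng.choice(n_components, subset_size, replace=False))
                for _ in range(n_trials)),
               key=lambda c: fitness(pixels, labels, list(c)))
    X = component_pool(pixels)[:, list(best)]
    return PCA(n_components=3).fit_transform(X), best        # 3-D "SKN-like" representation
```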
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies
Background: All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis will actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results: The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprising 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented with the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions: This exploratory study indicates that both classification methods may be promising in providing straightforward predictions of the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and expanding the training set to include not only more derivatives but also more alignments would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
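A minimal sketch in the spirit of the study (not the authors' code): amino acid composition features, a naive Bayes classifier or decision tree, and leave-one-out (LOO) cross-validation. Using composition fractions as the feature set is an assumption made for illustration.

```python
# Minimal sketch, assuming amino acid composition features; the paper's own
# feature construction and "weighted" decision tree are not reproduced here.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(sequence):
    """Fraction of each of the 20 standard amino acids in one sequence."""
    sequence = sequence.upper()
    return np.array([sequence.count(a) / max(len(sequence), 1) for a in AMINO_ACIDS])

def loo_accuracy(sequences, is_amyloidogenic, model=None):
    """Leave-one-out accuracy for a naive Bayes (default) or decision tree model."""
    X = np.vstack([composition(s) for s in sequences])
    y = np.asarray(is_amyloidogenic)
    model = model or GaussianNB()
    return cross_val_score(model, X, y, cv=LeaveOneOut()).mean()

# e.g.  loo_accuracy(seqs, labels)                                  # naive Bayes
#       loo_accuracy(seqs, labels, DecisionTreeClassifier(max_depth=3))
```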
