Position statement on classification of basal cell carcinomas. Part 1: unsupervised clustering of experts as a way to build an operational classification of advanced basal cell carcinoma based on pattern recognition
Background No simple classification system has emerged for 'advanced basal cell carcinomas', and more generally for all difficult-to-treat BCCs (DTT-BCCs), owing to the heterogeneity of clinical situations, the inappropriateness of TNM staging for BCCs, and the differing approaches of different specialists. Objective To generate an operational classification by exploiting the unconscious ability of experts to simplify the great heterogeneity of clinical situations into a few relevant groups that drive their treatment decisions. Method Unsupervised, independent, and blinded clustering of real clinical cases of DTT-BCCs was used. Fourteen international experts from different specialties independently partitioned 199 patient cases considered 'difficult to treat' into as many clusters as they wished (<= 10), choosing their own partitioning criteria. Convergences and divergences between the individual partitions were analyzed using a similarity matrix, the k-means approach, and the average silhouette method. Results Clustering of cases was fairly consensual, regardless of the specialty and nationality of the experts. Mathematical analysis showed that consensus between experts was best represented by a partition of DTT-BCCs into five clusters, easily recognized a posteriori as five clear-cut patterns of clinical situations. The concept of 'locally advanced' was not applied consistently across experts. Conclusion Although convergence between experts was not guaranteed, this experiment shows that clinicians dealing with BCCs all tend to work by similar pattern recognition based on an overall analysis of the situation. This study thus provides the first consensual classification of DTT-BCCs. This experimental approach, using mathematical analysis of independent and blinded clustering of cases by experts, can probably be applied to many other situations in dermatology and oncology.
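The analysis pipeline named in the Method (a similarity matrix over expert partitions, k-means, and the average silhouette) can be sketched as follows; the expert partitions here are simulated with three hidden patterns, not the study's 199 real cases or 14 real expert groupings:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
n_cases, n_experts, n_patterns = 60, 14, 3

# hidden "true" pattern per case; each expert mostly agrees, with some noise
true = rng.integers(0, n_patterns, n_cases)
noise = rng.integers(0, n_patterns, (n_experts, n_cases))
partitions = np.where(rng.random((n_experts, n_cases)) < 0.9, true, noise)

# similarity matrix: fraction of experts placing cases i and j together
S = np.mean(partitions[:, :, None] == partitions[:, None, :], axis=0)

# pick the number of consensus clusters by the average silhouette
best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(S)
    score = silhouette_score(S, labels)
    if score > best_score:
        best_k, best_score = k, score
```

With a clear underlying structure, the silhouette criterion recovers the planted number of patterns, which is the role it plays in the study's consensus analysis.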
Multiple Instance Learning: A Survey of Problem Characteristics and Applications
Multiple instance learning (MIL) is a form of weakly supervised learning
where training instances are arranged in sets, called bags, and a label is
provided for the entire bag. This formulation is gaining interest because it
naturally fits various problems and makes it possible to leverage weakly labeled data.
Consequently, it has been used in diverse application fields such as computer
vision and document classification. However, learning from bags raises
important challenges that are unique to MIL. This paper provides a
comprehensive survey of the characteristics which define and differentiate the
types of MIL problems. Until now, these problem characteristics have not been
formally identified and described. As a result, the variations in performance
of MIL algorithms from one data set to another are difficult to explain. In
this paper, MIL problem characteristics are grouped into four broad categories:
the composition of the bags, the types of data distribution, the ambiguity of
instance labels, and the task to be performed. Methods specialized to address
each category are reviewed. Then, the extent to which these characteristics
manifest themselves in key MIL application areas is described. Finally,
experiments are conducted to compare the performance of 16 state-of-the-art MIL
methods on selected problem characteristics. This paper provides insight into how
the problem characteristics affect MIL algorithms, recommendations for future
benchmarking, and promising avenues for research.
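The bag formulation at the heart of MIL can be made concrete with a minimal sketch of the standard assumption (a bag is positive iff it contains at least one positive instance); the data and the naive propagate-then-max baseline below are illustrative, not a specific method from the survey:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_bag(positive):
    # a bag is a variable-size set of 2-d instances
    X = rng.normal(0.0, 1.0, (rng.integers(4, 9), 2))
    if positive:
        X[0] += 4.0            # inject one "witness" instance
    return X

y_bags = np.array([i % 2 for i in range(40)])
bags = [make_bag(y == 1) for y in y_bags]

# naive baseline: propagate each bag's label to all of its instances, train
# an instance classifier, then score a bag by its maximum instance probability
X_inst = np.vstack(bags)
y_inst = np.concatenate([[y] * len(b) for y, b in zip(y_bags, bags)])
clf = LogisticRegression(max_iter=1000).fit(X_inst, y_inst)

scores = np.array([clf.predict_proba(b)[:, 1].max() for b in bags])
mean_pos = scores[y_bags == 1].mean()
mean_neg = scores[y_bags == 0].mean()
```

Despite the label noise on negative instances inside positive bags, the max-pooled bag score separates the two bag classes, which is exactly the ambiguity-of-instance-labels issue the survey categorizes.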
Web Spam Detection Using Fuzzy Clustering
The Internet is the most widespread medium for expressing views and ideas, and a lucrative platform for delivering products. For this purpose, search engines play a key role. Information about web pages is stored in a search engine's index database for use in later queries. Web spam refers to a host of techniques that challenge the ranking algorithms of web search engines and cause them to rank certain web pages higher, or serve some other purpose beneficial to the spammer. Web spam typically irritates web surfers, causes disruption, and degrades the quality of web search engines. In this paper, we present an efficient clustering method to detect spam web pages effectively and accurately. We also employ various validation measures to validate our work with the clustering methods. The comparison between the obtained charts and the validation results clearly shows that the presented approach produces better results.
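As a sketch of the kind of fuzzy clustering such a detector relies on, here is a generic fuzzy c-means implementation applied to synthetic two-dimensional "page features"; the features, cluster count, and spam/ham separation are invented for illustration and are not the paper's actual feature set:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Generic fuzzy c-means: soft memberships U instead of hard assignments."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # membership rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))            # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(2)
ham = rng.normal([0.0, 0.0], 0.5, (50, 2))     # "legitimate" pages
spam = rng.normal([3.0, 3.0], 0.5, (50, 2))    # "spam-like" pages
X = np.vstack([ham, spam])

centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)                      # harden memberships for reporting
```

The soft membership matrix U is what distinguishes fuzzy clustering from k-means: borderline pages receive split memberships rather than a forced hard label.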
Towards an Integrative Approach for Automated Literature Reviews Using Machine Learning
Due to the huge amount of scientific publications, which are mostly stored as unstructured data, the complexity and workload of the fundamental process of literature reviews increase constantly. Based on previous literature, we develop an artifact that partially automates the literature review process, from collecting articles to evaluating them. This artifact uses a custom crawler, the word2vec algorithm, LDA topic modeling, rapid automatic keyword extraction, and agglomerative hierarchical clustering to enable the automatic acquisition, processing, and clustering of relevant literature, with subsequent graphical presentation of the results using illustrations such as dendrograms. Moreover, the artifact provides information on which topics each cluster addresses and which keywords they contain. We evaluate our artifact on an exemplary set of 308 publications. Our findings indicate that the developed artifact delivers better results than previously known approaches and can be a helpful tool to support researchers in conducting literature reviews.
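The clustering-and-dendrogram stage of such a pipeline can be sketched as follows; to keep the example self-contained, TF-IDF stands in for word2vec embeddings, the four titles are invented, and the crawler, LDA, and keyword-extraction stages are omitted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

titles = [
    "deep learning for image classification",
    "convolutional networks for image recognition",
    "survey of text mining and document clustering",
    "topic models for document collections",
]

# vectorize titles, then build the agglomerative hierarchy (the linkage
# matrix Z is exactly what scipy's dendrogram() would plot)
X = TfidfVectorizer().fit_transform(titles).toarray()
Z = linkage(X, method="average", metric="cosine")

# cut the hierarchy into two flat clusters
labels = fcluster(Z, t=2, criterion="maxclust")
```

Passing Z to `scipy.cluster.hierarchy.dendrogram` yields the graphical presentation the artifact produces; cutting it with `fcluster` gives the flat clusters whose topics and keywords are then reported.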
Xu: An Automated Query Expansion and Optimization Tool
The exponential growth of information on the Internet is a major challenge for
information retrieval systems in generating relevant results. Novel
approaches are required to reformat or expand user queries to generate a
satisfactory response and increase recall and precision. Query expansion (QE)
is a technique to broaden users' queries by introducing additional tokens or
phrases based on some semantic similarity metrics. The tradeoff is the added
computational complexity to find semantically similar words and a possible
increase in noise in information retrieval. Despite several research efforts on
this topic, QE has not yet been explored enough and more work is needed on
similarity matching and composition of query terms with an objective to
retrieve a small set of most appropriate responses. QE should be scalable,
fast, and robust in handling complex queries with a good response time and
noise ceiling. In this paper, we propose Xu, an automated QE technique, using
high dimensional clustering of word vectors and Datamuse API, an open source
query engine used to find semantically similar words. We implemented Xu as a
command-line tool and evaluated its performance using datasets containing news
articles and human-generated QEs. The evaluation results show that Xu
outperformed Datamuse, achieving about 88% accuracy with reference to the
human-generated QEs. Comment: Accepted to IEEE COMPSAC 201
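Embedding-based expansion of a single query term can be sketched as follows; the vectors are hand-made stand-ins for real word embeddings, the vocabulary is tiny, and Xu itself additionally queries the Datamuse API rather than a local table:

```python
import numpy as np

# hypothetical 3-d "embeddings" standing in for trained word vectors
vectors = {
    "car":        np.array([0.90, 0.10, 0.00]),
    "vehicle":    np.array([0.85, 0.15, 0.05]),
    "automobile": np.array([0.88, 0.12, 0.02]),
    "banana":     np.array([0.00, 0.10, 0.95]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(term, k=2):
    # rank all other vocabulary words by cosine similarity to the query term
    scores = {w: cosine(vectors[term], v)
              for w, v in vectors.items() if w != term}
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

expanded = ["car"] + expand("car")
```

The trade-off the abstract mentions is visible even here: each added token widens recall but also admits candidates whose similarity is only approximate, which is why a noise ceiling matters.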
Contextual Bag-Of-Visual-Words and ECOC-Rank for Retrieval and Multi-class Object Recognition
Master's Final Project (UPC), carried out in collaboration with the Dept. Matemàtica Aplicada i Anàlisi, Universitat de Barcelona.
Multi-class object categorization is an important line of research in the Computer Vision
and Pattern Recognition fields. An artificially intelligent system can interact with its environment only if it can distinguish among a set of cases, instances, situations, objects, etc. The world is inherently multi-class, and thus the efficiency
of a system can be determined by its accuracy in discriminating among a set of cases.
A recently applied procedure in the literature is the Bag-Of-Visual-Words (BOVW).
This methodology is inspired by natural language processing, where sentences are
described in terms of word frequencies. Analogously, in the pattern recognition
domain, an object is described based on the frequency of appearance of its parts.
However, a general drawback of this method is that the dictionary construction
does not take into account geometrical information about object parts. In order to
include part relations in the BOVW model, we propose the Contextual BOVW
(C-BOVW), where the dictionary construction is guided by a geometrically based
merging procedure. As a result, objects are described as sentences in which
geometrical information is implicitly considered.
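A generic (non-contextual) BOVW pipeline can be sketched as follows; the descriptors are random stand-ins for SIFT-like local features, the dictionary size of 10 is arbitrary, and the geometrical merging step that distinguishes C-BOVW is not shown:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# local descriptors from several "images" (8-d random features here)
images = [rng.normal(0.0, 1.0, (30, 8)) for _ in range(5)]

# 1. build the visual dictionary by clustering all descriptors
n_words = 10
kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
kmeans.fit(np.vstack(images))

# 2. describe each image as a normalized histogram of visual-word frequencies
def bovw_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

H = np.array([bovw_histogram(d) for d in images])
```

Each row of H is the "sentence" describing one image; C-BOVW changes step 1 so that the words themselves encode geometrical relations between parts.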
In order to extend the proposed system to the multi-class case, we use the
Error-Correcting Output Codes (ECOC) framework. State-of-the-art multi-class
techniques are frequently defined as ensembles of binary classifiers. In this sense, the ECOC framework, based on error-correcting principles, has proven to be a powerful tool, able to classify a large number of classes while correcting classification errors produced by the individual learners.
In our case, the C-BOVW sentences are learnt by means of an ECOC configuration, obtaining high discriminative power. Moreover, we use the ECOC outputs obtained by the new methodology to rank classes. In some situations, more than
one label is required in order to work with multiple hypotheses and find similar cases, as
in the well-known retrieval problems. In this sense, we also include contextual
and semantic information to modify the ECOC outputs, defining an ECOC-rank methodology. By altering the ECOC output values according to the adjacency of
classes based on features and to class relations based on ontologies, we also report a significant improvement in class-retrieval problems.
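The ECOC framework itself can be illustrated with scikit-learn's generic implementation; this shows only the multi-class decomposition into binary problems, with synthetic features standing in for C-BOVW sentences, and no ranking step:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

# synthetic 4-class problem standing in for C-BOVW feature vectors
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

# code_size=2.0 assigns each class a binary codeword of length 2 * n_classes;
# one binary learner is trained per codeword bit, and prediction picks the
# class whose codeword is closest to the vector of binary outputs
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0).fit(X, y)
train_acc = ecoc.score(X, y)
```

The redundancy in the codewords is what lets ECOC absorb mistakes from individual binary learners; the ECOC-rank idea then reinterprets the per-class codeword distances as a ranking rather than a single argmin.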
Feature Extraction and Duplicate Detection for Text Mining: A Survey
Text mining, also known as intelligent text analysis, is an important research area. It is very difficult to focus on the most appropriate information due to the high dimensionality of the data. Feature extraction is one of the important data reduction techniques for discovering the most important features. Processing massive amounts of data stored in unstructured form is a challenging task. Several pre-processing methods and algorithms are needed to extract useful features from such huge amounts of data. The survey covers different text summarization, classification, and clustering methods for discovering useful features, and also covers discovering query facets, which are multiple groups of words or phrases that explain and summarize the content covered by a query, thereby reducing the time taken by the user. When dealing with collections of text documents, it is also very important to filter out duplicate data. Once duplicates are deleted, it is recommended to replace the removed duplicates. Hence, we also review the literature on duplicate detection and data fusion (removing and replacing duplicates). The survey presents existing text mining techniques to extract relevant features, detect duplicates, and replace duplicate data, providing fine-grained knowledge to the user.
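One common baseline for the duplicate-detection step surveyed here is character shingling with Jaccard similarity; the documents and the 0.7 threshold below are illustrative choices, not the survey's own method:

```python
def shingles(text, k=5):
    """Set of k-character shingles after whitespace/case normalization."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b)

doc1 = "Feature extraction reduces high dimensional text data."
doc2 = "Feature  extraction reduces high dimensional text data"   # near-duplicate
doc3 = "Completely unrelated sentence about web search engines."

sim_dup = jaccard(shingles(doc1), shingles(doc2))
sim_diff = jaccard(shingles(doc1), shingles(doc3))
is_duplicate = sim_dup > 0.7     # illustrative threshold
```

Shingle-set similarity tolerates small edits (extra whitespace, dropped punctuation) that exact hashing would miss, which is why it is a standard starting point before heavier data-fusion steps.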