
    Etiquetado de clusters de relaciones verbales motivados semánticamente [Labelling of semantically motivated clusters of verbal relations]

    Document clustering is a popular research field in Natural Language Processing, Data Mining and Information Retrieval. The problem of lexical unit (LU) clustering has been less addressed, and even less so the problem of labeling LU clusters. However, in our application, which deals with the distillation of relational tuples from patent claims as input to block-diagram or concept-map drawing programs, this problem is central. The assessment of various document cluster labeling techniques lets us assume that, despite some significant differences that need to be taken into account, some of these techniques may also be applied to the verbal relation cluster labeling we are concerned with. To confirm this assumption, we carry out a number of experiments and evaluate their outcome against baselines and gold-standard labeled clusters.
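    As a rough illustration of what labelling a cluster of semantically similar verbs can look like, the sketch below picks the member closest to the cluster centroid as the label. The toy verbs, the vectors, and the centroid criterion are assumptions for illustration only, not the labelling strategies evaluated in the paper.

```python
# Minimal sketch: label a cluster of semantically similar verbs with its most
# central member (a common document-cluster labelling baseline). The vectors
# and the centroid-based criterion are illustrative assumptions, not the
# strategies evaluated in the paper.
import numpy as np

def label_cluster(verbs, vectors):
    """Return the verb whose vector is closest (by cosine) to the cluster centroid."""
    X = np.asarray(vectors, dtype=float)
    centroid = X.mean(axis=0)
    sims = X @ centroid / (np.linalg.norm(X, axis=1) * np.linalg.norm(centroid) + 1e-12)
    return verbs[int(np.argmax(sims))]

# toy usage with made-up 3-dimensional "embeddings"
verbs = ["connect", "attach", "join"]
vectors = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.7, 0.3, 0.0]]
print(label_cluster(verbs, vectors))   # prints the member closest to the centroid
```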

    Extracting collective trends from Twitter using social-based data mining

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-40495-5_62 (Proceedings of the 5th International Conference, ICCCI 2013, Craiova, Romania, September 11-13, 2013). Social Networks have become an important environment for the extraction of collective trends. The interactions amongst users provide information about their preferences and relationships. This information can be used to measure the influence of ideas or opinions and how they spread within the network. Currently, one of the most relevant and popular Social Networks is Twitter. This Social Network was created to share comments and opinions. The information provided by users is especially useful in different fields and research areas such as marketing. The data are presented as short text strings containing different ideas expressed by real people. With this representation, different Data Mining and Text Mining techniques (such as classification and clustering) can be used for knowledge extraction, trying to distinguish the meaning of the opinions. This work focuses on analysing how these techniques can interpret such opinions within the Social Network, using information related to the IKEA® company. The preparation of this manuscript has been supported by the Spanish Ministry of Science and Innovation under the following projects: TIN2010-19872, ECO2011-30105 (National Plan for Research, Development and Innovation) and the Multidisciplinary Project of Universidad Autónoma de Madrid (CEMU-2012-034).
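    A minimal sketch of the kind of pipeline the abstract describes, assuming tf-idf features over tweet texts and k-means clustering; the example tweets, the number of clusters, and all parameters are placeholders rather than the study's actual configuration.

```python
# Illustrative sketch: cluster short tweet-like texts with tf-idf + k-means.
# The toy tweets and parameters are placeholders, not the study's setup.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "loving the new bookshelf, assembly was easy",
    "delivery took two weeks, not happy",
    "great value furniture for a small flat",
    "customer service never answered my emails",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for tweet, cluster in zip(tweets, kmeans.labels_):
    print(cluster, tweet)        # tweets grouped by (roughly) shared vocabulary
```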

    Prioritising references for systematic reviews with RobotAnalyst: A user study

    Screening references is a time-consuming step necessary for systematic reviews and guideline development. Previous studies have shown that human effort can be reduced by using machine learning software to prioritise large reference collections such that most of the relevant references are identified before screening is completed. We describe and evaluate RobotAnalyst, a Web-based software system that combines text-mining and machine-learning algorithms for organising references by their content and actively prioritising them based on a relevancy classification model trained and updated throughout the process. We report an evaluation over 22 reference collections (most related to public health topics) screened using RobotAnalyst, with a total of 43,610 abstract-level decisions. The number of references that needed to be screened to identify 95% of the abstract-level inclusions for the evidence review was reduced on 19 of the 22 collections. Significant gains over random sampling were achieved for all reviews conducted with active prioritisation, as compared with only two of five when prioritisation was not used. RobotAnalyst's descriptive clustering and topic modelling functionalities were also evaluated by public health analysts. Descriptive clustering provided more coherent organisation than topic modelling, and the content of the clusters was apparent to the users across a varying number of clusters. This is the first large-scale study using technology-assisted screening to perform new reviews, and the positive results provide empirical evidence that RobotAnalyst can accelerate the identification of relevant studies. The results also highlight the issue of user complacency and the need for a stopping criterion to realise the work savings.
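    A minimal sketch of the active prioritisation idea described above, assuming a tf-idf representation and a logistic-regression relevancy model retrained on the screening decisions made so far; it is illustrative only and not RobotAnalyst's implementation.

```python
# Illustrative active-prioritisation loop: retrain a relevancy classifier on
# the abstracts screened so far (1 = include, 0 = exclude) and re-rank the
# remaining references by predicted relevance. Not RobotAnalyst's actual code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def prioritise(screened_texts, screened_labels, unscreened_texts):
    """Rank unscreened references, most likely relevant first.
    Assumes both included and excluded examples have already been screened."""
    vec = TfidfVectorizer(stop_words="english")
    X_train = vec.fit_transform(screened_texts)
    X_rest = vec.transform(unscreened_texts)
    clf = LogisticRegression(max_iter=1000).fit(X_train, screened_labels)
    scores = clf.predict_proba(X_rest)[:, 1]        # probability of "include"
    order = scores.argsort()[::-1]
    return [unscreened_texts[i] for i in order]

# In a screening session this would be called after every batch of decisions,
# so the ranking adapts as inclusions and exclusions accumulate.
```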

    Silhouette + Attraction: A Simple and Effective Method for Text Clustering

    This article presents silhouette attraction (Sil Att), a simple and effective method for text clustering, which is based on two main concepts: the silhouette coefficient and the idea of attraction. The combination of both principles allows us to obtain a general technique that can be used either as a boosting method, which improves the results of other clustering algorithms, or as an independent clustering algorithm. The experimental work shows that Sil Att is able to obtain high-quality results on text corpora with very different characteristics. Furthermore, its stable performance on all the considered corpora indicates that it is a very robust method. This is a very interesting positive aspect of Sil Att with respect to the other algorithms used in the experiments, whose performances heavily depend on specific characteristics of the corpora being considered. This research work has been partially funded by UNSL, CONICET (Argentina), the DIANA-APPLICATIONS-Finding Hidden Knowledge in Texts: Applications (TIN2012-38603-C02-01) research project, and the WIQ-EI IRSES project (grant no. 269180) within the FP7 Marie Curie People Framework on the Web Information Quality Evaluation Initiative. The work of the third author was done also in the framework of the VLC/CAMPUS Microcluster on Multimodal Interaction in Intelligent Systems. Errecalde, M.; Cagnina, L.; Rosso, P. (2015). Silhouette + Attraction: A Simple and Effective Method for Text Clustering. Natural Language Engineering, 1-40. https://doi.org/10.1017/S1351324915000273
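    The first ingredient of Sil Att is the silhouette coefficient (Rousseeuw 1987). The sketch below computes per-object silhouette values from a precomputed distance matrix; how Sil Att combines these values with the notion of attraction is not reproduced here.

```python
# Silhouette coefficient per object: a = mean distance to the object's own
# cluster, b = mean distance to the nearest other cluster, s = (b - a)/max(a, b).
# Only the classical silhouette is shown; the "attraction" part of Sil Att is not.
import numpy as np

def silhouette(dist, labels):
    """Per-object silhouette values from a square distance matrix
    (assumes at least two clusters)."""
    dist = np.asarray(dist, dtype=float)
    labels = np.asarray(labels)
    scores = np.zeros(len(labels))
    for i, lab in enumerate(labels):
        own = (labels == lab)
        own[i] = False                      # exclude the object itself
        if not own.any():                   # singleton cluster -> silhouette 0
            continue
        a = dist[i, own].mean()             # cohesion with own cluster
        b = min(dist[i, labels == other].mean()
                for other in set(labels.tolist()) if other != lab)
        scores[i] = (b - a) / max(a, b)
    return scores

# toy usage: two well-separated pairs of points on a line
points = np.array([0.0, 0.1, 5.0, 5.2])
dist = np.abs(points[:, None] - points[None, :])
print(silhouette(dist, [0, 0, 1, 1]))       # values near 1 indicate a good clustering
```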

    Concept Mining: A Conceptual Understanding based Approach

    Due to the rapid daily growth of information, there is a considerable need to extract and discover valuable knowledge from data sources such as the World Wide Web. Most of the common techniques in text mining are based on the statistical analysis of a term, either a word or a phrase. These techniques treat documents as bags of words and pay no attention to the meaning of the document content. In addition, statistical analysis of term frequency captures the importance of a term within a document only. However, two terms can have the same frequency in their documents while one contributes more to the meaning of its sentences than the other. Therefore, there is an intensive need for a model that captures the meaning of linguistic utterances in a formal structure. The underlying model should indicate terms that capture the semantics of text. In this case, the model can capture terms that present the concepts of the sentence, which leads to discovering the topic of the document. A new concept-based model is introduced that analyzes terms on the sentence, document and corpus levels rather than relying on the traditional analysis of the document only. The concept-based model can effectively discriminate between terms that are unimportant with respect to sentence semantics and terms which hold the concepts that represent the sentence meaning. The proposed model consists of a concept-based statistical analyzer, a conceptual ontological graph representation, a concept extractor and a concept-based similarity measure. A term which contributes to the sentence semantics is assigned two different weights by the concept-based statistical analyzer and the conceptual ontological graph representation. These two weights are combined into a new weight. The concepts with the maximum combined weights are selected by the concept extractor. The similarity between documents is calculated based on a new concept-based similarity measure. The proposed similarity measure takes full advantage of the concept analysis measures on the sentence, document, and corpus levels when calculating the similarity between documents. Large sets of experiments using the proposed concept-based model on different datasets in text clustering, categorization and retrieval are conducted. The experiments provide an extensive comparison between traditional weighting and the concept-based weighting obtained by the concept-based model. Experimental results in text clustering, categorization and retrieval demonstrate substantial quality improvements when using: (1) concept-based term frequency (tf), (2) conceptual term frequency (ctf), (3) the concept-based statistical analyzer, (4) the conceptual ontological graph, and (5) the concept-based combined model. In text clustering, the evaluation relies on two quality measures, the F-measure and the Entropy. In text categorization, the evaluation relies on three quality measures, the micro-averaged F1, the macro-averaged F1 and the error rate. In text retrieval, the evaluation relies on three quality measures: precision at 10 documents retrieved, P(10), the preference measure (bpref), and the mean uninterpolated average precision (MAP). All of these quality measures are improved when the newly developed concept-based model is used to enhance the quality of text clustering, categorization and retrieval.
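    Because the abstract does not spell out the weighting formulas, the sketch below only illustrates the general idea of combining a sentence-level frequency (a stand-in for the conceptual term frequency, ctf) with the document-level tf into one score; the combination rule is an assumption, not the thesis's actual model.

```python
# Illustrative sketch only: combine a sentence-level frequency (ctf-like
# stand-in) with document-level tf. The combination rule is an assumption,
# not the concept-based model proposed in the thesis.
from collections import Counter

def concept_weights(sentences):
    """sentences: list of token lists for one document -> {term: weight}."""
    doc_tokens = [t for s in sentences for t in s]
    tf = Counter(doc_tokens)                       # document-level frequency
    weights = {}
    for term in tf:
        # sentence-level contribution: average frequency of the term
        # within the sentences where it occurs
        occ = [s.count(term) for s in sentences if term in s]
        ctf = sum(occ) / len(occ)
        weights[term] = (tf[term] / len(doc_tokens)) * ctf
    return weights

doc = [["data", "mining", "extracts", "knowledge"],
       ["text", "mining", "analyzes", "text", "data"]]
print(concept_weights(doc))
```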

    Automatic extraction and structure of arguments in legal documents

    Argumentation plays a cardinal role in human communication when formulating reasons and drawing conclusions. A system to automatically and cost-effectively identify legal arguments from case-law was developed. Using 42 legal case-laws from the European Court of Human Rights (ECHR), an annotation was performed to establish a 'gold standard' dataset. A three-stage process for argument mining was then developed and tested. The first stage aims at evaluating the best set of features for automatically identifying argumentative sentences within unstructured text. Several experiments were conducted, depending upon the type of features available in the corpus, in order to determine which approach yielded the best results. In the second stage, a novel approach to clustering (for grouping sentences automatically into a coherent legal argument) was introduced through the development of two new algorithms: the Appropriate Cluster Identification Algorithm (ACIA) and the Distribution of Sentence to the Cluster Algorithm (DSCA). This work also includes a new evaluation system for the clustering algorithm, which helps tune it for performance. In the third stage, a hybrid approach of statistical and rule-based techniques was used to categorize argumentative sentences. Overall, the level of accuracy and usefulness achieved by these new techniques makes them viable as the basis of a general argument-mining framework.
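    A minimal sketch of the first stage described above (flagging argumentative sentences), assuming plain bag-of-words features and a Naive Bayes classifier; the tiny training set is invented for illustration, and the thesis evaluates much richer feature sets on the annotated ECHR corpus.

```python
# Illustrative argumentative-sentence classifier: bag-of-words + Naive Bayes.
# The training sentences and labels are made up; the thesis uses richer
# features and the annotated ECHR gold-standard corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The applicant submits that the search violated Article 8.",
    "Therefore the interference was not necessary in a democratic society.",
    "The hearing took place on 3 May.",
    "The applicant was born in 1970 and lives in Vienna.",
]
train_labels = [1, 1, 0, 0]      # 1 = argumentative, 0 = non-argumentative

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_sentences, train_labels)

print(model.predict(["It follows that the complaint is manifestly ill-founded."]))
```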

    Semanttisten luokkien soveltaminen automaattisessa uutisseurannassa [Applying semantic classes in automatic news topic tracking]

    Topic detection and tracking (TDT) is an area of information retrieval research whose focus revolves around news events. The problems TDT deals with relate to segmenting news text into cohesive stories, detecting something new and previously unreported, tracking the development of a previously reported event, and grouping together news that discuss the same event. The performance of the traditional information retrieval techniques based on full-text similarity has remained inadequate for online production systems; it has been difficult to make the distinction between same and similar events. In this work, we explore ways of representing and comparing news documents in order to detect new events and track their development. First, however, we put forward a conceptual analysis of the notions of topic and event. The purpose is to clarify the terminology and align it with the process of news-making and the tradition of story-telling. Second, we present a framework for document similarity that is based on semantic classes, i.e., groups of words with similar meaning. We adopt people, organizations, and locations as semantic classes in addition to general terms. As each semantic class can be assigned its own similarity measure, document similarity can make use of ontologies, e.g., geographical taxonomies. The documents are compared class-wise, and the outcome is a weighted combination of class-wise similarities. Third, we incorporate temporal information into document similarity. We formalize the natural-language temporal expressions occurring in the text and use them to anchor the rest of the terms onto the time-line. Upon comparing documents for event-based similarity, we look not only at matching terms but also at how near their anchors are on the time-line. Fourth, we experiment with an adaptive variant of the semantic class similarity system. The news reflect changes in the real world, and in order to keep up, the system has to change its behaviour based on the contents of the news stream. We put forward two strategies for rebuilding the topic representations and report experiment results. We run experiments with three annotated TDT corpora. The use of semantic classes increased the effectiveness of topic tracking by 10-30% depending on the experimental setup. The gain in spotting new events remained lower, around 3-4%. Anchoring the text to the time-line based on the temporal expressions gave a further 10% increase in the effectiveness of topic tracking. The gains in detecting new events, again, remained smaller. The adaptive systems did not improve the tracking results.

    Automatic tracking of news events is a research area within computer science, and more specifically information retrieval, in which methods are developed for managing the digital news stream. The news stream consists of several, possibly multilingual, news sources, which may include digital online news as well as radio and television news. The research problems of the area consist of detecting new, previously unreported news events, following the development of recognised news events, grouping news by content, and segmenting the news stream into stories. This work focuses on the first two of these problems. Traditional information retrieval methods, which still form the basis of internet search systems, compare text documents as bags of words and treat words as simple character strings, which allows fast retrieval and reasonably good results but loses the meanings of the words. These traditional methods have, however, not performed particularly well in event-based news tracking. It has been especially difficult to recognise two news events of the same type, e.g. two plane crashes, as distinct events, because their coverage contains largely the same words. This work seeks new ways to represent and compare news. First, words are grouped by meaning into sets of similar words, i.e. semantic classes. The work uses semantic classes such as general terms, organisations, persons, location expressions and temporal expressions, which roughly answer the questions what, who, when and where. Within each class the words can be compared in slightly different ways: for location expressions, two different cities or countries can be recognised as geographically close, and for organisation names, two names can be recognised as referring to the same organisation. A semantic class can be backed by a taxonomy of words or some other structure through which the relations between the words of the class can be determined. In addition, temporal expressions (e.g. 'yesterday', 'two years ago in February') are recognised in the text and used to anchor it onto the time-line. This makes it possible to recognise that, when different news events are discussed, the same word, e.g. 'plane crash', is used in a different temporal context. News documents are compared one semantic class at a time, and the recognition relies on a combination of these class-wise results. In this way, two plane-crash stories can be similar with respect to general terms but different with respect to locations and temporal expressions, because the crashes happen in different places at different times. News events come in many kinds, and neither reality nor the news reporting on it bends entirely into tidy models. Nevertheless, in the experimental results the use of semantic classes markedly improves the accuracy of news event tracking compared to the traditional approach, and the detection of new events somewhat less.
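    A minimal sketch of the class-wise similarity described above: documents are split into semantic classes, each class is compared with its own measure, and the results are combined with per-class weights. The Jaccard measure, the day-gap handling of temporal anchors, and the weights are illustrative assumptions, not the thesis's exact formulas.

```python
# Illustrative class-wise document similarity with temporal anchoring.
# The per-class measures and weights are assumptions made for this sketch.
from datetime import date

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def time_similarity(days_a, days_b):
    """Closer anchored dates -> higher similarity (1 / (1 + gap in days))."""
    if not days_a or not days_b:
        return 0.0
    gap = min(abs((x - y).days) for x in days_a for y in days_b)
    return 1.0 / (1.0 + gap)

def event_similarity(doc_a, doc_b, weights):
    """Weighted combination of class-wise similarities."""
    sim = 0.0
    for cls, w in weights.items():
        if cls == "time":
            sim += w * time_similarity(doc_a[cls], doc_b[cls])
        else:
            sim += w * jaccard(doc_a[cls], doc_b[cls])
    return sim

# two plane-crash stories: similar general terms, different place and time
crash_1 = {"terms": {"plane", "crash", "airport"}, "location": {"Madrid"},
           "time": [date(2008, 8, 20)]}
crash_2 = {"terms": {"plane", "crash", "runway"}, "location": {"Toronto"},
           "time": [date(2005, 8, 2)]}
weights = {"terms": 0.5, "location": 0.3, "time": 0.2}
print(event_similarity(crash_1, crash_2, weights))
```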