
    The Lowlands team at TRECVID 2007

    In this report we summarize our methods and results for the search tasks in TRECVID 2007. We employed two different kinds of search: purely ASR-based and purely concept-based. However, there is no significant difference in the performance of the two systems. Using neighboring shots when combining two concepts appears to be beneficial. General preprocessing of queries increased performance, and choosing detector sources helped. However, all automatic search components require further investigation.

    An empirical study on large-scale multi-label text classification including few and zero-shot labels

    Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications and presents interesting challenges. First, not all labels are well represented in the training set, due to the very large label set and the skewed label distributions of LMTC datasets. Also, label hierarchies and differences in human labelling guidelines may affect graph-aware annotation proximity. Finally, the label hierarchies are periodically updated, requiring LMTC models capable of zero-shot generalization. Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs), which (1) typically treat LMTC as flat multi-label classification; (2) may use the label hierarchy to improve zero-shot learning, although this practice is vastly understudied; and (3) have not been combined with pre-trained Transformers (e.g. BERT), which have led to state-of-the-art results in several NLP benchmarks. Here, for the first time, we empirically evaluate a battery of LMTC methods, from vanilla LWANs to hierarchical classification approaches and transfer learning, on frequent, few-, and zero-shot learning on three datasets from different domains. We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs. Furthermore, we show that Transformer-based approaches outperform the state-of-the-art in two of the datasets, and we propose a new state-of-the-art method which combines BERT with LWANs. Finally, we propose new models that leverage the label hierarchy to improve few- and zero-shot learning, considering on each dataset a graph-aware annotation proximity measure that we introduce.
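    As a rough illustration of the label-wise attention mechanism the abstract refers to, the sketch below scores each label with its own attention head over token representations. All array names, shapes, and the random weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_wise_attention(H, U, W, b):
    """Score each label with its own attention head.

    H: (T, d) token representations (e.g. BiGRU or BERT outputs)
    U: (L, d) per-label attention query vectors
    W: (L, d) per-label output weights
    b: (L,)   per-label biases
    Returns (L,) sigmoid scores, one per label.
    """
    A = softmax(U @ H.T, axis=-1)      # (L, T): each label attends over tokens
    V = A @ H                          # (L, d): label-specific document vectors
    logits = (V * W).sum(axis=-1) + b  # (L,): one logit per label
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
T, d, L = 12, 8, 5  # tokens, hidden size, labels (toy values)
scores = label_wise_attention(rng.normal(size=(T, d)),
                              rng.normal(size=(L, d)),
                              rng.normal(size=(L, d)),
                              np.zeros(L))
```

    In a flat multi-label setup each score would be thresholded independently; hierarchical variants instead condition a label's score on its ancestors in the label tree.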

    From the web of data to a world of action

    This is the author’s version of a work that was accepted for publication in Web Semantics: Science, Services and Agents on the World Wide Web. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Web Semantics: Science, Services and Agents on the World Wide Web 8.4 (2010): 10.1016/j.websem.2010.04.007

    This paper takes as its premise that the web is a place of action, not just information, and that the purpose of global data is to serve human needs. The paper presents several component technologies, which together work towards a vision where many small micro-applications can be threaded together using automated assistance to enable a unified and rich interaction. These technologies include data detector technology to enable any text to become a start point of semantic interaction; annotations for web-based services so that they can link data to potential actions; spreading activation over personal ontologies, to allow modelling of context; algorithms for automatically inferring 'typing' of web-form input data based on previous user inputs; and early work on inferring task structures from action traces. Some of these have already been integrated within an experimental web-based (extended) bookmarking tool, Snip!t, and a prototype desktop application On Time, and the paper discusses how the components could be more fully, yet more openly, linked in terms of both architecture and interaction. As well as contributing to the goal of an action- and activity-focused web, the work also exposes a number of broader issues, theoretical, practical, social and economic, for the Semantic Web.

    Parts of this work were supported by the Information Society Technologies (IST) Program of the European Commission as part of the DELOS Network of Excellence on Digital Libraries (Contract G038-507618). Thanks also to Emanuele Tracanna, Marco Piva, and Raffaele Giuliano for their work on On Time.

    Extreme multi-label legal text classification: a case study in EU legislation

    We consider the task of Extreme Multi-Label Text Classification (XMTC) in the legal domain. We release a new dataset of 57k legislative documents from EURLEX, the European Union’s public document database, annotated with concepts from EUROVOC, a multidisciplinary thesaurus. The dataset is substantially larger than previous EURLEX datasets and suitable for XMTC, few-shot, and zero-shot learning. Experimenting with several neural classifiers, we show that BIGRUs with self-attention outperform the current multi-label state-of-the-art methods, which employ label-wise attention. Replacing CNNs with BIGRUs in label-wise attention networks leads to the best overall performance.
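    The self-attention pooling that the abstract contrasts with label-wise attention can be sketched as a single shared attention query that collapses the encoder's token states into one document vector before multi-label scoring. The hidden states and weights below are random placeholders standing in for a trained BiGRU, not the paper's model.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def self_attention_pool(H, u):
    """Collapse token states H (T, d) into one document vector
    using a single shared attention query u (d,)."""
    a = softmax(H @ u)  # (T,): one attention weight per token
    return a @ H        # (d,): attention-weighted sum of token states

rng = np.random.default_rng(1)
H = rng.normal(size=(10, 6))          # stand-in for BiGRU hidden states
doc = self_attention_pool(H, rng.normal(size=6))
W_out = rng.normal(size=(6, 4))       # 4 toy labels
scores = 1.0 / (1.0 + np.exp(-(doc @ W_out)))  # sigmoid score per label
```

    The design difference is that one query vector is shared across all labels here, whereas a label-wise attention network learns a separate query per label.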

    Leakage Impact On Non-Revenue Water (NRW) In Sungai Petani (Kedah Tengah District), Kedah Darul Aman

    Leakage is well known as the main cause of high physical losses in non-revenue water (NRW) and is divided into major and minor leakages. A leakage study was carried out in 10 District Meter Zones (DMZs) in Sungai Petani, Kedah. NRW data were collected in cooperation with Syarikat Air Darul Aman (SADA). Data from 2008 until 2012 were used to investigate major and minor leakages through visual inspection on site, and the results were analysed. The baseline inflow and baseline average water consumption, in cubic metres per day, were collected from the Primayer data logger and SADA billing systems to determine the real water losses. Statistical analysis was used to study the relationship between the number of leakages and NRW cost savings, the number of connections, and pipe length, using the Statistical Product and Service Solutions (SPSS) software.
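    The NRW balance underlying the study reduces to subtracting billed consumption from the system inflow for each zone. The helper below is a minimal sketch of that arithmetic with illustrative figures, not SADA data.

```python
def non_revenue_water(system_input_m3, billed_consumption_m3):
    """Return the NRW volume and its share of system input for one DMZ.

    system_input_m3:       metered inflow into the zone (m3/day)
    billed_consumption_m3: billed authorised consumption (m3/day)
    """
    nrw = system_input_m3 - billed_consumption_m3
    return nrw, 100.0 * nrw / system_input_m3

# Illustrative zone: 1500 m3/day in, 900 m3/day billed.
nrw, pct = non_revenue_water(1500.0, 900.0)
# nrw = 600.0 m3/day, pct = 40.0 %
```

    In practice the NRW volume is further split into real (physical) losses such as leakage and apparent losses such as metering errors, which is why the study pairs the inflow data with on-site leak inspection.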