
    Protecting Children from Harmful Audio Content: Automated Profanity Detection From English Audio in Songs and Social-Media

    This work presents a novel approach for the automated detection of profanity in English audio songs using machine learning techniques. A primary drawback of existing systems is that they are confined to textual data. The proposed method combines feature extraction techniques with machine learning algorithms to identify profanity in audio songs. Specifically, the approach employs the popular feature extraction techniques of Term Frequency–Inverse Document Frequency (TF-IDF), Bidirectional Encoder Representations from Transformers (BERT), and Doc2Vec to extract relevant features from the audio songs. TF-IDF captures the frequency and importance of each word in a song, while BERT extracts contextualized representations of words that capture more nuanced meanings. To capture the semantic meaning of words in audio songs, the study also explores the Doc2Vec model, a neural-network-based approach that can extract relevant features from the songs. The study utilizes Whisper, an open-source speech recognition model, to develop and implement the approach. A dataset of English audio songs was used to evaluate the performance of the proposed method. The results show that both the TF-IDF and BERT models outperform the Doc2Vec model in accuracy when identifying profanity in English audio songs. The proposed approach has potential applications in identifying profanity in various forms of audio content, including songs, audio clips, social media posts, reels, and shorts.
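A minimal sketch of the text side of such a pipeline, assuming the lyrics have already been transcribed to text (e.g. by a speech-to-text model): TF-IDF weights are computed in plain Python and the weight mass falling on lexicon terms is summed. The transcripts and the profanity lexicon below are toy placeholders, not the paper's data.

```python
import math

# Toy transcripts standing in for speech-to-text output.
transcripts = [
    "a clean happy song about sunshine",
    "a song with badword in every badword line",
]
PROFANE = {"badword"}  # placeholder lexicon, not a real word list

def tf_idf(docs):
    """Plain-Python TF-IDF: term frequency scaled by inverse document frequency."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = {}
    for toks in tokenized:
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    weights = []
    for toks in tokenized:
        w = {}
        for t in set(toks):
            tf = toks.count(t) / len(toks)
            w[t] = tf * (math.log(n / df[t]) + 1)
        weights.append(w)
    return weights

def profanity_score(weight_vectors):
    """Sum the TF-IDF mass falling on lexicon terms for each document."""
    return [sum(w.get(t, 0.0) for t in PROFANE) for w in weight_vectors]

scores = profanity_score(tf_idf(transcripts))
```

A classifier (or a simple threshold on the score) would then flag the second transcript but not the first.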

    Role of semantic indexing for text classification.

    The Vector Space Model (VSM) of text representation suffers from a number of limitations for text classification. Firstly, the VSM is based on the Bag-Of-Words (BOW) assumption, where terms from the indexing vocabulary are treated independently of one another. However, the expressiveness of natural language means that lexically different terms often have related or even identical meanings. Thus, failure to take into account the semantic relatedness between terms means that document similarity is not properly captured in the VSM. To address this problem, semantic indexing approaches have been proposed for modelling the semantic relatedness between terms in document representations. Accordingly, in this thesis, we empirically review the impact of semantic indexing on text classification. This empirical review allows us to answer one important question: how beneficial is semantic indexing to text classification performance? We also carry out a detailed analysis of the semantic indexing process, which allows us to identify reasons why semantic indexing may lead to poor text classification performance. Based on our findings, we propose a semantic indexing framework called Relevance Weighted Semantic Indexing (RWSI) that addresses the limitations identified in our analysis. RWSI uses relevance weights of terms to improve the semantic indexing of documents. A second problem with the VSM is the lack of supervision in the process of creating document representations. This arises from the fact that the VSM was originally designed for unsupervised document retrieval. An important feature of effective document representations is the ability to discriminate between relevant and non-relevant documents. For text classification, relevance information is explicitly available in the form of document class labels. Thus, more effective document vectors can be derived in a supervised manner by taking advantage of available class knowledge.
Accordingly, we investigate approaches for utilising class knowledge for supervised indexing of documents. Firstly, we demonstrate how the RWSI framework can be utilised to assign supervised weights to terms for supervised document indexing. Secondly, we present an approach called Supervised Sub-Spacing (S3) for supervised semantic indexing of documents. A further limitation of the standard VSM is that an indexing vocabulary consisting only of terms from the document collection is used for document representation. This is based on the assumption that terms alone are sufficient to model the meaning of text documents. However, for certain classification tasks, terms are insufficient to adequately model the semantics needed for accurate document classification. A solution is to index documents using semantically rich concepts. Accordingly, we present an event extraction framework called Rule-Based Event Extractor (RUBEE) for identifying and utilising event information for concept-based indexing of incident reports. We also demonstrate how certain attributes of these events, e.g. negation, can be taken into consideration to distinguish between documents that describe the occurrence of an event and those that mention the non-occurrence of that event.
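One simplified reading of supervised term weighting in this spirit: weight each term by how much more frequent it is inside the target class than in the corpus overall. This is an illustrative sketch, not the thesis's actual RWSI formulation; the labelled mini-collection below is invented.

```python
from collections import Counter

# Toy labelled collection (text, class) standing in for incident reports.
docs = [("fire in engine room", "incident"),
        ("routine engine check", "routine"),
        ("fire alarm triggered", "incident"),
        ("routine deck cleaning", "routine")]

def relevance_weights(docs, target_class):
    """Weight terms by class-conditional frequency relative to corpus frequency."""
    in_class, overall = Counter(), Counter()
    for text, label in docs:
        toks = text.split()
        overall.update(toks)
        if label == target_class:
            in_class.update(toks)
    total_in = sum(in_class.values()) or 1
    total_all = sum(overall.values())
    # Ratio > 1 means the term is over-represented in the target class.
    return {t: (in_class[t] / total_in) / (overall[t] / total_all)
            for t in overall}

w = relevance_weights(docs, "incident")
```

Scaling a document's term vector by these weights boosts class-discriminative terms ("fire") and suppresses terms typical of other classes ("routine").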

    Enhancing Semantic Segmentation: Design and Analysis of Improved U-Net Based Deep Convolutional Neural Networks

    In this research, we provide a state-of-the-art method for semantic segmentation that makes use of a modified version of the U-Net architecture, which is itself based on deep convolutional neural networks (CNNs). This research examines this approach to semantic segmentation in detail in an effort to improve its accuracy and efficiency. Semantic segmentation, a crucial operation in computer vision, requires each pixel in an image to be assigned to one of several predefined object classes. The proposed Improved U-Net architecture makes use of deep CNNs to efficiently capture complex spatial characteristics while preserving the associated context. The study illustrates the efficacy of the Improved U-Net in a variety of real-world circumstances through thorough experimentation and assessment. Intricate feature extraction, down-sampling, and up-sampling are all part of the network's design, in order to produce high-quality segmentation results. The study reports comparative evaluations against the classic U-Net and other state-of-the-art models and emphasizes the significance of hyperparameter fine-tuning. The proposed architecture shows excellent performance in terms of accuracy and generalization, demonstrating its promise for a variety of applications. Finally, the problem of semantic segmentation is addressed in a novel way. The experimental findings validate the relevance of the architecture's design decisions and demonstrate its potential to advance computer vision by enhancing segmentation precision and efficiency.
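The down-sampling, up-sampling, and skip-connection pattern at the heart of any U-Net variant can be sketched in plain Python on a single-channel feature map. This is a shape-level illustration only, assuming a 4x4 input; real U-Nets operate on learned multi-channel tensors and merge skips by channel concatenation rather than the simpler element-wise addition used here.

```python
def max_pool2x2(img):
    """2x2 max pooling: the down-sampling step of a U-Net encoder."""
    h, w = len(img), len(img[0])
    return [[max(img[i][j], img[i][j+1], img[i+1][j], img[i+1][j+1])
             for j in range(0, w, 2)] for i in range(0, h, 2)]

def upsample2x(img):
    """Nearest-neighbour up-sampling: the decoder's expansion step."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def skip_merge(enc, dec):
    """Merge an encoder feature map into the decoder path.

    Real U-Nets concatenate channels; element-wise addition is the
    simpler stand-in used in this sketch."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(enc, dec)]

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 8, 7, 6],
     [5, 4, 3, 2]]
down = max_pool2x2(x)    # 2x2 map: coarse context
up = upsample2x(down)    # back to 4x4: restored resolution
merged = skip_merge(x, up)  # skip connection re-injects fine detail
```

The skip connection is what lets the decoder recover pixel-accurate boundaries that pooling would otherwise discard.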

    Sentence level relation extraction via relation embedding

    Relation extraction is an information extraction task that extracts semantic relations from text, usually between two named entities. It is a crucial step in converting unstructured text into structured data that forms a knowledge base, which can then be used to build systems for special purposes such as business decision making and legal case-based reasoning. Sentence-level relation extraction is the most common setting, because relationships can usually be discovered within single sentences. One obvious example is the relationship between the subject and the object. As the task has been studied for years, there are various methods for relation extraction, such as feature-based methods, distant supervision, and recurrent neural networks. However, the following problems have been found in these approaches. (i) These methods require large amounts of human-labelled data to train the model in order to achieve high accuracy. (ii) These methods are difficult to apply in real applications, especially in specialised domains where experts are required both to label and to validate the data. In this thesis, we address these problems in two aspects: academic research and application development. In terms of academic research, we propose models that can be trained with a smaller amount of labelled training data. The first approach trains relation feature embeddings, then uses the feature embeddings to obtain relation embeddings. To minimise the effect of designing handcrafted features, the second approach adopts RNNs to automatically learn features from the text. In these methods, relation embeddings are reduced to a smaller vector space, and relations with similar meanings form clusters. Therefore, the model can be trained with a smaller number of labelled examples. The last approach adopts seq2seq regularisation, which can improve the accuracy of the relation extraction models.
In terms of application development, we construct a prototype web service for searching semantic triples using relations extracted by third-party extraction tools. In the last chapter, we run all our proposed models on real-world legal documents. We also build a web application for extracting relations in legal text based on the trained models, which can help lawyers investigate the key information in legal cases more quickly. We believe that the idea of relation embeddings can be applied in domains that require relation extraction but have limited labelled data.
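The clustering intuition behind relation embeddings can be illustrated with cosine similarity: relations with similar meanings should sit close together in the embedding space, so a new mention can be labelled by its nearest known relation. The embeddings below are hand-set toy vectors, not vectors learned by the thesis's models.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 2-d relation embeddings; "works_for" and "employed_by"
# are placed close together to mimic a learned semantic cluster.
embeddings = {
    "works_for":   [0.9, 0.1],
    "employed_by": [0.85, 0.15],
    "born_in":     [0.1, 0.9],
}

def nearest_relation(query, embeddings):
    """Label a new relation mention by its closest known embedding."""
    return max(embeddings, key=lambda r: cosine(query, embeddings[r]))

label = nearest_relation([0.92, 0.08], embeddings)
```

Because similar relations cluster, one labelled example per cluster can cover several surface relations, which is why less labelled data suffices.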

    Optical tomography: Image improvement using mixed projection of parallel and fan beam modes

    Mixed parallel and fan beam projection is a technique used to increase image quality. This research focuses on enhancing image quality in optical tomography, where quality is measured using the Peak Signal-to-Noise Ratio (PSNR) and Normalized Mean Square Error (NMSE) parameters. The findings of this research show that by combining parallel and fan beam projection, image quality can be improved by more than 10% in terms of PSNR and more than 100% in terms of NMSE compared to a single parallel beam.
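The two quality metrics are standard and easy to state directly. Below is a plain-Python sketch of PSNR and NMSE on flattened pixel lists; the reference and reconstruction values are invented toy data, not the paper's tomography results.

```python
import math

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means better quality."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def nmse(ref, img):
    """Normalized Mean Square Error; lower means better quality."""
    num = sum((a - b) ** 2 for a, b in zip(ref, img))
    den = sum(a ** 2 for a in ref)
    return num / den

ref    = [100, 120, 140, 160]   # phantom (ground-truth) pixels
single = [110, 115, 150, 150]   # hypothetical single-beam reconstruction
mixed  = [102, 118, 143, 158]   # hypothetical mixed-projection reconstruction

improved = psnr(ref, mixed) > psnr(ref, single) and nmse(ref, mixed) < nmse(ref, single)
```

A mixed-projection reconstruction that tracks the phantom more closely scores higher on PSNR and lower on NMSE, which is the direction of improvement the paper reports.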

    A Machine Learning Approach For Opinion Holder Extraction In Arabic Language

    Opinion mining aims at extracting useful subjective information from large amounts of text. Opinion holder recognition is a task that has not yet been addressed for the Arabic language. This task essentially requires a deep understanding of clause structure. Unfortunately, the lack of a robust, publicly available Arabic parser further complicates the research. This paper presents a pioneering study of opinion holder extraction in Arabic news, independent of any lexical parsers. We investigate constructing a comprehensive feature set to compensate for the lack of parsing structural outcomes. The proposed feature set is adapted from previous work on English, coupled with our proposed semantic field and named entity features. Our feature analysis is based on Conditional Random Fields (CRF) and semi-supervised pattern recognition techniques. Different research models are evaluated via cross-validation experiments, achieving an F-measure of 54.03. We publicly release our own research outcome corpus and lexicon to the opinion mining community to encourage further research.
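Parser-free CRF approaches of this kind feed per-token feature dictionaries to the sequence labeller. The sketch below shows the flavour of such features (named-entity membership and nearby opinion-verb cues); the word lists are toy English stand-ins for the paper's Arabic semantic-field and named-entity resources.

```python
# Toy gazetteers standing in for the paper's lexical resources.
NAMED_ENTITIES = {"Ahmed", "Reuters"}
OPINION_VERBS = {"said", "claimed", "believes"}  # semantic-field cue words

def token_features(tokens, i):
    """Build a CRF-style feature dict for the token at position i."""
    t = tokens[i]
    return {
        "word": t.lower(),
        "is_entity": t in NAMED_ENTITIES,
        "prev_is_opinion_verb": i > 0 and tokens[i - 1].lower() in OPINION_VERBS,
        "next_is_opinion_verb": i + 1 < len(tokens) and tokens[i + 1].lower() in OPINION_VERBS,
    }

sent = "Ahmed said the plan would fail".split()
feats = [token_features(sent, i) for i in range(len(sent))]
```

A named entity adjacent to an opinion verb is a strong holder candidate, which is exactly the signal these features expose without any syntactic parse.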

    Thesaurus-based index term extraction for agricultural documents

    This paper describes a new algorithm for automatically extracting index terms from documents relating to the domain of agriculture. The domain-specific Agrovoc thesaurus developed by the FAO is used both as a controlled vocabulary and as a knowledge base for semantic matching. The automatically assigned terms are evaluated against a manually indexed 200-item sample of the FAO’s document repository, and the performance of the new algorithm is compared with a state-of-the-art system for keyphrase extraction.
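The core of thesaurus-based indexing is mapping document words onto a controlled vocabulary, where non-preferred entry terms resolve to preferred descriptors. The miniature thesaurus below is an invented stand-in for Agrovoc (which holds tens of thousands of multilingual terms and relations).

```python
# Toy controlled vocabulary: non-preferred terms map to preferred
# descriptors, mimicking a thesaurus "USE" relation.
THESAURUS = {
    "maize": "maize",
    "corn": "maize",          # USE relation: corn -> maize
    "irrigation": "irrigation",
    "watering": "irrigation", # USE relation: watering -> irrigation
}

def extract_index_terms(text):
    """Return the set of preferred descriptors matched in a document."""
    terms = set()
    for token in text.lower().split():
        if token in THESAURUS:
            terms.add(THESAURUS[token])
    return terms

doc = "Corn yields improve with drip watering systems"
terms = extract_index_terms(doc)
```

Mapping both "corn" and "watering" to their preferred descriptors is what lets automatically assigned terms line up with a manually indexed gold standard.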

    Aspect-Based Sentiment Analysis Using a Two-Step Neural Network Architecture

    The World Wide Web holds a wealth of information in the form of unstructured texts, such as customer reviews for products, events, and more. By extracting and analyzing the opinions expressed in customer reviews in a fine-grained way, valuable opportunities and insights for customers and businesses can be gained. We propose a neural-network-based system to address the task of Aspect-Based Sentiment Analysis, competing in Task 2 of the ESWC-2016 Challenge on Semantic Sentiment Analysis. Our proposed architecture divides the task into two subtasks: aspect term extraction and aspect-specific sentiment extraction. This approach is flexible in that it allows each subtask to be addressed independently. As a first step, a recurrent neural network extracts aspects from a text by framing the problem as a sequence labeling task. In a second step, a recurrent network processes each extracted aspect with respect to its context and predicts a sentiment label. The system uses pretrained semantic word embedding features, which we experimentally enhance with semantic knowledge extracted from WordNet. Further features extracted from SenticNet prove to be beneficial for the extraction of sentiment labels. As the best-performing system in its category, our proposed system proves to be an effective approach to Aspect-Based Sentiment Analysis.
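The two-step structure can be sketched without the neural models: step 1 decodes BIO sequence labels into aspect spans, step 2 assigns each aspect a sentiment from its context. The tags and cue-word lexicon below are toy stand-ins for the outputs of the two recurrent networks.

```python
# Toy sentiment lexicon standing in for the networks' learned knowledge.
POSITIVE = {"great", "excellent"}
NEGATIVE = {"slow", "poor"}

def decode_aspects(tokens, tags):
    """Step 1: collect aspect terms from BIO-tagged tokens."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

def aspect_sentiment(tokens, aspect):
    """Step 2: score an aspect from cue words in its clause (split on 'but')."""
    clauses, cur = [], []
    for t in tokens:
        if t == "but":
            clauses.append(cur)
            cur = []
        else:
            cur.append(t)
    clauses.append(cur)
    head = aspect.split()[0]
    ctx = next(c for c in clauses if head in c)
    score = sum(w in POSITIVE for w in ctx) - sum(w in NEGATIVE for w in ctx)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tokens = "the battery life is great but the screen is slow".split()
tags   = ["O", "B", "I", "O", "O", "O", "O", "B", "O", "O"]
aspects = decode_aspects(tokens, tags)
labels = {a: aspect_sentiment(tokens, a) for a in aspects}
```

Keeping the two steps separate means the aspect extractor can be retrained or swapped without touching the sentiment classifier, which is the flexibility the abstract describes.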