17 research outputs found

    A Systematic Performance Analysis of Deep Perceptual Loss Networks: Breaking Transfer Learning Conventions

    Full text link
    Deep perceptual loss is a type of loss function in computer vision that aims to mimic human perception by using deep features extracted from neural networks. In recent years, the method has been applied to great effect on a host of computer vision tasks, especially tasks with image or image-like outputs such as image synthesis, segmentation, and depth prediction. Many applications of the method use pretrained networks, often convolutional networks, for loss calculation. Despite the increased interest and broader use, more effort is needed to explore which networks to use for calculating deep perceptual loss and from which layers to extract the features. This work aims to rectify this by systematically evaluating a host of commonly used and readily available pretrained networks, at a number of different feature-extraction points, on four existing use cases of deep perceptual loss. The use cases of perceptual similarity, super-resolution, image segmentation, and dimensionality reduction are evaluated through benchmarks, implemented from previous works, in which the selected networks and extraction points are evaluated. The performance on the benchmarks, together with attributes of the networks and extraction points, is then used as the basis for an in-depth analysis. This analysis uncovers insights regarding which architectures provide superior performance for deep perceptual loss and how to choose an appropriate extraction point for a particular task and dataset. Furthermore, the work discusses the implications of the results for deep perceptual loss and the broader field of transfer learning. The results show that deep perceptual loss deviates from two commonly held conventions in transfer learning, which suggests that those conventions are in need of deeper analysis.
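    A minimal sketch of how such a loss can be set up, assuming PyTorch/torchvision; VGG16 as the loss network and the extraction-layer index are illustrative assumptions, not the paper's recommendation:

    import torch
    import torch.nn as nn
    from torchvision.models import vgg16, VGG16_Weights

    class DeepPerceptualLoss(nn.Module):
        """Compare deep features of a frozen, pretrained network instead of raw pixels."""

        def __init__(self, extraction_layer: int = 16):  # layer index is an assumption
            super().__init__()
            net = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
            self.features = net[: extraction_layer + 1]  # keep layers up to the extraction point
            self.features.eval()
            for p in self.features.parameters():
                p.requires_grad = False  # the loss network is never trained

        def forward(self, prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
            return nn.functional.mse_loss(self.features(prediction), self.features(target))

    # Typical usage in an image-to-image task: combine with a pixel-wise loss,
    # e.g. loss = pixel_loss + 0.1 * DeepPerceptualLoss()(output, target).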

    An Overview of Functional Food

    Get PDF
    Functional foods are responsible for the improvement of human health and can significantly reduce the probability of disease in the host body. They are directly or indirectly part of different food ingredients and can induce functional activities in the host's biological system. Functional foods are present in fruit, vegetable, dairy, bakery, cereal, and meat products. They are not additional food supplements, drugs, or antibiotics; rather, they are a main component of the normal human and animal diet. Functional foods are cost-effective and readily available in the market. Daily consumption of functional foods can prevent gastrointestinal diseases and also provide relief from various acute and chronic diseases. Adequate administration of probiotics in a human food can convert a normal food into a functional food. This chapter highlights the effective role of functional food in an individual’s daily life.

    Deep Learning for Geo-referenced Data: Case Study: Earth Observation

    No full text
    The thesis focuses on machine learning methods for Earth Observation (EO) data, more specifically, remote sensing data acquired by satellites and drones. EO plays a vital role in monitoring the Earth’s surface and modelling climate change so that necessary precautionary measures can be taken. Initially, these efforts were dominated by methods relying on handcrafted features and expert knowledge; recent advances in machine learning, however, have led to successful applications in EO. This thesis explores supervised and unsupervised Deep Learning (DL) approaches to monitor the natural resources of water bodies and forests. The first study of this thesis introduces an Unsupervised Curriculum Learning (UCL) method based on widely used DL models to classify water resources from RGB remote sensing imagery. In traditional settings, human experts label images to train deep models, which is costly and time-consuming; UCL, instead, learns the features progressively in an unsupervised fashion from the data, reducing the exhausting effort of labelling. Three datasets of varying resolution are used to evaluate UCL and show its effectiveness: SAT-6, EuroSAT, and PakSAT. UCL outperforms the supervised methods in domain adaptation, which demonstrates the effectiveness of the proposed algorithm. The subsequent study is an extension of UCL to the multispectral imagery of the Australian wildfires. This study uses multispectral Sentinel-2 imagery to create a dataset of the forest fires that ravaged Australia in late 2019 and early 2020. Twelve of the 13 spectral bands of Sentinel-2 are combined to form a three-channel input suitable for the unsupervised architecture (one possible reduction is sketched below), and the unsupervised model then classifies the patches as either burnt or not burnt. This work attains an F1-score of 87% in mapping the burnt regions of Australia, demonstrating the effectiveness of the proposed method. The main contributions of this work are (i) the creation of two datasets using Sentinel-2 imagery, the PakSAT dataset and the Australian Forest Fire dataset; (ii) the introduction of UCL, which learns the features progressively without the need for labelled data; and (iii) experimentation on relevant datasets for water body and forest fire classification. This work focuses on patch-level classification, which could in the future be extended to pixel-level classification. Moreover, the methods proposed in this study can be extended to the multi-class classification of aerial imagery. Further possible future directions include combining geo-referenced meteorological data with remotely sensed image data to explore the proposed methods. Lastly, the proposed method can also be adapted to other domains involving multi-spectral and multi-modal input, such as historical document analysis, forgery detection in documents, and Natural Language Processing (NLP) classification tasks.
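    The band-combination step mentioned above could look roughly as follows; the grouping used here (averaging four consecutive bands per channel) is an assumption for illustration, as the abstract does not specify the exact scheme:

    import numpy as np

    def bands_to_three_channels(patch: np.ndarray) -> np.ndarray:
        """Reduce a 12-band patch of shape (H, W, 12) to an (H, W, 3) input by
        averaging four consecutive bands per output channel (assumed grouping)."""
        assert patch.shape[-1] == 12, "expected 12 spectral bands"
        channels = [patch[..., i:i + 4].mean(axis=-1) for i in (0, 4, 8)]
        return np.stack(channels, axis=-1)

    # x = bands_to_three_channels(np.random.rand(64, 64, 12))  # shape (64, 64, 3)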

    T5 for Hate Speech, Augmented Data, and Ensemble

    No full text
    We conduct relatively extensive investigations of automatic hate speech (HS) detection using different state-of-the-art (SoTA) baselines across 11 subtasks spanning six different datasets. Our motivation is to determine which of the recent SoTA models is best for automatic hate speech detection and what advantage, if any, methods such as data augmentation and ensembling may confer on the best model. We carry out six cross-task investigations. We achieve new SoTA results on two subtasks: macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020 dataset, surpassing previous SoTA scores of 51.52% and 26.52%, respectively. We achieve near-SoTA results on two others: macro F1 scores of 81.66% for subtask A of the OLID 2019 dataset and 82.54% for subtask A of the HASOC 2021 dataset, against SoTA results of 82.9% and 83.05%, respectively. We perform error analysis and use two eXplainable Artificial Intelligence (XAI) algorithms, Integrated Gradients (IG) and SHapley Additive exPlanations (SHAP), to reveal through examples how two of the models (the Bi-directional Long Short-Term Memory network (Bi-LSTM) and the Text-to-Text Transfer Transformer (T5)) make their predictions. Other contributions of this work are (1) the introduction of a simple, novel mechanism for correcting Out-of-Class (OoC) predictions in T5, (2) a detailed description of the data augmentation methods, and (3) the revelation, using several examples and XAI, of the poor data annotations in the HASOC 2021 dataset, buttressing the need for better quality control. We publicly release our model checkpoints and code to foster transparency.
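    The abstract does not detail the OoC-correction mechanism, but a minimal sketch of the general idea, mapping free-form T5 output strings back onto a fixed label set, could look like this (the label set and the string-similarity fallback are hypothetical):

    from difflib import get_close_matches

    VALID_LABELS = ["hate", "offensive", "neither"]  # hypothetical label set

    def correct_ooc_prediction(generated: str, fallback: str = "neither") -> str:
        """Map a generated string onto the closest valid label, falling back
        to a default class when nothing is close enough."""
        text = generated.strip().lower()
        if text in VALID_LABELS:
            return text
        match = get_close_matches(text, VALID_LABELS, n=1, cutoff=0.6)
        return match[0] if match else fallback

    # correct_ooc_prediction("hateful") -> "hate"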

    UCL: Unsupervised Curriculum Learning for Utility Pole Detection from Aerial Imagery

    No full text
    This paper introduces a machine learning based approach for detecting electric poles, an essential part of power grid maintenance. With the increasing popularity of deep learning, several such approaches have been proposed for electric pole detection. However, most of these approaches are supervised, requiring a large amount of labeled data, which is time-consuming and labor-intensive to obtain. Unsupervised deep learning approaches have the potential to remove the need for huge amounts of training data. This paper presents an unsupervised deep learning framework for utility pole detection. The framework combines a Convolutional Neural Network (CNN), a clustering algorithm, and a selection operation: the CNN extracts meaningful features from aerial imagery, the clustering algorithm generates pseudo-labels for the resulting features, and the selection operation filters out reliable samples with which the CNN is further fine-tuned. The fine-tuned version then replaces the initial CNN model, improving the framework, and this process is repeated iteratively so that the model progressively learns the prominent patterns in the data. The presented framework is trained and tested on a small dataset of utility poles provided by “Mention Fuvex” (a Spanish company utilizing long-range drones for power line inspection). Our extensive experimentation demonstrates the progressive learning behavior of the proposed method and yields promising classification scores on the utility pole dataset, with a significance test giving p-value < 0.00005.
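    A minimal sketch of one cluster-select-fine-tune round, assuming scikit-learn's KMeans on a generic matrix of deep features; the keep fraction and the clustering configuration are illustrative assumptions:

    import numpy as np
    from sklearn.cluster import KMeans

    def ucl_round(features: np.ndarray, n_clusters: int = 2, keep_frac: float = 0.2):
        """Cluster deep features into pseudo-classes and keep the samples closest
        to their cluster centre as the 'reliable' fine-tuning set."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
        dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
        reliable = np.argsort(dists)[: int(keep_frac * len(features))]
        return reliable, km.labels_[reliable]

    # Outer loop (schematic): extract features with the current CNN, call ucl_round,
    # fine-tune the CNN on the selected samples and pseudo-labels, and repeat so the
    # model picks up the prominent patterns progressively.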

    UCL: Unsupervised Curriculum Learning for Water Body Classification from Remote Sensing Imagery

    No full text
    This paper presents a Convolutional Neural Network (CNN) based Unsupervised Curriculum Learning (UCL) approach for the recognition of water bodies in remote sensing RGB imagery. The unsupervised nature of the presented algorithm eliminates the need for labelled training data. The problem is cast as a two-class clustering problem (water and non-water), with clustering performed on deep features obtained from a pre-trained CNN. After initial clusters have been identified, representative samples from each cluster are chosen by the unsupervised curriculum learning algorithm for fine-tuning the feature extractor, and the process is repeated iteratively until convergence. Three datasets have been used to evaluate the approach and show its effectiveness at varying scales: (i) the SAT-6 dataset, comprising high-resolution aircraft images; (ii) the Sentinel-2 imagery of EuroSAT, comprising lower-resolution remote sensing images; and (iii) PakSAT, a new dataset we created for this study. PakSAT is the first Pakistani Sentinel-2 dataset designed for classifying the water bodies of Pakistan. Extensive experiments on these datasets demonstrate the progressive learning behaviour of UCL and show promising water-classification results on all three datasets. The obtained accuracies outperform the supervised methods in domain adaptation, demonstrating the effectiveness of the proposed algorithm.

    Creating and Leveraging a Synthetic Dataset of Cloud Optical Thickness Measures for Cloud Detection in MSI

    Full text link
    Cloud formations often obscure optical satellite-based monitoring of the Earth's surface, thus limiting Earth observation (EO) activities such as land cover mapping, ocean color analysis, and cropland monitoring. The integration of machine learning (ML) methods within the remote sensing domain has significantly improved performance on a wide range of EO tasks, including cloud detection and filtering, but there is still much room for improvement. A key bottleneck is that ML methods typically depend on large amounts of annotated data for training, which are often difficult to come by in EO contexts. This is especially true for cloud optical thickness (COT) estimation. A reliable estimation of COT enables more fine-grained and application-dependent control compared to using pre-specified cloud categories, as is commonly done in practice. To alleviate the COT data scarcity problem, in this work we propose a novel synthetic dataset for COT estimation, which we subsequently leverage to obtain reliable and versatile cloud masks on real data. In our dataset, top-of-atmosphere radiances have been simulated for 12 of the spectral bands of the Multispectral Imagery (MSI) sensor onboard Sentinel-2 platforms. These data points have been simulated under different cloud types, COTs, and ground surface and atmospheric profiles. Extensive experiments in which several ML models are trained to predict COT from the measured reflectivity of the spectral bands demonstrate the usefulness of our proposed dataset. In particular, by thresholding COT estimates from our ML models, we show on two satellite image datasets (one publicly available, and one that we have collected and annotated) that reliable cloud masks can be obtained. The synthetic data, the collected real dataset, code, and models have been made publicly available at https://github.com/aleksispi/ml-cloud-opt-thick. Published in the journal Remote Sensing (2024).
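    A rough sketch of the threshold-a-COT-regressor idea with placeholder data; the random forest stands in for the paper's (unspecified here) ML models, and the threshold value is an assumption:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder synthetic training data: per-pixel reflectivities of the 12
    # simulated bands (X) and the corresponding COT values (y).
    X_train = np.random.rand(1000, 12)
    y_train = np.random.rand(1000) * 50
    regressor = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

    def cloud_mask(scene: np.ndarray, cot_threshold: float = 1.0) -> np.ndarray:
        """Predict per-pixel COT for an (H, W, 12) scene and threshold it into a
        binary cloud mask; the threshold is application-dependent."""
        h, w, bands = scene.shape
        cot = regressor.predict(scene.reshape(-1, bands)).reshape(h, w)
        return cot > cot_threshold

    # mask = cloud_mask(np.random.rand(64, 64, 12))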

    Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition

    No full text
    The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal brain datasets enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the eight word stimuli was assessed in 40 trials, resulting in 320 trials per modality for each participant. The aim of this work is to provide a publicly available bimodal inner-speech dataset, contributing towards speech prostheses.

    Water Treatment Using High Performance Antifouling Ultrafiltration Polyether Sulfone Membranes Incorporated with Activated Carbon

    No full text
    Membrane fouling remains a critical challenge for ultrafiltration membrane performance. In this work, polyether sulfone (PES) ultrafiltration (UF) membranes were fabricated via the phase-inversion method by incorporating varying concentrations of APTMS-modified activated carbon (mAC). The mAC was thoroughly characterized, and the fabricated membranes were studied for their surface morphology, functional groups, contact angle, water retention, swelling (%), porosity, and water flux. The hydrophilicity of the mAC membranes resulted in a lower contact angle and higher values of porosity, roughness, water retention, and water flux. The membranes incorporated with mAC also exhibited antibacterial performance against model test strains of gram-negative E. coli and gram-positive S. aureus. Antifouling studies based on the filtration of bovine serum albumin (BSA) protein solution showed that the mAC membranes have better BSA flux. The higher flux and antifouling characteristics of the mAC membranes were attributed to electrostatic repulsion of the BSA protein arising from the functional properties of the AC and the network structure of APTMS. The novel mAC ultrafiltration membranes developed and studied in the present work provide higher flux and lower BSA rejection, and can thus find antifouling applications in the isolation and concentration of proteins and macromolecules.