
    Emotion Recognition Using Artificial Intelligence

    This paper focuses on the interplay between humans and computer systems and, in particular, on the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions, and their main limitation is that they require large training datasets. The paper proposes a system for recognizing human emotions that combines speech recognition with facial expression recognition. The system uses advanced techniques such as deep learning and image recognition to identify facial expressions and infer emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing systems that rely solely on either facial or verbal expressions: it detects human emotion with an accuracy of 86%, whereas the existing systems achieve 70% using verbal expression only and 76% using facial expression only. The paper also discusses the increasing significance of, and demand for, facial recognition technology in emotion recognition.
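    The abstract does not detail the fusion architecture; as a rough, hypothetical illustration of combining facial and speech predictions, the sketch below averages the class probabilities of two small placeholder Keras models (the sub-networks, input shapes, and seven-emotion label set are assumptions, not the paper's design).

        # Hypothetical late-fusion sketch: combine class probabilities from a facial-expression
        # branch and a speech branch. Both sub-models are placeholders, not the paper's networks.
        import numpy as np
        from tensorflow.keras import Model, layers

        NUM_EMOTIONS = 7  # assumed label set

        def build_face_branch():
            # Small CNN over 48x48 grayscale face crops (placeholder architecture).
            inp = layers.Input(shape=(48, 48, 1), name="face")
            x = layers.Conv2D(32, 3, activation="relu")(inp)
            x = layers.MaxPooling2D()(x)
            x = layers.Conv2D(64, 3, activation="relu")(x)
            x = layers.GlobalAveragePooling2D()(x)
            return Model(inp, layers.Dense(NUM_EMOTIONS, activation="softmax")(x))

        def build_speech_branch():
            # Small dense net over 40 MFCC features per utterance (placeholder architecture).
            inp = layers.Input(shape=(40,), name="mfcc")
            x = layers.Dense(128, activation="relu")(inp)
            return Model(inp, layers.Dense(NUM_EMOTIONS, activation="softmax")(x))

        def fused_prediction(face_model, speech_model, faces, mfccs, w_face=0.5):
            # Late fusion: weighted average of the two branches' class probabilities.
            p_face = face_model.predict(faces, verbose=0)
            p_speech = speech_model.predict(mfccs, verbose=0)
            return w_face * p_face + (1.0 - w_face) * p_speech

        face_model, speech_model = build_face_branch(), build_speech_branch()
        probs = fused_prediction(face_model, speech_model,
                                 np.random.rand(4, 48, 48, 1).astype("float32"),
                                 np.random.rand(4, 40).astype("float32"))
        print(probs.argmax(axis=1))  # predicted emotion index per (untrained) sample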

    An investigation into computational methods for classifying fishing vessels to identify illegal, unreported and unregulated fishing

    Illegal, unreported and unregulated (IUU) fishing thwarts collective efforts to create a global model for sustainable fishing. Countering IUU fishing is an urgent priority given world population growth and increasing dependence on ocean-sourced food. This paper examines deep learning methods for the classification of fishing vessels with the intent of identifying illicit fishing operations. This is achieved through supervised learning on highly irregular time series data in the form of signals from the automatic identification system (AIS). One of the main disadvantages of using such data is the intermittent frequency of the signals. To deal with this problem, two separate approaches have been followed: the first is feature engineering with zero padding and the second is linear interpolation. Moving forward, these methods can be used for handling the ever-growing quantity of irregularly sampled time series data. Fundamentally, this work shows, through classification, the existence of a distinct relationship between the movement pattern of a vessel and its method of fishing. Two neural network architectures have been used: firstly, stacked bidirectional gated recurrent units (GRUs) and, secondly, one-dimensional convolutional neural networks (1D CNNs) with residual connection blocks. Each data pipeline has served as input for both neural networks, producing four different sets of results. The results show that feature engineering with the GRU performs very well, with 95% overall accuracy, despite severe class imbalance in the large datasets. By creating a model that can classify the fishing method of a given vessel over 24 hours, this system can be used in real time to monitor behaviour in marine protected areas. It can also locate discrepancies between registered fishing gear and observed behaviour and could therefore be used to safeguard fish stocks.
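    The paper's exact pipelines are not given in the abstract; the sketch below illustrates the two preprocessing ideas it names, zero padding and linear interpolation, for one vessel-day of AIS points (the column names, the 10-minute grid, and the chosen features are assumptions).

        # Sketch of the two preprocessing approaches named in the abstract, applied to one
        # vessel-day of irregular AIS reports. Columns and the 10-minute grid are assumed.
        import numpy as np
        import pandas as pd

        GRID = pd.date_range("2024-01-01 00:00", "2024-01-02 00:00", freq="10min")  # 145 slots

        def zero_padded(track: pd.DataFrame) -> np.ndarray:
            # Assign each AIS report to a 10-minute slot; slots with no report stay at zero.
            out = np.zeros((len(GRID), 2))  # columns: speed over ground, course over ground
            idx = np.clip(np.searchsorted(GRID, track["timestamp"]), 0, len(GRID) - 1)
            out[idx, 0] = track["sog"].to_numpy()
            out[idx, 1] = track["cog"].to_numpy()
            return out

        def interpolated(track: pd.DataFrame) -> np.ndarray:
            # Resample onto the regular grid, filling gaps by time-weighted linear
            # interpolation and extending the first/last fix to the edges.
            ts = track.set_index("timestamp")[["sog", "cog"]]
            ts = ts.reindex(ts.index.union(GRID)).interpolate(method="time")
            return ts.reindex(GRID).ffill().bfill().to_numpy()

        track = pd.DataFrame({
            "timestamp": pd.to_datetime(["2024-01-01 00:03", "2024-01-01 01:47", "2024-01-01 05:30"]),
            "sog": [4.2, 0.8, 3.9],       # speed over ground (knots)
            "cog": [181.0, 90.5, 270.0],  # course over ground (degrees)
        })
        print(zero_padded(track).shape, interpolated(track).shape)  # both (145, 2)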

    Enhanced Solar Potential Analysis: Separating Terraced House Rooftops Using Convolutional Neural Networks

    Solar power, a clean and renewable energy source, plays a pivotal role in achieving sustainable development goals by offering affordable, reliable, modern energy solutions and mitigating energy-related emissions and pollutants. Current studies predominantly focus on solar potential analysis derived from machine learning-based rooftop area segmentation. However, these studies overestimate the usable area for solar output calculations in terraced houses because they fail to distinguish individual households within terraced structures. This research applies state-of-the-art machine learning and computer vision techniques to remote-sensing images obtained via the Google API. The dataset, manually annotated and augmented to include 5000 training images and 1000 validation images, is focused on the UK, particularly terraced house areas. The stand-alone convolutional neural network used to segment terraced-structure rooftop areas reaches an intersection over union of 69.11%. The model uniquely addresses the segmentation of contiguous terraced houses in the UK, which is pivotal for solar installation assessments in the UK's residential landscape.
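    For readers unfamiliar with the reported metric, the short sketch below shows how intersection over union (IoU) is computed for a pair of binary rooftop masks; the toy masks are illustrative only.

        # Minimal sketch of the intersection-over-union (IoU) metric reported in the abstract,
        # computed for binary rooftop masks (prediction vs. manual annotation).
        import numpy as np

        def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
            # Both inputs are 0/1 masks of the same shape (H, W).
            pred, target = pred.astype(bool), target.astype(bool)
            intersection = np.logical_and(pred, target).sum()
            union = np.logical_or(pred, target).sum()
            return float((intersection + eps) / (union + eps))

        # Toy 4x4 example: two overlapping "rooftop" masks.
        pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
        gt   = np.array([[0, 1, 1, 1], [0, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
        print(round(iou(pred, gt), 4))  # 4 overlapping pixels / 6 in the union, about 0.6667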

    A Lazy Semantics for Program Slicing

    This paper demonstrates that if a slicing algorithm is expressed denotationally, without intermediate structures, then the power of denotational semantics can be used to prove correctness. The semantics preserved by slicing algorithms, however, is non-standard. We introduce a new lazy semantics which we prove is preserved by slicing algorithms. We also demonstrate how other concepts in program dependence that are difficult or impossible to express using standard semantics, for example variable dependence, can be expressed naturally using our new lazy semantics.
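    As a concrete reminder of what dependence-based slicing computes (quite apart from the paper's denotational formulation), the toy sketch below takes a straight-line program, represented as (defined variable, used variables) pairs, and keeps only the assignments the slicing criterion transitively depends on; this representation is an illustration, not the paper's machinery.

        # Toy backward slicer for straight-line code: walk the program in reverse, keep a
        # statement only if it defines a currently relevant variable, then mark the
        # variables it uses as relevant in turn.
        def slice_straight_line(program, criterion):
            relevant = {criterion}
            kept = []
            for defined, used in reversed(program):
                if defined in relevant:
                    kept.append((defined, used))
                    relevant.discard(defined)   # this definition "kills" the variable...
                    relevant.update(used)       # ...and makes its operands relevant
            return list(reversed(kept))

        program = [
            ("a", {"x"}),       # a = x + 1
            ("b", {"y"}),       # b = y * 2
            ("c", {"a", "b"}),  # c = a + b
            ("d", {"a"}),       # d = a - 3
        ]
        print(slice_straight_line(program, "d"))  # keeps only the 'a' and 'd' assignments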

    Comparing Sampling Strategies for Tackling Imbalanced Data in Human Activity Recognition

    Human activity recognition (HAR) using wearable sensors is an increasingly active research topic in machine learning, aided in part by the ready availability of detailed motion capture data from smartphones, fitness trackers, and smartwatches. The goal of HAR is to use such devices to assist users in their daily lives in application areas such as healthcare, physical therapy, and fitness. One of the main challenges for HAR, particularly when using supervised learning methods, is obtaining balanced data for algorithm optimisation and testing. As people perform some activities more than others (e.g., walk more than run), HAR datasets are typically imbalanced. The lack of dataset representation from minority classes hinders the ability of HAR classifiers to sufficiently capture new instances of those activities. We introduce three novel hybrid sampling strategies to generate more diverse synthetic samples to overcome the class imbalance problem. The first strategy, which we call the distance-based method (DBM), combines the Synthetic Minority Oversampling Technique (SMOTE) with Random_SMOTE, both of which are built around k-nearest neighbors (KNN). The second technique, referred to as the noise detection-based method (NDBM), combines SMOTE Tomek links (SMOTE_Tomeklinks) and the modified synthetic minority oversampling technique (MSMOTE). The third approach, which we call the cluster-based method (CBM), combines Cluster-Based Synthetic Oversampling (CBSO) and the Proximity Weighted Synthetic Oversampling Technique (ProWSyn). We compare the performance of the proposed hybrid methods to the individual constituent methods and a baseline using accelerometer data from three commonly used benchmark datasets. We show that DBM, NDBM, and CBM reduce the impact of class imbalance and enhance F1 scores by 9 to 20 percentage points compared to their constituent sampling methods. CBM performs significantly better than the others under a Friedman test; however, DBM has lower computational requirements.
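    The paper's DBM/NDBM/CBM hybrids combine samplers that are not all available in standard libraries; the sketch below shows the general evaluation pattern with two constituents that are in imbalanced-learn (SMOTE and SMOTE + Tomek links), compared against a no-resampling baseline on synthetic, imbalanced stand-in data.

        # Compare sampling strategies against a no-resampling baseline on an imbalanced,
        # accelerometer-style dataset. SMOTE and SMOTETomek stand in for the paper's hybrids.
        from imblearn.combine import SMOTETomek
        from imblearn.over_sampling import SMOTE
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for windowed accelerometer features with a 9:1 class imbalance.
        X, y = make_classification(n_samples=4000, n_features=30, weights=[0.9, 0.1], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        def macro_f1(sampler=None):
            # Resample only the training split, train a classifier, score on untouched test data.
            X_fit, y_fit = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
            clf = RandomForestClassifier(random_state=0).fit(X_fit, y_fit)
            return f1_score(y_te, clf.predict(X_te), average="macro")

        for name, sampler in [("baseline", None), ("SMOTE", SMOTE(random_state=0)),
                              ("SMOTE + Tomek", SMOTETomek(random_state=0))]:
            print(f"{name}: macro F1 = {macro_f1(sampler):.3f}")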

    On a Survival Gradient Boosting, Neural Network and Cox PH Based Approach to Predicting Dementia Diagnosis Risk on ADNI

    In recent years, attention within the clinical prediction community has turned to the use of survival machine learning as a tool for predicting the risk of developing a disease as a function of time. The current work seeks to contribute to the existing literature demonstrating the utility of these methods when applied in a dementia prediction context. We use the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and model deterioration within two distinct groups: those deemed cognitively normal and those with a formal diagnosis of Mild Cognitive Impairment. In agreement with the existing literature, we find that survival machine learning outperforms standard survival analysis methods such as the Cox PH model and has very good predictive ability. We propose an innovative approach to predicting dementia diagnosis risk on ADNI, which explores the use of survival neural network and survival extreme gradient boosting techniques that have hitherto seldom been applied in this context. The stability of our models was investigated within a Monte Carlo simulation framework.
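    The abstract contrasts survival machine learning with a Cox PH baseline; the sketch below shows one way such a comparison could be set up with scikit-survival, using synthetic placeholder features (ADNI itself is access-controlled), a standard Cox PH model, and a gradient-boosted survival model scored by concordance index. It is illustrative only and not the paper's pipeline.

        # Compare a Cox PH baseline with a survival gradient-boosting model on synthetic
        # placeholder data, scored by Harrell's concordance index.
        import numpy as np
        import pandas as pd
        from sksurv.ensemble import GradientBoostingSurvivalAnalysis
        from sksurv.linear_model import CoxPHSurvivalAnalysis
        from sksurv.util import Surv

        rng = np.random.default_rng(0)
        n = 500
        age = rng.normal(72, 6, n)
        X = pd.DataFrame({
            "age": age,
            "mmse": rng.normal(27, 2, n),               # placeholder cognitive score
            "hippocampal_vol": rng.normal(3.2, 0.4, n)  # placeholder imaging feature
        })
        # Synthetic time-to-diagnosis (months) and event indicator (True = progressed).
        time = rng.exponential(60, n) * np.exp(-(age - 72) * 0.02)
        event = rng.random(n) < 0.6
        y = Surv.from_arrays(event=event, time=time)

        for name, model in [("Cox PH", CoxPHSurvivalAnalysis()),
                            ("Survival GBM", GradientBoostingSurvivalAnalysis(random_state=0))]:
            model.fit(X, y)
            print(f"{name}: concordance index = {model.score(X, y):.3f}")  # in-sample, illustrative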

    A Denotational Interprocedural Program Slicer

    This paper extends a previously developed intraprocedural denotational program slicer to handle procedures. Using the denotational approach, slices can be defined in terms of the abstract syntax of the object language without the need of a control flow graph or similar intermediate structure. The algorithm presented here is capable of correctly handling the interplay between function and procedure calls, side-effects, and short-circuit expression evaluation. The ability to deal with these features is required in reverse engineering of legacy systems, where code often contains side-effects.

    Sentiment and stock market volatility predictive modelling - A hybrid approach

    Frequent ups and downs are characteristic of the stock market. Conventional models, which assume that investors act rationally, have for years been unable to capture the irregularities in stock market patterns. As a result, behavioural finance attempts to correct these shortcomings by adding factors that capture the sentimental contagion which may be at play in determining the stock market. This paper assesses the predictive influence of sentiment on stock market returns using a non-parametric, nonlinear approach that corrects specific limitations encountered in previous related work. In addition, the paper proposes a new approach to developing stock market volatility predictive models by incorporating a hybrid GARCH and artificial neural network framework, and demonstrates the advantage of this framework over a GARCH-only framework. Our results also reveal that past volatility and positive sentiment appear to have strong predictive power over future volatility.
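    The abstract does not specify how the GARCH and neural network components are combined; one plausible two-stage reading is sketched below: fit a GARCH(1,1) to returns with the arch package, then let a small scikit-learn MLP predict a next-day volatility proxy from lagged conditional volatility and a sentiment index. The data, the sentiment series, and the hybrid wiring are all assumptions for illustration.

        # Rough two-stage GARCH / neural-network hybrid sketch on synthetic data.
        import numpy as np
        from arch import arch_model
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 1000
        returns = rng.standard_t(df=5, size=n)   # toy daily returns, percent scale
        sentiment = rng.normal(0, 1, n)          # stand-in daily sentiment index

        # Stage 1: GARCH(1,1) conditional volatility.
        garch_res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
        cond_vol = np.asarray(garch_res.conditional_volatility)

        # Stage 2: MLP maps (lagged GARCH volatility, lagged sentiment) to a crude
        # realised-volatility proxy for the next day (absolute return).
        X = np.column_stack([cond_vol[:-1], sentiment[:-1]])
        y = np.abs(returns[1:])
        split = int(0.8 * len(X))
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        mlp.fit(X[:split], y[:split])
        # Synthetic data carries little signal; this only demonstrates the pipeline shape.
        print("held-out R^2:", round(mlp.score(X[split:], y[split:]), 3))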

    A trajectory-based strict semantics for program slicing

    We define a program semantics that is preserved by dependence-based slicing algorithms. It is a natural extension, to non-terminating programs, of the semantics introduced by Weiser (which only considered terminating ones) and, as such, is an accurate characterisation of the semantic relationship between a program and the slice produced by these algorithms. Unlike other approaches, apart from Weiser's original one, it is based on strict standard semantics which models the 'normal' execution of programs on a von Neumann machine and, thus, has the advantage of being intuitive. This is essential since one of the main applications of slicing is program comprehension. Although our semantics handles non-termination, it is defined wholly in terms of finite trajectories, without having to resort to complex, counter-intuitive, non-standard models of computation. As well as being simpler, unlike other approaches to this problem, our semantics is substitutive. Substitutivity is an important property because it greatly enhances the ability to reason about correctness of meaning-preserving program transformations such as slicing.

    Empirical Study of Partitions Similarity Measures

    This paper compares four existing distance and similarity measures between partitions: the Rand Index (RI), the Adjusted Rand Index (ARI), the Variation of Information (VI) and, finally, the Normalised Variation of Information (NVI). This work investigates the ability of these partition measures to capture three predefined intuitions: the variation within randomly generated partitions, the sensitivity to small perturbations and, finally, the independence from the dataset scale. Overall, the Adjusted Rand Index (ARI) is shown to perform well with respect to these three intuitions.
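    For reference, the sketch below computes all four measures for two toy labelings of the same ten items: RI and ARI come straight from scikit-learn, VI is computed from the identity VI = H(U) + H(V) - 2 I(U, V), and the NVI here divides by log(n), one common normalisation (the paper's exact normalisation may differ).

        # Compute RI, ARI, VI and a log(n)-normalised VI for two partitions of ten items.
        import numpy as np
        from sklearn.metrics import adjusted_rand_score, mutual_info_score, rand_score

        def entropy(labels):
            # Shannon entropy (in nats) of a labeling, from its cluster proportions.
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return float(-np.sum(p * np.log(p)))

        def variation_of_information(u, v):
            # VI(U, V) = H(U) + H(V) - 2 * I(U, V); mutual_info_score returns I in nats.
            return entropy(u) + entropy(v) - 2 * mutual_info_score(u, v)

        u = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]   # partition U of ten items
        v = [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]   # partition V: a small perturbation of U

        vi = variation_of_information(u, v)
        print("RI :", round(rand_score(u, v), 3))
        print("ARI:", round(adjusted_rand_score(u, v), 3))
        print("VI :", round(vi, 3))
        print("NVI:", round(vi / np.log(len(u)), 3))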