
    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods of real-time, unobtrusive human ambulation, activity, and wellness monitoring, together with data analysis using various algorithmic techniques, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have produced a large body of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive study of the literature in the area in order to identify research gaps and prioritize future research directions.

    Block-level discrete cosine transform coefficients for autonomic face recognition

    This dissertation presents a novel method of autonomic face recognition based on the recently proposed biologically plausible network of networks (NoN) model of information processing. The NoN model is based on locally parallel and globally coordinated transformations. In the NoN architecture, the neurons or computational units form distributed networks, which themselves link to form larger networks. In the general case, an n-level hierarchy of nested distributed networks is constructed. This models the structures in the cerebral cortex described by Mountcastle and the architecture, based on that structure, proposed for information processing by Sutton. In the implementation proposed in the dissertation, the image is processed by a nested family of locally operating networks along with a hierarchically superior network that classifies the information from each of the local networks. This approach yields sensitivity to the contrast sensitivity function (CSF) in the middle of the spectrum, as is true for the human visual system. The input images are divided into blocks to define the local regions of processing. The two-dimensional Discrete Cosine Transform (DCT), a spatial frequency transform, is used to transform the data into the frequency domain. Thereafter, statistical operators that calculate various functions of spatial frequency in each block are used to produce a block-level DCT coefficient. The image is thus transformed into a variable-length vector that is trained with respect to the data set. Classification was performed using a backpropagation neural network. The proposed method yields excellent results on a benchmark database: the experiments achieved a maximum of 98.5% recognition accuracy and an average of 97.4% recognition accuracy. An advanced version of the method, in which the local processing is done on offset blocks, has also been developed. This validates the NoN approach, and further research using local processing as well as more advanced global operators is likely to yield even better results.
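    As an illustration of the block-level DCT feature extraction described above, the following minimal Python sketch divides an image into blocks, applies a 2-D DCT to each block, and reduces each block to a single statistic; the block size and the choice of statistic (AC energy) are assumptions for illustration, not the dissertation's exact operators.

```python
# Hedged sketch of block-level DCT feature extraction; block size and the
# per-block statistic are illustrative assumptions, not the author's exact code.
import numpy as np
from scipy.fftpack import dct

def block_dct_features(image, block_size=8):
    """Divide a grayscale image into blocks, apply a 2-D DCT to each block,
    and summarize each block's spatial-frequency content with one statistic."""
    h, w = image.shape
    h -= h % block_size
    w -= w % block_size
    features = []
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            block = image[r:r + block_size, c:c + block_size].astype(float)
            # 2-D DCT: apply the type-II DCT along rows, then along columns.
            coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
            # One value per block, here the AC energy (DC coefficient excluded),
            # standing in for the dissertation's "block-level DCT coefficient".
            ac = coeffs.copy()
            ac[0, 0] = 0.0
            features.append(np.sum(ac ** 2))
    return np.asarray(features)  # length varies with image size (variable-length vector)

# Example: a random 64x64 "face" image yields an 8x8 = 64-element feature vector.
vec = block_dct_features(np.random.rand(64, 64))
print(vec.shape)
```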

    Automatic Malware Detection

    The problem of automatic malware detection presents challenges for antivirus vendors. Since manual investigation is not possible due to the massive number of samples being submitted every day, automatic malware classification is necessary. Our work is focused on an automatic malware detection framework based on machine learning algorithms. We proposed several static malware detection systems for the Windows operating system to achieve the primary goal of distinguishing between malware and benign software. We also considered the more practical goal of detecting as much malware as possible while maintaining a sufficiently low false positive rate. We proposed several malware detection systems using various machine learning techniques, such as ensemble classifiers, recurrent neural networks, and distance metric learning. We designed architectures of the proposed detection systems, which are automatic in the sense that extraction of features, preprocessing, training, and evaluation of the detection model can be automated. However, an antivirus program relies on a more complex system that consists of many components, several of which depend on malware analysts and researchers. Malware authors adapt their malicious programs frequently in order to bypass antivirus programs that are regularly updated. Our proposed detection systems are not automatic in the sense that they are not able to automatically adapt to detect the newest malware. However, we can partly address this problem by retraining our proposed systems whenever the training set contains the newest malware. Our work relied on static analysis only. In this thesis, we discuss its advantages and drawbacks in comparison to dynamic analysis. Static analysis still plays an important role, and it is used as one component of a complex detection system.
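    As a rough illustration of the kind of automated static-detection pipeline described above, the sketch below trains an ensemble classifier on a toy feature matrix and tunes the decision threshold to keep the false positive rate low; the synthetic features and the specific classifier are assumptions, not the thesis's actual feature set or models.

```python
# Minimal sketch of an automated static-detection pipeline with an ensemble
# classifier; the feature values are hypothetical stand-ins for statically
# extracted attributes (section sizes, import counts, entropy measures, ...).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Toy dataset: rows = samples, columns = static features; 1 = malware, 0 = benign.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Raise the decision threshold to keep false positives rare, mirroring the goal of
# detecting as much malware as possible at a sufficiently low false positive rate.
scores = clf.predict_proba(X_te)[:, 1]
pred = (scores >= 0.9).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred, labels=[0, 1]).ravel()
print(f"detection rate: {tp / (tp + fn):.2f}, false positive rate: {fp / (fp + tn):.2f}")
```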

    EpilepsyNet: Novel automated detection of epilepsy using transformer model with EEG signals from 121 patient population

    Background: Epilepsy is one of the most common neurological conditions globally, and the fourth most common in the United States. It is characterized by recurrent unprovoked seizures, which have a huge impact on the quality of life and finances of affected individuals. A rapid and accurate diagnosis is essential in order to instigate and monitor optimal treatments. There is also a compelling need for the accurate interpretation of epilepsy due to the current scarcity of neurologist diagnosticians and a global inequity in access and outcomes. Furthermore, the existing clinical and traditional machine learning diagnostic methods exhibit limitations, warranting the need to create an automated system using a deep learning model for epilepsy detection and monitoring using a large database. Method: The EEG signals from 35 channels were used to train the deep learning-based transformer model, named EpilepsyNet. For each training iteration, 1-min-long data were randomly sampled from each participant. Thereafter, each 5-s epoch was mapped to a matrix using the Pearson Correlation Coefficient (PCC), such that the lower triangle of the matrix was discarded and only the upper triangle was vectorized as input data. PCC is a reliable method used to measure the statistical relationship between two variables. A single embedding was then performed on each 5 s of data to generate a one-dimensional array of signals. In the final stage, a positional encoding with learnable parameters was added to each correlation coefficient's embedding before being fed to the developed EpilepsyNet as input. The ten-fold cross-validation technique was used to develop the model. Results: Our transformer-based model (EpilepsyNet) yielded high classification accuracy, sensitivity, specificity and positive predictive values of 85%, 82%, 87%, and 82%, respectively. Conclusion: The proposed method is both accurate and robust, since ten-fold cross-validation was employed to evaluate the performance of the model. Compared to the deep models used in existing studies for epilepsy diagnosis, our proposed method is simple and less computationally intensive. This is the earliest study to have uniquely employed positional encoding with learnable parameters on each correlation coefficient's embedding together with a deep transformer model, using a large database of 121 participants for epilepsy detection. With the training and validation of the model using a larger dataset, the same study approach can be extended for the detection of other neurological conditions, with a transformative impact on neurological diagnostics worldwide.
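    The following sketch illustrates the described preprocessing and embedding steps: a per-epoch Pearson correlation matrix across 35 channels, vectorization of its upper triangle, a per-coefficient embedding, and a learnable positional encoding. The embedding dimension, sampling rate, and module structure are assumptions for illustration and this is not the published EpilepsyNet implementation.

```python
# Illustrative sketch of the described preprocessing: per 5-s epoch, compute the
# 35x35 Pearson correlation matrix across channels, vectorize its upper triangle,
# embed each coefficient, and add a learnable positional encoding.
import numpy as np
import torch
import torch.nn as nn

n_channels, fs, epoch_s = 35, 256, 5                    # sampling rate is an assumption
epoch = np.random.randn(n_channels, fs * epoch_s)       # one 5-s EEG epoch

pcc = np.corrcoef(epoch)                                 # 35 x 35 PCC matrix
iu = np.triu_indices(n_channels, k=1)                    # keep only the upper triangle
features = torch.tensor(pcc[iu], dtype=torch.float32)    # 595 correlation values

class CorrelationEmbedding(nn.Module):
    def __init__(self, n_tokens, d_model=64):
        super().__init__()
        # Each correlation coefficient becomes one token embedded into d_model dims.
        self.embed = nn.Linear(1, d_model)
        # Learnable positional encoding, one vector per token position.
        self.pos = nn.Parameter(torch.zeros(n_tokens, d_model))

    def forward(self, x):                                # x: (n_tokens,)
        tokens = self.embed(x.unsqueeze(-1))             # (n_tokens, d_model)
        return tokens + self.pos                         # ready for a transformer encoder

emb = CorrelationEmbedding(n_tokens=features.numel())
out = emb(features)
print(out.shape)                                         # torch.Size([595, 64])
```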

    Application of artificial intelligence in cognitive load analysis using functional near-infrared spectroscopy: A systematic review

    Cognitive load theory suggests that overloading working memory may negatively affect human performance in cognitively demanding tasks. Evaluating cognitive load is difficult; it is often assessed through feedback and evaluation from experts. Cognitive load classification based on Functional Near-InfraRed Spectroscopy (fNIRS) has become one of the key research areas in recent years due to its resistance to artefacts, cost-effectiveness, and portability. To make fNIRS more practical in various applications, it is necessary to develop robust algorithms that can automatically classify fNIRS signals and are less reliant on trained signals. Many of the analytical tools used in the cognitive sciences have used Deep Learning (DL) modalities to uncover relevant information for mental workload classification. This review investigates the research questions on the design and overall effectiveness of DL as well as its key characteristics. We identified 45 studies published between 2011 and 2023 that specifically proposed Machine Learning (ML) models for classifying cognitive load using data obtained from fNIRS devices. Those studies were analyzed based on the type of feature selection method, input, and DL model architecture. Most of the existing cognitive load studies are based on ML algorithms that rely on signal filtration and hand-crafted features. It is observed that hybrid DL architectures that integrate convolutional and LSTM operators performed significantly better than other models. However, DL models, especially hybrid models, have not been extensively investigated for the classification of cognitive load captured by fNIRS devices. The current trends and challenges are highlighted to provide directions for the development of DL models pertaining to fNIRS research.
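    As an illustration of the hybrid convolution-plus-LSTM architectures the review found to perform best, the sketch below combines a 1-D convolutional front end with an LSTM and a dense classification head for windows of multi-channel fNIRS data; the channel count, window length, and layer sizes are assumptions, not a model taken from any of the reviewed studies.

```python
# Hedged sketch of a hybrid CNN-LSTM cognitive-load classifier: convolutions extract
# local features from multi-channel fNIRS windows, an LSTM models their temporal
# evolution, and a dense layer outputs the class logits. All sizes are assumptions.
import torch
import torch.nn as nn

class HybridFNIRSNet(nn.Module):
    def __init__(self, n_channels=20, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.conv(x)                   # (batch, 32, time/2)
        z = z.transpose(1, 2)              # (batch, time/2, 32) for the LSTM
        _, (h, _) = self.lstm(z)           # final hidden state summarizes the window
        return self.head(h[-1])            # class logits

model = HybridFNIRSNet()
logits = model(torch.randn(8, 20, 100))    # 8 windows, 20 channels, 100 samples each
print(logits.shape)                        # torch.Size([8, 2])
```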

    Computational Modeling of Temporal EEG Responses to Cyclic Binary Visual Stimulus Patterns

    The human visual system serves as the basis for many modern computer vision and machine learning approaches. While detailed biophysical models of certain aspects of the visual system exist, little work has been done to develop an end-to-end model from the visual stimulus to the signals generated at the visual cortex and measured via the scalp electroencephalogram (EEG). The creation of such a model would not only provide a better understanding of the visual processing pathways but would also facilitate the design and evaluation of more robust visual stimuli for brain-computer interfaces (BCIs). A novel experiment was designed and conducted in which 15 participants viewed stereotyped visual stimuli while their EEG was recorded simultaneously. The resulting EEG responses were characterized across participants. Furthermore, a Residual Connection Feed Forward system identification Neural Network (ReCon FFNN) was implemented as a preliminary end-to-end model of the visual system that uses the temporal characteristics of the visual stimulus as the model input and the corresponding EEG time series as the model output. This preliminary model was able to reproduce temporal and spectral characteristics of the EEG and serves as a proof of concept for the development of future artificial neural network or biophysical models that incorporate spatio-temporal information.
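    The sketch below shows one plausible form of a residual-connection feed-forward network used for system identification, mapping a window of the binary stimulus time series to the EEG sample it evokes; the window length, layer widths, and output dimensionality are assumptions and this is not the thesis's ReCon FFNN implementation.

```python
# Minimal sketch, under assumed dimensions, of a residual-connection feed-forward
# network for system identification: a window of the binary visual-stimulus time
# series is mapped to the EEG sample(s) it evokes, with a skip connection feeding
# the input past the hidden layers.
import torch
import torch.nn as nn

class ReConFFNN(nn.Module):
    def __init__(self, window=64, hidden=128, out_len=1):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(window, hidden), nn.ReLU(),
            nn.Linear(hidden, window), nn.ReLU(),
        )
        self.readout = nn.Linear(window, out_len)

    def forward(self, x):                       # x: (batch, window) stimulus samples
        z = self.hidden(x) + x                  # residual connection around the hidden stack
        return self.readout(z)                  # predicted EEG sample(s)

model = ReConFFNN()
stimulus = (torch.rand(16, 64) > 0.5).float()   # windows of a cyclic binary stimulus
eeg_pred = model(stimulus)
print(eeg_pred.shape)                           # torch.Size([16, 1])
```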

    High speed event-based visual processing in the presence of noise

    Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel, application-focused event-based algorithms developed are primarily designed for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals, and processing speed in embedded systems.

    Energy efficient enabling technologies for semantic video processing on mobile devices

    Semantic object-based processing will play an increasingly important role in future multimedia systems due to the ubiquity of digital multimedia capture/playback technologies and increasing storage capacity. Although the object-based paradigm has many undeniable benefits, numerous technical challenges remain before such applications become pervasive, particularly on computationally constrained mobile devices. A fundamental issue is the ill-posed problem of semantic object segmentation. Furthermore, on battery-powered mobile computing devices, the additional algorithmic complexity of semantic object-based processing compared to conventional video processing is highly undesirable from both a real-time operation and a battery-life perspective. This thesis attempts to tackle these issues by firstly constraining the solution space and focusing on the human face as a primary semantic concept of use to users of mobile devices. A novel face detection algorithm is proposed, designed from the outset to be amenable to offloading from the host microprocessor to dedicated hardware, thereby providing real-time performance and reducing power consumption. The algorithm uses an Artificial Neural Network (ANN) whose topology and weights are evolved via a genetic algorithm (GA). The computational burden of the ANN evaluation is offloaded to a dedicated hardware accelerator, which is capable of processing any evolved network topology. Efficient arithmetic circuitry, which leverages modified Booth recoding, column compressors and carry-save adders, is adopted throughout the design. To tackle the increased computational cost associated with object tracking and object-based shape encoding, a novel energy-efficient binary motion estimation architecture is proposed. Energy is reduced in the proposed motion estimation architecture by minimising the redundant operations inherent in the binary data. Both architectures are shown to compare favourably with the relevant prior art.
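    As an illustration of why binary motion estimation admits an energy-efficient implementation, the sketch below exploits the fact that, for binary data, the sum of absolute differences between two blocks reduces to an XOR followed by a bit count; the block size, search range, and full-search strategy are assumptions for illustration, not the proposed hardware architecture itself.

```python
# Illustrative sketch of binary motion estimation: for binary (alpha-plane) data the
# sum of absolute differences (SAD) between a block and a candidate reference block
# reduces to XOR plus a bit count, which is what makes low-energy hardware feasible.
import numpy as np

def binary_sad(block, candidate):
    # For binary pixels, |a - b| == a XOR b, so SAD is the number of differing pixels.
    return int(np.count_nonzero(np.logical_xor(block, candidate)))

def full_search(current, reference, bx, by, block=16, search=8):
    """Find the motion vector minimizing the binary SAD over a +/- search window."""
    cur = current[by:by + block, bx:bx + block]
    best, best_sad = (0, 0), block * block + 1
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                continue
            sad = binary_sad(cur, reference[y:y + block, x:x + block])
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad

rng = np.random.default_rng(0)
ref = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))      # current frame = reference shifted down 2, right 3
print(full_search(cur, ref, bx=24, by=24))         # expected motion vector (-3, -2) with SAD 0
```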