
    Interpreting Machine Learning Models and Application of Homotopy Methods

    Neural networks have been criticized for their lack of easy interpretation, which undermines confidence in their use for important applications. We show that a trained neural network can be interpreted using flip points. A flip point is any point that lies on the boundary between two output classes: e.g., for a neural network with a binary yes/no output, a flip point is any input that generates equal scores for "yes" and "no". The flip point closest to a given input is of particular importance, and this point is the solution to a well-posed optimization problem. We show that computing closest flip points allows us, for example, to systematically investigate the decision boundaries of trained networks, to interpret and audit them with respect to individual inputs and entire datasets, and to find vulnerabilities to adversarial attacks. We demonstrate that flip points can help identify mistakes made by a model, improve its accuracy, and reveal the most influential features for classification. We also show that some common assumptions about the decision boundaries of neural networks can be unreliable. Additionally, we present methods for designing the structure of feed-forward networks using matrix conditioning. Finally, we investigate an unsupervised learning method, the Gaussian graphical model, and provide mathematical tools for its interpretation.
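
    A minimal sketch of the closest-flip-point idea, under assumptions not taken from the thesis: the flip-point condition (equal "yes"/"no" scores) is enforced with a quadratic penalty while staying close to the given input, and the scorer is a toy linear model rather than a trained network.

        import numpy as np
        from scipy.optimize import minimize

        def closest_flip_point(f, x0, penalty=1e3):
            """Minimize ||x - x0||^2 + penalty * (score_yes(x) - score_no(x))^2."""
            def objective(x):
                s_yes, s_no = f(x)
                return np.sum((x - x0) ** 2) + penalty * (s_yes - s_no) ** 2
            return minimize(objective, x0, method="BFGS").x

        # Toy "network": two linear output scores (an illustrative stand-in).
        W = np.array([[1.0, -2.0], [-0.5, 1.5]])
        b = np.array([0.1, -0.1])
        f = lambda x: tuple(W @ x + b)

        x0 = np.array([2.0, 1.0])
        x_flip = closest_flip_point(f, x0)   # approximately on the decision boundary
        print(x_flip, f(x_flip))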

    Bayesian plug & play methods for inverse problems in imaging.

    Doctoral thesis in Applied Mathematics (Université de Paris) and Doctoral thesis in Electrical Engineering (Universidad de la República). This thesis deals with Bayesian methods for solving ill-posed inverse problems in imaging with learnt image priors. The first part of this thesis (Chapter 3) concentrates on two particular problems, namely joint denoising and decompression, and multi-image super-resolution. After an extensive study of the noise statistics for these problems in the transformed (wavelet or Fourier) domain, we derive two novel algorithms to solve them. One of them is based on a multi-scale self-similarity prior and can be seen as a transform-domain generalization of the celebrated Non-Local Bayes algorithm to the case of non-Gaussian noise. The second one uses a neural-network denoiser to implicitly encode the image prior, and a splitting scheme to incorporate this prior into an optimization algorithm to find a MAP-like estimator.
    The second part of this thesis concentrates on the Variational AutoEncoder (VAE) model and some of its variants, which have shown the capability to explicitly capture the probability distribution of high-dimensional datasets such as images. Based on these VAE models, we propose two ways to incorporate them as priors for general inverse problems in imaging:
    • The first one (Chapter 4) computes a joint (space-latent) MAP estimator named Joint Posterior Maximization using an Autoencoding Prior (JPMAP). We show theoretical and experimental evidence that the proposed objective function satisfies a weak bi-convexity property, which is sufficient to guarantee that our optimization scheme converges to a stationary point. Experimental results also show the higher quality of the solutions obtained by our JPMAP approach with respect to other non-convex MAP approaches, which more often get stuck in spurious local optima.
    • The second one (Chapter 5) develops a Gibbs-like posterior sampling algorithm for the exploration of posterior distributions of inverse problems, using multiple chains and a VAE as image prior. We show how to use those samples to obtain MMSE estimates and their corresponding uncertainty.
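
    A minimal numpy sketch of a JPMAP-style alternating scheme under strong simplifying assumptions: the VAE decoder is replaced by a linear map G (a real decoder is nonlinear and would need gradient-based updates), and the operator A, noise level, and coupling weight are illustrative. It only shows the alternating structure of the joint (space-latent) objective, not the thesis implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 64, 32, 8
        A = rng.standard_normal((m, n))          # degradation operator (assumed known)
        G = rng.standard_normal((n, k))          # linear stand-in for the VAE decoder
        x_true = G @ rng.standard_normal(k)
        sigma, gamma = 0.05, 0.1
        y = A @ x_true + sigma * rng.standard_normal(m)

        x, z = np.zeros(n), np.zeros(k)
        for _ in range(50):
            # z-step: argmin_z ||x - G z||^2 / (2*gamma) + ||z||^2 / 2
            z = np.linalg.solve(G.T @ G / gamma + np.eye(k), G.T @ x / gamma)
            # x-step: argmin_x ||A x - y||^2 / (2*sigma^2) + ||x - G z||^2 / (2*gamma)
            x = np.linalg.solve(A.T @ A / sigma**2 + np.eye(n) / gamma,
                                A.T @ y / sigma**2 + G @ z / gamma)
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))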

    Smart Parallel Wavelet Transformations for Edge and Fog Detection of Bearing Defects

    Rolling Element Bearings (REBs) are critical components of a wide range of rotating machines, and identifying and preventing their faults is critical for safe and efficient equipment operation. A variety of condition monitoring techniques have been developed that gather large amounts of data using acoustic or vibration transducers. Further information about the health of an REB can be extracted via time-domain trend analysis and amplitude modulation techniques, and the defect-specific peaks in the frequency domain can also be identified directly from the spectrum. However, such approaches provide little insight into the type of defect, are sensitive to noise, or require substantial post-processing. Current fault diagnostic approaches are further complicated by the ever-increasing size of datasets from different types of sensors, which yields non-homogeneous databases and makes it more challenging to execute prognostics for large-scale condition-based maintenance. These difficulties are addressable via approaches that leverage recent developments in microprocessors and systems on chip (SoC), enabling more processing power at the sensor level and unloading the cloud from unused or low-information-density data. The proposed research addresses these limitations by presenting a new approach for bearing defect detection that uses an SoC network to perform the wavelet transform calculation. The wavelet transform enables an improved time-frequency representation and is less sensitive to noise than other classical methods; however, its analysis requires more complex processing techniques that must be executed at the edge (sensor) to limit the need for cloud computing of the results and for large-scale data transmission to the cloud. To enable near real-time processing of the data, the BeagleBone AI SoC is employed, and the wavelet transform and defect classification are performed at the edge. The contributions of this work are as follows: first, the real-time data acquisition driver for the SoC is developed. Second, the machine learning algorithm for improving the wavelet transform and the defect identification is implemented. Third, federated learning in a network of SoCs is formulated and implemented. Finally, the new approach is benchmarked against current approaches in terms of detection accuracy and sensitivity to defects, and was shown to obtain between 80 and 90 percent accuracy depending on the dataset.
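
    A minimal PyWavelets sketch of the wavelet-based detection idea: compute a continuous wavelet transform of a vibration signal and look for an energy concentration near an assumed defect frequency. The sampling rate, defect frequency, and sinusoid-plus-noise signal are placeholders, not data or code from the benchmarked setup.

        import numpy as np
        import pywt

        fs = 10_000                                   # sampling rate [Hz], illustrative
        t = np.arange(0, 1.0, 1 / fs)
        defect_freq = 87.0                            # assumed outer-race defect frequency [Hz]
        signal = 0.3 * np.sin(2 * np.pi * defect_freq * t) + 0.1 * np.random.randn(t.size)

        scales = np.arange(1, 256)
        coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

        # Energy per scale: a peak near the defect frequency hints at a bearing fault.
        energy = np.sum(np.abs(coefs) ** 2, axis=1)
        print("peak-energy frequency [Hz]:", freqs[np.argmax(energy)])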

    Signal fingerprinting and machine learning framework for UAV detection and identification.

    Advancement in technology has led to creative and innovative inventions. One such invention is the unmanned aerial vehicle (UAV). UAVs (also known as drones) are now an intrinsic part of our society because their applications are becoming ubiquitous in every industry, ranging from transportation and logistics to environmental monitoring, among others. Alongside the numerous benign applications of UAVs, their emergence has added a new dimension to privacy and security issues, and there are few strict regulations on who can purchase or own a UAV. For this reason, nefarious actors can take advantage of these aircraft to intrude into restricted or private areas. A UAV detection and identification system is one way of detecting and identifying the presence of a UAV in an area. Such systems employ different sensing techniques, such as radio frequency (RF) signals, video, sound, and thermal imaging, for detecting an intruding UAV. The passive (stealth) nature of RF sensing, the ability to exploit RF sensing to identify the UAV flight mode (i.e., flying, hovering, videoing, etc.), and the capability to detect a UAV beyond visual line-of-sight (BVLOS) or at marginal line-of-sight make RF sensing techniques promising for UAV detection and identification. Moreover, there is constant communication between a UAV and its ground station (i.e., flight controller), and the RF signals emitted by a UAV or UAV flight controller can be exploited for UAV detection and identification. Hence, in this work, an RF-based UAV detection and identification system is proposed and investigated. In RF signal fingerprinting research, the transient and steady states of the RF signal can be used to extract a unique signature. The first part of this work uses two different wavelet analytic transforms (i.e., the continuous wavelet transform and the wavelet scattering transform) to investigate and analyze the characteristics and impact of using either state for UAV detection and identification. Coefficient-based and image-based signatures are proposed for each of the wavelet transforms to detect and identify a UAV. One of the challenges of using RF sensing is that a UAV's communication links operate in the industrial, scientific, and medical (ISM) band. Several devices, such as Bluetooth and WiFi, operate in the ISM band as well, so discriminating UAVs from other ISM devices is not a trivial task. A semi-supervised anomaly detection approach is explored and proposed in this research to differentiate UAVs from Bluetooth and WiFi devices. Time-frequency analytical approaches and an unsupervised deep neural network technique (i.e., a denoising autoencoder) are used separately for feature extraction. Finally, a hierarchical classification framework for UAV identification is proposed to identify the type of unmanned aerial system signal (UAV or UAV controller signal), the UAV model, and the operational mode of the UAV. This is a shift from a flat classification approach: the hierarchical learning approach provides a level-by-level classification that can be useful for identifying an intruding UAV. The proposed frameworks described here can be extended to the detection of rogue RF devices in an environment.
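
    A minimal sketch of the level-by-level (hierarchical) classification idea: a first classifier decides UAV versus controller signal, and UAV signals are passed to further classifiers for model and flight mode. The random placeholder features and labels and the RandomForest choice are illustrative assumptions, not the thesis pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.standard_normal((600, 40))            # wavelet-based RF signatures (placeholder)
        y_type = rng.integers(0, 2, 600)              # level 1: 0 = UAV, 1 = UAV controller
        y_model = rng.integers(0, 4, 600)             # level 2: UAV model id
        y_mode = rng.integers(0, 3, 600)              # level 3: flight mode (fly/hover/video)

        clf_type = RandomForestClassifier(random_state=0).fit(X, y_type)
        uav_rows = y_type == 0
        clf_model = RandomForestClassifier(random_state=0).fit(X[uav_rows], y_model[uav_rows])
        clf_mode = RandomForestClassifier(random_state=0).fit(X[uav_rows], y_mode[uav_rows])

        def predict_hierarchy(x):
            x = x.reshape(1, -1)
            if clf_type.predict(x)[0] != 0:           # controller signal: stop at level 1
                return ("controller", None, None)
            return ("uav", clf_model.predict(x)[0], clf_mode.predict(x)[0])

        print(predict_hierarchy(X[0]))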

    Toward Building an Intelligent and Secure Network: An Internet Traffic Forecasting Perspective

    Internet traffic forecasting is a crucial component of the proactive management of self-organizing networks (SON), ensuring better Quality of Service (QoS) and Quality of Experience (QoE). Given the volatile and random nature of traffic data, this forecasting influences strategic development and investment decisions in the Internet Service Provider (ISP) industry. Modern machine learning algorithms have shown potential in dealing with complex Internet traffic prediction tasks, yet challenges persist. This thesis systematically explores these issues over five empirical studies conducted in the past three years, focusing on four key research questions: How do outlier data samples impact prediction accuracy for both short-term and long-term forecasting? How can a denoising mechanism enhance prediction accuracy? How can robust machine learning models be built with limited data? How can out-of-distribution traffic data be used to improve the generalizability of prediction models? Based on extensive experiments, we propose a novel traffic forecasting framework and associated models that integrate outlier management and noise reduction strategies, outperforming traditional machine learning models. Additionally, we suggest a transfer learning-based framework combined with a data augmentation technique to provide robust solutions with smaller datasets. Lastly, we propose a hybrid model with signal decomposition techniques to enhance model generalization for out-of-distribution data samples. We also brought cyber threats into our forecasting research, acknowledging their substantial influence on traffic unpredictability and forecasting challenges. The thesis presents a detailed exploration of cyber-attack detection, employing methods validated on multiple benchmark datasets. Initially, we combined ensemble feature selection with ensemble classification to improve DDoS (Distributed Denial-of-Service) attack detection accuracy with minimal false alarms. Our research further introduces a stacking ensemble framework for classifying diverse forms of cyber-attacks. We then proposed a weighted voting mechanism for Android malware detection to secure Mobile Cyber-Physical Systems, which integrate the mobility of various smart devices to exchange information between physical and cyber systems. Lastly, we employed Generative Adversarial Networks to generate flow-based DDoS attacks in Internet of Things environments. By considering the impact of cyber-attacks on traffic volume and the challenges they pose to traffic prediction, our research attempts to bridge the gap between traffic forecasting and cyber security, enhancing proactive network management and contributing to resilient and secure internet infrastructure.
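
    A minimal scikit-learn sketch of a stacking ensemble for attack classification: base learners feed their predictions to a meta-learner. The synthetic data and the particular estimator choices are illustrative assumptions, not the thesis configuration or datasets.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        # Synthetic stand-in for flow features labelled benign/attack.
        X, y = make_classification(n_samples=2000, n_features=20, n_classes=2, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        stack = StackingClassifier(
            estimators=[("rf", RandomForestClassifier(random_state=0)),
                        ("knn", KNeighborsClassifier())],
            final_estimator=LogisticRegression(max_iter=1000),
        )
        stack.fit(X_tr, y_tr)
        print("accuracy:", stack.score(X_te, y_te))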

    Deep Clustering and Deep Network Compression

    The use of deep learning has grown rapidly in recent years, making it a much-discussed topic across a diverse range of fields, especially computer vision, text mining, and speech recognition. Deep learning methods have proven to be robust in representation learning and have attained extraordinary achievements. Their success is primarily due to the ability of deep learning to discover and automatically learn feature representations by mapping input data into abstract and composite representations in a latent space. Deep learning's ability to deal with high-level representations of data has inspired us to make use of learned representations, aiming to enhance unsupervised clustering and to evaluate the characteristic strength of internal representations in order to compress and accelerate deep neural networks.
    Traditional clustering algorithms attain limited performance as the dimensionality increases; the ability to extract high-level representations therefore provides beneficial components that can support such algorithms. In this work, we first present DeepCluster, a clustering approach embedded in a deep convolutional auto-encoder (DCAE). We introduce two clustering methods, namely DCAE-Kmeans and DCAE-GMM. DeepCluster allows data points to be grouped into their respective clusters in the latent space through a joint cost function that simultaneously optimizes the clustering objective and the DCAE objective, producing stable representations appropriate for the clustering process. Both qualitative and quantitative evaluations of the proposed methods are reported, showing the efficiency of deep clustering on several public datasets in comparison to previous state-of-the-art methods.
    Following this, we propose a new version of the DeepCluster model that includes varying degrees of discriminative power. This introduces a mechanism which enables the imposition of regularization techniques and the involvement of a supervision component. The key idea of our approach is to distinguish the discriminatory power of numerous structures when searching for a compact structure to form robust clusters. The effectiveness of injecting various levels of discriminatory power into the learning process is investigated alongside an exploration and analytical study of the discriminatory power obtained through two discriminative attributes: data-driven discriminative attributes with the support of regularization techniques, and supervised discriminative attributes with the support of the supervision component. An evaluation is provided on four different datasets.
    The use of neural networks in various applications is accompanied by a dramatic increase in computational costs and memory requirements. Making use of the characteristic strength of learned representations, we propose an iterative pruning method that simultaneously identifies the critical neurons and prunes the model during training, without any pre-training or fine-tuning procedures. We introduce a majority voting technique to compare the activation values among neurons and assign a voting score to evaluate their importance quantitatively. This mechanism effectively reduces model complexity by eliminating the less influential neurons and aims to determine a subset of the whole model that can represent the reference model with far fewer parameters within the training process. Empirically, we demonstrate that our pruning method is robust across various scenarios, including fully-connected networks (FCNs), sparsely-connected networks (SCNs), and convolutional neural networks (CNNs), using two public datasets.
    Moreover, we propose a novel framework to measure the importance of individual hidden units by computing a measure of relevance, identifying the most critical filters and pruning them to compress and accelerate CNNs. Unlike existing methods, we introduce the use of the activation of feature maps to detect valuable information and the essential semantic parts, with the aim of evaluating the importance of feature maps, inspired by recent work on neural network interpretability. A majority voting technique based on the degree of alignment between a semantic concept and individual hidden unit representations is utilized to evaluate feature map importance quantitatively. We also propose a simple yet effective method to estimate new convolution kernels based on the remaining crucial channels to accomplish effective CNN compression. Experimental results show the effectiveness of our filter selection criteria, which outperform state-of-the-art baselines.
    To conclude, we present a comprehensive, detailed review of time-series data analysis, with emphasis on deep time-series clustering (DTSC), and a founding contribution to the area of applying deep clustering to time-series data by presenting the first case study in the context of movement behavior clustering, utilizing the DeepCluster method. The results are promising, showing that the latent space encodes sufficient patterns to facilitate accurate clustering of movement behaviors. Finally, we identify the state of the art and present an outlook on the important field of DTSC from five perspectives.
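
    A minimal sketch of the activation-based majority-voting idea behind the pruning method: each sample votes for the neurons whose activation magnitude exceeds a layer-wide reference, and the least-voted neurons become pruning candidates. The placeholder activations, the median reference, and the keep ratio are assumptions for illustration, not the thesis algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        acts = np.abs(rng.standard_normal((1000, 64)))   # |activations|: samples x neurons

        # Each sample "votes" for neurons whose activation exceeds that sample's median.
        votes = (acts > np.median(acts, axis=1, keepdims=True)).sum(axis=0)

        keep_ratio = 0.5                                  # illustrative compression target
        keep = np.sort(np.argsort(votes)[::-1][: int(keep_ratio * acts.shape[1])])
        print("neurons kept:", keep)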

    Learning Robust Sequence Features via Dynamic Temporal Pattern Discovery

    As a major type of data, time series possess invaluable latent knowledge for describing the real world and human society. In order to improve the ability of intelligent systems to understand the world and people, it is critical to design sophisticated machine learning algorithms for extracting robust time series features from such latent knowledge. Motivated by the successful application of deep learning in computer vision, more and more machine learning researchers have turned their attention to applying deep learning techniques to time series data. However, directly employing current deep models in most time series domains can be problematic. A major reason is that the types of temporal patterns current deep models target are very limited and cannot meet the requirement of modeling the different underlying patterns of data coming from various sources. In this study we address this problem by designing different network structures explicitly based on specific domain knowledge, so that we can extract features via the most salient temporal patterns. More specifically, we focus on two types of temporal patterns: order patterns and frequency patterns. For order patterns, which are usually related to brain and human activities, we design a hashing-based neural network layer to globally encode the ordinal pattern information into the resulting features. It is further generalized into a specially designed Recurrent Neural Network (RNN) cell which can learn order patterns in an online fashion. On the other hand, we believe audio-related data such as music and speech can benefit from modeling frequency patterns. We do so by developing two types of RNN cells: the first directly learns long-term dependencies in the frequency domain rather than the time domain, and the second dynamically filters out noise frequencies based on temporal context. By proposing various deep models based on different domain knowledge and evaluating them on extensive time series tasks, we hope this work provides inspiration for others and increases the community's interest in applying deep learning techniques to more time series tasks.
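
    A minimal sketch of one simple way to expose ordinal (order-pattern) structure to a model: each window is mapped to the index of its argsort permutation, and the histogram of these codes serves as a feature vector. The window length and the permutation-index encoding are illustrative assumptions, not the proposed hashing layer or RNN cell.

        import numpy as np
        from math import factorial

        def ordinal_pattern_index(window):
            """Map a window to the lexicographic index (Lehmer code) of its argsort permutation."""
            perm = np.argsort(window)
            n, index = len(perm), 0
            for i, p in enumerate(perm):
                index += np.sum(perm[i + 1:] < p) * factorial(n - 1 - i)
            return int(index)

        series = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * np.random.randn(200)
        m = 3                                              # pattern length
        codes = [ordinal_pattern_index(series[i:i + m]) for i in range(len(series) - m + 1)]
        hist = np.bincount(codes, minlength=factorial(m)) / len(codes)   # order-pattern feature
        print(hist)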

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to synchrotron X-ray micro-computed tomography and fostered the creation of imaging facilities for studying samples of the most varied kinds, e.g. model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the experimental setup parameters during operation. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity, and other essential properties. These improvements considerably increased the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger amounts of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments that examine large numbers of samples and produce datasets of higher quality. The scientific community therefore has a strong need for an efficient, automated workflow for X-ray data analysis that can handle such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, as they were developed for ad hoc scenarios in medical imaging; they are therefore neither optimized for high-throughput data streams nor able to exploit the hierarchical nature of the samples.
    The main contribution of this work is a new automated analysis workflow suited for the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The developed workflow is based on improved methods for data preprocessing, registration, localization, and segmentation. Every workflow stage that involves a training phase can be automatically fine-tuned to find the best hyperparameters for the specific dataset. For the analysis of fiber structures in samples, a new, highly parallelizable 3D orientation analysis method was developed, based on a novel concept of emitted rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions, and the workflow was shown to be capable of processing series of datasets of a similar kind. In addition, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, the workflow was used for the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head kidneys, and heart. Furthermore, the developed 3D orientation analysis method was employed in the morphological analysis of polymer scaffold datasets to steer a fabrication process toward desirable properties.
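
    A minimal sketch of the per-stage hyperparameter tuning idea: for a segmentation stage, sweep a parameter and keep the value that maximizes the Dice score on a labelled volume. The synthetic volume, the simple intensity threshold standing in for the real segmentation model, and the Dice criterion are illustrative assumptions, not the workflow's actual tuning procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        gt = np.zeros((32, 32, 32), dtype=bool)
        gt[8:24, 8:24, 8:24] = True                        # synthetic ground-truth organ
        vol = gt.astype(float) + 0.3 * rng.standard_normal(gt.shape)

        def dice(pred, truth):
            inter = np.logical_and(pred, truth).sum()
            return 2 * inter / (pred.sum() + truth.sum())

        thresholds = np.linspace(0.1, 0.9, 17)
        scores = [dice(vol > t, gt) for t in thresholds]
        best = thresholds[int(np.argmax(scores))]
        print(f"best threshold {best:.2f}, Dice {max(scores):.3f}")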

    An Analytic Training Approach for Recognition in Still Images and Videos

    This dissertation proposes a general framework to efficiently identify objects of interest (OI) in still images; its application can be further extended to human action recognition in videos. The frameworks used in this research to process still images and videos are similar in architecture, except that they have different content representations. Initially, global-level analysis is employed to extract distinctive feature sets from the input data. For the global analysis of data, bidirectional two-dimensional principal component analysis (2D-PCA) is employed to preserve correlation amongst neighborhood pixels. Furthermore, to cope with the inherent limitations of the holistic approach, local information is introduced into the framework. The local information of the OI is identified using FERNS and affine SIFT (ASIFT) approaches for spatial and temporal datasets, respectively. For supportive local information, feature detection is followed by an effective pruning strategy that divides these features into inliers and outliers; a cluster of inliers represents local features which exhibit stable behavior and geometric consistency. Incremental learning is a significant but often overlooked problem in action recognition. The final part of this dissertation proposes a new action recognition algorithm based on sequential learning and an adaptive representation of the human body using Pyramid of Histogram of Oriented Gradients (PHOG) features. The changing shape and appearance of human body parts is tracked based on the weak appearance constancy assumption, and the constantly changing shape of an OI is maximally covered by small blocks that approximate the body contour of a segmented foreground object. In addition, the analytically determined learning phase guarantees a lower computational burden for classification. The use of a minimum number of video frames, in a causal way, to recognize an action is also explored in this dissertation. The use of PHOG features adaptively extracted from individual frames allows an incoming action video to be recognized from a small group of frames, which eliminates the need for a large look-ahead.
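
    A minimal sketch of the 2D-PCA step in one direction (the bidirectional variant applies the same construction to the row direction as well): the image scatter matrix is built directly from the 2D images, so neighborhood correlation is preserved without flattening. The random image stack and the number of retained components are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        images = rng.standard_normal((100, 32, 32))        # N images of size h x w (placeholder)

        mean_img = images.mean(axis=0)
        centered = images - mean_img
        # Image scatter matrix: (1/N) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w)
        G = np.einsum("nij,nik->jk", centered, centered) / len(images)

        eigvals, eigvecs = np.linalg.eigh(G)               # eigenvalues in ascending order
        d = 8                                              # number of retained components
        X = eigvecs[:, -d:]                                # top-d projection axes
        features = centered @ X                            # each image -> h x d feature matrix
        print(features.shape)                              # (100, 32, 8)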