5 research outputs found

    Enhanced segment particle swarm optimization for large-scale kinetic parameter estimation of Escherichia coli network model

    The development of a large-scale metabolic model of Escherichia coli (E. coli) is crucial for identifying potential solutions for industrially viable production. However, large-scale kinetic parameter estimation using optimization algorithms has not yet been applied to the main metabolic pathway of the E. coli model, and there is a lack of accurate results reported for current parameter estimation using this approach. Thus, this research aimed to estimate the large-scale kinetic parameters of the main metabolic pathway of the E. coli model. In this regard, a Local Sensitivity Analysis, the Segment Particle Swarm Optimization (Se-PSO) algorithm, and the Enhanced Segment Particle Swarm Optimization (ESe-PSO) algorithm were adapted and proposed to estimate the parameters. Initially, the PSO algorithm was adapted to find the globally optimal result based on unorganized particle movement in the search space toward the optimal solution. This development then introduced the Se-PSO algorithm, in which the particles are segmented to find local optimal solutions at the beginning, which are later refined by the PSO algorithm. Additionally, the study proposed the Enhanced Se-PSO (ESe-PSO) algorithm to improve the linear value of the inertia weight.
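    For context, the sketch below shows the canonical PSO loop with a linearly decreasing inertia weight, the mechanism that the Se-PSO and ESe-PSO variants build on; it is not the authors' algorithm, and the objective function, bounds, and constants are illustrative assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, seed=0):
    """Canonical PSO with a linearly decreasing inertia weight.

    objective: callable mapping a parameter vector to a scalar cost (lower is better)
    bounds:    (low, high) arrays delimiting the search-space box for each parameter
    """
    rng = np.random.default_rng(seed)
    low, high = (np.asarray(b, dtype=float) for b in bounds)
    dim = low.size

    x = rng.uniform(low, high, size=(n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                        # particle velocities
    pbest = x.copy()                                            # personal bests
    pbest_cost = np.array([objective(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()                   # global best

    for t in range(n_iter):
        # linearly decreasing inertia weight: exploration early, exploitation late
        w = w_max - (w_max - w_min) * t / max(n_iter - 1, 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, low, high)

        cost = np.array([objective(p) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    return gbest, pbest_cost.min()
```

    In a kinetic-parameter setting, `objective` would typically be the squared error between simulated and measured metabolite concentrations, with `bounds` spanning plausible ranges for each rate constant.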

    Adaptive classifier ensembles for face recognition in video-surveillance

    When implementing security systems such as intelligent video surveillance, using face images offers many advantages over other biometric traits. In particular, it allows potential individuals of interest to be detected in a discreet and non-intrusive manner, which can be especially beneficial in situations such as watchlist screening, search in archived data, or face re-identification. Nevertheless, face recognition still faces many difficulties specific to video surveillance. Among others, the lack of control over the observed environment leads to many variations in illumination conditions, image resolution, motion blur, and face orientation and expression. To recognize individuals, facial models are usually generated from a limited number of reference images or videos collected during enrollment sessions. However, since these acquisitions do not necessarily take place under the same observation conditions, the reference data do not always represent the complexity of the real problem. Moreover, although facial models can be adapted as new reference data become available, incremental learning based on significantly different data exposes the system to a risk of knowledge corruption. Finally, only part of this knowledge is actually relevant for classifying a given image. In this thesis, a new system is proposed for the automatic detection of individuals of interest in video surveillance. More specifically, it focuses on a user-centered scenario in which a face recognition system is integrated into a decision support tool to alert an operator when an individual of interest is detected in video streams. Such a system must be able to add or remove individuals of interest during operation, as well as update their facial models over time with new reference data. To this end, the proposed system relies on concept change detection to guide a learning strategy involving classifier ensembles. Each individual enrolled in the system is represented by an ensemble of 2-class classifiers, each specialized for different observation conditions detected in the reference data. In addition, a new rule for the dynamic fusion of classifier ensembles is proposed, using concept models to estimate the relevance of the classifiers with respect to each image to be classified. Finally, faces are tracked from one frame to the next in order to group them into trajectories and accumulate decisions over time. In Chapter 2, concept change detection is first used to limit the growth in complexity of a template matching system that adopts an automatic gallery update strategy. A new context-sensitive approach is proposed, in which only high-confidence images captured under different observation conditions are used to update the facial models. Experiments were conducted with three public face databases. 
A standard template matching system was used, combined with a module that detects changes in illumination conditions. The results show that the proposed approach reduces the complexity of such systems while maintaining performance over time. In Chapter 3, a new adaptive system based on classifier ensembles is proposed for face recognition in video surveillance. It consists of an ensemble of incremental classifiers for each enrolled individual and relies on concept change detection to refine the facial models when new data become available. A hybrid strategy is proposed, in which classifiers are added to the ensembles only when an abrupt change is detected in the reference data. When a gradual change occurs, the associated classifiers are updated instead, which refines the knowledge specific to the corresponding concept. A particular implementation of this system is proposed, using ensembles of probabilistic Fuzzy-ARTMAP classifiers generated and updated with a strategy based on dynamic particle swarm optimization, and using the Hellinger distance between histograms to detect changes. Simulations performed on the Faces in Action (FIA) video surveillance database show that the proposed system maintains a high level of performance over time while limiting knowledge corruption. It achieves classification performance superior to a similar passive system (without change detection), as well as to probabilistic kNN and TCM-kNN reference systems. In Chapter 4, an evolution of the system presented in Chapter 3 is proposed, integrating mechanisms to dynamically adapt the system's behavior to changing observation conditions during operation. A new fusion rule based on dynamic weighting is proposed, assigning each classifier a weight proportional to its estimated competence with respect to each image to be classified. Moreover, these competences are estimated with the concept models already used for change detection during learning, which reduces the resources required during operation. An evolution of the implementation proposed in Chapter 3 is presented, in which concepts are modeled with the Fuzzy C-Means clustering algorithm and classifier fusion is performed with a weighted average. Experimental simulations with the FIA and Chokepoint video surveillance databases show that the proposed fusion method achieves results superior to the DSOLA dynamic selection method while using considerably fewer computational resources. In addition, the proposed method shows classification performance superior to probabilistic kNN, TCM-kNN, and Adaptive Sparse Coding reference systems.
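    The Chapter 3 implementation described above detects concept changes with the Hellinger distance between histograms. The sketch below illustrates that measure on grayscale intensity histograms of reference face batches; the histogram features, bin count, and threshold value are assumptions for illustration, not the thesis settings.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two histograms (0 = identical, 1 = disjoint)."""
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def detect_abrupt_change(reference_images, new_images, bins=32, threshold=0.3):
    """Flag an abrupt concept change when the intensity distribution of a new batch
    of reference face images drifts far from the batch used to train the current
    classifiers. The 0.3 threshold is an illustrative value only."""
    ref_hist, _ = np.histogram(np.concatenate([im.ravel() for im in reference_images]),
                               bins=bins, range=(0, 255))
    new_hist, _ = np.histogram(np.concatenate([im.ravel() for im in new_images]),
                               bins=bins, range=(0, 255))
    d = hellinger(ref_hist, new_hist)
    return d > threshold, d
```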

    Adapting heterogeneous ensembles with particle swarm optimization for video face recognition

    In video-based face recognition applications, matching is typically performed by comparing query samples against biometric models (i.e., an individual’s facial model) that are designed with reference samples captured during an enrollment process. Although statistical and neural pattern classifiers may represent a flexible solution to this kind of problem, their performance depends heavily on the availability of representative reference data. With operators involved in the data acquisition process, collection and analysis of reference data are often expensive and time-consuming. However, although a limited amount of data is initially available during enrollment, new reference data may be acquired and labeled by an operator over time. Still, due to limited control over changing operational conditions and personal physiology, classification systems used for video-based face recognition are confronted with complex and changing pattern recognition environments. This thesis concerns adaptive multiclassifier systems (AMCSs) for incremental learning of new data during enrollment and update of biometric models. To avoid corruption of knowledge (facial models) over time, the proposed AMCS uses a supervised incremental learning strategy based on dynamic particle swarm optimization (DPSO) to evolve a swarm of fuzzy ARTMAP (FAM) neural networks in response to new data. As each particle in a FAM hyperparameter search space corresponds to a FAM network, the learning strategy adapts learning dynamics by co-optimizing all of their parameters (hyperparameters, weights, and architecture) in order to maximize accuracy while minimizing computational cost and memory resources. To achieve this, the relationship between the classification and optimization environments is studied and characterized, leading to the following additional contributions. An initial version of this DPSO-based incremental learning strategy was applied to an adaptive classification system (ACS), where the accuracy of a single FAM neural network is maximized. It is shown that the original definition of a classification system capable of supervised incremental learning must be reconsidered in two ways. Not only must a classifier’s learning dynamics be adapted to maintain a high level of performance through time, but some previously acquired learning validation data must also be used during adaptation. It is empirically shown that adapting a FAM during incremental learning constitutes a type III dynamic optimization problem in the search space, where the local optima values and their corresponding positions change over time. Results also illustrate the necessity of a long-term memory (LTM) to store previously acquired data for unbiased validation and performance estimation. The DPSO-based incremental learning strategy was then modified to evolve the swarm (or pool) of FAM networks within an AMCS. A key element for the success of ensembles is tackled: classifier diversity. With several correlation and diversity indicators, it is shown that genotype (i.e., hyperparameter) diversity in the optimization environment is correlated with classifier diversity in the classification environment. Following this result, properties of a DPSO algorithm that seeks to maintain genotype particle diversity to detect and follow local optima are exploited to generate and evolve diversified pools of FAM classifiers. Furthermore, a greedy search algorithm is presented to perform efficient ensemble selection based on accuracy and genotype diversity. 
This search algorithm allows for diversified ensembles without evaluating costly classifier diversity indicators, and the selected ensembles also yield accuracy comparable to that of reference ensemble-based and batch learning techniques, with only a fraction of the resources. Finally, after studying the relationship between the classification environment and the search space, the objective space of the optimization environment is also considered. An aggregated dynamical niching particle swarm optimization (ADNPSO) algorithm is presented to guide the FAM networks according to two objectives: FAM accuracy and computational cost. Instead of purely solving a multi-objective optimization problem to provide a Pareto-optimal front, the ADNPSO algorithm aims to generate pools of classifiers among which both genotype and phenotype (i.e., objective) diversity are maximized. ADNPSO thus uses information in the search space to guide particles toward different local Pareto-optimal fronts in the objective space. A specialized archive is then used to categorize solutions according to FAM network size and then capture locally non-dominated classifiers. These two components are then integrated into the AMCS through an ADNPSO-based incremental learning strategy. The AMCSs proposed in this thesis are promising since they create ensembles of classifiers designed with the ADNPSO-based incremental learning strategy and provide a high level of accuracy that is statistically comparable to that obtained through mono-objective optimization and reference batch learning techniques, yet require only a fraction of the computational cost.
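    As a rough illustration of the greedy ensemble selection described above, the sketch below repeatedly adds the classifier that best trades off validation accuracy against mean genotype (hyperparameter) distance to the members already selected; the scoring rule and the `div_weight` parameter are assumptions, not the thesis formulation.

```python
import numpy as np

def greedy_ensemble_selection(accuracies, genotypes, ensemble_size, div_weight=0.5):
    """Greedily build an ensemble from a pool of classifiers.

    accuracies: validation accuracy of each classifier in the pool
    genotypes:  hyperparameter vectors (particle positions) of each classifier

    At each step the candidate maximizing
        accuracy + div_weight * mean genotype distance to the selected members
    is added, so diversity is measured in the optimization (genotype) space
    rather than with costly classifier-diversity indicators.
    """
    accuracies = np.asarray(accuracies, dtype=float)
    genotypes = np.asarray(genotypes, dtype=float)
    selected = [int(accuracies.argmax())]            # start from the most accurate member

    while len(selected) < ensemble_size:
        best_idx, best_score = None, -np.inf
        for i in range(len(accuracies)):
            if i in selected:
                continue
            # diversity term: mean genotype distance to the members already selected
            div = np.mean(np.linalg.norm(genotypes[i] - genotypes[selected], axis=1))
            score = accuracies[i] + div_weight * div
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected
```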

    Adaptive multi-classifier systems for face re-identification applications

    In video surveillance, decision support systems rely more and more on face recognition (FR) to rapidly determine whether facial regions captured over a network of cameras correspond to individuals of interest. Systems for FR in video surveillance are applied in a range of scenarios, for instance watchlist screening, face re-identification, and search and retrieval. The focus of this Thesis is video-to-video FR, as found in face re-identification applications, where facial models are designed on reference data and updating is achieved with operational captures from video streams. Several challenges emerge from the task of recognizing individuals of interest from faces captured with video cameras. Most notably, it is often assumed that the facial appearance of target individuals does not change over time, and that the proportions of faces captured for target and non-target individuals are balanced, known a priori, and fixed. However, faces captured during operations vary due to several factors, including illumination, blur, resolution, pose, expression, and camera interoperability. In addition, the facial models used for matching are commonly not representative since they are designed a priori, with a limited number of reference samples that are collected and labeled at a high cost. Finally, the proportions of target and non-target individuals continuously change during operations. In the literature, adaptive multiple classifier systems (MCSs) have been successfully applied to video-to-video FR, where the facial model for each target individual is designed using an ensemble of 2-class classifiers (trained using target vs. non-target reference samples). Recent approaches employ ensembles of 2-class Fuzzy ARTMAP classifiers, with a DPSO strategy to generate a pool of classifiers with optimized hyperparameters, and Boolean combination to merge their responses in the ROC space. In addition, skew-sensitive ensembles were recently proposed to adapt the fusion function of an ensemble according to the class imbalance measured on operational data. These active approaches periodically estimate target vs. non-target proportions during operations, and the fusion of classifier ensembles is adapted to this imbalance. Finally, face tracking can be used to regroup the system responses linked to a facial trajectory (facial captures from a single person in the scene) for robust spatio-temporal recognition, and to update facial models over time using operational data. In this Thesis, new techniques are proposed to adapt the facial models of individuals enrolled in a video-to-video FR system. Trajectory-based self-updating is proposed to update the system, considering gradual and abrupt changes in the classification environment. Then, skew-sensitive ensembles are proposed to adapt the system to the operational imbalance. In Chapter 2, an adaptive framework is proposed for partially-supervised learning of facial models over time based on facial trajectories. During operations, information from a face tracker and individual-specific ensembles is integrated for robust spatio-temporal recognition and for self-update of facial models. The tracker defines a facial trajectory for each individual in the video. A target individual is recognized if the positive predictions accumulated along a trajectory surpass a detection threshold for an ensemble. 
If the accumulated positive predictions surpass a higher update threshold, then all target face samples from the trajectory are combined with non-target samples (selected from the cohort and universal models) to update the corresponding facial model. A learn-and-combine strategy is employed to avoid knowledge corruption during self-update of ensembles. In addition, a memory management strategy based on Kullback-Leibler divergence is proposed to rank and select the most relevant target and non-target reference samples to be stored in memory as the ensembles evolve. The proposed system was validated with synthetic data and real videos from the Face in Action dataset, emulating a passport checking scenario. Initially, enrollment trajectories were used for supervised learning of ensembles, and videos from three capture sessions were presented to the system for FR and self-update. Transaction-level analysis shows that the proposed approach outperforms baseline systems that do not adapt to new trajectories, and provides performance comparable to ideal systems that adapt to all relevant target trajectories through supervised learning. Subject-level analysis reveals the existence of individuals for which self-updated ensembles provide a considerable benefit. Trajectory-level analysis indicates that the proposed system allows for robust spatio-temporal video-to-video FR. In Chapter 3, an extension and a particular implementation of the ensemble-based system for spatio-temporal FR is proposed and characterized in scenarios with gradual and abrupt changes in the classification environment. Transaction-level results show that the proposed system increases AUC accuracy by about 3% in scenarios with abrupt changes, and by about 5% in scenarios with gradual changes. Subject-based analysis reveals the difficulties of FR with different poses, which more significantly affect lamb- and goat-like individuals. Compared to reference spatio-temporal fusion approaches, the proposed accumulation scheme produces the highest discrimination. In Chapter 4, adaptive skew-sensitive ensembles are proposed to combine classifiers trained by selecting data with varying levels of imbalance and complexity, in order to sustain a high level of performance for video-to-video FR. During operations, the level of imbalance is periodically estimated from the input trajectories using the HDx quantification method and pre-computed histogram representations of imbalanced data distributions. Ensemble scores are accumulated along trajectories for robust skew-sensitive spatio-temporal recognition. Results on synthetic data show that adapting the fusion function with the proposed approach can significantly improve performance. Results on real data show that the proposed method can outperform reference techniques in imbalanced video surveillance environments.
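    A minimal sketch of the trajectory-based accumulation described above, with separate detection and self-update thresholds; the threshold values and the scoring convention are illustrative assumptions, not the thesis settings.

```python
from dataclasses import dataclass, field

@dataclass
class TrajectoryAccumulator:
    """Accumulate positive ensemble predictions along one facial trajectory for
    one enrolled individual. Threshold values here are illustrative only."""
    detect_threshold: float = 4.0    # enough evidence to alert the operator
    update_threshold: float = 8.0    # stronger evidence required to self-update the facial model
    score: float = 0.0
    samples: list = field(default_factory=list)

    def add(self, face_sample, ensemble_score):
        """ensemble_score: fused positive score from the individual-specific ensemble."""
        self.samples.append(face_sample)
        self.score += max(ensemble_score, 0.0)

    def is_detection(self) -> bool:
        return self.score >= self.detect_threshold

    def should_self_update(self) -> bool:
        # only trajectories with very strong accumulated evidence contribute new
        # target samples, which limits the risk of knowledge corruption
        return self.score >= self.update_threshold
```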