22 research outputs found

    Factored particle filtering with dependent and constrained partition dynamics for tracking deformable objects

    In particle filtering, the dimensionality of the state space can be reduced by tracking control (or feature) points as independent objects, traditionally called partitions. Two critical decisions must be made when implementing such a reduced state space. The first is how to construct a dynamic (transition) model for partitions that are inherently dependent. The second is how to filter partition states so that a viable and likely object state is achieved. In this study, we present a correlation-based transition model and a proposal function that incorporate partition dependency into particle filtering in a computationally tractable manner. We test our algorithm on challenging examples of occlusion, clutter and drastic changes in the relative speeds of partitions. Our successful results with as few as 10 particles per partition indicate that the proposed algorithm is both robust and efficient. This research is part of the project "Expression Recognition based on Facial Anatomy", grant number 109E061, supported by The Support Programme for Scientific and Technological Research Projects of The Scientific and Technological Research Council of Turkey (TÜBİTAK). In the comparative evaluation of tracking algorithms we used the SPOT tracking code made publicly available by Lu Zhang and Laurens van der Maaten. Special thanks to Fish Species (http://www.fish-species.org.uk), who generously provided the high-definition aquarium videos used in our experiments.
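The partition trick reduces each tracker to a low-dimensional filter, and underneath it sits an ordinary sampling-importance-resampling (SIR) update. The sketch below is a plain SIR filter on a 1-D toy state, not the paper's correlation-based transition model or proposal; all constants are invented for illustration:

```python
import math
import random

def particle_filter_step(particles, weights, transition, likelihood, obs):
    """One SIR update for a single partition: propagate, reweight, resample."""
    # Propagate each particle through the stochastic transition model.
    particles = [transition(p) for p in particles]
    # Reweight by the observation likelihood and normalize.
    weights = [w * likelihood(obs, p) for w, p in zip(weights, particles)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Stratified resampling keeps the particle count fixed.
    n = len(particles)
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc)
    resampled, j = [], 0
    for i in range(n):
        pos = (i + random.random()) / n
        while j < n - 1 and cum[j] < pos:
            j += 1
        resampled.append(particles[j])
    return resampled, [1.0 / n] * n

# Track a 1-D point drifting one unit per frame, observed with Gaussian noise.
random.seed(0)
truth, n = 0.0, 10                      # as few as 10 particles per partition
particles = [random.gauss(0.0, 1.0) for _ in range(n)]
weights = [1.0 / n] * n
for _ in range(30):
    truth += 1.0
    obs = truth + random.gauss(0.0, 0.5)
    particles, weights = particle_filter_step(
        particles, weights,
        transition=lambda p: p + 1.0 + random.gauss(0.0, 0.3),
        likelihood=lambda z, p: math.exp(-0.5 * ((z - p) / 0.5) ** 2),
        obs=obs)
estimate = sum(p * w for p, w in zip(particles, weights))  # close to truth = 30
```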

    Integrating vendors into cooperative design practices

    This paper describes a new approach to cooperative design using distributed, off-the-shelf design components. The ultimate goal is to enable assemblers to rapidly design their products and run simulations using parts offered by a global network of suppliers. The obvious way to realise this goal would be to transfer the desired component models to the client computer. However, to protect proprietary data, manufacturers are reluctant to share their design models without non-disclosure agreements, which can take on the order of months to put in place. Due to bandwidth limitations, it is also impractical to keep the models at the manufacturer's site and run simulations by simple message passing. To deal with these impediments to e-commerce, the modular distributed modelling (MDM) methodology is leveraged, which enables the transfer of component models while hiding proprietary implementation details. The MDM methodology is augmented with routine design (RD) methods to realise a platform (RD-MDM) that enables automatic selection of secured off-the-shelf design components over the Internet, integration of these components into an assembly, running simulations for design testing, and publishing the approved product model as a secured MDM agent. The paper demonstrates the capabilities of the RD-MDM platform on a fuel cell-battery hybrid vehicle design example.
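The "hide the parameters, ship the behaviour" idea behind secured MDM agents can be illustrated with a toy component that exposes only a simulation interface. The class name, the polynomial cell model and all numbers below are invented for illustration, not taken from the RD-MDM platform:

```python
class SecuredComponent:
    """Toy sketch of information hiding in a vendor-supplied design component:
    the client can run simulations but never reads the proprietary parameters."""
    def __init__(self, secret_coeffs):
        self.__coeffs = secret_coeffs        # name-mangled, not part of the API
    def simulate(self, load):
        # Evaluate the vendor's internal polynomial model at the given load.
        return sum(c * load ** i for i, c in enumerate(self.__coeffs))

battery = SecuredComponent([12.0, -0.5])     # hypothetical cell: V = 12 - 0.5*I
voltage = battery.simulate(4.0)              # client sees only the result: 10.0
```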

    Extraction and selection of muscle based features for facial expression recognition

    In this study we propose a new set of muscle-activity-based features for facial expression recognition. We extract muscular activity by observing the displacements of facial feature points in an expression video. The facial feature points are initialized on muscular regions of influence in the first frame of the video and tracked through optical flow in subsequent frames. Displacements of the feature points on the image plane are used to estimate the 3D orientation of a head model and the relative displacements of its vertices. We model the human skin as a linear system of equations. The estimated deformation of the wireframe model produces an over-determined system of equations that can be solved under the constraints of facial anatomy to obtain muscle activation levels. We apply sequential forward feature selection to choose the most descriptive set of muscles for the recognition of basic facial expressions.
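The over-determined system step can be sketched, ignoring the anatomical constraints, as an ordinary least-squares solve of B a ≈ d, where each column of B is one muscle's unit-activation displacement pattern and d is the observed vertex displacement. The 4×2 basis matrix and the two hypothetical muscles below are invented for illustration:

```python
def lstsq(B, d):
    """Least-squares solve of B a ≈ d via the normal equations (BᵀB) a = Bᵀd,
    using Gaussian elimination; adequate for a handful of muscles."""
    rows, m = len(B), len(B[0])
    BtB = [[sum(B[r][i] * B[r][j] for r in range(rows)) for j in range(m)]
           for i in range(m)]
    Btd = [sum(B[r][i] * d[r] for r in range(rows)) for i in range(m)]
    A = [BtB[i][:] + [Btd[i]] for i in range(m)]    # augmented matrix
    for col in range(m):                            # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    a = [0.0] * m                                   # back substitution
    for i in range(m - 1, -1, -1):
        a[i] = (A[i][m] - sum(A[i][j] * a[j] for j in range(i + 1, m))) / A[i][i]
    return a

# Two hypothetical muscles whose unit activations displace four vertices.
B = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.2, 0.8]]
true_activations = [2.0, 1.0]
d = [sum(B[r][j] * true_activations[j] for j in range(2)) for r in range(4)]
activations = lstsq(B, d)    # recovers ≈ [2.0, 1.0]
```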

    Driver recognition using gaussian mixture models and decision fusion techniques

    In this paper we present our research on driver recognition. The goal of this study is to investigate the performance of different classifier fusion techniques in a driver recognition scenario. We use solely driving behavior signals, such as brake and accelerator pedal pressure, engine RPM, vehicle speed and steering wheel angle, to identify drivers. We modeled each driver using Gaussian mixture models, obtained posterior probabilities of identities and combined these scores using different fixed and trainable (adaptive) fusion methods. We observed error rates as low as 0.35% in the recognition of 100 drivers using trainable combiners. We conclude that the fusion of multi-modal classifier results is very successful in the biometric recognition of a person in a car setting.
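The fixed-combiner side of this score fusion can be sketched with the classic sum and product rules over per-classifier posterior vectors. The three-driver posteriors below are made-up numbers, not GMM outputs, and the combiner is generic rather than the paper's trainable one:

```python
def fuse(posteriors, rule="sum"):
    """Fixed-rule fusion of per-classifier posterior vectors over driver IDs."""
    n = len(posteriors[0])
    if rule == "sum":
        scores = [sum(p[i] for p in posteriors) for i in range(n)]
    else:                                   # product rule
        scores = [1.0] * n
        for p in posteriors:
            scores = [s * pi for s, pi in zip(scores, p)]
    total = sum(scores) or 1.0
    return [s / total for s in scores]      # renormalized fused posterior

# Posteriors over three drivers from two behavior-signal classifiers.
brake_clf = [0.6, 0.3, 0.1]
speed_clf = [0.5, 0.1, 0.4]
fused = fuse([brake_clf, speed_clf], rule="product")
identity = max(range(len(fused)), key=lambda i: fused[i])   # driver 0 wins
```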

    Facial expression recognition based on facial anatomy (Yüz anatomisine dayalı ifade tanıma)

    The geometric approaches to facial expression recognition commonly focus on the displacement of feature points selected by researchers, or on the action units defined by the facial action coding system (FACS). In both approaches the feature points are carefully located on the lips, the nose and the forehead, where an expression is observed at its full strength. Since these regions are under the influence of multiple muscles, distinct muscular activities can result in similar displacements of the feature points. Hence, the analysis of complex expressions through a set of specific feature points is quite difficult. In this project we propose to extract facial muscle activity levels through multiple points distributed over the muscular regions of influence. The proposed algorithm consists of: (1) semi-automatic customization of the face model to a subject; (2) identification and tracking of facial features that reside in the region of influence of a muscle; (3) estimation of head orientation and alignment of the face model with the observed face; (4) estimation of the relative displacements of vertices that produce facial expressions; (5) solving vertex displacements for muscle forces; and (6) classification of facial expression with the muscle force features. Our algorithm requires manual intervention only in the stage of model customization. We demonstrate the representative power of the proposed muscle-based features on classification problems of seven basic and subtle expressions. The best performance on the classification of basic expressions was 76%, obtained with an SVM classifier. This result is close to the performance of humans in facial expression recognition. Our best performance for the classification of seven subtle expressions was 55%, once again with an SVM. This figure implies that muscle-based features are good candidates for involuntary expressions, which are often subtle and instantaneous. Muscle forces can be considered the ultimate basis functions that anatomically compose all expressions. Increased reliability in the extraction of muscle forces will enable the detection and classification of subtle and complex expressions with higher precision. Moreover, the proposed approach may be used to reveal unknown mechanisms of emotions and expressions, as it is not limited to a predefined set of heuristic features. Supported by TÜBİTAK.

    Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty

    Although state-of-the-art deep neural network models are known to be robust to random perturbations, it has been verified that these architectures are quite vulnerable to deliberately crafted perturbations, even when those perturbations are quasi-imperceptible. These vulnerabilities make it challenging to deploy deep neural network models in areas where security is a critical concern. In recent years, many studies have been conducted to develop new attack methods and new defense techniques that enable more robust and reliable models. In this study, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks like Deepfool, which produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion) and CIFAR-10 datasets. In our experiments, we showed that our proposed uncertainty-based reversal method achieved a worst-case success rate of around 95% without compromising clean accuracy.
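One concrete reading of "uncertainty from the model's final probability outputs" is the predictive entropy of the output vector, which is high exactly for the boundary-hugging samples that Deepfool produces. A minimal sketch, with made-up probability vectors and no specific model assumed:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a model's output probability vector: near-uniform
    outputs (decision-boundary samples) score high, confident outputs low."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

confident = predictive_entropy([0.98, 0.01, 0.01])   # far from the boundary
boundary  = predictive_entropy([0.51, 0.49, 0.00])   # Deepfool-style sample
```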

    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

    Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies in this new area, called "adversarial machine learning", have been conducted to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, most current research has concentrated on using the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte Carlo dropout sampling, for adversarial attack purposes: we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases attack success rates from 82.59% to 85.14%, from 82.96% to 90.13% and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively.
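A minimal sketch of the hybrid idea: take an FGSM-style signed step on an objective that adds an uncertainty term to the classification loss. Everything here is a toy, not the paper's setup, with a 2-weight logistic "model", predictive entropy standing in for the Monte Carlo dropout uncertainty, and finite differences in place of backpropagation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def loss(x, w, y):
    p = predict(x, w)
    return -(y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12))

def entropy(x, w):
    p = predict(x, w)
    return -(p * math.log(p + 1e-12) + (1 - p) * math.log(1 - p + 1e-12))

def hybrid_attack(x, w, y, eps=0.5, lam=0.5, h=1e-5):
    """FGSM-style step on a hybrid objective: classification loss plus
    lam times predictive entropy, gradients taken by forward differences."""
    base = loss(x, w, y) + lam * entropy(x, w)
    signs = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g = (loss(xp, w, y) + lam * entropy(xp, w) - base) / h
        signs.append(1.0 if g > 0 else -1.0)
    return [xi + eps * s for xi, s in zip(x, signs)]

w = [2.0, -1.0]            # toy linear model weights
x, y = [1.0, 0.5], 1       # a correctly classified positive sample
adv = hybrid_attack(x, w, y)
flipped = (predict(x, w) > 0.5) and not (predict(adv, w) > 0.5)
```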

    Unsupervised textile defect detection using convolutional neural networks

    In this study, we propose a novel motif-based approach for unsupervised textile anomaly detection that combines the benefits of traditional convolutional neural networks with those of an unsupervised learning paradigm. It consists of five main steps: preprocessing, automatic pattern period extraction, patch extraction, feature selection and anomaly detection. The approach uses a new dynamic and heuristic method for feature selection, which avoids the drawbacks of initializing the number of filters (neurons) and their weights, as well as those of the backpropagation mechanism, such as vanishing gradients, that are common in state-of-the-art methods. The design and training of the network are performed in a dynamic, input-domain-based manner, so no ad-hoc configuration is required. Before building the model, only the number of layers and the stride are defined; we neither initialize the weights randomly nor define the filter size or number of filters, as is conventionally done in CNN-based approaches. This reduces the effort and time spent on hyper-parameter initialization and fine-tuning. Only one defect-free sample is required for training, and no further labeled data is needed. The trained network is then used to detect anomalies in defective fabric samples. We demonstrate the effectiveness of our approach on the Patterned Fabrics benchmark dataset. Our algorithm yields reliable and competitive results (in recall, precision, accuracy and F1-measure) compared to state-of-the-art unsupervised approaches, in less time, with efficient training in a single epoch and at a lower computational cost. This research is part of the project "Competitive Deep Learning with Convolutional Neural Networks", grant number 118E293, supported by The Support Programme for Scientific and Technological Research Projects (1001) of The Scientific and Technological Research Council of Turkey (TÜBİTAK).
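The pattern period extraction step can be illustrated with a plain autocorrelation over a 1-D intensity profile; this is a generic stand-in sketch (the abstract does not specify the actual extraction method), with a synthetic repeating row:

```python
def pattern_period(signal):
    """Estimate the repeat period of a 1-D intensity profile as the lag that
    maximizes the (unnormalized) autocorrelation of the centered signal."""
    n = len(signal)
    mean = sum(signal) / n
    s = [v - mean for v in signal]
    def autocorr(lag):
        return sum(s[i] * s[i + lag] for i in range(n - lag))
    return max(range(2, n // 2), key=autocorr)

# A synthetic fabric row whose motif repeats every 8 pixels.
row = [float(i % 8) for i in range(64)]
period = pattern_period(row)    # → 8
```

In a 2-D version the same idea is applied per axis to fix the patch (motif) size before patch extraction.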

    Numerical integration methods for simulation of mass-spring-damper systems

    The dynamics of a face are often implemented as a system of connected particles with various forces acting upon them. Animating such a system requires approximating the velocity and position of each particle through numerical integration. Many numerical integrators are commonly used in the literature. We conducted experiments to determine the suitability of numerical integration methods for approximating the particular dynamics of mass-spring-damper systems. Among Euler, semi-implicit Euler, Runge-Kutta and Leapfrog, we found that simulation with Leapfrog integration characterizes a mass-spring-damper system best in terms of the energy loss of the overall system. This research is part of the project "Expression Recognition based on Facial Anatomy", grant number 109E061, supported by The Support Programme for Scientific and Technological Research Projects (1001) of The Scientific and Technological Research Council of Turkey (TÜBİTAK).
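The energy behaviour that favours Leapfrog is easy to reproduce on a single undamped unit spring; a minimal comparison with explicit Euler (spring constant, mass, time step and step count are arbitrary):

```python
def leapfrog(x, v, k, m, dt, steps):
    """Leapfrog (kick-drift-kick) integration of an undamped spring; being
    symplectic, it keeps the total energy bounded over long runs."""
    for _ in range(steps):
        v += 0.5 * dt * (-k * x) / m    # half kick
        x += dt * v                     # drift
        v += 0.5 * dt * (-k * x) / m    # half kick
    return x, v

def explicit_euler(x, v, k, m, dt, steps):
    """Explicit Euler on the same spring; it injects energy every step."""
    for _ in range(steps):
        a = -k * x / m
        x, v = x + dt * v, v + dt * a
    return x, v

def energy(x, v, k, m):
    return 0.5 * k * x * x + 0.5 * m * v * v

k, m, dt, steps = 1.0, 1.0, 0.1, 1000
e0 = energy(1.0, 0.0, k, m)             # initial energy: 0.5
e_leap  = energy(*leapfrog(1.0, 0.0, k, m, dt, steps), k, m)
e_euler = energy(*explicit_euler(1.0, 0.0, k, m, dt, steps), k, m)
# e_leap stays within a fraction of a percent of e0, while e_euler
# grows by orders of magnitude over the same 1000 steps.
```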

    Facial expression recognition based on anatomy

    In this study, we propose a novel approach to facial expression recognition that capitalizes on the anatomical structure of the human face. We model the human face with a high-polygon wireframe model that embeds all major muscles. The influence regions of the facial muscles are estimated through a semi-automatic customization process. These regions are projected onto the image plane to determine feature points. The relative displacement of each feature point between two image frames is treated as evidence of muscular activity. Feature point displacements are projected back into 3D space to estimate the new coordinates of the wireframe vertices. The muscular activities that would produce the estimated deformation are solved for with a least-squares algorithm. We demonstrate the representative power of muscle-force-based features on three classifiers: naive Bayes, SVM and AdaBoost. The ability to extract the muscle forces that compose a facial expression will enable the detection of subtle expressions, the replication of an expression on animated characters, and the exploration of psychologically unknown mechanisms of facial expressions. This research is part of the project "Expression Recognition based on Facial Anatomy", grant number 109E061, supported by The Support Programme for Scientific and Technological Research Projects of The Scientific and Technological Research Council of Turkey (TÜBİTAK).
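The projection of influence regions onto the image plane reduces to a camera projection of model vertices; a pinhole sketch with made-up intrinsics (the abstract does not state the camera model used):

```python
def project(vertex, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D wireframe vertex (camera coordinates,
    z pointing away from the camera) onto the image plane."""
    x, y, z = vertex
    return (f * x / z + cx, f * y / z + cy)

# A vertex 2 units in front of the camera, right of and above the axis.
u, v = project((0.1, -0.05, 2.0))    # ≈ (345.0, 227.5)
```

Inverting this mapping with a known depth estimate is what "projecting feature point displacements back into 3D space" amounts to.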