5 research outputs found

    Hierarchical Hidden Markov Model in Detecting Activities of Daily Living in Wearable Videos for Studies of Dementia

    This paper presents a method for indexing activities of daily living in videos obtained from wearable cameras. In the context of dementia diagnosis, the videos are recorded at patients' homes and later reviewed by the medical practitioners. A video may last up to two hours, so a tool for efficient navigation in terms of activities of interest is crucial for the doctors. The specific recording mode yields particularly difficult video data: a single continuous shot in which strong motion and sharp lighting changes frequently appear. Our work introduces an automatic motion-based segmentation of the video and a video structuring approach in terms of activities using a hierarchical two-level Hidden Markov Model. We define our description space over motion and visual characteristics of the video and audio channels. Experiments on real data recorded at the homes of several patients show the difficulty of the task and the promising results of our approach.

    A hybrid egocentric video summarization method to improve the healthcare for Alzheimer patients

    Alzheimer patients have difficulty remembering the identities of people and performing daily-life activities. This paper presents a hybrid method for generating an egocentric video summary of important people, objects, and medicines to help Alzheimer patients recall faded memories. Lifelogging video analysis can be used to cue human memory; however, the massive amount of lifelogging data makes selecting the most relevant content a challenging task. To address this challenge, a static video summarization approach is applied to select the key-frames that are most relevant for recalling the patients' memories. The system consists of three main modules: face, object, and medicine recognition. Histogram-of-oriented-gradients features are used to train a multi-class SVM for face recognition. SURF descriptors are extracted from the input video frames and used to find corresponding points between objects in the input video and reference objects stored in a database. Morphological operators followed by optical character recognition are applied to recognize and tag medicines. The performance of the proposed system is evaluated on 18 real-world homemade videos. Experimental results demonstrate the effectiveness of the proposed system in providing the most relevant content to support the memory of Alzheimer patients.
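The face-recognition module pairs HOG features with a multi-class SVM. The following self-contained sketch mimics that pipeline on synthetic data: a heavily simplified whole-image orientation histogram stands in for HOG, and a one-vs-rest linear SVM is trained by hinge-loss sub-gradient descent. All images, sizes, and hyper-parameters are invented for illustration and are not taken from the paper:

```python
import numpy as np

def hog_descriptor(img, n_bins=9):
    """Whole-image histogram of oriented gradients (real HOG also uses
    cells, blocks and local normalisation; this is a toy reduction)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """One-vs-rest linear SVM trained by hinge-loss sub-gradient descent."""
    classes = np.unique(y)
    W = np.zeros((len(classes), X.shape[1]))
    for k, c in enumerate(classes):
        t = np.where(y == c, 1.0, -1.0)            # +1 for class c, else -1
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            viol = t * (X @ w) < 1                 # margin violations
            grad = lam * w
            if viol.any():
                grad = grad - (t[viol, None] * X[viol]).sum(axis=0) / len(X)
            w -= lr * grad
        W[k] = w
    return classes, W

def predict(classes, W, X):
    return classes[np.argmax(X @ W.T, axis=1)]

# Synthetic "faces": vertical- vs horizontal-stripe patterns whose HOG
# histograms are trivially separable (purely illustrative data).
imgs, labels = [], []
for i in range(3):
    v = np.zeros((16, 16)); v[:, ::2] = 1.0 + 0.01 * i
    h = np.zeros((16, 16)); h[::2, :] = 1.0 + 0.01 * i
    imgs += [v, h]; labels += [0, 1]
X = np.array([hog_descriptor(im) for im in imgs])
y = np.array(labels)
classes, W = train_linear_svm(X, y)
```

In practice one would use library implementations (e.g. a full cell-based HOG and a kernel SVM), but the structure — descriptor extraction followed by a multi-class maximum-margin classifier — is the same.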

    Detección automática de patrones rutinarios con imágenes egocéntricas

    In recent years it has been shown that the record of a person's daily activities, composed mainly of a set of images together with other measurements from sensors such as GPS and accelerometers, constitutes an extensive volume of information that can contribute to the treatment of diseases or conditions related to memory loss. One way to capture these events is through egocentric cameras, a type of camera that attempts to imitate an individual's field of view. Thanks to advances in artificial intelligence, Deep Learning algorithms for image classification turn the classification of these images into a semi-automatic task. Based on this premise, this work studies the feasibility of a solution capable of detecting and identifying routine activities from a set of images taken over a given period of time.
    Sociedad Argentina de Informática e Investigación Operativ

    The Understanding of Human Activities by Computer Vision Techniques

    This thesis proposes new methodologies for learning human activities and classifying them into categories. Although this topic has been widely studied by the computer vision community, important difficulties remain unsolved. First, we found that the literature on computer vision techniques for learning human activities from few training sequences is scarce and reports poor results [1][2]. Yet such learning is a crucial tool in several scenarios. For example, a newly deployed recognition system needs considerable time to acquire new training sequences, so training with few examples can accelerate its operational launch. Likewise, the detection of abnormal behaviours, examples of which are hard to obtain, can benefit from these techniques. Solutions exist based on cross-domain techniques or invariant features, but they omit information from the target scenario that, when taken into account, reduces clutter and improves results, and examples of abnormal activities remain hard to obtain. Systems trained with scarce information face two main problems: on the one hand, training may suffer from numerical instabilities when estimating the model parameters; on the other, the model lacks representative information coming from diverse activities. We address these problems with novel methods for learning human activities from a single example, so-called one-shot learning. Our proposals are based on generative models derived from Hidden Markov Models [3][4], since each activity class must be learned from only one example. In addition, we broaden the diversity of information in the models by transferring information from sources external to the scenario [5]. This thesis presents several proposals and shows how they achieve state-of-the-art results on three public datasets [6][7][8].
    The second difficulty we address is the recognition of activities in unconstrained scenarios. In this case the training and evaluation scenarios need not coincide, so the clutter reduction described above does not apply. This means any labelled example can be used for training regardless of its scenario of origin, a freedom that allows videos to be collected from any source and removes the restriction on the number of training examples. With enough training examples, both generative and discriminative methods can be used; at the time of writing, the best results in the state of the art are obtained with discriminative methods. However, most proposals do not consider the long-term temporal information of the activities [9]. This information can be decisive for distinguishing activities in which the order of sub-actions matters, and can help in other situations [10]. To this end we designed a framework that includes this information in a Support Vector Machine. The framework also allows some flexibility in aligning the sequences being compared, a very useful property when activity segmentation is not perfect. Using this framework we obtained state-of-the-art results on four challenging datasets with unconstrained scenarios [11][12][13][14]. The work in this thesis has led to three first-quartile journal articles [15][16][17], two already published and one submitted, as well as eight papers at international conferences and one at a national workshop [18][19][20][21][22][23][24][25][26].

    [1] Seo, H. J., Milanfar, P.: Action recognition from one example. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5):867–882 (2011)
    [2] Yang, Y., Saleemi, I., Shah, M.: Discovering motion primitives for unsupervised grouping and one-shot learning of human actions, gestures, and expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1635–1648 (2013)
    [3] Rabiner, L. R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286 (1989)
    [4] Bishop, C. M.: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA (2006)
    [5] Cook, D., Feuz, K., Krishnan, N.: Transfer learning for activity recognition: a survey. Knowledge and Information Systems, pages 1–20 (2013)
    [6] Schuldt, C., Laptev, I., Caputo, B.: Recognizing human actions: a local SVM approach. In: International Conference on Pattern Recognition (ICPR) (2004)
    [7] Weinland, D., Ronfard, R., Boyer, E.: Free viewpoint action recognition using motion history volumes. Computer Vision and Image Understanding, 104(2-3):249–257 (2006)
    [8] Gorelick, L., Blank, M., Shechtman, E., Irani, M., Basri, R.: Actions as space-time shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(12):2247–2253 (2007)
    [9] Wang, H., Schmid, C.: Action recognition with improved trajectories. In: IEEE International Conference on Computer Vision (ICCV) (2013)
    [10] Choi, J., Wang, Z., Lee, S.-C., Jeon, W. J.: A spatio-temporal pyramid matching for video retrieval. Computer Vision and Image Understanding, 117(6):660–669 (2013)
    [11] Oh, S., Hoogs, A., Perera, A., Cuntoor, N., Chen, C.-C., Lee, J. T., Mukherjee, S., Aggarwal, J. K., Lee, H., Davis, L., Swears, E., Wang, X., Ji, Q., Reddy, K., Shah, M., Vondrick, C., Pirsiavash, H., Ramanan, D., Yuen, J., Torralba, A., Song, B., Fong, A., Roy-Chowdhury, A., Desai, M.: A large-scale benchmark dataset for event recognition in surveillance video. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3153–3160 (2011)
    [12] Niebles, J. C., Chen, C.-W., Fei-Fei, L.: Modeling temporal structure of decomposable motion segments for activity classification. In: European Conference on Computer Vision (ECCV), pages 392–405 (2010)
    [13] Reddy, K. K., Shah, M.: Recognizing 50 human action categories of web videos. Machine Vision and Applications, 24(5):971–981 (2013)
    [14] Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: HMDB: a large video database for human motion recognition. In: IEEE International Conference on Computer Vision (ICCV) (2011)
    [15] Rodriguez, M., Orrite, C., Medrano, C., Makris, D.: One-shot learning of human activity with an MAP adapted GMM and simplex-HMM. IEEE Transactions on Cybernetics, PP(99):1–12 (2016)
    [16] Rodriguez, M., Orrite, C., Medrano, C., Makris, D.: A time flexible kernel framework for video-based activity recognition. Image and Vision Computing, 48-49:26–36 (2016)
    [17] Rodriguez, M., Orrite, C., Medrano, C., Makris, D.: Extended study for one-shot learning of human activity by a simplex-HMM. IEEE Transactions on Cybernetics (submitted)
    [18] Orrite, C., Rodriguez, M., Medrano, C.: One-shot learning of temporal sequences using a distance dependent Chinese Restaurant Process. In: Proceedings of the 23rd International Conference on Pattern Recognition (ICPR) (December 2016)
    [19] Rodriguez, M., Medrano, C., Herrero, E., Orrite, C.: Spectral clustering using friendship path similarity. In: Proceedings of the 7th Iberian Conference (IbPRIA) (June 2015)
    [20] Orrite, C., Soler, J., Rodriguez, M., Herrero, E., Casas, R.: Image-based location recognition and scenario modelling. In: Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP) (March 2015)
    [21] Castán, D., Rodríguez, M., Ortega, A., Orrite, C., Lleida, E.: Vivolab and cvlab - MediaEval 2014: violent scenes detection affect task. In: Working Notes Proceedings of MediaEval (October 2014)
    [22] Orrite, C., Rodriguez, M., Herrero, E., Rogez, G., Velastin, S. A.: Automatic segmentation and recognition of human actions in monocular sequences. In: Proceedings of the 22nd International Conference on Pattern Recognition (ICPR) (August 2014)
    [23] Rodriguez, M., Medrano, C., Herrero, E., Orrite, C.: Transfer learning of human poses for action recognition. In: 4th International Workshop on Human Behavior Understanding (HBU) (October 2013)
    [24] Rodriguez, M., Orrite, C., Medrano, C.: Human action recognition with limited labelled data. In: Actas del III Workshop de Reconocimiento de Formas y Análisis de Imágenes (WSRFAI) (September 2013)
    [25] Orrite, C., Monforte, P., Rodriguez, M., Herrero, E.: Human action recognition under partial occlusions. In: Proceedings of the 6th Iberian Conference (IbPRIA) (June 2013)
    [26] Orrite, C., Rodriguez, M., Montañes, M.: One sequence learning of human actions. In: 2nd International Workshop on Human Behavior Understanding (HBU) (November 2011)
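The flexible sequence alignment mentioned above can be illustrated with dynamic time warping (DTW): a distance that tolerates speed differences between two activity sequences and can be converted into a similarity score for a discriminative classifier. This is only a generic sketch of the idea; the thesis defines its own time-flexible kernel, which differs from this:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences
    (one row per time step); tolerates differences in execution speed."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def alignment_similarity(a, b, gamma=1.0):
    """Similarity score usable inside an SVM; note exp(-gamma * DTW) is NOT
    guaranteed to be a positive-definite kernel, so this is illustrative."""
    return np.exp(-gamma * dtw_distance(a, b))

# The same 1-D activity signature performed slowly and quickly aligns
# perfectly, while its time-reversed version (different sub-action order)
# does not -- exactly the long-term temporal cue discussed above.
slow = np.array([[0.0], [0.0], [1.0], [1.0], [2.0], [2.0]])
fast = np.array([[0.0], [1.0], [2.0]])
reverse = np.array([[2.0], [1.0], [0.0]])
```

Here `dtw_distance(slow, fast)` is 0 while `dtw_distance(slow, reverse)` is strictly positive, so the order of sub-actions changes the similarity even though both sequences contain the same values.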

    Analyse sémantique d'un trafic routier dans un contexte de vidéo-surveillance

    Automatic traffic monitoring plays an important role in traffic surveillance. Video cameras are relatively inexpensive surveillance tools, but they require robust, efficient, and automated video analysis algorithms. The loss of information caused by the formation of images under perspective projection makes the automatic detection and tracking of vehicles a very challenging problem, yet it is essential for extracting a semantic interpretation of vehicle behaviour. The work proposed in this thesis comes from a collaboration between the LaBRI (Laboratoire Bordelais de Recherche en Informatique) and the company Adacis. The aim is to build a complete video-surveillance system designed for automatic incident detection on motorways, operating autonomously, with as little supervision as possible, and detecting events in real time. To reach this objective, traffic scene analysis proceeds from low-level processing to high-level descriptions of the traffic, which can be of a wide variety of types: vehicles entering or exiting the scene, collisions, speeds that are too high or too low, stopped vehicles, or objects obstructing part of the road. A large number of road traffic monitoring systems are based on background subtraction techniques to segment the regions of interest of the image; the resulting regions are then tracked, and trajectories are used to extract a semantic interpretation of vehicle behaviour. Motion detection is based on a statistical model of the background colour: a mixture model of probability distributions, which characterizes the multimodal distribution of values taken by each pixel. Optical flow estimation, gradient differences, and shadow and highlight detection are used to confirm or invalidate the segmentation results. The tracking process is based on a predictive filter using a constant-velocity motion model: a simple Kalman filter (the all-Gaussian case), which predicts the state of objects a priori from the motion model. The behaviour analysis step contains two approaches. The first exploits the information obtained in the earlier stages of the analysis: each object and its trajectory are analysed to detect abnormal behaviour. The second analyses spatio-temporal slices of the 2D+t video volume; the extracted maps are used to estimate traffic statistics and to detect events such as stopped vehicles or wrong-way drivers. To assist segmentation and tracking, a model of the scene structure and its characteristics is proposed. This model is built in an unsupervised learning step requiring no user intervention, during which gradient information from the background image and typical vehicle trajectories are estimated. These results are combined to estimate the vanishing point of the scene and the lane boundaries, and to perform a rough depth estimation. In parallel, a statistical model of the traffic flow direction is proposed: because directional data are periodic, they require particular distributions, and a von Mises mixture model is used to characterize the traffic flow direction.
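The tracking stage described above is a textbook predict/update cycle: a Kalman filter with a constant-velocity state model, observing only positions. A minimal sketch follows; the state layout, noise levels, and the one-unit-per-frame track are invented for illustration, not taken from the thesis:

```python
import numpy as np

dt = 1.0
# State [x, y, vx, vy]; constant-velocity transition model (values assumed).
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                        # only positions are observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                               # process noise (assumed)
R = np.eye(2) * 1e-1                               # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a measured position z = (x, y)."""
    x = F @ x                                      # a-priori state estimate
    P = F @ P @ F.T + Q
    innovation = z - H @ x
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a vehicle moving one unit per frame along the x axis.
x, P = np.zeros(4), np.eye(4)
for t in range(1, 8):
    x, P = kalman_step(x, P, np.array([float(t), 0.0]))
# After a few frames x[0] approaches 7 and the velocity estimate x[2]
# approaches 1, even though velocity is never measured directly.
```

The a-priori prediction `F @ x` is what lets the tracker associate detections with objects before the measurement arrives, which is the role the abstract assigns to the predictive filter.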