
    Human segmentation in surveillance video with deep learning

    Advanced intelligent surveillance systems can automatically analyze surveillance video without human intervention, enabling accurate human activity recognition and, in turn, high-level activity evaluation. To provide such features, an intelligent surveillance system requires a background subtraction scheme for human segmentation, which separates moving humans in a sequence of images from a reference background image. This paper proposes an alternative approach to human segmentation in videos based on a deep convolutional neural network. Two specific datasets were created to train our network, using the shapes of 35 different moving actors arranged on background images of the area where the camera is located, allowing the network to take advantage of the entire site chosen for video surveillance. To assess the proposed approach, we compare our results with the Adobe Photoshop Select Subject tool, the conditional generative adversarial network Pix2Pix, and the fully-convolutional real-time instance segmentation model Yolact. The results show that the main benefit of our method is the ability to automatically recognize and segment people in videos without constraints on camera or people movements in the scene. (Video, code and datasets are available at http://graphics.unibas.it/www/HumanSegmentation/index.md.html.)
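    The classical background subtraction baseline that the proposed CNN replaces can be illustrated with a minimal sketch (not the authors' code; the threshold value and array shapes are illustrative assumptions):

```python
import numpy as np

def background_subtract(frame, background, threshold=30):
    """Return a binary foreground mask: pixels differing from the
    reference background by more than `threshold` are foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a 4x4 grayscale background and a frame with a bright patch.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # a "moving object" brightens four pixels
mask = background_subtract(frame, background)
print(mask.sum())              # -> 4 foreground pixels
```

    In practice, this per-pixel differencing breaks under camera motion and lighting changes, which is precisely the limitation the learned segmentation is meant to remove.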

    WELD PENETRATION IDENTIFICATION BASED ON CONVOLUTIONAL NEURAL NETWORK

    Weld joint penetration determination is a key factor in welding process control. Not only does it directly affect the mechanical properties of the weld joint, such as fatigue life; it also demands considerable human expertise, requiring either complex modeling or rich welding experience. Weld penetration status identification has therefore become an obstacle for intelligent welding systems. In this dissertation, an innovative method is proposed to detect weld joint penetration status using machine-learning algorithms. A GTAW welding system is first built. A dot-structured laser pattern is projected onto the weld pool surface during welding, and the reflected laser pattern, which contains the information about the penetration status, is captured. An experienced welder can determine weld penetration status from the reflected laser pattern alone. However, it is difficult to characterize the images so as to extract the key information used to determine penetration status. To overcome the challenges of finding the right features and accurately processing the images with conventional machine vision algorithms, we propose using a convolutional neural network (CNN) to automatically extract key features and determine penetration status. Data-label pairs are needed to train a CNN, so an image acquisition system is designed to collect the reflected laser pattern and an image of the work-piece backside. Data augmentation is performed to enlarge the training set, resulting in 270,000 training, 45,000 validation and 45,000 test images. A six-layer CNN is designed and trained using a revised mini-batch gradient descent optimizer. The final test accuracy is 90.7%, and a voting mechanism based on three consecutive images further improves the prediction accuracy.
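    The abstract does not specify how the three-image vote is combined; a plausible majority-vote sketch (the label names and the tie-breaking rule are assumptions, not taken from the dissertation) is:

```python
from collections import Counter

def vote(predictions):
    """Majority vote over per-frame CNN predictions; a tie falls back to
    the most recent frame's prediction."""
    counts = Counter(predictions)
    label, n = counts.most_common(1)[0]
    if n > len(predictions) // 2:
        return label
    return predictions[-1]

# Three consecutive frames classified as full/partial/full penetration:
print(vote(["full", "partial", "full"]))   # -> "full"
```

    Averaging the CNN's class probabilities over the three frames would be an equally reasonable fusion rule.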

    3D pose estimation of flying animals in multi-view video datasets

    Flying animals such as bats, birds, and moths are actively studied by researchers wanting to better understand these animals’ behavior and flight characteristics. Towards this goal, multi-view videos of flying animals have been recorded both in laboratory conditions and natural habitats. The analysis of these videos has shifted over time from manual inspection by scientists to more automated and quantitative approaches based on computer vision algorithms. This thesis describes a study on the largely unexplored problem of 3D pose estimation of flying animals in multi-view video data. This problem has received little attention in the computer vision community, where few flying animal datasets exist. Additionally, published solutions from researchers in the natural sciences have not taken full advantage of advancements in computer vision research. This thesis addresses this gap by proposing three different approaches for 3D pose estimation of flying animals in multi-view video datasets, each of which evolves from successful pose estimation paradigms used in computer vision. The first approach models the appearance of a flying animal with a synthetic 3D graphics model and then uses a Markov Random Field to model 3D pose estimation over time as a single optimization problem. The second approach builds on the success of Pictorial Structures models and further improves them for the case where only a sparse set of landmarks is annotated in the training data. The proposed approach first discovers parts from regions of the training images that are not annotated. The discovered parts are then used to generate more accurate appearance likelihood terms, which in turn produce more accurate landmark localizations. The third approach takes advantage of the success of deep learning models and adapts existing deep architectures to perform landmark localization.
Both the second and third approaches perform 3D pose estimation by first obtaining accurate localizations of key landmarks in the individual views, and then using calibrated cameras and camera geometry to reconstruct the 3D positions of those landmarks. This thesis shows that the proposed algorithms generate first-of-their-kind and leading results on real-world datasets of bats and moths, respectively. Furthermore, a variety of resources are made freely available to the public to further strengthen the connection between research communities.
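The reconstruction step shared by the second and third approaches, lifting per-view landmark localizations to 3D with calibrated cameras, can be sketched with standard linear (DLT) triangulation; the camera matrices below are toy values, not from the thesis:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean

# Two toy cameras: identity, and one translated by one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # recovers approximately [0.5, 0.2, 4.0]
```

With more than two views, the extra rows are simply stacked into the same least-squares system.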

    Contributions to camera pose estimation in industrial augmented reality applications

    Augmented Reality (AR) aims to complement the user's visual perception of the environment by superimposing virtual elements. The main challenge of this technology is to combine the virtual and real worlds in a precise and natural way. To achieve this goal, estimating the user's position and orientation in both worlds at all times is a crucial task. Currently, there are numerous techniques and algorithms for camera pose estimation. However, the use of synthetic square markers has become the fastest, most robust and simplest solution in these cases. In this scope, a large number of marker detection systems have been developed. Nevertheless, most of them present two limitations: (1) their unattractive and non-customizable visual appearance prevents their use in industrial products where commercial customization is crucial, and (2) their detection rate drops drastically in the presence of noise, blurring and occlusions. This doctoral dissertation addresses the above-mentioned limitations. First, a comparison has been made between the different marker detection systems currently available in the literature, emphasizing the limitations of each. Second, a novel approach has been developed to design, detect and track customized markers capable of easily adapting to the visual constraints of commercial products. Third, a method that combines the detection of black-and-white square markers with keypoints and contours has been implemented to estimate the camera position in AR applications. The main motivation of this work is to offer a versatile alternative (based on contours and keypoints) for cases where, due to noise, blurring or occlusions, it is not possible to identify markers in the images.
Finally, a method for the reconstruction and semantic segmentation of 3D objects using square markers in photogrammetry processes is presented.
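A common first step in square-marker camera pose estimation is mapping the marker's four known corners to their detected image positions with a homography; a generic DLT sketch (not the dissertation's method; the corner coordinates are toy values) is:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via DLT,
    from four 2D point correspondences (the marker corners)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))  # null vector holds the 9 entries of H
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                     # fix the arbitrary scale

# A unit-square marker seen under a pure translation by (5, 3) pixels:
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(5, 3), (6, 3), (6, 4), (5, 4)]
H = homography_from_corners(src, dst)
print(np.round(H, 3))   # -> [[1. 0. 5.] [0. 1. 3.] [0. 0. 1.]]
```

Given the camera intrinsics, the marker's rotation and translation can then be factored out of H, which is the pose the AR overlay needs.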

    Image Enhancement via Deep Spatial and Temporal Networks

    Image enhancement is a classic problem in computer vision and has been studied for decades. It includes various subtasks such as super-resolution, image deblurring, rain removal and denoising. Among these tasks, image deblurring and rain removal have become increasingly active, as they play an important role in areas such as autonomous driving, video surveillance and mobile applications. In addition, the two tasks are connected: for example, blur and rain often degrade images simultaneously, and the performance of their removal relies on spatial and temporal learning. To help generate sharp images and videos, this thesis proposes efficient algorithms based on deep neural networks for image deblurring and rain removal. In the first part of this thesis, we study the problem of image deblurring. Four deep learning based image deblurring methods are proposed. First, for single image deblurring, a new framework is presented which first learns how to transfer sharp images to realistic blurry images via a learning-to-blur Generative Adversarial Network (GAN) module, and then trains a learning-to-deblur GAN module to generate sharp images from blurry versions. In contrast to prior work, which solely focuses on learning to deblur, the proposed method learns to realistically synthesize blurring effects using unpaired sharp and blurry images. Second, for video deblurring, spatio-temporal learning and adversarial training are used to recover sharp and realistic video frames from blurry inputs. 3D convolutional kernels on top of deep residual neural networks are employed to capture better spatio-temporal features, and the proposed network is trained with both a content loss and an adversarial loss to drive the model to generate realistic frames. Third, the problem of extracting sharp image sequences from a single motion-blurred image is tackled.
A detail-aware network is presented, a cascaded generator that handles the problems of ambiguity, subtle motion and loss of details. Finally, this thesis proposes a level-attention deblurring network and constructs a new large-scale dataset of images with blur caused by various factors. We use this dataset to evaluate current deep deblurring methods as well as the proposed method. In the second part of this thesis, we study the problem of image deraining. Three deep learning based image deraining methods are proposed. First, for single image deraining, the problem of joint removal of raindrops and rain streaks is tackled. In contrast to most prior works, which focus solely on raindrop or rain streak removal, a dual attention-in-attention model is presented which removes raindrops and rain streaks simultaneously. Second, for video deraining, a novel end-to-end framework is proposed to obtain the spatial representation and the temporal correlations using ResNet-based and LSTM-based architectures, respectively. The proposed method can generate multiple derained frames at a time, outperforming the state-of-the-art methods in terms of quality and speed. Finally, for stereo image deraining, a deep stereo semantic-aware deraining network is proposed for the first time in computer vision. Unlike previous methods, which learn only from a pixel-level loss function or monocular information, the proposed network advances image deraining by leveraging semantic information and the visual deviation between the two views.
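Deblurring and deraining results in this line of work are usually compared with reference-based quality metrics; a minimal PSNR sketch (a standard metric, illustrative here and not tied to the thesis's evaluation code) is:

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the restored
    image is closer to the sharp reference."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8x8 images: a flat reference and a copy with a uniform error of 4 levels.
ref = np.full((8, 8), 128.0)
noisy = ref + 4.0
print(round(psnr(ref, noisy), 2))   # -> 36.09
```

Perceptual metrics such as SSIM are typically reported alongside PSNR, since pixel-wise error alone correlates imperfectly with visual sharpness.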