
    Robust object detection in the wild via cascaded DCGAN

    This research addresses the challenges of object detection at a distance or at low resolution in the wild. Its main aim is to cascade state-of-the-art models into a new framework that enables successful deployment in diverse applications. Specifically, the proposed deep learning framework combines two state-of-the-art deep networks, the Deep Convolutional Generative Adversarial Network (DCGAN) and the Single Shot Detector (SSD), into a new framework, DCGAN-SSD. The proposed model can handle object detection and recognition in the wild across varying image resolutions and scales. To cover multiple object detection tasks, the network was trained on different cross-domain datasets for various applications. The efficiency of the proposed model is further demonstrated by validation on diverse applications: visual surveillance in the wild in intelligent cities, underwater object detection for crewless underwater vehicles, and on-street in-vehicle object detection for driverless vehicle technologies. The results indicate that DCGAN-SSD, combined with Particle Swarm Optimization (PSO), outperforms the compared methods and markedly improves object detection performance across the diverse test cases. PSO is used to select the object detector's hyperparameters, a step for which most detectors otherwise require laborious manual tuning; this research addresses hyperparameter selection by integrating PSO with SSD. Deep learning models were chosen because traditional machine learning models lag behind them in accuracy and performance. A further advantage of this work is that the integration of DCGAN and SSD is accommodated within a single pipeline.
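The PSO-based hyperparameter selection described above can be sketched as follows. This is a minimal, generic PSO over a box-bounded search space, not the authors' implementation; the two hyperparameters searched (a learning rate and an IoU threshold) and the toy objective standing in for the detector's validation score are illustrative assumptions.

```python
import random

def pso_search(objective, bounds, n_particles=10, n_iters=30, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimisation over a box-bounded space.

    objective: maps a parameter vector to a score (higher is better),
    e.g. the validation metric of a detector trained with those settings.
    bounds: list of (low, high) per dimension.
    """
    dim = len(bounds)
    # Initialise particle positions and velocities uniformly inside the bounds.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the detector's validation score: peaks at lr=0.001, iou=0.5.
score = lambda p: -((p[0] - 0.001) ** 2 + (p[1] - 0.5) ** 2)
best, best_val = pso_search(score, [(0.0001, 0.01), (0.3, 0.7)])
```

In the real pipeline, `objective` would train or evaluate the SSD detector with the candidate hyperparameters and return a validation metric such as mAP.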

    Automatic object classification for surveillance videos.

    The recent popularity of surveillance video systems, especially in urban scenarios, demands the development of visual techniques for monitoring purposes. A primary step towards intelligent surveillance video systems is automatic object classification, which remains an open research problem and the keystone for the development of more specific applications. Typically, object representation is based on inherent visual features. However, psychological studies have demonstrated that human beings routinely categorise objects according to their behaviour. The gap in understanding between the features a computer can extract automatically, such as appearance-based features, and the concepts unconsciously perceived by human beings but unattainable for machines, such as behaviour features, is commonly known as the semantic gap. Consequently, this thesis proposes to narrow the semantic gap and bring machine and human understanding together for object classification. A Surveillance Media Management framework is proposed to automatically detect and classify objects by analysing both the physical properties inherent in their appearance (machine understanding) and the behaviour patterns that require a higher level of understanding (human understanding). Finally, a probabilistic multimodal fusion algorithm bridges the gap by performing an automatic classification that considers both machine and human understanding. The performance of the proposed Surveillance Media Management framework has been thoroughly evaluated on outdoor surveillance datasets. The experiments demonstrated that combining machine and human understanding substantially enhances object classification performance, and that the inclusion of human reasoning and understanding provides the essential information to bridge the semantic gap towards smart surveillance video systems.
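A probabilistic fusion of the two modalities described above, an appearance-based posterior and a behaviour-based posterior, can be sketched under a naive conditional-independence assumption. The class names and probability values are invented for illustration, and the thesis's actual fusion algorithm may differ.

```python
def fuse_posteriors(p_appearance, p_behaviour, prior=None):
    """Combine two class-posterior dicts under a conditional-independence
    (naive Bayes) assumption: P(c|a,b) is proportional to P(c|a)*P(c|b)/P(c)."""
    classes = p_appearance.keys()
    if prior is None:
        # Assume a uniform class prior when none is supplied.
        prior = {c: 1.0 / len(classes) for c in classes}
    scores = {c: p_appearance[c] * p_behaviour[c] / prior[c] for c in classes}
    z = sum(scores.values())
    # Renormalise so the fused posteriors sum to one.
    return {c: s / z for c, s in scores.items()}

# Appearance is ambiguous between person and cyclist, but the observed
# behaviour (speed, trajectory) favours cyclist.
fused = fuse_posteriors({"person": 0.5, "cyclist": 0.4, "car": 0.1},
                        {"person": 0.2, "cyclist": 0.7, "car": 0.1})
```

The design choice here is that either modality alone can be overridden by strong evidence from the other, which is the behaviour one would want when appearance features are degraded (e.g. at low resolution) but motion cues remain informative.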

    Palm tree detection in UAV images: a hybrid approach based on multimodal particle swarm optimisation

    In recent years, there has been a surge of interest in palm tree detection using unmanned aerial vehicle (UAV) images, with implications for sustainability, productivity, and profitability. As with other object detection problems in computer vision, palm tree detection typically involves classifying palm trees against non-palm-tree objects or background and localising every palm tree instance in an image. Palm tree detection in large-scale, high-resolution UAV images is challenging because of the large number of pixels the object detector must visit, which is computationally costly. In this thesis, we design a novel hybrid approach based on the multimodal particle swarm optimisation (MPSO) algorithm that speeds up the localisation process whilst maintaining optimal accuracy for palm tree detection in UAV images. The proposed method uses a feature-extraction-based classifier as the MPSO's objective function to seek the positions and scales in an image that maximise the detection score. The classifier was carefully selected through an empirical study and proved seven times faster than a state-of-the-art convolutional neural network (CNN) with comparable accuracy. The research continues with the development of a new k-d tree-structured MPSO algorithm, KDT-SPSO, which significantly speeds up MPSO's nearest-neighbour search by exploring only the subspaces most likely to contain the query point's neighbours. KDT-SPSO was shown to be effective in solving multimodal benchmark functions and outperformed other competitors when applied to UAV images. Finally, we devise a new approach that utilises a 3D digital surface model (DSM) to generate high-confidence proposals for KDT-SPSO and for an existing region-based CNN (R-CNN) for palm tree detection. Using the DSM as prior information about the number and location of palm trees reduces the search space within images and decreases overall computation time. Our hybrid approach can be executed on non-specialised hardware without long training hours, achieving accuracy similar to the state-of-the-art R-CNN.
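The pruned nearest-neighbour search that a k-d tree enables, the core idea behind the KDT-SPSO speed-up described above, can be illustrated with a minimal sketch. This is a generic k-d tree, not the authors' KDT-SPSO, and the particle coordinates are invented.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree, splitting on alternating axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Nearest-neighbour search that prunes subtrees which cannot
    contain a point closer than the best found so far."""
    if node is None:
        return best
    d = math.dist(query, node["point"])
    if best is None or d < best[1]:
        best = (node["point"], d)
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    # Descend into the far subtree only if the splitting plane is closer
    # than the current best distance -- this pruning is what makes the
    # neighbourhood queries cheap compared with a linear scan.
    if abs(diff) < best[1]:
        best = nearest(far, query, best)
    return best

# Hypothetical particle positions in a 2-D search space.
particles = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)]
tree = build_kdtree(particles)
point, dist = nearest(tree, (6.0, 3.5))
```

For the query `(6.0, 3.5)` the search returns the particle `(5.0, 4.0)` without visiting every stored point, which is the property KDT-SPSO exploits during its neighbourhood updates.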

    Pedestrian detection in far infrared images

    Detection of people in images is a relatively new field of research, but one that has gained wide acceptance. The applications are numerous, such as self-labelling of large databases, security systems, and pedestrian detection in intelligent transportation systems. Within the latter, the purpose of a pedestrian detector on a moving vehicle is to detect the presence of people in the path of the vehicle, with the ultimate goal of avoiding a collision between the two. This thesis is framed within advanced driver assistance systems: passive safety systems that warn the driver of potentially adverse conditions. An advanced driver assistance module, aimed at warning the driver about the presence of pedestrians, using computer vision on thermal images, is presented in this thesis. Such sensors are particularly useful under conditions of low illumination. The document follows the usual parts of a pedestrian detection system: development of descriptors that define the appearance of people in this kind of image, the application of these descriptors to full-sized images, and temporal tracking of the pedestrians found. As part of the work developed in this thesis, a database of pedestrians in the far-infrared spectrum is presented. This database has been used both to evaluate pedestrian detection systems and to develop new descriptors. These descriptors use techniques for the systematic description of the pedestrian's shape as well as methods to achieve invariance to contrast, illumination, or ambient temperature. The descriptors are analyzed and modified to improve their performance in a detection problem, where potential candidates are searched for in full-sized images. Finally, a method for tracking the detected pedestrians is proposed to reduce the number of missed detections that occurred at earlier stages of the algorithm.
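One simple way to obtain the invariance to contrast, illumination, and ambient temperature mentioned above is to normalise each descriptor patch to zero mean and unit variance. This is a generic sketch, not the thesis's specific descriptors; the patch values are invented.

```python
def normalise_patch(patch, eps=1e-6):
    """Zero-mean, unit-variance normalisation of an intensity patch,
    a simple way to gain invariance to global contrast changes and to
    offsets such as those caused by ambient temperature in thermal imagery."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((v - mean) ** 2 for v in patch) / n
    std = (var + eps) ** 0.5  # eps guards against flat patches
    return [(v - mean) / std for v in patch]

# The same edge pattern under two different gains and offsets maps to
# (almost) the same descriptor after normalisation.
a = normalise_patch([10, 10, 50, 50])
b = normalise_patch([100, 100, 180, 180])
```

Because the affine change of intensities cancels out, a classifier trained on normalised patches sees the same descriptor regardless of scene temperature or sensor gain.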

    Comprehensive review of vision-based fall detection systems

    Vision-based fall detection systems have developed rapidly in recent years. To chart the course of this evolution and help new researchers, the main audience of this paper, a comprehensive review was made of all articles published in this area in the main scientific databases during the last five years. After a selection process, detailed in the Materials and Methods section, eighty-one systems were thoroughly reviewed. Their characterization and classification techniques were analyzed and categorized, their performance data were studied, and comparisons were made to determine which classification methods work best in this field. The evolution of artificial vision technology, very positively influenced by the incorporation of artificial neural networks, has made fall characterization more robust to noise resulting from illumination phenomena or occlusion. Classification has also taken advantage of these networks, and the field is starting to use robots to make these systems mobile. However, the datasets used to train them lack real-world data, raising doubts about their performance on real falls of elderly people. In addition, there is no evidence of strong connections between the elderly and the research communities.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdiscipline between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information-processing architectures, while understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. This book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.

    Crowd analysis using local neighbourhood coherence

    A large number of crowd analysis methods using computer vision have been developed in recent years. This dissertation presents an approach that explores characteristics inherent to human crowds – proxemics and neighbourhood relationships – to extract crowd features and use them for crowd flow estimation and for anomaly detection and localization. Given the optical flow produced by any method, the proposed approach compares the similarity of each flow vector with its neighbourhood using the Mahalanobis distance, which can be obtained efficiently using integral images. This similarity value is then used either to filter the original optical flow or to extract features that describe the crowd behaviour at different resolutions, depending on the radius of the personal space selected in the analysis. To show that the extracted features are indeed relevant, several classifiers were tested in the context of abnormality detection: Recurrent Neural Networks, Dense Neural Networks, Support Vector Machines, Random Forests, and Extremely Randomized Trees. The two developed approaches (crowd flow estimation and abnormality detection) were tested on publicly available datasets of crowded human scenarios and compared with state-of-the-art methods.
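The per-vector similarity measure described above can be sketched directly. The dissertation computes the neighbourhood statistics efficiently with integral images; this sketch computes them explicitly for a single neighbourhood, with invented flow values.

```python
def mahalanobis_2d(v, neighbours, eps=1e-6):
    """Mahalanobis distance between a 2-D flow vector v and the
    distribution of flow vectors in its neighbourhood."""
    n = len(neighbours)
    mx = sum(p[0] for p in neighbours) / n
    my = sum(p[1] for p in neighbours) / n
    # 2x2 covariance of the neighbourhood (population form),
    # regularised with eps so the matrix is always invertible.
    sxx = sum((p[0] - mx) ** 2 for p in neighbours) / n + eps
    syy = sum((p[1] - my) ** 2 for p in neighbours) / n + eps
    sxy = sum((p[0] - mx) * (p[1] - my) for p in neighbours) / n
    det = sxx * syy - sxy * sxy
    dx, dy = v[0] - mx, v[1] - my
    # d^2 = [dx dy] * inv(Sigma) * [dx dy]^T with the 2x2 inverse written out.
    d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return d2 ** 0.5

# A coherent neighbourhood all moving right; an upward-moving vector stands out.
hood = [(1.0, 0.0), (1.1, 0.05), (0.9, -0.05), (1.05, 0.0)]
conforming = mahalanobis_2d((1.0, 0.0), hood)
anomalous = mahalanobis_2d((0.0, 1.0), hood)
```

A vector that conforms to its neighbours' motion yields a small distance, while a vector moving against the local flow yields a large one, which is exactly the cue used for filtering the flow and for abnormality features.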

    Real-Time, Multiple Pan/Tilt/Zoom Computer Vision Tracking and 3D Positioning System for Unmanned Aerial System Metrology

    The study of structural characteristics of Unmanned Aerial Systems (UASs) continues to be an important field of research for developing state-of-the-art nano/micro systems. The development of a metrology system using computer vision (CV) tracking and 3D point extraction would provide an avenue for advancing these theoretical developments. This work provides a portable, scalable system capable of real-time tracking, zooming, and 3D position estimation of a UAS using multiple cameras. Current state-of-the-art photogrammetry systems use retro-reflective markers or single-point lasers to obtain object poses and/or positions over time. A CV pan/tilt/zoom (PTZ) system has the potential to circumvent their limitations. The system developed in this paper exploits parallel processing and the GPU for CV tracking, using optical flow and known camera motion, in order to capture a moving object with two PTU cameras. The parallel-processing technique developed in this work is versatile, allowing other CV methods to be tested with a PTZ system using known camera motion. Utilizing known camera poses, the object's 3D position is estimated, and focal lengths are estimated to fill the image frame by a desired amount. The system is tested against truth data obtained using an industrial system.
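A common minimal way to estimate a 3-D position from two cameras with known poses, as in the system above, is midpoint triangulation of the two viewing rays. This generic sketch is not the paper's exact method, and the ray origins and directions below are invented for illustration.

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Estimate a 3-D position as the midpoint of the shortest segment
    between two camera rays (origin o, direction d, world frame)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    # Closest points of two lines: minimise |o1 + t1*d1 - (o2 + t2*d2)|^2.
    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t1))
    p2 = add(o2, scale(d2, t2))
    return scale(add(p1, p2), 0.5)

# Two cameras with known poses observing the same UAS: each ray runs from
# the camera centre through the detected image point, in world coordinates.
p = triangulate_midpoint((0.0, 0.0, 0.0), (1.0, 2.0, 5.0),
                         (10.0, 0.0, 0.0), (-9.0, 2.0, 5.0))
```

When the two rays do not intersect exactly (the usual case with noisy detections), the midpoint of the shortest connecting segment is a reasonable least-squares-style estimate of the object's position.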