
    Vision for Looking at Traffic Lights: Issues, Survey, and Perspectives


    Supervised evolutionary learning: Using histograms of gradients and a particle swarm algorithm for pedestrian detection and tracking in infrared image sequences

    Recently, pedestrian detection and tracking in various types of imagery have become major issues in the fields of image processing and statistical identification. In this regard, evolutionary learning-based approaches to improving performance in different contexts can strongly influence the quality of the result. Pedestrian tracking and identification suffer from problems such as low detection accuracy, high processing time, and uncertainty in the responses, so researchers are looking for new processing models that can accurately monitor a person's position while in motion. In this study, a hybrid algorithm for the automatic detection of pedestrian position is presented. Notably, in contrast to the analysis of visible-light images, this method examines pedestrians' thermal and infrared signatures while walking, and it combines a neural network with maximum learning capability, a wavelet kernel (wavelet transform), and particle swarm optimization (PSO) to find the parameters of the learner model. Gradient histograms are highly effective for extracting features from infrared images, and the neural network algorithm can achieve its goal (pedestrian detection and tracking) by maximizing learning. Despite allowing maximum learning, the proposed method trains quickly, and results on various data sets in this field have been analyzed. The results indicate a negligible error in tracking the infrared sequences of pedestrian movements, and it is suggested to use neural networks because of their precision and to improve the selection of their hyperparameters with evolutionary algorithms.
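    The combination described above can be sketched in a few lines: HOG descriptors as features and a plain particle swarm loop searching the learner's hyperparameters. The sketch below is illustrative only; it assumes a stand-in RBF-kernel SVM rather than the paper's wavelet-kernel network, and the search ranges and swarm settings are hypothetical.

```python
# Illustrative sketch: HOG features + a basic PSO loop tuning a stand-in classifier.
# The learner (an RBF-kernel SVC) and all parameter ranges are assumptions,
# not the paper's wavelet-kernel maximum-learning network.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_hog(images):
    """Compute HOG descriptors for a list of grayscale image arrays."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

def pso_tune(X, y, n_particles=10, n_iter=20, seed=0):
    """Search (log10 C, log10 gamma) of an SVC with a basic PSO loop."""
    rng = np.random.default_rng(seed)
    low, high = np.array([-1.0, -4.0]), np.array([3.0, 0.0])   # search box
    pos = rng.uniform(low, high, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_score = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_score = pos[0].copy(), -np.inf

    def fitness(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    for _ in range(n_iter):
        for i in range(n_particles):
            score = fitness(pos[i])
            if score > pbest_score[i]:
                pbest[i], pbest_score[i] = pos[i].copy(), score
            if score > gbest_score:
                gbest, gbest_score = pos[i].copy(), score
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)
    return gbest, gbest_score
```

    The same loop can search the hyperparameters of any learner; only the fitness function changes.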

    Unmanned Aerial Systems for Wildland and Forest Fires

    Wildfires represent an important natural risk, causing economic losses, human deaths and significant environmental damage. In recent years, we have witnessed an increase in fire intensity and frequency. Research has been conducted towards the development of dedicated solutions for wildland and forest fire assistance and fighting, and systems have been proposed for the remote detection and tracking of fires. These systems have shown improvements in efficient data collection and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) have been proposed. UAS have proven useful thanks to their maneuverability, which allows the implementation of remote sensing, allocation strategies and task planning. They can provide a low-cost alternative for the prevention, detection and real-time support of firefighting. In this paper we review previous work related to the use of UAS in wildfires. Onboard sensor instruments, fire perception algorithms and coordination strategies are considered. In addition, we present some of the recent frameworks proposing the use of both aerial vehicles and Unmanned Ground Vehicles (UGV) for a more efficient wildland firefighting strategy at a larger scale. Comment: A recently published version of this paper is available at: https://doi.org/10.3390/drones501001

    Particle Filters for Colour-Based Face Tracking Under Varying Illumination

    Automatic human face tracking is the basis of robotic and active vision systems used for facial feature analysis, automatic surveillance, video conferencing, intelligent transportation, human-computer interaction and many other applications. Superior human face tracking will enable future safety surveillance systems that monitor drowsy drivers, or patients and elderly people at risk of seizure or sudden falls, and will perform with a lower risk of failure in unexpected situations. This area has been actively researched in the current literature in an attempt to make automatic face trackers more stable in challenging real-world environments. To detect faces in video sequences, features like colour, texture, intensity, shape or motion are used. Among these features, colour has been the most popular because of its insensitivity to orientation and size changes and its fast processability. The challenge for colour-based face trackers, however, has been the instability of the trackers when colours change due to drastic variation in environmental illumination. Probabilistic tracking and the employment of particle filters as powerful Bayesian stochastic estimators, on the other hand, are increasing in the visual tracking field thanks to their ability to handle multi-modal distributions in cluttered scenes. Traditional particle filters use the transition prior as the importance sampling function, but this can result in poor posterior sampling. The objective of this research is to investigate and propose a stable face tracker capable of dealing with challenges like rapid and random head motion, scale changes when people move closer to or further from the camera, motion of multiple people with similar skin tones in the vicinity of the model person, presence of clutter, and occlusion of the face. The main focus has been on investigating an efficient method to address the sensitivity of colour-based trackers to gradual or drastic illumination variations. The particle filter is used to overcome the instability of face trackers due to nonlinear and random head motions. To increase the traditional particle filter's sampling efficiency, an improved version of the particle filter is introduced that considers the latest measurements. This improved particle filter employs a new colour-based bottom-up approach that leads particles to generate an effective proposal distribution. The colour-based bottom-up approach is a classification technique for fast skin-colour segmentation; it is independent of the distribution shape and does not require excessive memory storage or exhaustive prior training. Finally, to address the adaptability of the colour-based face tracker to illumination changes, an original likelihood model is proposed based on spatial rank information, which considers both the illumination-invariant colour ordering of a face's pixels in an image or video frame and the spatial interaction between them. The original contribution of this work lies in the unique mixture of existing and proposed components to improve colour-based recognition and tracking of faces in complex scenes, especially where drastic illumination changes occur. Experimental results of the final version of the proposed face tracker, which combines the methods developed, are provided in the last chapter of this manuscript.
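    As a rough illustration of the tracking machinery discussed above, the sketch below implements one predict/update/resample cycle of a bootstrap particle filter with a simple colour-histogram likelihood. It uses the transition prior as the proposal and a generic histogram similarity; the thesis's measurement-driven proposal and spatial-rank likelihood are not reproduced, and ref_hist is assumed to be a normalized histogram of the target region.

```python
# Minimal bootstrap particle filter sketch for 2-D position tracking.
# The colour-histogram likelihood and the random-walk motion model are
# illustrative assumptions, not the thesis's components.
import numpy as np

def colour_likelihood(frame, centre, ref_hist, patch=16, bins=8):
    """Histogram similarity (Bhattacharyya coefficient) around `centre`."""
    h, w = frame.shape[:2]
    x, y = int(centre[0]), int(centre[1])
    x0, x1 = max(0, x - patch), min(w, x + patch)
    y0, y1 = max(0, y - patch), min(h, y + patch)
    if x0 >= x1 or y0 >= y1:            # particle drifted outside the frame
        return 1e-12
    hist, _ = np.histogram(frame[y0:y1, x0:x1], bins=bins, range=(0, 256))
    hist = hist / (hist.sum() + 1e-12)
    return float(np.sum(np.sqrt(hist * ref_hist))) + 1e-12

def particle_filter_step(particles, frame, ref_hist, motion_std=5.0, rng=None):
    """One predict/update/resample cycle with the transition prior as proposal."""
    rng = np.random.default_rng() if rng is None else rng
    # Predict: random-walk motion model (the transition prior).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by colour similarity at its location.
    weights = np.array([colour_likelihood(frame, p, ref_hist) for p in particles])
    weights /= weights.sum()
    estimate = (weights[:, None] * particles).sum(axis=0)  # weighted-mean state
    # Systematic resampling to avoid weight degeneracy.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], estimate
```

    Starting from particles scattered around an initial face detection and repeating this step for every frame, the weighted-mean estimate traces the track.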

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing

    Vision-based vehicle detection and tracking in intelligent transportation system

    This thesis aims to realize vision-based vehicle detection and tracking in the Intelligent Transportation System. First, it introduces the methods for vehicle detection and tracking. Next, it establishes the sensor fusion framework of the system, including the dynamic model and the sensor model. Then, it simulates the traffic scene at a crossroad with a driving simulator, where the research target is a single car and the traffic scene is ideal. The YOLO neural network is applied to the image sequence for vehicle detection, and the Kalman filter, extended Kalman filter, and particle filter methods are applied and compared for vehicle tracking. The following part is the practical experiment, in which multiple vehicles are present at the same time and the traffic scene is real, with various interference factors. The YOLO neural network combined with OpenCV is adopted to realize real-time vehicle detection. The Kalman filter and extended Kalman filter are applied for vehicle tracking, and an identification algorithm is proposed to handle the occlusion of vehicles. The effects of process noise and measurement noise are analysed using a variable-controlling approach. Additionally, perspective transformation is illustrated and implemented to transfer coordinates from the image plane to the ground plane. If vision-based vehicle detection and tracking can be realized and popularized in daily life, vehicle information can be shared among infrastructure, vehicles, and users, so as to build interactions inside the Intelligent Transportation System.
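    The image-to-ground mapping mentioned above can be illustrated with OpenCV's homography utilities. The four point correspondences below are hypothetical calibration values, not ones taken from the thesis; in practice they would come from known markings on the road.

```python
# Hedged sketch: mapping image-plane detections to ground-plane coordinates
# with a homography. The correspondences are hypothetical calibration values.
import numpy as np
import cv2

# Four image points (pixels) and their known ground-plane positions (metres).
img_pts = np.float32([[420, 560], [860, 555], [980, 700], [300, 705]])
gnd_pts = np.float32([[0.0, 20.0], [3.5, 20.0], [3.5, 10.0], [0.0, 10.0]])

H = cv2.getPerspectiveTransform(img_pts, gnd_pts)  # 3x3 homography

def image_to_ground(points_px):
    """Project pixel coordinates (N, 2) onto the ground plane."""
    pts = np.float32(points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: the bottom-centre of a detected bounding box, assumed to touch the road.
print(image_to_ground([[640, 630]]))
```

    Feeding the bottom-centre of each detected bounding box through such a mapping yields approximate ground-plane positions that a Kalman or particle filter can then track in metric coordinates.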

    Vision Sensors and Edge Detection

    The Vision Sensors and Edge Detection book reflects a selection of recent developments in the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection, and the second presents image processing techniques such as image measurements, image transformations, filtering, and parallel computing.

    PERFORMANCE METRICS IN VIDEO SURVEILLANCE SYSTEM

    Video surveillance is an active research topic in computer vision. One of the areas being actively researched is the ability of surveillance systems to track multiple objects over time in occluded scenes and to keep a consistent identity for each target object. These abilities enable a surveillance system to provide crucial information about moving objects' behaviour and interaction. This survey reviews recent developments in moving object detection as well as the different techniques and approaches to multiple object tracking that have been developed by researchers. The algorithms and filters that can be incorporated into multiple object tracking to handle occlusion and naturally busy scenes in surveillance systems are also reviewed in this paper. This survey is meant to provide researchers in the field with a summary of the progress achieved to date in multiple moving object tracking. Despite recent progress in computer vision and other related areas, there are still major technical challenges to be solved before reliable automated video surveillance systems can be realized.