
    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of each change. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to exploit the outstanding properties of event cameras. We present event cameras from their working principle, through the actual sensors that are available, to the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
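To make the event encoding concrete, here is a minimal sketch (not from the survey; the names and the idea of summing polarities over a window are illustrative assumptions) of how a stream of (time, location, sign) events can be collapsed into a signed "event frame" that frame-based vision code can consume:

```python
# Illustrative sketch only: Event and accumulate() are hypothetical names,
# not APIs from the surveyed literature.
from dataclasses import dataclass

import numpy as np

@dataclass
class Event:
    t: float   # timestamp in seconds (microsecond resolution in practice)
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 for a brightness increase, -1 for a decrease

def accumulate(events, width, height):
    """Sum event polarities per pixel into a signed event frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.p
    return frame

events = [Event(1e-6, 2, 1, +1), Event(2e-6, 2, 1, +1), Event(3e-6, 0, 0, -1)]
frame = accumulate(events, width=4, height=3)
# frame[1, 2] == 2 and frame[0, 0] == -1; all other pixels stay 0
```

Many other event representations exist (time surfaces, voxel grids, learned embeddings); this accumulation is only the simplest bridge to conventional image processing.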

    Spontaneous Subtle Expression Detection and Recognition based on Facial Strain

    Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expression analysis, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions by utilizing facial optical strain magnitudes to construct optical strain features and optical strain weighted features. The two sets of features are then concatenated to form the resultant feature histogram. Experiments were performed on the CASME II and SMIC databases. We demonstrate on both databases the usefulness of optical strain information and, more importantly, that our best approaches are able to outperform the original baseline results for both detection and recognition tasks. A comparison of the proposed method with other existing spatio-temporal feature extraction approaches is also presented.
    Comment: 21 pages (including references), single column format, accepted to Signal Processing: Image Communication journal
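The strain quantity the abstract builds on can be sketched from the standard infinitesimal strain tensor of a dense flow field F = (u, v), i.e. eps = 0.5 * (grad F + (grad F)^T). This is a hedged illustration of that textbook formulation; the paper's exact magnitude formula and windowing may differ:

```python
# Hedged sketch: computes a per-pixel optical strain magnitude from a
# dense optical flow field using the symmetric strain tensor. Function
# name and the specific magnitude formula are illustrative assumptions.
import numpy as np

def optical_strain_magnitude(u, v):
    """u, v: 2-D arrays of horizontal/vertical flow per pixel."""
    du_dy, du_dx = np.gradient(u)   # np.gradient returns (axis-0, axis-1) derivatives
    dv_dy, dv_dx = np.gradient(v)
    e_xx = du_dx                    # normal strain along x
    e_yy = dv_dy                    # normal strain along y
    e_xy = 0.5 * (du_dy + dv_dx)    # symmetric shear component
    return np.sqrt(e_xx**2 + e_yy**2 + 2.0 * e_xy**2)

u = np.tile(np.arange(5.0), (5, 1))   # flow u = x  ->  e_xx = 1 everywhere
v = np.zeros((5, 5))
mag = optical_strain_magnitude(u, v)
# every entry of mag is 1.0, since only e_xx = 1 is nonzero
```

Because strain is built from flow *derivatives*, it responds to local deformation of the face rather than to bulk head motion, which is why it suits subtle micro-expressions.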

    Collision detection for UAVs using Event Cameras

    This dissertation explores the use of event cameras for collision detection in unmanned aerial vehicles (UAVs). Traditional cameras have been widely used in UAVs for obstacle avoidance and navigation, but they suffer from high latency and low dynamic range. Event cameras, on the other hand, capture only the changes in the scene and can operate at high speed with low latency. The goal of this research is to investigate the potential of event cameras for UAV collision detection, which is crucial for safe operation in complex and dynamic environments. The dissertation presents a review of the current state of the art in the field and evaluates a developed algorithm for event-based collision detection for UAVs. The performance of the algorithm was tested through practical experiments in which 9 sequences of events were recorded using an event camera, depicting different scenarios with stationary and moving objects as obstacles. Simultaneously, inertial measurement unit (IMU) data was collected to provide additional information about the UAV's movement. The recorded data was then processed using the proposed event-based collision detection algorithm, which consists of four components: ego-motion compensation, normalized mean timestamp, morphological operations, and clustering. First, the ego-motion component compensates for the UAV's motion by estimating its rotational movement from the IMU data. Next, the normalized mean timestamp component calculates the mean timestamp of each event and normalizes it, helping to reduce noise in the event data and improve the accuracy of collision detection. The morphological operations component applies operations such as erosion and dilation to the event data to remove small noise and enhance the edges of objects. Finally, the last component uses the DBSCAN clustering method to group the events, allowing objects to be detected and their positions estimated. 
This step provides the final output of the collision detection algorithm, which can be used for obstacle avoidance and navigation in UAVs. The algorithm was evaluated based on its accuracy, latency, and computational efficiency. The findings demonstrate that event-based collision detection has the potential to be an effective and efficient method for detecting collisions in UAVs, with high accuracy and low latency. These results suggest that event cameras could be beneficial for enhancing the safety and dependability of UAVs in challenging situations. Moreover, the datasets and algorithm developed in this research are made publicly available, facilitating the evaluation and enhancement of the algorithm for specific applications. This approach could encourage collaboration among researchers and enable further comparisons and investigations.
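As an illustration of the normalized-mean-timestamp idea in that pipeline, here is a hedged numpy-only sketch: after ego-motion compensation, the timestamps of the events landing on each pixel are averaged and scaled to [0, 1], so pixels driven by independently moving objects tend to stand out. The function name, the per-pixel averaging, and the [0, 1] range are assumptions, not the dissertation's code:

```python
# Hypothetical sketch of a normalized mean timestamp map; not the
# dissertation's implementation.
import numpy as np

def normalized_mean_timestamp(events, width, height):
    """events: iterable of (t, x, y) tuples from one time window."""
    t_sum = np.zeros((height, width))
    count = np.zeros((height, width))
    for t, x, y in events:
        t_sum[y, x] += t
        count[y, x] += 1
    # mean timestamp per pixel; pixels with no events stay at 0
    mean_t = np.divide(t_sum, count, out=np.zeros_like(t_sum), where=count > 0)
    active = count > 0
    lo, hi = mean_t[active].min(), mean_t[active].max()
    if hi > lo:
        mean_t[active] = (mean_t[active] - lo) / (hi - lo)  # scale to [0, 1]
    else:
        mean_t[active] = 0.0  # degenerate window: all events share one timestamp
    return mean_t

evts = [(0.1, 0, 0), (0.3, 0, 0), (0.5, 1, 1)]
nmt = normalized_mean_timestamp(evts, width=2, height=2)
# pixel (0, 0): mean 0.2 -> 0.0 after normalization; pixel (1, 1): 0.5 -> 1.0
```

The subsequent morphological filtering and DBSCAN clustering would then operate on a thresholded version of such a map to isolate candidate obstacles.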

    Técnicas de visión por computador para la detección del verdor y la detección de obstáculos en campos de maíz [Computer vision techniques for greenness detection and obstacle detection in maize fields]

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 22/06/2017. There is an increasing demand for Computer Vision techniques in Precision Agriculture (PA) based on images captured with cameras on board autonomous vehicles. Two techniques have been developed in this research: the first for greenness identification and the second for obstacle detection in maize fields, including people and animals, for tractors in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project, equipped with monocular cameras on board the tractors. For vegetation identification in agricultural images, the combination of colour vegetation indices (CVIs) with thresholding techniques is the usual strategy, where the remaining elements in the image are also extracted. The main goal of this research line is the development of an alternative strategy for vegetation detection. To achieve our goal, we propose a methodology based on two well-known techniques in computer vision: Bag of Words representation (BoW) and Support Vector Machines (SVM). Each image is partitioned into several Regions Of Interest (ROIs). Afterwards, a feature descriptor is obtained for each ROI; the descriptor is then evaluated with a classifier model (previously trained to discriminate between vegetation and background) to determine whether or not the ROI is vegetation...
The final objective of the autonomous vehicles was the identification and removal of weeds in the maize fields. In agricultural images, vegetation is generally detected by means of vegetation indices and thresholding methods, the indices being computed from the spectral properties of the colour images. This thesis proposes a new method for that purpose, which constitutes a primary objective of the research. The proposal is based on a strategy known as "bag of words" together with a supervised learning model, both widely used in image recognition and classification. The image is initially divided into homogeneous regions of interest (ROIs). Given a collection of ROIs obtained from a set of agricultural images, their local features are computed and grouped by similarity. Each group represents a "visual word", and the set of visual words found forms a "visual dictionary". Each ROI is represented by a set of visual words, which are quantified according to their occurrence within the region, yielding a code vector or "codebook" that describes the ROI. Finally, Support Vector Machines are used to evaluate the code vectors and thus discriminate the ROIs that are vegetation from the rest...
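The bag-of-words step described above can be sketched as follows. This numpy-only illustration assumes an already-learned visual dictionary (the cluster centres normally produced by k-means over many local descriptors) and omits the SVM stage; all names and the toy data are hypothetical:

```python
# Hedged sketch of the BoW quantization step: each local descriptor of a
# region of interest (ROI) is assigned to its nearest visual word, and the
# word frequencies form the ROI's histogram descriptor. An SVM trained on
# such histograms would then label the ROI as vegetation or background.
import numpy as np

def bow_histogram(descriptors, dictionary):
    """descriptors: (n, d) local features; dictionary: (k, d) visual words."""
    # squared Euclidean distance from every descriptor to every visual word
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                     # index of nearest visual word
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()                      # occurrence frequencies

dictionary = np.array([[0.0, 0.0], [1.0, 1.0]])   # toy 2-word dictionary
roi = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.8], [0.2, 0.1]])
h = bow_histogram(roi, dictionary)
# h == [0.5, 0.5]: two descriptors quantize to each visual word
```

Normalizing the histogram makes ROIs of different sizes comparable before they are passed to the classifier.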