Unmanned Aerial Systems for Wildland and Forest Fires
Wildfires are a major natural hazard that causes economic losses, human
deaths, and severe environmental damage. In recent years, fire intensity and
frequency have increased. Research has been conducted towards
the development of dedicated solutions for wildland and forest fire assistance
and fighting. Systems were proposed for the remote detection and tracking of
fires. These systems have shown improvements in the area of efficient data
collection and fire characterization within small scale environments. However,
wildfires cover large areas, making some of the proposed ground-based systems
unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial
Systems (UAS) were proposed. UAS have proven to be useful due to their
maneuverability, allowing for the implementation of remote sensing, allocation
strategies and task planning. They can provide a low-cost alternative for the
prevention, detection and real-time support of firefighting. In this paper we
review previous work related to the use of UAS in wildfires. Onboard sensor
instruments, fire perception algorithms and coordination strategies are
considered. In addition, we present some of the recent frameworks proposing the
use of both aerial vehicles and Unmanned Ground Vehicles (UGVs) for a more
efficient wildland firefighting strategy at a larger scale.
Comment: A recently published version of this paper is available at:
https://doi.org/10.3390/drones501001
Action Recognition in Videos: from Motion Capture Labs to the Web
This paper presents a survey of human action recognition approaches based on
visual data recorded from a single video camera. We propose an organizing
framework which puts in evidence the evolution of the area, with techniques
moving from heavily constrained motion capture scenarios towards more
challenging, realistic, "in the wild" videos. The proposed organization is
based on the representation used as input for the recognition task, emphasizing
the hypotheses assumed and, thus, the constraints imposed on the type of video
that each technique is able to address. Making these hypotheses and constraints
explicit renders the framework particularly useful for selecting a method, given
an application. Another advantage of the proposed organization is that it
allows categorizing the newest approaches seamlessly alongside traditional ones, while
providing an insightful perspective of the evolution of the action recognition
task up to now. That perspective is the basis for the discussion at the end of
the paper, where we also present the main open issues in the area.
Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4
tables
MusA: Using Indoor Positioning and Navigation to Enhance Cultural Experiences in a Museum
In recent years there has been a growing interest in the use of multimedia mobile guides in museum environments. Mobile devices can detect the user's context and provide information that helps visitors discover and follow the logical and emotional connections that develop during the visit. In this scenario, location-based services (LBS) currently represent an asset, and the choice of the technology used to determine users' position, combined with the definition of methods that can effectively convey information, becomes a key issue in the design process. In this work, we present MusA (Museum Assistant), a general framework for the development of multimedia interactive guides for mobile devices. Its main feature is a vision-based indoor positioning system that enables several LBS, from way-finding to the contextualized communication of cultural contents, aimed at providing a meaningful exploration of exhibits according to visitors' personal interests and curiosity. Starting from a thorough description of the system architecture, the article presents the implementation of two mobile guides, developed to address adults and children respectively, and discusses the evaluation of the user experience and the visitors' appreciation of these applications.
Automatic Color Inspection for Colored Wires in Electric Cables
In this paper, an automatic optical inspection system for checking the sequence of colored wires in electric cables is presented. The system is able to inspect cables with flat connectors differing in the type and number of wires. This variability is managed automatically by means of a self-learning subsystem and does not require manual input from the operator or loading new data into the machine. The system is coupled to a connector-crimping machine and, once the model of a correct cable is learned, it can automatically inspect each cable assembled by the machine. The main contributions of this paper are: (i) the self-learning system; (ii) a robust segmentation algorithm for extracting wires from images even if they are strongly bent and partially overlapped; (iii) a color recognition algorithm able to cope with highlights and different finishes of the wire insulation. We report the system evaluation over a period of several months during the actual production of large batches of different cables; tests demonstrated a high level of accuracy and the absence of false negatives, which is a key point in guaranteeing defect-free production.
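As a rough illustration of how a color recognition step might cope with specular highlights on wire insulation, the sketch below classifies a wire patch by a robust hue estimate after discarding washed-out pixels. This is not the paper's algorithm; the reference palette, thresholds, and function names are assumptions.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB (0-255) -> HSV with H in degrees, S and V in [0, 1]."""
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)           # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-9), 0.0)
    idx = rgb.argmax(axis=-1)          # which channel is dominant
    m = c > 0
    h = np.zeros_like(v)
    h = np.where(m & (idx == 0), ((g - b) / np.maximum(c, 1e-9)) % 6, h)
    h = np.where(m & (idx == 1), ((b - r) / np.maximum(c, 1e-9)) + 2, h)
    h = np.where(m & (idx == 2), ((r - g) / np.maximum(c, 1e-9)) + 4, h)
    return h * 60.0, s, v

# Hypothetical reference palette (hue in degrees) for the expected wire colors.
REFERENCE_HUES = {"red": 0.0, "yellow": 60.0, "green": 120.0, "blue": 240.0}

def classify_wire_color(patch):
    """Classify an RGB wire patch, ignoring highlight pixels (low S or high V)."""
    h, s, v = rgb_to_hsv(patch)
    keep = (s > 0.25) & (v < 0.95)     # drop washed-out specular pixels
    if not keep.any():
        return "unknown"
    med = np.median(h[keep])           # robust hue estimate
    # circular hue distance to each reference color
    dist = {name: min(abs(med - ref), 360 - abs(med - ref))
            for name, ref in REFERENCE_HUES.items()}
    return min(dist, key=dist.get)
```

A real system would learn the palette from the self-learning phase rather than hard-coding it; the highlight mask is the part that matters here.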
Smart environment monitoring through micro unmanned aerial vehicles
In recent years, the improvements of small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission have been promoting the development of a wide range of practical applications. In aerial video surveillance, the monitoring of broad areas still presents many challenges due to the need to achieve different tasks in real-time, including mosaicking, change detection, and object detection. In this thesis work, a small-scale UAV-based vision system to maintain regular surveillance over target areas is proposed. The system works in two modes. The first mode allows monitoring an area of interest across several flights. During the first flight, it creates an incremental geo-referenced mosaic of the area of interest and classifies all the known elements (e.g., persons) found on the ground using a previously trained, improved Faster R-CNN architecture. In subsequent reconnaissance flights, the system searches for any changes (e.g., disappearance of persons) that may have occurred in the mosaic using an algorithm based on histogram equalization and RGB Local Binary Patterns (RGB-LBP); if changes are found, the mosaic is updated. The second mode performs real-time classification using, again, our improved Faster R-CNN model, which is useful for time-critical operations. Thanks to different design features, the system works in real-time and performs the mosaicking and change detection tasks at low altitude, thus allowing the classification even of small objects. The proposed system was tested using the whole set of challenging video sequences contained in the UAV Mosaicking and Change Detection (UMCD) dataset and other public datasets. The evaluation of the system with well-known performance metrics has shown remarkable results in terms of mosaic creation and updating, as well as change detection and object detection.
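The change-detection idea above (comparing texture statistics of a mosaic region against the same region in a later flight) can be illustrated with a minimal RGB-LBP sketch. This is a simplification, not the thesis's algorithm; the histogram distance and threshold are assumptions, and the histogram-equalization preprocessing step is omitted.

```python
import numpy as np

def lbp_channel(ch):
    """8-neighbour local binary pattern codes for one uint8 image channel."""
    c = ch[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = ch[1 + dy:ch.shape[0] - 1 + dy, 1 + dx:ch.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def rgb_lbp_hist(img):
    """Concatenated, per-channel normalized LBP histograms of an RGB patch."""
    hists = []
    for k in range(3):
        h, _ = np.histogram(lbp_channel(img[:, :, k]),
                            bins=256, range=(0, 256))
        hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)

def changed(ref_patch, new_patch, thresh=0.25):
    """Flag a change when a chi-square-like histogram distance exceeds thresh
    (the 0.25 threshold is an illustrative assumption)."""
    a, b = rgb_lbp_hist(ref_patch), rgb_lbp_hist(new_patch)
    d = 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-9))
    return d > thresh
```

In a full pipeline, patches would be geo-registered mosaic tiles, and flagged tiles would trigger the mosaic update described in the abstract.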
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real-time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives like object detection,
activity recognition, user-machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, most commonly used features,
methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart
Glasses, Computer Vision, Video Analytics, Human-machine Interaction
QUIS-CAMPI: Biometric Recognition in Surveillance Scenarios
Concerns about individuals' security have justified the increasing number of surveillance
cameras deployed both in private and public spaces. However, contrary to popular belief,
these devices are in most cases used solely for recording, instead of feeding intelligent analysis
processes capable of extracting information about the observed individuals. Thus, even though
video surveillance has already proved to be essential for solving multiple crimes, obtaining relevant
details about the subjects that took part in a crime depends on the manual inspection
of recordings. As such, the current goal of the research community is the development of
automated surveillance systems capable of monitoring and identifying subjects in surveillance
scenarios. Accordingly, the main goal of this thesis is to improve the performance of biometric
recognition algorithms in data acquired from surveillance scenarios. In particular, we aim at
designing a visual surveillance system capable of acquiring biometric data at a distance (e.g.,
face, iris or gait) without requiring human intervention in the process, as well as devising biometric
recognition methods robust to the degradation factors resulting from the unconstrained
acquisition process.
Regarding the first goal, the analysis of the data acquired by typical surveillance systems
shows that large acquisition distances significantly decrease the resolution of biometric samples,
and thus their discriminability is not sufficient for recognition purposes. In the literature,
diverse works point out Pan Tilt Zoom (PTZ) cameras as the most practical way for acquiring
high-resolution imagery at a distance, particularly when using a master-slave configuration. In
the master-slave configuration, the video acquired by a typical surveillance camera is analyzed
for obtaining regions of interest (e.g., car, person) and these regions are subsequently imaged
at high-resolution by the PTZ camera. Several methods have already shown that this configuration
can be used for acquiring biometric data at a distance. Nevertheless, these methods
have failed to provide effective solutions to the typical challenges of this strategy, restraining its
use in surveillance scenarios. Accordingly, this thesis proposes two methods to support the development
of a biometric data acquisition system based on the cooperation of a PTZ camera
with a typical surveillance camera. The first proposal is a camera calibration method capable
of accurately mapping the coordinates of the master camera to the pan/tilt angles of the PTZ
camera. The second proposal is a camera scheduling method for determining - in real-time -
the sequence of acquisitions that maximizes the number of different targets obtained, while
minimizing the cumulative transition time. In order to achieve the first goal of this thesis,
both methods were combined with state-of-the-art approaches of the human monitoring field
to develop a fully automated surveillance system capable of acquiring biometric data at a distance
without human cooperation, designated the QUIS-CAMPI system.
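The real-time PTZ scheduling problem described above (maximize the number of distinct targets observed while minimizing cumulative transition time) can be sketched as a greedy heuristic. The thesis's actual method is not reproduced here; the class fields, slew speed, and dwell time below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    pan: float       # degrees
    tilt: float      # degrees
    deadline: float  # seconds until the target is expected to leave the scene

def transition_time(a, b, speed_dps=60.0):
    """Approximate pan/tilt slew time at a constant angular speed (assumed)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1])) / speed_dps

def schedule(targets, start=(0.0, 0.0), dwell=1.5):
    """Greedily visit the feasible target with the earliest deadline,
    breaking ties by shortest transition; returns the visit order."""
    pose, t, order = start, 0.0, []
    pending = list(targets)
    while pending:
        feasible = [x for x in pending
                    if t + transition_time(pose, (x.pan, x.tilt)) + dwell
                    <= x.deadline]
        if not feasible:
            break  # remaining targets can no longer be reached in time
        nxt = min(feasible, key=lambda x: (x.deadline,
                                           transition_time(pose, (x.pan, x.tilt))))
        t += transition_time(pose, (nxt.pan, nxt.tilt)) + dwell
        pose = (nxt.pan, nxt.tilt)
        order.append(nxt.name)
        pending.remove(nxt)
    return order
```

A greedy earliest-deadline policy is only one reasonable heuristic; an exact scheduler would search over permutations, which is what makes the real-time requirement nontrivial.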
The QUIS-CAMPI system is the basis for pursuing the second goal of this thesis. The analysis
of the performance of the state-of-the-art biometric recognition approaches shows that these
approaches attain almost ideal recognition rates in unconstrained data. However, this performance
is incongruous with the recognition rates observed in surveillance scenarios. Taking into
account the drawbacks of current biometric datasets, this thesis introduces a novel dataset comprising
biometric samples (face images and gait videos) acquired by the QUIS-CAMPI system at a
distance ranging from 5 to 40 meters and without human intervention in the acquisition process.
This set allows an objective assessment of the performance of state-of-the-art biometric recognition
methods in data that truly encompass the covariates of surveillance scenarios. As such, this set
was exploited for promoting the first international challenge on biometric recognition in the wild. This thesis describes the evaluation protocols adopted, along with the results obtained
by the nine methods specially designed for this competition. In addition, the data acquired by
the QUIS-CAMPI system were crucial for accomplishing the second goal of this thesis, i.e., the
development of methods robust to the covariates of surveillance scenarios. The first proposal
regards a method for detecting corrupted features in biometric signatures inferred by a redundancy
analysis algorithm. The second proposal is a caricature-based face recognition approach
capable of enhancing the recognition performance by automatically generating a caricature
from a 2D photo. The experimental evaluation of these methods shows that both approaches
contribute to improving the recognition performance in unconstrained data.
The growing concern with the security of individuals has justified the increase in the
number of video surveillance cameras installed in both private and public spaces. However,
contrary to what is commonly assumed, these devices are in most cases used only for
recording, and are not connected to any kind of intelligent software capable of inferring, in
real time, information about the observed individuals. Thus, although video surveillance has
proved essential in solving several crimes, its use is still confined to providing videos that
must be manually inspected to extract relevant information about the subjects involved in a
crime. As such, the main challenge for the scientific community today is the development of
automated systems capable of monitoring and identifying individuals in video surveillance
environments.
The main objective of this thesis is to extend the applicability of biometric recognition
systems to video surveillance environments. More specifically, we intend to 1) design a
video surveillance system capable of acquiring biometric data at long distances (e.g., face
images, iris images, or gait videos) without requiring the cooperation of the individuals in
the process; and 2) develop biometric recognition methods that are robust to the degradation
factors inherent to the data acquired by this type of system.
Regarding the first objective, the analysis of data acquired by typical video surveillance
systems shows that, due to the capture distance, the sampled biometric traits are not
sufficiently discriminative to guarantee acceptable recognition rates. In the literature,
several works advocate the use of Pan Tilt Zoom (PTZ) cameras to acquire high-resolution
images at a distance, mainly in master-slave mode. In the master-slave configuration, an
intelligent analysis module selects regions of interest (e.g., cars, persons) from the video
acquired by a surveillance camera, and the PTZ camera is steered to acquire those regions
at high resolution. Several methods have shown that this configuration can be used to
acquire biometric data at a distance; even so, they have failed to solve some of the problems
associated with this strategy, thus preventing its use in video surveillance environments.
Accordingly, this thesis proposes two methods to enable the acquisition of biometric data in
video surveillance environments using a PTZ camera assisted by a typical surveillance
camera. The first is a calibration method capable of exactly mapping the coordinates of the
master camera to the angles of the PTZ (slave) camera without the aid of other optical
devices. The second method determines the order in which a set of subjects is observed by
the PTZ camera; it can determine, in real time, the sequence of observations that maximizes
the number of distinct subjects observed while simultaneously minimizing the total
transition time between subjects. To achieve the first objective of this thesis, the two
proposed methods were combined with advances in the field of human monitoring to
develop the first fully automated video surveillance system capable of acquiring biometric
data at long distances without requiring the cooperation of the individuals in the process,
designated the QUIS-CAMPI system.
The QUIS-CAMPI system is the starting point for the research related to the second
objective of this thesis. The analysis of the performance of state-of-the-art biometric
recognition methods shows that they achieve near-perfect recognition rates on data acquired
without restrictions (e.g., recognition rates above 99% on the LFW dataset). However, this
performance is not corroborated by the results observed in video surveillance environments,
which suggests that current datasets do not truly contain the degradation factors typical of
such environments. Given the weaknesses of current biometric datasets, this thesis
introduces a new biometric dataset (face images and gait videos) acquired by the
QUIS-CAMPI system at distances of up to 40 m and without the cooperation of the subjects
in the acquisition process. This set allows an objective evaluation of the performance of
state-of-the-art methods for recognizing individuals in images/videos captured in a real
video surveillance environment. As such, it was used to promote the first biometric
recognition competition in uncontrolled environments. This thesis describes the evaluation
protocols used, as well as the results obtained by the nine methods specially designed for
this competition. Furthermore, the data acquired by the QUIS-CAMPI system were essential
for the development of two methods to increase robustness to the degradation factors
observed in video surveillance environments. The first is a method for detecting corrupted
features in biometric signatures through the analysis of redundancy among feature subsets.
The second is a face recognition method based on caricatures automatically generated from
a single photo of the subject. The experiments carried out show that both methods reduce
error rates on data acquired in an uncontrolled manner.
Cooperative heterogeneous robots for an autonomous insect trap monitoring system in a precision agriculture scenario
The recent advances in precision agriculture are due to the emergence of modern robotics systems. For instance, unmanned aerial systems (UASs) open new possibilities for solving existing problems in this area in many different respects, owing to these platforms' ability to perform activities at varying levels of complexity. Therefore, this research presents a multiple-cooperative-robot solution in which UAS and unmanned ground vehicle (UGV) systems jointly inspect insect traps in olive groves. This work evaluated UAS and UGV vision-based navigation relying on yellow fly traps fixed to the trees, which provide visual position data via You Only Look Once (YOLO) algorithms. The experimental setup evaluated the fuzzy control algorithm applied to the UAS to make it reach the trap efficiently. Experimental tests were conducted in a realistic simulation environment using the Robot Operating System (ROS) and CoppeliaSim platforms to verify the methodology's performance, and all tests considered specific real-world environmental conditions. A search-and-landing algorithm based on augmented reality tag (AR-Tag) visual processing was evaluated to allow the UAS to return to and land on the UGV base. The outcomes obtained in this work demonstrate the robustness and feasibility of the multiple-cooperative-robot architecture for UGVs and UASs applied to the olive inspection scenario.
The authors would like to thank the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020) and SusTEC (LA/P/0007/2021). In addition, the authors would like to thank the following Brazilian agencies: CEFET-RJ, CAPES, CNPq, and FAPERJ.
In addition, the authors also want to thank the Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança (IPB) - Campus de Santa Apolónia, Portugal, Laboratório Associado para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Portugal, INESC Technology and Science - Porto, Portugal, and Universidade de Trás-os-Montes e Alto Douro - Vila Real, Portugal. This work was carried out under the project "OleaChain: Competências para a sustentabilidade e inovação da cadeia de valor do olival tradicional no Norte Interior de Portugal" (NORTE-06-3559-FSE-000188), an operation used to hire highly qualified human resources, funded by NORTE 2020 through the European Social Fund (ESF).
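The fuzzy control idea mentioned above (steering the UAS toward the detected trap) can be illustrated with a minimal single-input fuzzy rule base. This is not the paper's controller; the membership breakpoints and output velocities below are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_lateral_cmd(err):
    """err: normalized horizontal offset of the trap's bounding-box centre
    in [-1, 1] (e.g., from a YOLO detection). Returns a lateral velocity
    command (m/s) via centroid defuzzification; values are assumptions."""
    # rule firing strengths
    left   = tri(err, -1.5, -1.0, 0.0)   # trap far left  -> move left
    center = tri(err, -0.3,  0.0, 0.3)   # trap centred   -> hover
    right  = tri(err,  0.0,  1.0, 1.5)   # trap far right -> move right
    # crisp output velocity associated with each rule
    outs = {-0.8: left, 0.0: center, 0.8: right}
    num = sum(v * w for v, w in outs.items())
    den = sum(outs.values())
    return num / den if den else 0.0
```

In the paper's setup the same scheme would run per axis (lateral, vertical, forward), with membership functions tuned against the simulation environment.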