Intelligence system and national security in Nigeria: the challenges of data gathering
Nigeria today faces a variety of security risks that threaten to undermine its status as an independent republic, including armed robbery, urban violence, weapons smuggling, kidnapping, human trafficking, and communal and religious disputes. A strong intelligence system that can readily gather and analyse data to accurately predict the movements of criminals and other unwanted elements within society could alleviate these concerns. However, the government, security, and intelligence agencies appear to be caught off guard by ongoing attacks by militants and herders and by incidents of ethno-religious strife. These unexpected attacks may not be unrelated to the incorrect and insufficient information gathered about them. The study employs qualitative methodologies and draws on secondary sources such as newspapers, the internet, and published academic works. Its findings show, among other things, that the Nigerian intelligence system's lack of efficacy can be attributed to a number of intricate and interconnected problems, including an apparent lack of data, under-use of the data that is already available, and improper data; these problems are made worse by inconsistencies in data management and sharing across the numerous security agencies operating in the country. The article concludes that the various security apparatuses must be data-driven and exchange intelligence with one another in order to promote early responses to any threat to the security of citizens' lives and property.
Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models
With the advent of sophisticated artificial intelligence (AI) technologies,
the proliferation of deepfakes and the spread of m/disinformation have emerged
as formidable threats to the integrity of information ecosystems worldwide.
This paper provides an overview of the current literature. Within frontier
AI's crucial application in developing defense mechanisms for detecting
deepfakes, we highlight the mechanisms through which generative AI based on
large models (LM-based GenAI) crafts seemingly convincing yet fabricated
content. We explore the multifaceted implications of LM-based GenAI on
society, politics, and individual privacy violations, underscoring the urgent
need for robust defense strategies. To address these challenges, in this study,
we introduce an integrated framework that combines advanced detection
algorithms, cross-platform collaboration, and policy-driven initiatives to
mitigate the risks associated with AI-Generated Content (AIGC). By leveraging
multi-modal analysis, digital watermarking, and machine learning-based
authentication techniques, we propose a defense mechanism adaptable to the
ever-evolving nature of AI capabilities. Furthermore, the paper advocates for a
global consensus on the ethical usage of GenAI and implementing cyber-wellness
educational programs to enhance public awareness and resilience against
m/disinformation. Our findings suggest that a proactive and collaborative
approach involving technological innovation and regulatory oversight is
essential for safeguarding netizens in cyberspace against the insidious
effects of deepfakes and GenAI-enabled m/disinformation
campaigns.
Comment: This paper appears in the IEEE International Conference on Computer and Applications (ICCA) 202
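The authentication side of such a defense can be illustrated with a minimal provenance check. This is a generic HMAC-based content-signing sketch, not the framework proposed in the paper; the function names are illustrative.

```python
import hmac
import hashlib

def sign_content(content: bytes, key: bytes) -> bytes:
    """Produce a provenance tag for a media payload.

    A publisher holding `key` attaches the tag at creation time;
    any platform sharing the key can later verify integrity.
    """
    return hmac.new(key, content, hashlib.sha256).digest()

def verify_content(content: bytes, tag: bytes, key: bytes) -> bool:
    """Constant-time check that `content` still matches its tag."""
    return hmac.compare_digest(sign_content(content, key), tag)
```

Real deployments would rely on asymmetric signatures or provenance standards such as C2PA rather than a shared key; the sketch only conveys why tamper-evident metadata supports downstream detection.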
Privacy Intelligence: A Survey on Image Sharing on Online Social Networks
Image sharing on online social networks (OSNs) has become an indispensable
part of daily social activities, but it has also led to an increased risk of
privacy invasion. The recent image leaks from popular OSN services and the
abuse of personal photos using advanced algorithms (e.g. DeepFake) have
prompted the public to rethink individual privacy needs when sharing images on
OSNs. However, OSN image sharing itself is relatively complicated, and systems
currently in place to manage privacy in practice are labor-intensive yet fail
to provide personalized, accurate and flexible privacy protection. As a result,
a more intelligent environment for privacy-friendly OSN image sharing is in
demand. To fill the gap, we contribute a systematic survey of 'privacy
intelligence' solutions that target modern privacy issues related to OSN image
sharing. Specifically, we present a high-level analysis framework based on the
entire lifecycle of OSN image sharing to address the various privacy issues and
solutions facing this interdisciplinary field. The framework is divided into
three main stages: local management, online management and social experience.
At each stage, we identify typical sharing-related user behaviors, the privacy
issues generated by those behaviors, and review representative intelligent
solutions. The resulting analysis describes an intelligent privacy-enhancing
chain for closed-loop privacy management. We also discuss the challenges and
future directions at each stage, as well as in publicly available
datasets.
Comment: 32 pages, 9 figures. Under review.
QUIS-CAMPI: Biometric Recognition in Surveillance Scenarios
Concerns about individuals' security have justified the increasing number of surveillance
cameras deployed both in private and public spaces. However, contrary to popular belief,
these devices are in most cases used solely for recording, instead of feeding intelligent analysis
processes capable of extracting information about the observed individuals. Thus, even though
video surveillance has already proved to be essential for solving multiple crimes, obtaining relevant
details about the subjects that took part in a crime depends on the manual inspection
of recordings. As such, the current goal of the research community is the development of
automated surveillance systems capable of monitoring and identifying subjects in surveillance
scenarios. Accordingly, the main goal of this thesis is to improve the performance of biometric
recognition algorithms in data acquired from surveillance scenarios. In particular, we aim at
designing a visual surveillance system capable of acquiring biometric data at a distance (e.g.,
face, iris or gait) without requiring human intervention in the process, as well as devising biometric
recognition methods robust to the degradation factors resulting from the unconstrained
acquisition process.
Regarding the first goal, the analysis of the data acquired by typical surveillance systems
shows that large acquisition distances significantly decrease the resolution of biometric samples,
and thus their discriminability is not sufficient for recognition purposes. In the literature,
diverse works point out Pan Tilt Zoom (PTZ) cameras as the most practical way for acquiring
high-resolution imagery at a distance, particularly when using a master-slave configuration. In
the master-slave configuration, the video acquired by a typical surveillance camera is analyzed
for obtaining regions of interest (e.g., car, person) and these regions are subsequently imaged
at high-resolution by the PTZ camera. Several methods have already shown that this configuration
can be used for acquiring biometric data at a distance. Nevertheless, these methods
have failed to provide effective solutions to the typical challenges of this strategy, restricting its
use in surveillance scenarios. Accordingly, this thesis proposes two methods to support the development
of a biometric data acquisition system based on the cooperation of a PTZ camera
with a typical surveillance camera. The first proposal is a camera calibration method capable
of accurately mapping the coordinates of the master camera to the pan/tilt angles of the PTZ
camera. The second proposal is a camera scheduling method for determining - in real-time -
the sequence of acquisitions that maximizes the number of different targets obtained, while
minimizing the cumulative transition time. In order to achieve the first goal of this thesis,
both methods were combined with state-of-the-art approaches of the human monitoring field
to develop a fully automated surveillance system capable of acquiring biometric data at a distance and
without human cooperation, designated as the QUIS-CAMPI system.
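The camera-scheduling idea above — choose, in real time, an acquisition order that covers many targets with little cumulative transition time — can be illustrated with a nearest-first greedy heuristic. The thesis' actual method is more elaborate; the constant slew rate and all names here are assumptions for illustration.

```python
import math

def schedule_targets(current_pan_tilt, targets, slew_rate=60.0):
    """Greedily order PTZ acquisitions: repeatedly visit the unvisited
    target with the smallest transition time from the current pose.

    current_pan_tilt: (pan, tilt) in degrees
    targets: dict mapping target id -> (pan, tilt) in degrees
    slew_rate: assumed constant slew speed in degrees/second
    Returns (ordered target ids, cumulative transition time in seconds).
    """
    pose = current_pan_tilt
    remaining = dict(targets)
    order, total_time = [], 0.0
    while remaining:
        # Transition time is approximated as angular distance / slew rate.
        tid, tpose = min(
            remaining.items(),
            key=lambda kv: math.hypot(kv[1][0] - pose[0], kv[1][1] - pose[1]),
        )
        total_time += math.hypot(tpose[0] - pose[0], tpose[1] - pose[1]) / slew_rate
        order.append(tid)
        pose = tpose
        del remaining[tid]
    return order, total_time
```

A greedy order is not guaranteed optimal (the underlying problem is a variant of the travelling-salesman problem), but it runs in real time, which is the constraint emphasized above.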
The QUIS-CAMPI system is the basis for pursuing the second goal of this thesis. The analysis
of the performance of the state-of-the-art biometric recognition approaches shows that these
approaches attain almost ideal recognition rates in unconstrained data. However, this performance
is incongruous with the recognition rates observed in surveillance scenarios. Taking into
account the drawbacks of current biometric datasets, this thesis introduces a novel dataset comprising
biometric samples (face images and gait videos) acquired by the QUIS-CAMPI system at a
distance ranging from 5 to 40 meters and without human intervention in the acquisition process.
This set allows an objective assessment of the performance of state-of-the-art biometric recognition
methods in data that truly encompass the covariates of surveillance scenarios. As such, this set
was exploited for promoting the first international challenge on biometric recognition in the wild. This thesis describes the evaluation protocols adopted, along with the results obtained
by the nine methods specially designed for this competition. In addition, the data acquired by
the QUIS-CAMPI system were crucial for accomplishing the second goal of this thesis, i.e., the
development of methods robust to the covariates of surveillance scenarios. The first proposal
is a method for detecting corrupted features in biometric signatures through a redundancy
analysis algorithm. The second proposal is a caricature-based face recognition approach
capable of enhancing the recognition performance by automatically generating a caricature
from a 2D photo. The experimental evaluation of these methods shows that both approaches
contribute to improving recognition performance in unconstrained data.
Multi-teacher knowledge distillation as an effective method for compressing ensembles of neural networks
Deep learning has contributed greatly to many successes in artificial
intelligence in recent years. Today, it is possible to train models that have
thousands of layers and hundreds of billions of parameters. Large-scale deep
models have achieved great success, but the enormous computational complexity
and gigantic storage requirements make it extremely difficult to implement them
in real-time applications. On the other hand, the size of the dataset is still
a real problem in many domains. Data are often missing, too expensive, or
impossible to obtain for other reasons. Ensemble learning is partially a
solution to the problem of small datasets and overfitting. However, ensemble
learning in its basic version is associated with a linear increase in
computational complexity. We analyzed the impact of the ensemble
decision-fusion mechanism and evaluated various methods of sharing the decisions,
including voting algorithms. We used a modified knowledge distillation
framework as a decision-fusion mechanism, which additionally allows compressing
the entire ensemble into the weight space of a single model. We showed
that knowledge distillation can aggregate knowledge from multiple teachers in
only one student model and, with the same computational complexity, obtain a
better-performing model compared to a model trained in the standard manner. We
have developed our own method for mimicking the responses of all teachers
simultaneously. We tested these solutions on several benchmark
datasets. Finally, we presented a wide range of applications of the efficient
multi-teacher knowledge distillation framework. In the first example, we used
knowledge distillation to develop models that could automate corrosion
detection on aircraft fuselage. The second example describes the detection of smoke
by observation cameras in order to counteract forest wildfires.
Comment: Doctoral dissertation in the field of computer science, machine
learning. Application of knowledge distillation as aggregation of ensemble
models, along with several uses. 140 pages, 67 figures, 13 tables
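The core idea of distilling several teachers into one student can be sketched as a loss that pulls the student's tempered softmax toward the teachers' averaged soft targets. This minimal pure-Python version uses simple averaging rather than the dissertation's own fusion method, and all names are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Tempered softmax over a list of logits."""
    z = [l / temperature for l in logits]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def multi_teacher_kd_loss(student_logits, teacher_logits_list, temperature=2.0):
    """KL(avg_teacher || student) at temperature T, scaled by T^2.

    The teachers' tempered softmax outputs are averaged into a single
    soft target, so one student mimics all teachers at once.
    """
    probs = [softmax(t, temperature) for t in teacher_logits_list]
    n = len(probs)
    avg = [sum(p[i] for p in probs) / n for i in range(len(student_logits))]
    sp = softmax(student_logits, temperature)
    return temperature ** 2 * sum(
        a * (math.log(a + 1e-12) - math.log(s + 1e-12)) for a, s in zip(avg, sp)
    )
```

The T^2 factor is the standard Hinton-style scaling that keeps gradient magnitudes comparable across temperatures; the loss is zero when the student already matches the averaged teacher distribution.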
Cooperative multi-sensor tracking of vulnerable road users in the presence of missing detections
This paper presents a vulnerable road user (VRU) tracking algorithm capable of handling noisy and missing detections from heterogeneous sensors. We propose a cooperative fusion algorithm for matching and reinforcing radar and camera detections using their proximity and positional uncertainty. The belief in the existence and position of objects is then maximized by temporal integration of fused detections by a multi-object tracker. By switching between observation models, the tracker adapts to the detection noise characteristics, making it robust to individual sensor failures. The main novelty of this paper is an improved imputation sampling function for updating the state when detections are missing. The proposed function uses a likelihood without association that is conditioned on the sensor information instead of the sensor model. The benefits of the proposed solution are two-fold: firstly, particle updates become computationally tractable, and secondly, the problem of imputing samples from a state which is predicted without an associated detection is bypassed. Experimental evaluation shows a significant improvement in both detection and tracking performance over multiple control algorithms. In low-light situations, the cooperative fusion outperforms intermediate fusion by as much as 30%, while increases in tracking performance are most significant in complex traffic scenes.
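The proximity-and-uncertainty matching step can be sketched with a toy greedy associator. The paper's particle-filter machinery is not reproduced here; the isotropic-uncertainty model, gate value, and all names are illustrative assumptions.

```python
import math

def fuse_detections(radar, camera, gate=3.0):
    """Greedy association of radar and camera detections.

    Each detection is (x, y, sigma): a position with an isotropic
    standard deviation. Pairs whose uncertainty-normalized distance is
    below `gate` are fused by inverse-variance weighting; detections
    missed by one sensor pass through unchanged.
    """
    pairs = []
    for i, (rx, ry, rs) in enumerate(radar):
        for j, (cx, cy, cs) in enumerate(camera):
            # Distance normalized by the combined positional uncertainty.
            d = math.hypot(rx - cx, ry - cy) / math.sqrt(rs ** 2 + cs ** 2)
            if d < gate:
                pairs.append((d, i, j))
    pairs.sort()  # closest, most-certain pairs are associated first
    used_r, used_c, fused = set(), set(), []
    for d, i, j in pairs:
        if i in used_r or j in used_c:
            continue
        used_r.add(i)
        used_c.add(j)
        rx, ry, rs = radar[i]
        cx, cy, cs = camera[j]
        wr, wc = 1.0 / rs ** 2, 1.0 / cs ** 2
        fused.append((
            (wr * rx + wc * cx) / (wr + wc),
            (wr * ry + wc * cy) / (wr + wc),
            math.sqrt(1.0 / (wr + wc)),  # fused uncertainty shrinks
        ))
    # Unmatched detections are kept, mirroring robustness to sensor failure.
    fused += [radar[i] for i in range(len(radar)) if i not in used_r]
    fused += [camera[j] for j in range(len(camera)) if j not in used_c]
    return fused
```

Because unmatched detections survive, a failed sensor degrades the output gracefully instead of dropping targets, which is the behavior the abstract attributes to switching observation models.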
CORE: Cooperative Reconstruction for Multi-Agent Perception
This paper presents CORE, a conceptually simple, effective and
communication-efficient model for multi-agent cooperative perception. It
addresses the task from a novel perspective of cooperative reconstruction,
based on two key insights: 1) cooperating agents together provide a more
holistic observation of the environment, and 2) the holistic observation can
serve as valuable supervision to explicitly guide the model learning how to
reconstruct the ideal observation based on collaboration. CORE instantiates the
idea with three major components: a compressor for each agent to create more
compact feature representation for efficient broadcasting, a lightweight
attentive collaboration component for cross-agent message aggregation, and a
reconstruction module to reconstruct the observation based on aggregated
feature representations. This learning-to-reconstruct idea is task-agnostic,
and offers clear and reasonable supervision to inspire more effective
collaboration, eventually promoting perception tasks. We validate CORE on
OPV2V, a large-scale multi-agent perception dataset, on two tasks, i.e., 3D
object detection and semantic segmentation. Results demonstrate that the model
achieves state-of-the-art performance on both tasks and is more
communication-efficient.
Comment: Accepted to ICCV 2023; Code: https://github.com/zllxot/COR
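The attentive cross-agent aggregation step can be caricatured with single-query dot-product attention over per-agent feature vectors. CORE's actual component is learned and operates on spatial feature maps, so this pure-Python sketch (illustrative names) only conveys the weighting idea.

```python
import math

def attentive_aggregate(ego_feat, neighbor_feats):
    """Single-query scaled dot-product attention.

    The ego agent's feature vector attends over all agents' broadcast
    (compressed) features and returns the weighted sum that would feed
    a downstream reconstruction module.
    """
    feats = [ego_feat] + neighbor_feats
    scale = math.sqrt(len(ego_feat))
    scores = [sum(e * f for e, f in zip(ego_feat, feat)) / scale for feat in feats]
    m = max(scores)  # stabilize the softmax
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [v / z for v in w]
    # Aggregate: features similar to the ego view receive larger weight.
    return [sum(w[k] * feats[k][i] for k in range(len(feats)))
            for i in range(len(ego_feat))]
```

In the full model this aggregation happens per spatial location on compressed bird's-eye-view features, so the weighting decides, location by location, which agent's observation to trust.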