Mobile Device Background Sensors: Authentication vs Privacy
The increasing number of mobile devices in recent years has caused the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach for authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach. The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis that explores the potential of Transformers for behavioural biometrics, introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
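For illustration only, the minimal sketch below shows how a Transformer encoder can turn an accelerometer/gyroscope sequence into an embedding used for verification by cosine similarity. The six-channel input, layer sizes, pooling, and similarity-threshold decision are all assumptions for the sketch, not the architectures proposed in the thesis.

```python
# Minimal sketch: Transformer encoder over inertial sensor sequences
# (accelerometer + gyroscope = 6 channels) producing an embedding for
# verification. Hyperparameters are hypothetical, not from the thesis.
import torch
import torch.nn as nn

class InertialTransformer(nn.Module):
    def __init__(self, channels=6, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.proj = nn.Linear(channels, d_model)          # per-timestep projection
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.pos = nn.Parameter(torch.randn(1, 512, d_model))  # learned positions

    def forward(self, x):                                 # x: (batch, time, 6)
        h = self.proj(x) + self.pos[:, :x.size(1)]
        h = self.encoder(h)
        return h.mean(dim=1)                              # (batch, d_model) embedding

model = InertialTransformer().eval()
enrol = model(torch.randn(1, 200, 6))                     # enrolment sequence
probe = model(torch.randn(1, 200, 6))                     # probe sequence
score = torch.cosine_similarity(enrol, probe)             # accept if above a threshold
```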
Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
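As a rough illustration of the image-velocimetry principle behind LSPIV, the sketch below estimates surface velocity from the displacement of an image patch between two consecutive frames via cross-correlation. The patch size, ground sampling distance, and frame interval are placeholder values, not parameters from the thesis.

```python
# Minimal LSPIV-style sketch: estimate surface velocity from the
# displacement of a tracer patch between two consecutive video frames.
import numpy as np
from scipy.signal import fftconvolve

def patch_velocity(frame_a, frame_b, gsd_m=0.05, dt_s=0.5):
    """frame_a, frame_b: 2D grayscale interrogation windows (numpy arrays).
    gsd_m: ground sampling distance (m/pixel); dt_s: frame interval (s)."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # Cross-correlate: the peak location gives the most likely displacement.
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy -= corr.shape[0] // 2
    dx -= corr.shape[1] // 2
    # Convert the pixel displacement to metres per second.
    return dx * gsd_m / dt_s, dy * gsd_m / dt_s

rng = np.random.default_rng(0)
f1 = rng.random((64, 64))
f2 = np.roll(f1, shift=(2, 3), axis=(0, 1))   # synthetic 2 px / 3 px shift
print(patch_velocity(f1, f2))                 # ~ (0.3, 0.2) m/s
```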
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Progressive Transformation Learning for Leveraging Virtual Images in Training
To effectively interrogate UAV-based images for detecting objects of
interest, such as humans, it is essential to acquire large-scale UAV-based
datasets that include human instances with various poses captured from widely
varying viewing angles. As a viable alternative to laborious and costly data
curation, we introduce Progressive Transformation Learning (PTL), which
gradually augments a training dataset by adding transformed virtual images with
enhanced realism. Generally, a virtual2real transformation generator in the
conditional GAN framework suffers from quality degradation when a large domain
gap exists between real and virtual images. To deal with the domain gap, PTL
takes a novel approach that progressively iterates the following three steps:
1) select a subset from a pool of virtual images according to the domain gap,
2) transform the selected virtual images to enhance realism, and 3) add the
transformed virtual images to the training set while removing them from the
pool. In PTL, accurately quantifying the domain gap is critical. To do that, we
theoretically demonstrate that the feature representation space of a given
object detector can be modeled as a multivariate Gaussian distribution from
which the Mahalanobis distance between a virtual object and the Gaussian
distribution of each object category in the representation space can be readily
computed. Experiments show that PTL results in a substantial performance
increase over the baseline, especially in the small-data and cross-domain
regimes.

Comment: CVPR 2023 (Selected as Highlight)
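To make the domain-gap measure concrete, here is a minimal sketch of scoring a virtual object's feature vector by Mahalanobis distance against a Gaussian fitted to real features of one category. The feature source, dimensionality, and selection rule are stand-ins for illustration, not the authors' implementation.

```python
# Minimal sketch: Mahalanobis-distance domain-gap score, in the spirit of
# PTL's selection step. Features are assumed to come from a detector's
# representation space; random stand-in data is used here.
import numpy as np

def fit_gaussian(feats):
    """Fit a multivariate Gaussian (mean, inverse covariance) to the real
    features of one object category; feats: (n_samples, dim)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 16))               # stand-in real features
mu, cov_inv = fit_gaussian(real_feats)

virtual_feats = rng.normal(loc=0.5, size=(100, 16))   # stand-in virtual features
gaps = [mahalanobis(v, mu, cov_inv) for v in virtual_feats]
# PTL-style selection: pick the virtual images closest to the real
# distribution (smallest domain gap) for the next transformation round.
closest = np.argsort(gaps)[:10]
```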
Masked Discriminators for Content-Consistent Unpaired Image-to-Image Translation
A common goal of unpaired image-to-image translation is to preserve content
consistency between source images and translated images while mimicking the
style of the target domain. Due to biases between the datasets of both domains,
many methods suffer from inconsistencies caused by the translation process.
Most approaches introduced to mitigate these inconsistencies do not constrain
the discriminator, leading to an even more ill-posed training setup. Moreover,
none of these approaches is designed for larger crop sizes. In this work, we
show that masking the inputs of a global discriminator for both domains with a
content-based mask is sufficient to reduce content inconsistencies
significantly. However, this strategy leads to artifacts that can be traced
back to the masking process. To reduce these artifacts, we introduce a local
discriminator that operates on pairs of small crops selected with a similarity
sampling strategy. Furthermore, we apply this sampling strategy to sample
global input crops from the source and target dataset. In addition, we propose
feature-attentive denormalization to selectively incorporate content-based
statistics into the generator stream. In our experiments, we show that our
method achieves state-of-the-art performance in photorealistic sim-to-real
translation and weather translation and also performs well in day-to-night
translation. Additionally, we propose the cKVD metric, which builds on the sKVD
metric and enables the examination of translation quality at the class or
category level.

Comment: 24 pages, 22 figures, under review
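As a rough sketch of the central idea above (masking the inputs of a global discriminator with a content-based mask so that only content-consistent regions drive the adversarial signal), consider the following. The mask source, network shapes, and loss form are placeholders, not the paper's implementation.

```python
# Minimal sketch: mask both discriminator inputs with a shared
# content-based mask before computing the adversarial loss. The mask is a
# placeholder here (e.g. it could come from a content segmentation).
import torch
import torch.nn as nn
import torch.nn.functional as F

disc = nn.Sequential(                              # toy global discriminator
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

def masked_disc_loss(real, fake, mask):
    """real, fake: (B, 3, H, W) target-domain and translated images.
    mask: (B, 1, H, W) in [0, 1], 1 where content should be compared."""
    real_logits = disc(real * mask)                # the discriminator never
    fake_logits = disc(fake * mask)                # sees masked-out regions
    loss_real = F.softplus(-real_logits).mean()    # non-saturating GAN loss
    loss_fake = F.softplus(fake_logits).mean()
    return loss_real + loss_fake

x_real = torch.randn(2, 3, 64, 64)
x_fake = torch.randn(2, 3, 64, 64)
m = (torch.rand(2, 1, 64, 64) > 0.3).float()
print(masked_disc_loss(x_real, x_fake, m))
```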
AI: Limits and Prospects of Artificial Intelligence
The emergence of artificial intelligence has triggered enthusiasm and promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attending economic hopes and fears, utopias and dystopias that are associated with the current and future development of artificial intelligence
MOVES: Movable and Moving LiDAR Scene Segmentation in Label-Free settings using Static Reconstruction
Accurate static structure reconstruction and segmentation of non-stationary
objects is of vital importance for autonomous navigation applications. These
applications assume a LiDAR scan to consist of only static structures. In the
real world, however, LiDAR scans contain non-stationary dynamic structures:
moving and movable objects. Current solutions use segmentation information to
isolate and remove moving structures from the LiDAR scan. This strategy fails
in several important use-cases where segmentation information is not
available. In such scenarios, moving objects and objects with high uncertainty
in their motion, i.e. movable objects, may escape detection. This violates the
above assumption. We present MOVES, a novel GAN-based adversarial model that
segments
out moving as well as movable objects in the absence of segmentation
information. We achieve this by accurately transforming a dynamic LiDAR scan to
its corresponding static scan. This is obtained by replacing dynamic objects
and corresponding occlusions with static structures which were occluded by
dynamic objects. We leverage corresponding static-dynamic LiDAR pairs.

Comment: 35 pages, 8 figures, 6 tables
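A highly simplified sketch of training on corresponding static-dynamic scan pairs follows. Treating LiDAR scans as range images, the reconstruction-plus-adversarial objective, the networks, and all shapes are illustrative assumptions, not the MOVES architecture.

```python
# Minimal sketch: one training step for a dynamic-to-static LiDAR
# translation GAN using paired scans, represented as 2D range images.
# All networks and losses are toy stand-ins, not the MOVES model.
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))        # dynamic -> static
disc = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2), nn.LeakyReLU(0.2),
                     nn.Conv2d(16, 1, 4, stride=2))
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

def train_step(dynamic_scan, static_scan):
    # Discriminator: real static scans vs. generated static scans.
    fake = gen(dynamic_scan)
    d_loss = (F.softplus(-disc(static_scan)).mean()
              + F.softplus(disc(fake.detach())).mean())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: fool the discriminator and reconstruct the paired static
    # scan, so static structure replaces dynamic objects and occlusions.
    g_loss = F.softplus(-disc(fake)).mean() + F.l1_loss(fake, static_scan)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return float(d_loss), float(g_loss)

dyn = torch.randn(2, 1, 64, 512)    # toy dynamic/static range-image pair
sta = torch.randn(2, 1, 64, 512)
print(train_step(dyn, sta))
```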