Authentication enhancement in command and control networks: (a study in Vehicular Ad-Hoc Networks)
Intelligent transportation systems contribute to improved traffic safety by facilitating real-time communication between vehicles. Because they use wireless channels for communication, vehicular networks are susceptible to a wide range of attacks, such as impersonation, modification, and replay. In this context, securing data exchange between intercommunicating terminals, e.g., vehicle-to-everything (V2X) communication, constitutes a technological challenge that needs to be addressed. Hence, message authentication is crucial to safeguard vehicular ad-hoc networks (VANETs) from malicious attacks. The current state of the art for authentication in VANETs relies on conventional cryptographic primitives, introducing significant computation and communication overheads. In this challenging scenario, physical (PHY)-layer authentication has gained popularity; it leverages the inherent characteristics of wireless channels and hardware imperfections to discriminate between wireless devices. However, PHY-layer-based authentication cannot replace crypto-based methods, as the initial legitimacy detection must be conducted using cryptographic methods to extract the communicating terminal's secret features. Nevertheless, it can be a promising complementary solution for the re-authentication problem in VANETs, introducing what is known as "cross-layer authentication." This thesis focuses on designing efficient cross-layer authentication schemes for VANETs, reducing the communication and computation overheads associated with transmitting and verifying a crypto-based signature for each transmission. The following provides an overview of the methodologies employed in the contributions presented in this thesis.
1. The first cross-layer authentication scheme: A four-step process represents this approach: initial crypto-based authentication, shared key extraction, re-authentication via a PHY challenge-response algorithm, and adaptive adjustments based on channel conditions. Simulation results validate its efficacy, especially in low signal-to-noise ratio (SNR) scenarios while proving its resilience against active and passive attacks.
2. The second cross-layer authentication scheme: Leveraging the spatially and temporally correlated wireless channel features, this scheme extracts high-entropy shared keys that can be used to create dynamic PHY-layer signatures for authentication. A 3-dimensional (3D) scattering Doppler emulator is designed to investigate the scheme's performance at different speeds of a moving vehicle and different SNRs. Theoretical and hardware implementation analyses prove the scheme's capability to support a high detection probability at an acceptable false alarm value ≤ 0.1 for SNR ≥ 0 dB and speed ≤ 45 m/s.
3. The third proposal: Reconfigurable intelligent surfaces (RIS) integration for improved authentication: Focusing on enhancing PHY-layer re-authentication, this proposal explores integrating RIS technology to improve the SNR directed at designated vehicles. Theoretical analysis and practical implementation of the proposed scheme are conducted using a 1-bit RIS consisting of 64 × 64 reflective units. Experimental results show a significant improvement in the detection probability (Pd), increasing from 0.82 to 0.96 at SNR = −6 dB for multicarrier communications.
4. The fourth proposal: RIS-enhanced vehicular communication security: Tailored for challenging SNR conditions in non-line-of-sight (NLoS) scenarios, this proposal optimises key extraction and defends against denial-of-service (DoS) attacks through selective signal strengthening. Hardware implementation studies prove its effectiveness, showcasing improved key extraction performance and resilience against potential threats.
5. The fifth cross-layer authentication scheme: Integrating PKI-based initial legitimacy detection and blockchain-based reconciliation techniques, this scheme ensures secure data exchange. Rigorous security analyses and performance evaluations using network simulators and computation metrics showcase its effectiveness, ensuring its resistance against common attacks and time efficiency in message verification.
6. The final proposal: Group key distribution: Employing smart contract-based blockchain technology alongside PKI-based authentication, this proposal distributes group session keys securely. Its lightweight symmetric key cryptography-based method maintains privacy in VANETs, validated via Ethereum’s main network (MainNet) and comprehensive computation and communication evaluations.
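The first scheme's four-step flow (initial crypto-based authentication, shared key extraction, PHY challenge-response re-authentication) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual protocol: the toy channel samples, the sign-quantization key extraction, and the HMAC-based challenge-response are all hypothetical stand-ins for the scheme's real primitives.

```python
import hmac, hashlib, os

def extract_shared_key(channel_samples):
    # Stand-in for reciprocal-channel key extraction: quantize the
    # (assumed reciprocal) channel gains into bits and hash them.
    bits = "".join("1" if s > 0 else "0" for s in channel_samples)
    return hashlib.sha256(bits.encode()).digest()

def phy_challenge():
    # The verifier issues a fresh nonce as the PHY challenge.
    return os.urandom(16)

def phy_response(prover_key, nonce):
    # The prover tags the nonce with the shared PHY-derived key.
    return hmac.new(prover_key, nonce, hashlib.sha256).digest()

def reauthenticate(verifier_key, nonce, response):
    expected = hmac.new(verifier_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Both sides observe (nearly) the same channel, hence derive the same key.
channel = [0.8, -0.3, 1.2, -0.7, 0.1, -0.9]
k_verifier = extract_shared_key(channel)
k_prover = extract_shared_key(channel)
nonce = phy_challenge()
assert reauthenticate(k_verifier, nonce, phy_response(k_prover, nonce))
assert not reauthenticate(k_verifier, nonce, phy_response(b"wrong" * 8, nonce))
```

The point of the sketch is the division of labour: the expensive crypto-based step runs once, while each subsequent message only costs one lightweight challenge-response round.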
The analysis shows that the proposed methods yield a noteworthy reduction, ranging from approximately 70% to 99%, in both computation and communication overheads compared to the conventional approaches. This reduction pertains to the verification and transmission of 1,000 messages in total.
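The symmetric group-key distribution of the final proposal can be illustrated by wrapping one fresh session key per member under each member's pairwise key. Everything here is an illustrative assumption rather than the thesis's construction: the SHA-256-based keystream is a toy stream cipher (not production cryptography), and the member names and key sizes are made up.

```python
import hashlib, os

def keystream(key: bytes, n: int) -> bytes:
    # Toy SHA-256 counter-mode keystream (illustrative only).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def wrap(member_key: bytes, group_key: bytes) -> bytes:
    ks = keystream(member_key, len(group_key))
    return bytes(a ^ b for a, b in zip(group_key, ks))

def unwrap(member_key: bytes, wrapped: bytes) -> bytes:
    return wrap(member_key, wrapped)  # XOR stream cipher is its own inverse

# The group manager wraps one fresh session key per registered member;
# only a holder of a pairwise key recovers the group key.
group_key = os.urandom(16)
members = {name: os.urandom(16) for name in ["v1", "v2", "v3"]}
bundle = {name: wrap(k, group_key) for name, k in members.items()}
assert all(unwrap(members[n], bundle[n]) == group_key for n in members)
```

In the proposal itself, the wrapped-key bundle would be published via the smart contract rather than sent per member, which is what keeps the communication overhead low.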
Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models
Watermarking generative models consists of planting a statistical signal (watermark) in a model’s output so that it can be later verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes. We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used.
Our attack is based on two assumptions: (1) The attacker has access to a “quality oracle” that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) The attacker has access to a “perturbation oracle” which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only be easier to satisfy over time as models grow in capabilities and modalities.
We demonstrate the feasibility of our attack by instantiating it to attack three existing watermarking schemes for large language models: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.
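The generic attack described in this abstract is a quality-constrained random walk: repeatedly perturb the output, keeping only steps the quality oracle accepts. The toy instantiation below is an illustrative assumption, not any of the attacked schemes: "outputs" are word lists, the quality oracle is a trivial length check, and the "watermark" is simply the initial repetitive pattern that the walk destroys.

```python
import random

def remove_watermark(output, perturb, is_high_quality, steps=500, seed=0):
    """Generic watermark-removal random walk (sketch).

    perturb(x, rng)    -> candidate modification of x (perturbation oracle)
    is_high_quality(x) -> bool quality check (quality oracle)

    Accepted steps stay on the set of high-quality outputs; after enough
    mixing, the walk forgets the watermark's statistical signal.
    """
    rng = random.Random(seed)
    x = output
    for _ in range(steps):
        candidate = perturb(x, rng)
        if is_high_quality(candidate):  # reject steps that degrade quality
            x = candidate
    return x

VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon"]

def perturb(words, rng):
    # Replace one random word with a random vocabulary word.
    i = rng.randrange(len(words))
    return words[:i] + [rng.choice(VOCAB)] + words[i + 1:]

def is_high_quality(words):
    return len(words) == 8  # trivial stand-in for a real quality oracle

watermarked = ["alpha"] * 8
cleaned = remove_watermark(watermarked, perturb, is_high_quality)
assert is_high_quality(cleaned)   # quality (as defined here) is preserved
assert cleaned != watermarked     # the original pattern has been walked away
```

Note that the attacker never inspects the watermark itself; it only needs the two oracles, which is what makes the attack scheme-agnostic.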
Towards an Accurate and Secure Detector against Adversarial Perturbations
The vulnerability of deep neural networks to adversarial perturbations has
been widely perceived in the computer vision community. From a security
perspective, it poses a critical risk for modern vision systems, e.g., the
popular Deep Learning as a Service (DLaaS) frameworks. For protecting
off-the-shelf deep models while not modifying them, current algorithms
typically detect adversarial patterns through discriminative decomposition of
natural-artificial data. However, these decompositions are biased towards
frequency or spatial discriminability, thus failing to capture adversarial
patterns comprehensively. More seriously, successful defense-aware (secondary)
adversarial attack (i.e., evading the detector as well as fooling the model) is
practical under the assumption that the adversary is fully aware of the
detector (i.e., the Kerckhoffs's principle). Motivated by such facts, we
propose an accurate and secure adversarial example detector, relying on a
spatial-frequency discriminative decomposition with secret keys. It extends the
above works in two respects: 1) the introduced Krawtchouk basis provides better
spatial-frequency discriminability and thereby is more suitable for capturing
adversarial patterns than the common trigonometric or wavelet basis; 2) the
extensive parameters for decomposition are generated by a pseudo-random
function with secret keys, hence blocking the defense-aware adversarial attack.
Theoretical and numerical analysis demonstrates the increased accuracy and
security of our detector with respect to a number of state-of-the-art
algorithms.
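The keyed design in point 2) can be sketched as follows: the decomposition parameters are derived deterministically from a secret key, so the detector is reproducible for the defender but unpredictable to a defense-aware attacker. The SHA-256-seeded generator and the parameter ranges below are illustrative assumptions standing in for the paper's pseudo-random function.

```python
import hashlib, random

def keyed_parameters(secret_key: bytes, n_params: int):
    # PRF-style derivation: hash the secret key to seed a deterministic
    # generator, then draw the decomposition parameters from it
    # (e.g., per-channel parameters of a hypothetical keyed basis).
    seed = int.from_bytes(hashlib.sha256(secret_key).digest(), "big")
    rng = random.Random(seed)
    return [rng.uniform(0.1, 0.9) for _ in range(n_params)]

# Same key -> same decomposition; different key -> different decomposition,
# so an attacker without the key cannot reproduce the detector's view.
p1 = keyed_parameters(b"secret-key", 4)
p2 = keyed_parameters(b"secret-key", 4)
p3 = keyed_parameters(b"other-key", 4)
assert p1 == p2
assert p1 != p3
```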
"Le present est plein de l’avenir, et chargé du passé" : Vorträge des XI. Internationalen Leibniz-Kongresses, 31. Juli – 4. August 2023, Leibniz Universität Hannover, Deutschland. Band 3
[No abstract available] Deutsche Forschungsgemeinschaft (DFG), Projektnr. 517991912; VGH Versicherung; Niedersächsisches Ministerium für Wissenschaft und Kultur (MWK)
Numerical modeling for groundwater protection in the Venetian plain between the Brenta and Piave Rivers
The Ph.D. project tackled the scientific challenges that a water utility company in the northeast of Italy, Alto Trevigiano Servizi, must face in the elaboration of the Water Safety Plan (WSP), the most effective preventive tool to ensure good-quality water and the protection of consumers' health. The WSP guidelines were defined by the World Health Organization and were subsequently implemented in a European directive and in Italian law.
After an introduction to the scientific issues, the thesis describes the work done to reproduce in CATHY the model that the PhD student Tommaso Trentin built using the software FeFlow.
The study area has an extension of around 900 km² and is bounded to the north-east by the Piave river, to the west by a flow line parallel to the Brenta river, to the south by the Risorgive area, and to the north by the Montello and Colli Asolani.
The northern part is characterized by an undifferentiated aquifer, while the southern part hosts a multilayer system with 8 confined aquifers.
Some modifications, e.g., mesh refinement and a sensitivity analysis, were implemented in the model to improve its performance. The soil conductivity of the shallowest soil layer (1 m) was also changed following the indications of the Carta della permeabilità dei suoli from the ARPAV site, and the boundary conditions of the northern part of the domain were better defined.
Before the calibration step, the initial mesh hosting the multilayer system of 8 aquitards and 8 aquifers was cut at the bottom of the first unconfined aquifer. This allowed us to speed up the calibration and to focus on the aquifer directly influenced by the atmospheric boundary conditions and subject to recharge variability. The calibration was performed alternating FePEST and CATHY. FePEST, which already implements the PEST algorithm, made it easy to apply the pilot points method, which would have required too much time in CATHY. Both the bottom of the unconfined aquifer and the hydraulic conductivity field were calibrated. The improvement in terms of RMSE was relevant, the errors being reduced to one third. Once the calibrated model was obtained, a validation step was also performed. The resulting model allowed us to investigate an irrigation variation scenario, planned in compliance with the European directive indications, to save water: currently, a large area of the domain is subject to flood irrigation, which is considered no longer sustainable since it requires a large amount of water. The scenario considered a switch to sprinkler irrigation only. The results show a slight groundwater head decrease in the wells located in the area affected by the irrigation technique conversion. This result was confirmed by the difference in the total cumulative recharge over the domain between the combined flood-and-sprinkler case and the sprinkler-only case. The model appears not particularly affected by the irrigation modification but more sensitive to the hydraulic conductivity values: a map of the mean distribution of the recharge shows that the larger fraction of the recharge occurs where the hydraulic conductivity is larger.
In parallel with this project, a study of the numerical dispersion affecting the CATHY model was carried out. This study will be useful for future simulations of vulnerability to contamination, which require accurate solute transport modeling.
Due to lack of time, it was not possible to investigate the contaminant transport phenomenon in the study area to accurately define the wellhead protection areas, an important part of the WSPs, but the preliminary results obtained from the model we built can be considered a good starting point for future transport studies.
Performative Prediction: Past and Future
Predictions in the social world generally influence the target of prediction,
a phenomenon known as performativity. Self-fulfilling and self-negating
predictions are examples of performativity. Of fundamental importance to
economics, finance, and the social sciences, the notion has been absent from
the development of machine learning. In machine learning applications,
performativity often surfaces as distribution shift. A predictive model
deployed on a digital platform, for example, influences consumption and thereby
changes the data-generating distribution. We survey the recently founded area
of performative prediction that provides a definition and conceptual framework
to study performativity in machine learning. A consequence of performative
prediction is a natural equilibrium notion that gives rise to new optimization
challenges. Another consequence is a distinction between learning and steering,
two mechanisms at play in performative prediction. The notion of steering is in
turn intimately related to questions of power in digital markets. We review the
notion of performative power, which answers the question of how much a
platform can steer participants through its predictions. We end on a discussion
of future directions, such as the role that performativity plays in contesting
algorithmic systems.
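The deploy-then-retrain dynamics this abstract describes can be shown numerically. The linear location-shift model below is a hypothetical assumption chosen for tractability: deploying prediction θ shifts the population mean to a + bθ, and repeated retraining converges to a performatively stable point, the fixed point θ* = a/(1 − b) for |b| < 1.

```python
def retrain(theta, a=1.0, b=0.5):
    # Deploying prediction theta shifts the data-generating distribution:
    # under the assumed model D(theta), the population mean becomes
    # a + b * theta. Retraining on fresh data returns that mean.
    return a + b * theta

theta = 0.0
for _ in range(50):
    theta = retrain(theta)  # deploy, observe shifted data, refit

# Repeated risk minimization converges to the performatively stable point
# theta* = a / (1 - b) = 2.0, a fixed point of the deploy-retrain loop.
assert abs(theta - 2.0) < 1e-9
```

The stable point is an equilibrium of the feedback loop, not necessarily the performative optimum; distinguishing the two is one of the optimization challenges the survey discusses.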
Towards trustworthy computing on untrustworthy hardware
Historically, hardware was thought to be inherently secure and trusted due to its
obscurity and the isolated nature of its design and manufacturing. In the last two
decades, however, hardware trust and security have emerged as pressing issues.
Modern day hardware is surrounded by threats manifested mainly in undesired
modifications by untrusted parties in its supply chain, unauthorized and pirated
selling, injected faults, and system and microarchitectural level attacks. These threats,
if realized, are expected to push hardware to abnormal and unexpected behaviour
causing real-life damage and significantly undermining our trust in the electronic and
computing systems we use in our daily lives and in safety critical applications. A
large number of detective and preventive countermeasures have been proposed in the
literature. However, our knowledge of the potential consequences of real-life threats
to hardware trust is lacking, given the limited number of real-life reports and the
plethora of ways in which hardware trust could be undermined. With
this in mind, run-time monitoring of hardware combined with active mitigation of
attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed
as the last line of defence. This last line of defence allows us to face the issue of live
hardware mistrust rather than turning a blind eye to it or being helpless once it occurs.
This thesis proposes three different frameworks towards trustworthy computing
on untrustworthy hardware. The presented frameworks are adaptable to different
applications, independent of the design of the monitored elements, based on
autonomous security elements, and are computationally lightweight. The first
framework is concerned with explicit violations and breaches of trust at run-time,
with an untrustworthy on-chip communication interconnect presented as a potential
offender. The framework is based on the guiding principles of component guarding,
data tagging, and event verification. The second framework targets hardware elements
with inherently variable and unpredictable operational latency and proposes a
machine-learning based characterization of these latencies to infer undesired latency
extensions or denial of service attacks. The framework is implemented on a DDR3
DRAM after showing its vulnerability to obscured latency extension attacks. The
third framework studies the possibility of the deployment of untrustworthy hardware
elements in the analog front end, and the consequent integrity issues that might arise
at the analog-digital boundary of system on chips. The framework uses machine
learning methods and the unique temporal and arithmetic features of signals at this
boundary to monitor their integrity and assess their trust level.
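The second framework's idea of characterizing an element's normal, variable latency and flagging extensions can be sketched with a simple statistical profile. The mean/standard-deviation model, the threshold factor k, and the latency values below are illustrative assumptions, far simpler than the thesis's machine-learning characterization.

```python
import statistics

def fit_latency_profile(samples):
    # Learn the element's normal (variable) latency from benign traces.
    return statistics.mean(samples), statistics.stdev(samples)

def is_latency_attack(profile, observed, k=4.0):
    # Flag latencies far above the learned profile as suspected
    # latency-extension / denial-of-service behaviour.
    mu, sigma = profile
    return observed > mu + k * sigma

# Hypothetical benign latency samples (arbitrary time units).
normal = [10.0, 11.2, 9.8, 10.5, 10.1, 11.0, 9.9, 10.4]
profile = fit_latency_profile(normal)
assert not is_latency_attack(profile, 11.5)  # within normal variability
assert is_latency_attack(profile, 30.0)      # obscured latency extension
```

A real monitor must tolerate the element's inherent unpredictability, which is why the thesis learns the latency distribution rather than assuming a fixed timing bound.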
Self-prior guided pixel adversarial networks for blind image inpainting
Blind image inpainting involves two critical aspects, i.e. "where to inpaint" and "how to inpaint". Knowing "where to inpaint" can eliminate the interference arising from corrupted pixel values; a good "how to inpaint" strategy yields high-quality inpainted results robust to various corruptions. In existing methods, these two aspects usually lack explicit and separate consideration. This paper fully explores these two aspects and proposes a self-prior guided inpainting network (SIN). The self-priors are obtained by detecting semantic-discontinuous regions and by predicting global semantic structures of the input image. On the one hand, the self-priors are incorporated into the SIN, which enables the SIN to perceive valid context information from uncorrupted regions and to synthesize semantic-aware textures for corrupted regions. On the other hand, the self-priors are reformulated to provide a pixel-wise adversarial feedback and a high-level semantic structure feedback, which can promote the semantic continuity of inpainted images. Experimental results demonstrate that our method achieves state-of-the-art performance in metric scores and in visual quality. It has an advantage over many existing methods that assume "where to inpaint" is known in advance. Extensive experiments on a series of related image restoration tasks validate the effectiveness of our method in obtaining high-quality inpainting results.