Bridging formal methods and machine learning with model checking and global optimisation
Formal methods and machine learning are two research fields with drastically different foundations and philosophies. Formal methods use mathematically rigorous techniques for the specification, development, and verification of software and hardware systems. Machine learning focuses on pragmatic approaches that gradually improve a parameterised model by observing a training data set. While the two fields historically lacked communication, this trend has changed in the past few years with an outburst of research interest in the robustness verification of neural networks. This paper briefly reviews these works and focuses on the urgent need for broader and more in-depth communication between the two fields, with the ultimate goal of developing learning-enabled systems with excellent performance and acceptable safety and security. We present a specification language, MLS2, and show that it can express a set of known safety and security properties, including generalisation, uncertainty, robustness, data poisoning, backdoors, model stealing, membership inference, model inversion, interpretability, and fairness. To verify MLS2 properties, we promote global optimisation-based methods, which have provable guarantees on convergence to the optimal solution. Many of them have theoretical bounds on the gap between the current solution and the optimal solution.
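The kind of convergence guarantee the abstract alludes to can be illustrated with a classic global-optimisation scheme: Lipschitz branch-and-bound (in the style of Piyavskii/Shubert), which maintains a certified lower bound and therefore a provable gap between the best solution found and the global optimum. The sketch below is a hypothetical one-dimensional illustration, not the paper's MLS2 tooling; the objective, the Lipschitz constant L, and the tolerance are all assumptions made for the example:

```python
import heapq

def lipschitz_minimise(f, a, b, L, tol=1e-2, max_iter=5000):
    """Branch-and-bound minimisation of an L-Lipschitz f on [a, b].

    Returns (x_best, f_best, gap), where gap certifies optimality:
    the true global minimum lies in [f_best - gap, f_best].
    """
    fa, fb = f(a), f(b)
    best_x, best_f = (a, fa) if fa <= fb else (b, fb)
    # Heap entries: (lower_bound, left, f_left, right, f_right).
    # For an L-Lipschitz f, (f(l)+f(r))/2 - L*(r-l)/2 underestimates
    # the minimum of f on [l, r].
    lb0 = (fa + fb) / 2 - L * (b - a) / 2
    heap = [(lb0, a, fa, b, fb)]
    for _ in range(max_iter):
        lb, l, fl, r, fr = heapq.heappop(heap)
        if best_f - lb <= tol:          # provable gap reached
            heapq.heappush(heap, (lb, l, fl, r, fr))
            break
        m = (l + r) / 2                 # split the most promising interval
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm
        for (u, fu, v, fv) in ((l, fl, m, fm), (m, fm, r, fr)):
            lbi = (fu + fv) / 2 - L * (v - u) / 2
            heapq.heappush(heap, (lbi, u, fu, v, fv))
    gap = best_f - heap[0][0]           # smallest surviving lower bound
    return best_x, best_f, gap
```

For example, minimising f(t) = (t - 1.3)^2 on [0, 4] with L = 6 returns a point near 1.3 together with a gap that bounds the distance of the returned value from the true minimum.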
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Evaluating Architectural Safeguards for Uncertain AI Black-Box Components
Although tremendous progress has been made in Artificial Intelligence (AI), this progress entails new challenges. The growing complexity of learning tasks requires more complex AI components, which increasingly exhibit unreliable behaviour. In this book, we present a model-driven approach to model architectural safeguards for AI components and analyse their effect on the overall system reliability.
Automatic Generation of Personalized Recommendations in eCoaching
This thesis concerns eCoaching for personalised, real-time lifestyle support using information and communication technology. The challenge is to design, develop, and technically evaluate a prototype of an intelligent eCoach that automatically generates personalised, evidence-based recommendations for a healthier lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are represented semantically, and artificially intelligent algorithms automatically generate meaningful, personalised, context-based recommendations for reducing sedentary time. The thesis uses the well-established design-science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
Towards trustworthy computing on untrustworthy hardware
Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats, manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. These threats, if realized, are expected to push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs.
This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after showing its vulnerability to obscured latency-extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems on chip. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
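The second framework's idea, characterising normal operational latency and flagging extensions at run time, can be roughly sketched as a two-phase detector: learn the latency distribution during a trusted window, then flag statistically implausible latencies. This is a hypothetical toy (a simple z-score outlier test), not the thesis's actual machine-learning models; the class name, threshold, and latency values are assumptions for illustration:

```python
import statistics

class LatencyMonitor:
    """Toy run-time latency monitor: characterise normal operation
    latency from a trusted training window, then flag latencies that
    look like obscured latency-extension or denial-of-service attacks.
    (Hypothetical sketch, not the thesis's actual models.)
    """

    def __init__(self, z_threshold=4.0):
        self.z_threshold = z_threshold
        self.mean = None
        self.stdev = None

    def train(self, trusted_latencies_ns):
        # Characterisation phase: fit a simple Gaussian model of
        # normal latency from measurements taken while trusted.
        self.mean = statistics.fmean(trusted_latencies_ns)
        self.stdev = statistics.stdev(trusted_latencies_ns)

    def check(self, latency_ns):
        # Monitoring phase: flag latencies far above the learned norm.
        z = (latency_ns - self.mean) / self.stdev
        return z > self.z_threshold
```

A monitor trained on nominal DRAM read latencies around 50 ns would pass an ordinary 53 ns access but flag an 80 ns access as a suspected latency extension.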
Towards Automated Detection of Single-Trace Side-Channel Vulnerabilities in Constant-Time Cryptographic Code
Although cryptographic algorithms may be mathematically secure, it is often possible to leak secret information from the implementation of the algorithms. Timing and power side-channel vulnerabilities are some of the most widely considered threats to cryptographic algorithm implementations. Timing vulnerabilities may be easier to detect and exploit, and all high-quality cryptographic code today should be written in constant-time style. However, this does not prevent power side-channels from existing. With constant-time code, potential attackers can resort to power side-channel attacks to try to leak secrets. Detecting potential power side-channel vulnerabilities is a tedious task, as it requires analyzing code at the assembly level and reasoning about which instructions could be leaking information based on their operands and their values. To help make the process of detecting potential power side-channel vulnerabilities easier for cryptographers, this work presents Pascal: Power Analysis Side Channel Attack Locator, a tool that introduces novel symbolic register analysis techniques for binary analysis of constant-time cryptographic algorithms and verifies locations of potential power side-channel vulnerabilities with high precision. Pascal is evaluated on a number of implementations of post-quantum cryptographic algorithms, and it is able to find dozens of previously reported single-trace power side-channel vulnerabilities in these algorithms, all in an automated manner.
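The core observation, that constant-time code can still leak through power, can be sketched with the standard Hamming-weight leakage model: even when the instruction sequence is secret-independent, an operand such as a constant-time selection mask takes values whose Hamming weight (and hence power draw) depends on the secret bit. The code below is a hypothetical Python illustration of this effect, not Pascal's actual symbolic register analysis:

```python
def hamming_weight(x: int) -> int:
    """Number of set bits; a standard first-order power-leakage model."""
    return bin(x).count("1")

def ct_select(secret_bit: int, a: int, b: int, width: int = 32) -> int:
    """Constant-time select: returns a if secret_bit == 1 else b.
    No secret-dependent branch, so timing is identical either way."""
    mask = (-secret_bit) & ((1 << width) - 1)   # all-ones if 1, zero if 0
    return (a & mask) | (b & ~mask & ((1 << width) - 1))

# Timing is constant, but the mask register's Hamming weight is 32 or 0
# depending on the secret bit -- exactly the kind of operand-dependent
# leak that a single power trace can expose.
mask_hw = [hamming_weight((-bit) & 0xFFFFFFFF) for bit in (0, 1)]
```

The two secret values yield mask Hamming weights of 0 and 32, a maximal difference: a classifier over a single power trace could distinguish them even though no branch or memory access depends on the secret.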
LIPIcs, Volume 261, ICALP 2023, Complete Volume