
    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has caused the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach for authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach.
The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis that explores the potential of Transformers for behavioural biometrics, introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
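The Transformer-based approach described above can be illustrated with a minimal sketch. The architecture below is purely hypothetical (a single self-attention head applied to a linearly embedded accelerometer window, mean-pooled into one template vector); it is not one of the thesis's actual models, and all dimensions and weight names are illustrative.

```python
# Hypothetical sketch: one self-attention head over an accelerometer window.
# All shapes, weights, and names are illustrative, not the thesis architecture.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d) embedded sensor sequence -> (seq_len, d) context."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) attention logits
    return softmax(scores, axis=-1) @ V

d = 8
acc = rng.normal(size=(50, 3))                # toy 3-axis window: 50 x,y,z readings
W_embed = rng.normal(size=(3, d))             # linear embedding of raw sensor axes
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

ctx = self_attention(acc @ W_embed, Wq, Wk, Wv)
embedding = ctx.mean(axis=0)                  # mean-pool into a template vector
print(embedding.shape)                        # (8,)
```

In a verification setting, such template vectors for an enrolled user and a probe window would be compared (e.g. by cosine distance) against a threshold.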

    ENHANCING CLOUD SYSTEM RUNTIME TO ADDRESS COMPLEX FAILURES

    As the reliance on cloud systems intensifies in our progressively digital world, understanding and reinforcing their reliability becomes more crucial than ever. Despite impressive advancements in augmenting the resilience of cloud systems, the growing incidence of complex failures now poses a substantial challenge to the availability of these systems. With cloud systems continuing to scale and increase in complexity, failures not only become more elusive to detect but can also lead to more catastrophic consequences. Such failures question the foundational premises of conventional fault-tolerance designs, necessitating the creation of novel system designs to counteract them. This dissertation aims to enhance distributed systems’ capabilities to detect, localize, and react to complex failures at runtime. To this end, this dissertation makes contributions to address three emerging categories of failures in cloud systems. The first part delves into the investigation of partial failures, introducing OmegaGen, a tool adept at generating tailored checkers for detecting and localizing such failures. The second part grapples with silent semantic failures prevalent in cloud systems, presenting our study findings and introducing Oathkeeper, a tool that leverages past failures to infer rules and expose these silent issues. The third part explores solutions to slow failures via RESIN, a framework specifically designed to detect, diagnose, and mitigate memory leaks in cloud-scale infrastructures, developed in collaboration with Microsoft Azure. The dissertation concludes by offering insights into future directions for the construction of reliable cloud systems.
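The notion of a runtime checker for partial failures can be sketched conceptually. The code below is an illustrative stand-in, not OmegaGen's actual generated checkers: a probe mimicking a module's operation runs against a deadline, and missing the deadline is flagged as a suspected partial failure (a stall) rather than a crash.

```python
# Conceptual sketch of a watchdog-style checker for partial failures.
# Names and the deadline policy are illustrative, not OmegaGen's design.
import threading
import time

def probe_with_deadline(probe, deadline_s):
    """Run probe() in a side thread; report 'stalled' if it misses the deadline."""
    done = threading.Event()

    def run():
        probe()          # mimicked operation against the monitored module
        done.set()

    threading.Thread(target=run, daemon=True).start()
    return "ok" if done.wait(timeout=deadline_s) else "stalled"

assert probe_with_deadline(lambda: None, 0.5) == "ok"        # healthy module
assert probe_with_deadline(lambda: time.sleep(2), 0.2) == "stalled"  # stuck module
```

A key design point, as the dissertation's framing suggests, is that such checkers must be tailored to the module they guard; a generic liveness ping would miss failures where the process is alive but a specific operation silently hangs.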

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    BASALISC: Programmable Hardware Accelerator for BGV Fully Homomorphic Encryption

    Fully Homomorphic Encryption (FHE) allows for secure computation on encrypted data. Unfortunately, huge memory size, computational cost, and bandwidth requirements limit its practicality. We present BASALISC, an architecture family of hardware accelerators that aims to substantially accelerate FHE computations in the cloud. BASALISC is the first to implement the BGV scheme with fully-packed bootstrapping – the noise removal capability necessary for arbitrary-depth computation. It supports a customized version of bootstrapping that can be instantiated with hardware multipliers optimized for area and power. BASALISC is a three-abstraction-layer RISC architecture, designed for a 1 GHz ASIC implementation and underway toward 150 mm² die tape-out in a 12nm GF process. BASALISC's four-layer memory hierarchy includes a two-dimensional conflict-free inner memory layer that enables 32 Tb/s radix-256 NTT computations without pipeline stalls. Its conflict-resolution permutation hardware is generalized and re-used to compute BGV automorphisms without throughput penalty. BASALISC also has a custom multiply-accumulate unit to accelerate BGV key switching. The BASALISC toolchain comprises a custom compiler and a joint performance and correctness simulator. To evaluate BASALISC, we study its physical realizability, emulate and formally verify its core functional units, and we study its performance on a set of benchmarks. Simulation results show a speedup of more than 5,000× over HElib – a popular software FHE library.
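The number-theoretic transform (NTT) mentioned above is the core polynomial-multiplication primitive that the accelerator's conflict-free memory layer feeds. As a point of reference, a naive O(n²) NTT over toy parameters can be sketched as follows; the parameters (q = 17, n = 4, ω = 4) are illustrative only, chosen so ω is a primitive n-th root of unity mod q, and bear no relation to BASALISC's radix-256 hardware.

```python
# Naive O(n^2) NTT over Z_q with toy parameters (illustrative only).
q = 17      # toy modulus; n must divide q - 1
n = 4       # toy transform size
omega = 4   # primitive 4th root of unity mod 17: 4^2 = 16 = -1, 4^4 = 1 (mod 17)

def ntt(a, w=omega):
    """Forward NTT of a length-n list over Z_q."""
    return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q for i in range(n)]

def intt(A):
    """Inverse NTT: forward transform with omega^-1, then scale by n^-1."""
    w_inv = pow(omega, q - 2, q)   # modular inverses via Fermat's little theorem
    n_inv = pow(n, q - 2, q)
    return [x * n_inv % q for x in ntt(A, w_inv)]

a = [1, 2, 3, 4]
assert intt(ntt(a)) == a           # the transform round-trips
```

Hardware NTT units use the O(n log n) butterfly decomposition instead; the conflict-free memory layout in the abstract exists precisely so that each butterfly stage can read and write its operands without bank conflicts.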

    Waks-On/Waks-Off: Fast Oblivious Offline/Online Shuffling and Sorting with Waksman Networks

    As more privacy-preserving solutions leverage trusted execution environments (TEEs) like Intel SGX, it becomes pertinent that these solutions can by design thwart TEE side-channel attacks that research has brought to light. In particular, such solutions need to be fully oblivious to circumvent leaking private information through memory or timing side channels. In this work, we present fast fully oblivious algorithms for shuffling and sorting data. Oblivious shuffling and sorting are two fundamental primitives that are frequently used for permuting data in privacy-preserving solutions. We present novel oblivious shuffling and sorting algorithms in the offline/online model such that the bulk of the computation can be done in an offline phase that is independent of the data to be permuted. The resulting online phase provides performance improvements over state-of-the-art oblivious shuffling and sorting algorithms both asymptotically (O(βn log n) vs. O(βn log² n)) and concretely (>5× and >3× speedups), when permuting n items each of size β. Our work revisits Waksman networks, and it uses the key observation that setting the control bits of a Waksman network for a uniformly random shuffle is independent of the data to be shuffled. However, setting the control bits of a Waksman network efficiently and fully obliviously poses a challenge, and we provide a novel algorithm to this end. The total costs (inclusive of offline computation) of our WaksShuffle shuffling algorithm and our WaksSort sorting algorithm are lower than those of all other fully oblivious shuffling and sorting algorithms when the items are at least moderately sized (i.e., β > 1400 B), and the performance gap only widens as the item sizes increase.
Furthermore, WaksShuffle improves the online cost of oblivious shuffling by >5× for shuffling 2^20 items of any size; similarly, WaksShuffle+QS, our other sorting algorithm, provides >2.7× speedups in the online cost of oblivious sorting.
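The building block underlying such permutation networks is a conditional swap whose memory accesses are identical whether or not the swap occurs, so an observer of addresses and timing learns nothing about the secret control bit. The sketch below shows only this primitive; it is not the paper's WaksShuffle routing, whose contribution is precisely the efficient, oblivious generation of the network's control bits.

```python
# Constant-time conditional swap: the 2x2 switch from which Waksman-style
# oblivious shuffling/sorting networks are composed.
def oblivious_swap(a, b, bit):
    """Swap integers a and b iff bit == 1, without branching on bit."""
    mask = -bit            # bit 0 -> 0b000...0, bit 1 -> 0b111...1 (two's complement)
    diff = (a ^ b) & mask  # all-zero when bit == 0, so the XORs are no-ops
    return a ^ diff, b ^ diff

assert oblivious_swap(3, 9, 0) == (3, 9)
assert oblivious_swap(3, 9, 1) == (9, 3)
```

A Waksman network wires roughly n log n of these switches into a fixed topology that can realize any permutation of n inputs, which is why the data-independent offline computation of control bits translates directly into a fast, oblivious online pass.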

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern day hardware is surrounded by threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system and microarchitectural level attacks. These threats, if realized, are expected to push hardware to abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender.
The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial of service attacks. The framework is implemented on a DDR3 DRAM after showing its vulnerability to obscured latency extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of system on chips. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
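The idea behind the second framework's latency characterization can be sketched in miniature. The code below is a deliberately simplified stand-in: a z-score threshold over a benign latency baseline plays the role of the learned model, and all numbers are invented for illustration, not measurements from the thesis.

```python
# Simplified stand-in for ML-based latency characterization: fit a baseline
# of benign operation latencies, then flag extreme outliers as suspected
# latency-extension / denial-of-service attacks. All values are invented.
import statistics

def fit_baseline(latencies_us):
    """Summarize benign latencies as (mean, standard deviation)."""
    return statistics.mean(latencies_us), statistics.stdev(latencies_us)

def is_suspicious(latency_us, mean, std, k=4.0):
    """Flag a latency more than k standard deviations above the baseline mean."""
    return latency_us > mean + k * std

baseline = [100, 105, 98, 110, 102, 97, 104, 99]   # hypothetical benign access times (us)
mu, sigma = fit_baseline(baseline)
assert not is_suspicious(112, mu, sigma)           # ordinary variation
assert is_suspicious(400, mu, sigma)               # extended latency -> flagged
```

The reason a learned characterization (rather than a fixed threshold) is needed, per the abstract, is that DRAM latencies are inherently variable, so an attacker can hide modest extensions inside normal variation unless the monitor models that distribution.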

    Improving low latency applications for reconfigurable devices

    This thesis seeks to improve low latency application performance via architectural improvements in reconfigurable devices. This is achieved by improving resource utilisation and access, and by exploiting the different environments within which reconfigurable devices are deployed. Our first contribution leverages devices deployed at the network level to enable the low latency processing of financial market data feeds. Financial exchanges transmit messages via two identical data feeds to reduce the chance of message loss. We present an approach to arbitrate these redundant feeds at the network level using a Field-Programmable Gate Array (FPGA). With support for any messaging protocol, we evaluate our design using the NASDAQ TotalView-ITCH, OPRA, and ARCA data feed protocols, and provide two simultaneous outputs: one prioritising low latency, and one prioritising high reliability with three dynamically configurable windowing methods. Our second contribution is a new ring-based architecture for low latency, parallel access to FPGA memory. Traditional FPGA memory is formed by grouping block memories (BRAMs) together and accessing them as a single device. Our architecture accesses these BRAMs independently and in parallel. Targeting memory-based computing, which stores pre-computed function results in memory, we benefit low latency applications that rely on: highly-complex functions; iterative computation; or many parallel accesses to a shared resource. We assess square root, power, trigonometric, and hyperbolic functions within the FPGA, and provide a tool to convert Python functions to our new architecture. Our third contribution extends the ring-based architecture to support any FPGA processing element. We unify E heterogeneous processing elements within compute pools, with each element implementing the same function, and the pool serving D parallel function calls. 
Our implementation-agnostic approach supports processing elements with different latencies, implementations, and pipeline lengths, as well as non-deterministic latencies. Compute pools evenly balance access to processing elements across the entire application, and are evaluated by implementing eight different neural network activation functions within an FPGA.
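The memory-based computing idea in the second contribution can be sketched in software terms: pre-compute a function into a table (the analogue of initialising BRAM contents), then answer each call by a single lookup instead of iterative computation. The function choice and the 10-bit input width below are illustrative, not the thesis design.

```python
# Software analogue of memory-based computing: a pre-computed lookup table
# stands in for BRAM contents. The 10-bit width is an illustrative choice.
import math

BITS = 10
SCALE = 1 << BITS                 # table indexed by a 10-bit quantized input

# "Offline" phase: fill the table once, like initialising BRAM at configuration.
sqrt_lut = [math.sqrt(i / SCALE) for i in range(SCALE)]

def lut_sqrt(x):
    """Approximate sqrt(x) for x in [0, 1) by a single table lookup."""
    return sqrt_lut[int(x * SCALE)]

assert abs(lut_sqrt(0.25) - 0.5) < 1e-3
```

The ring-based architecture's contribution is then about access, not the table itself: by addressing the constituent BRAMs independently and in parallel, many such lookups can be served per cycle instead of serializing on one memory port.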

    Application of knowledge management principles to support maintenance strategies in healthcare organisations

    Healthcare is a vital service that touches people's lives on a daily basis by providing treatment and resolving patients' health problems through its staff. Human lives are ultimately dependent on the skilled hands of the staff and those who manage the infrastructure that supports the daily operations of the service, making this a compelling subject for dedicated research. However, the UK healthcare sector is undergoing rapid changes, driven by rising costs, technological advancements, changing patient expectations, and increasing pressure to deliver sustainable healthcare. With the global rise in healthcare challenges, the need for sustainable healthcare delivery has become imperative. Sustainable healthcare delivery requires the integration of various practices that enhance the efficiency and effectiveness of healthcare infrastructural assets. One critical area that requires attention is the management of healthcare facilities. Healthcare facilities are considered one of the core elements in the delivery of effective healthcare services, as shortcomings in the provision of facilities management (FM) services in hospitals may have far more drastic negative effects than in other types of buildings. An essential element in healthcare FM is the relationship between action and knowledge. With a full understanding of infrastructural assets, it is possible to improve and manage buildings, make them suitable to the needs of users, and ensure the functionality of the structure and its processes. The premise of FM is that an organisation's effectiveness and efficiency are linked to the physical environment in which it operates, and that improving the environment can result in direct benefits in operational performance. The goal of healthcare FM is to support the achievement of organisational mission and goals by designing and managing space and infrastructural assets in the best combination of suitability, efficiency, and cost.
In operational terms, performance refers to how well a building contributes to fulfilling its intended functions. Therefore, comprehensive deployment of efficient FM approaches is essential for ensuring quality healthcare provision while positively impacting overall patient experiences. In this regard, incorporating knowledge management (KM) principles into hospitals' FM processes contributes significantly to ensuring sustainable healthcare provision and enhancement of patient experiences. Organisations implementing KM principles are better positioned to navigate the constantly evolving business ecosystem. Furthermore, KM is vital in process and service improvement, strategic decision-making, and organisational adaptation and renewal. In this regard, KM principles can be applied to improve hospital FM, thereby ensuring sustainable healthcare delivery. Knowledge management assumes that organisations that manage their organisational and individual knowledge more effectively will be able to cope more successfully with the challenges of the new business ecosystem. There is also the argument that KM plays a crucial role in improving processes and services, strategic decision-making, and adapting and renewing an organisation. The goal of KM is to aid action – providing "a knowledge pull" rather than the information overload most people experience in healthcare FM. Other motivations for seeking better KM in healthcare FM include patient safety, evidence-based care, and cost efficiency as the dominant drivers. The strongest evidence for the success of such approaches exists at knowledge bottlenecks, such as infection prevention and control, working safely, compliance, automated systems and reminders, and recall based on best practices. The ability to cultivate, nurture, and maximise knowledge at multiple levels and in multiple contexts is one of the most significant challenges for those responsible for KM.
However, despite the potential benefits, applying KM principles in hospital facilities is still limited. There is a lack of understanding of how KM can be effectively applied in this context, and few studies have explored the potential challenges and opportunities associated with implementing KM principles in hospital facilities for sustainable healthcare delivery. This study explores applying KM principles to support maintenance strategies in healthcare organisations. The study also explores the challenges and opportunities, for healthcare organisations and FM practitioners, in operationalising a framework which draws the interconnectedness between healthcare. The study begins by defining healthcare FM and its importance in the healthcare industry. It then discusses the concept of KM and the different types of knowledge that are relevant in the healthcare FM sector. The study also examines the challenges that healthcare FM faces in managing knowledge and how the application of KM principles can help to overcome these challenges. The study then explores the different KM strategies that can be applied in healthcare FM. The benefits of KM include improved patient outcomes, reduced costs, increased efficiency, and enhanced collaboration among healthcare professionals. Additionally, issues like creating a culture of innovation, technology, and benchmarking are considered. In addition, a framework that integrates the essential concepts of KM in healthcare FM is presented and discussed. The field of KM is introduced as a complex adaptive system with numerous possibilities and challenges. In this context, and in consideration of healthcare FM, five objectives have been formulated to achieve the research aim.
As part of the research, a number of objectives are evaluated, including appraising the concept of KM and how knowledge is created, stored, transferred, and utilised in healthcare FM; evaluating the impact of organisational structure on job satisfaction; and exploring how cultural differences impact knowledge sharing and performance in healthcare FM organisations. This study uses a combination of qualitative methods, including meetings, observations, document analysis (internal and external), and semi-structured interviews, to capture the subjective experiences and attitudes of healthcare FM employees and to understand the phenomenon within a real-world context, using open questions to allow probing where appropriate and to facilitate KM development in the delivery and practice of healthcare FM. The study describes the research methodology using the theoretical concept of the "research onion". The qualitative research was conducted in NHS acute and non-acute hospitals in Northwest England. Findings from the research study revealed that while the concept of KM has grown significantly in recent years, KM in healthcare FM has received little or no attention. The target population was fifty (five FM directors, five academics, five industry experts, ten managers, ten supervisors, five team leaders, and ten operatives). These seven groups were purposively selected as the target population because they play a crucial role in KM enhancement in healthcare FM. Face-to-face interviews were conducted with all participants based on their pre-determined availability. Of the target population of 50, 25 were successfully interviewed, at which point saturation was reached. Data collected from the interviews were coded and analysed using NVivo to identify themes and patterns related to KM in healthcare FM. The study is divided into eight major sections.
First, it discusses literature findings regarding healthcare FM and KM, including underlying trends in FM, KM in general, and KM in healthcare FM. Second, the research establishes the study's methodology, introducing the five research objectives, questions, and hypotheses. The chapter introduces the literature on methodology elements, including philosophical views and inquiry strategies. The interview and data analysis chapters examine the feedback from the interviews. Lastly, a conclusion and recommendations summarise the research objectives and suggest further research. Overall, this study highlights the importance of KM in healthcare FM and provides insights for healthcare FM directors, managers, supervisors, academics, researchers, and operatives on effectively leveraging knowledge to improve patient care and organisational effectiveness.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    PUF for the Commons: Enhancing Embedded Security on the OS Level

    Security is essential for the Internet of Things (IoT). Cryptographic operations for authentication and encryption commonly rely on random input of high entropy and secure, tamper-resistant identities, which are difficult to obtain on constrained embedded devices. In this paper, we design and analyze a generic integration of physically unclonable functions (PUFs) into the IoT operating system RIOT, which supports about 250 platforms. Our approach leverages uninitialized SRAM to act as the digital fingerprint for heterogeneous devices. We ground our design on an extensive study of PUF performance in the wild, which involves SRAM measurements on more than 700 IoT nodes that aged naturally in the real world. We quantify static SRAM bias as well as the aging effects of devices, and incorporate the results in our system. This work closes a previously identified gap of missing statistically significant sample sizes for testing the unpredictability of PUFs. Our experiments on COTS devices with 64 kB of SRAM indicate that secure random seeds derived from the SRAM PUF provide 256 bits of security, and device-unique keys provide more than 128 bits of security. In a practical security assessment we show that SRAM PUFs resist moderate attack scenarios, which greatly improves the security of low-end IoT devices.
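The data flow of an SRAM PUF can be illustrated with a simulated model. The sketch below is not RIOT's implementation: each device is modelled by a per-cell power-up bias, noisy reads are majority-voted into a (mostly) stable fingerprint, and the fingerprint is hashed into a fixed-length seed. Real designs additionally apply error correction (a fuzzy extractor) before any key derivation; all parameters here are invented.

```python
# Illustrative SRAM-PUF model: bias + read noise -> majority vote -> seed.
# The bias model, noise rate, and sizes are all invented for illustration.
import hashlib
import random

def powerup_read(bias, rng, noise=0.05):
    """One simulated power-up: each cell follows its bias, flipped with prob. noise."""
    return [b ^ (rng.random() < noise) for b in bias]

def fingerprint(bias, rng, reads=5):
    """Majority-vote several power-up reads into a mostly stable bit string."""
    votes = zip(*(powerup_read(bias, rng) for _ in range(reads)))
    return [int(sum(v) > reads // 2) for v in votes]

rng = random.Random(42)
bias_a = [rng.randint(0, 1) for _ in range(1024)]   # device A's manufacturing skew
bias_b = [rng.randint(0, 1) for _ in range(1024)]   # device B's manufacturing skew

fp_a1 = fingerprint(bias_a, rng)
fp_a2 = fingerprint(bias_a, rng)
fp_b  = fingerprint(bias_b, rng)

# Same device reproduces nearly the same fingerprint; devices differ widely.
assert sum(x != y for x, y in zip(fp_a1, fp_a2)) <= 16
assert sum(x != y for x, y in zip(fp_a1, fp_b)) >= 300
seed = hashlib.sha256(bytes(fp_a1)).digest()        # input to seeding / key derivation
```

The two assertions mirror the two properties the paper evaluates at scale: reproducibility on the same device (so derived seeds and keys are stable) and uniqueness across devices (so fingerprints serve as identities).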