
    Development of an Event Management Web Application For Students: A Focus on Back-end

    Managing schedules can be challenging for students, with different calendars on various platforms leading to confusion and missed events. To address this problem, this thesis presents the development of an event management website designed to help students stay organized and motivated. With a focus on the application's back-end, this thesis explores the technology stack used to build the website and the implementation details of each chosen technology. By providing a detailed case study of the website development process, this thesis serves as a helpful resource for future developers looking to build their own web applications.

    Measuring the impact of COVID-19 on hospital care pathways

    Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital’s new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
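The two indicators the study tracks, mean length of stay and the share of pathways conforming to a normative model, can be sketched in a few lines. This is a minimal illustration, not the paper's method (which uses process-mining conformance checking rather than exact sequence matching); the normative model and event names here are assumptions.

```python
from datetime import datetime

# Assumed toy normative pathway; real models would come from process mining.
NORMATIVE = ("arrival", "triage", "treatment", "discharge")

def mean_length_of_stay(cases):
    """Mean stay in hours; each case is a (start, end) datetime pair."""
    stays = [(end - start).total_seconds() / 3600 for start, end in cases]
    return sum(stays) / len(stays)

def conformance_rate(traces, model=NORMATIVE):
    """Fraction of traces whose event sequence exactly matches the model.
    (A simplification of replay-based conformance checking.)"""
    return sum(1 for t in traces if tuple(t) == model) / len(traces)

cases = [(datetime(2024, 1, 1, 8, 0), datetime(2024, 1, 1, 12, 0)),
         (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 11, 0))]
print(mean_length_of_stay(cases))  # 3.0 (hours)

traces = [("arrival", "triage", "treatment", "discharge"),
          ("arrival", "treatment", "discharge")]
print(conformance_rate(traces))  # 0.5
```

Comparing these two numbers month by month, before and during a disruption, is the shape of the measurement the study performs at scale.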

    Securing IoT Applications through Decentralised and Distributed IoT-Blockchain Architectures

    The integration of blockchain into IoT can provide reliable control of the IoT network's ability to distribute computation over a large number of devices. It also allows the AI system to use trusted data for analysis and forecasts while utilising the available IoT hardware to coordinate the execution of tasks in parallel, using a fully distributed approach. This thesis's first contribution is a practical implementation of a real-world IoT-blockchain application, a flood detection use case, demonstrated using Ethereum proof of authority (PoA). This includes performance measurements of the transaction confirmation time, the system's end-to-end latency, and the average power consumption. The study showed that blockchain can be integrated into IoT applications and that Ethereum PoA can be used within IoT for permissioned implementations. This can be achieved while the average energy consumption of running the flood detection system, including the Ethereum Geth client, remains small (around 0.3 J). The second contribution is a novel IoT-centric consensus protocol called honesty-based distributed proof of authority (HDPoA) via scalable work. HDPoA was analysed and then deployed and tested. Performance measurements and evaluation, along with security analyses of HDPoA, were conducted using a total of 30 different IoT devices comprising Raspberry Pi, ESP32, and ESP8266 devices. These measurements included energy consumption, the devices' hash power, and the transaction confirmation time. The measured values of hashes per joule (h/J) for mining were 13.8 Kh/J, 54 Kh/J, and 22.4 Kh/J for the Raspberry Pi, ESP32, and ESP8266 devices respectively, achieved with limited impact on each device's power. In HDPoA, the transaction confirmation time was reduced to only one block, compared to up to six blocks in Bitcoin.
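The hash-per-joule figure quoted for each device class is just the device's hash rate divided by its power draw (watts are joules per second). A minimal sketch; the example numbers below are illustrative, not the thesis's measurements.

```python
def hashes_per_joule(hash_rate_hps, power_watts):
    """Mining efficiency: hashes computed per joule of energy consumed.
    hash_rate_hps is in hashes/second; power_watts is in joules/second."""
    return hash_rate_hps / power_watts

# Illustrative only: a device hashing at 27,000 h/s while drawing 0.5 W
# delivers 54,000 h/J, i.e. 54 Kh/J.
print(hashes_per_joule(27_000, 0.5))  # 54000.0
```

This metric lets heterogeneous devices (Raspberry Pi vs. ESP-class microcontrollers) be compared on energy efficiency rather than raw hash power alone.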
The third contribution is a novel, secure, distributed and decentralised architecture for supporting the implementation of distributed artificial intelligence (DAI) using hardware platforms provided by the IoT. A trained DAI system was implemented over the IoT, where each IoT device hosts one or more neurons within the DAI layers. This is accomplished through the utilisation of blockchain technology, which allows trusted interaction and information exchange between distributed neurons. Three different datasets were tested, and the system achieved accuracy similar to testing on a standalone system; both achieved accuracies of 92%-98%. The system accomplished this while ensuring an overall latency as low as two minutes. This demonstrated the secure architecture's ability to facilitate the implementation of DAI within the IoT while preserving the system's accuracy. The fourth contribution is a novel and secure architecture that integrates the advantages offered by edge computing, artificial intelligence (AI), IoT end-devices, and blockchain. This new architecture can monitor the environment, collect data, analyse it, process it using an AI-expert engine, provide predictions and actionable outcomes, and finally share the results on a public blockchain platform. The pandemic caused by the wide and rapid spread of the novel coronavirus COVID-19 was used as a use-case implementation to test and evaluate the proposed system. While providing the AI engine with trusted data, the system achieved an accuracy of 95%. This was achieved while the AI engine required only a 7% increase in power consumption. This demonstrates the system's ability to protect the data and support the AI system, and improves the overall security of the IoT with limited impact on the IoT devices. The fifth and final contribution is enhancing the security of HDPoA through the integration of a hardware secure module (HSM) and a hardware wallet (HW).
A performance evaluation of the energy consumption of nodes equipped with the HSM and HW, together with a security analysis, was conducted. In addition to enhancing the nodes' security, the HSM can be used to sign more than 120 bytes/joule and encrypt up to 100 bytes/joule, while the HW can be used to sign up to 90 bytes/joule and encrypt up to 80 bytes/joule. The results and analyses demonstrated that the HSM and HW enhance the security of HDPoA and can also be utilised within IoT-blockchain applications, providing much-needed security in terms of confidentiality, trust in devices, and attack deterrence. The above contributions showed that blockchain can be integrated into IoT systems, and that it can successfully support the integration of other technologies such as AI, IoT end devices, and edge computing into one system, thus allowing organisations and users to benefit greatly from a resilient, distributed, decentralised, self-managed, robust, and secure system.
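The bytes-per-joule figures above translate directly into an energy budget for securing a message of a given size. A small sketch, assuming the quoted rates; the lookup-table structure and function names are illustrative, not part of the thesis.

```python
# Bytes processed per joule, as quoted in the evaluation above.
RATES = {
    ("hsm", "sign"): 120, ("hsm", "encrypt"): 100,
    ("hw",  "sign"): 90,  ("hw",  "encrypt"): 80,
}

def energy_joules(device, operation, message_bytes):
    """Estimated joules needed to run `operation` on a message of
    `message_bytes` bytes using `device` ('hsm' or 'hw')."""
    return message_bytes / RATES[(device, operation)]

# Signing a 1200-byte payload on the HSM costs about 10 J at these rates;
# the same payload on the hardware wallet costs more energy per byte.
print(energy_joules("hsm", "sign", 1200))  # 10.0
print(energy_joules("hw", "sign", 1200) > energy_joules("hsm", "sign", 1200))  # True
```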

    Fundamentals of Business

    Fundamentals of Business, fourth edition (2023) is an open education resource intended to serve as a no-cost, faculty-customizable primary text for one-semester undergraduate introductory business courses. It covers the following topics in business: teamwork; economics; ethics; entrepreneurship; business ownership, management, and leadership; organizational structures and operations management; human resources and motivating employees; managing in labor union contexts; marketing and pricing strategy; hospitality and tourism; accounting and finance; personal finances; and technology in business.

    Analysing and Reducing Costs of Deep Learning Compiler Auto-tuning

    Deep Learning (DL) is significantly impacting many industries, including automotive, retail and medicine, enabling autonomous driving, recommender systems and genomics modelling, amongst other applications. At the same time, demand for complex and fast DL models is continually growing. The most capable models tend to exhibit the highest operational costs, primarily due to their large computational resource footprint and inefficient utilisation of the computational resources employed by DL systems. In an attempt to tackle these problems, DL compilers and auto-tuners emerged, automating the traditionally manual task of DL model performance optimisation. While auto-tuning improves model inference speed, it is a costly process, which limits its wider adoption within DL deployment pipelines. The high operational costs associated with DL auto-tuning have multiple causes. During operation, DL auto-tuners explore large search spaces consisting of billions of tensor programs to propose potential candidates that improve DL model inference latency. Subsequently, DL auto-tuners measure candidate performance in isolation on the target device, which constitutes the majority of auto-tuning compute time. Suboptimal candidate proposals, combined with their serial measurement on an isolated target device, lead to prolonged optimisation time and reduced resource availability, ultimately reducing the cost-efficiency of the process. In this thesis, we investigate the reasons behind prolonged DL auto-tuning and quantify their impact on optimisation costs, revealing directions for improved DL auto-tuner design. Based on these insights, we propose two complementary systems: Trimmer and DOPpler. Trimmer improves tensor program search efficacy by filtering out poorly performing candidates, and controls end-to-end auto-tuning using cost objectives that monitor optimisation cost.
Simultaneously, DOPpler breaks long-held assumptions about serial candidate measurement by successfully parallelising measurements intra-device, with minimal penalty to optimisation quality. Through extensive experimental evaluation of both systems, we demonstrate that they significantly improve the cost-efficiency of auto-tuning (by up to 50.5%) across a plethora of tensor operators, DL models, auto-tuners and target devices.
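The filtering idea behind Trimmer can be sketched abstractly: score candidate tensor programs with a cheap cost model and only send the most promising fraction to expensive on-device measurement. This is a minimal sketch of that general strategy, not Trimmer's actual algorithm or API; the function names, the keep ratio, and the identity cost model are all assumptions.

```python
def filter_candidates(candidates, predict_latency, keep_ratio=0.25):
    """Rank candidates by predicted latency (lower is better) and keep
    only the best `keep_ratio` fraction for real on-device measurement."""
    ranked = sorted(candidates, key=predict_latency)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]

# Toy usage: candidates are integers and the "cost model" is the identity,
# so the 25 lowest-cost candidates out of 100 survive the filter.
survivors = filter_candidates(list(range(100)), predict_latency=lambda c: c)
print(len(survivors))  # 25
```

Since on-device measurement dominates auto-tuning compute time, every candidate pruned before measurement is a direct saving, provided the cost model rarely discards genuinely fast programs.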

    Efficient and Side-Channel Resistant Implementations of Next-Generation Cryptography

    The rapid development of emerging information technologies, such as quantum computing and the Internet of Things (IoT), will have or has already had a huge impact on the world. These technologies can not only improve industrial productivity but could also bring more convenience to people’s daily lives. However, these techniques have “side effects” in the world of cryptography – they pose new difficulties and challenges from theory to practice. Specifically, when quantum computing capability (i.e., logical qubits) reaches a certain level, Shor’s algorithm will be able to break almost all public-key cryptosystems currently in use. On the other hand, a great number of devices deployed in IoT environments have very constrained computing and storage resources, so the current widely-used cryptographic algorithms may not run efficiently on those devices. A new generation of cryptography has thus emerged, including Post-Quantum Cryptography (PQC), which remains secure under both classical and quantum attacks, and LightWeight Cryptography (LWC), which is tailored for resource-constrained devices. Research on next-generation cryptography is of importance and utmost urgency, and the US National Institute of Standards and Technology in particular initiated the standardization processes for PQC and LWC in 2016 and 2018, respectively. Since next-generation cryptography is still at an early stage and has developed rapidly in recent years, its theoretical security and practical deployment are not yet well explored and are in significant need of evaluation. This thesis aims to look into the engineering aspects of next-generation cryptography, i.e., the problems concerning implementation efficiency (e.g., execution time and memory consumption) and security (e.g., countermeasures against timing attacks and power side-channel attacks). In more detail, we first explore efficient software implementation approaches for lattice-based PQC on constrained devices.
Then, we study how to speed up isogeny-based PQC on modern high-performance processors, especially by using their powerful vector units. Moreover, we research how to design sophisticated yet low-area instruction set extensions to further accelerate software implementations of LWC and long-integer-arithmetic-based PQC. Finally, to address the threats from potential power side-channel attacks, we present a concept of using special leakage-aware instructions to eliminate overwriting leakage in masked software implementations of next-generation cryptography.
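One of the simplest timing-attack countermeasures the abstract alludes to is comparing secret values in constant time, so the comparison's duration does not reveal how many leading bytes matched. A minimal illustration using Python's standard library (the function name `verify_tag` is ours); this shows the principle, not any implementation from the thesis.

```python
import hmac

def verify_tag(expected: bytes, received: bytes) -> bool:
    """Constant-time equality check for authentication tags.
    A naive `expected == received` can short-circuit at the first
    mismatching byte, leaking match length through timing;
    hmac.compare_digest examines all bytes regardless."""
    return hmac.compare_digest(expected, received)

print(verify_tag(b"secret-tag", b"secret-tag"))  # True
print(verify_tag(b"secret-tag", b"secret-taX"))  # False
```

Hardened implementations apply the same "no secret-dependent control flow" discipline throughout, e.g. branchless selection and table lookups with secret indices avoided entirely.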

    Flexible Hardware-based Security-aware Mechanisms and Architectures

    For decades, software security has been the primary focus in securing our computing platforms. Hardware was always assumed trusted, and inherently served as the foundation, and thus the root of trust, of our systems. This has been further leveraged in developing hardware-based dedicated security extensions and architectures to protect software from attacks exploiting software vulnerabilities such as memory corruption. However, the recent outbreak of microarchitectural attacks has shaken these long-established trust assumptions in hardware entirely, thereby threatening the security of all of our computing platforms and bringing hardware and microarchitectural security under scrutiny. These attacks have undeniably revealed the grave consequences of hardware/microarchitecture security flaws for the entire platform's security, and how they can even subvert the security guarantees promised by dedicated security architectures. Furthermore, they shed light on the sophisticated challenges particular to hardware/microarchitectural security: it is more critical (and more challenging) to extensively analyze the hardware for security flaws prior to production, since hardware, unlike software, cannot be patched/updated once fabricated. Hardware cannot reliably serve as the root of trust anymore, unless we develop and adopt new design paradigms where security is proactively addressed and scrutinized across the full stack of our computing platforms, at all hardware design and implementation layers. Furthermore, novel flexible security-aware design mechanisms must be incorporated into processor microarchitecture and hardware-assisted security architectures, mechanisms that can practically address the inherent conflict between performance and security by allowing the trade-off to be configured to adapt to the desired requirements.
In this thesis, we investigate the prospects and implications at the intersection of hardware and security that emerge across the full stack of our computing platforms and System-on-Chips (SoCs). On one front, we investigate how we can leverage hardware and its advantages, in contrast to software, to build more efficient and effective security extensions that serve security architectures, e.g., by providing execution attestation and enforcement, to protect the software from attacks exploiting software vulnerabilities. We further propose that they be microarchitecturally configured at runtime to provide different types of security services, thus adapting flexibly to different deployment requirements. On another front, we investigate how we can protect these hardware-assisted security architectures and extensions themselves from microarchitectural and software attacks that exploit design flaws originating in the hardware, e.g., insecure resource sharing in SoCs. More particularly, we focus in this thesis on cache-based side-channel attacks, proposing sophisticated cache designs that fundamentally mitigate these attacks while still preserving performance, by enabling the performance-security trade-off to be configured by design. We also investigate how these designs can be incorporated into flexible and customizable security architectures, complementing them to further support a wide spectrum of emerging applications with different performance/security requirements. Lastly, we inspect our computing platforms further beneath the design layer, scrutinizing how the actual implementation of these mechanisms is yet another potential attack surface. We explore how the security of hardware designs and implementations is currently analyzed prior to fabrication, shedding light on how state-of-the-art hardware security analysis techniques are fundamentally limited, and on the potential for improved and scalable approaches.
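One configurable mitigation in the family the abstract describes is way partitioning: in a set-associative cache, each security domain is confined to its own ways, so one domain's misses can never evict another domain's lines, which blocks contention-based (e.g. Prime+Probe-style) eviction channels. The toy model below is our own illustration of that principle, not a design from the thesis; all names are assumptions.

```python
class PartitionedSet:
    """Toy model of one cache set with per-domain way partitions.
    Each domain gets a fixed number of ways; LRU order is oldest-first."""

    def __init__(self, ways_per_domain):
        self.ways = {d: [] for d in ways_per_domain}     # domain -> cached tags
        self.capacity = dict(ways_per_domain)            # domain -> way count

    def access(self, domain, tag):
        """Return True on hit; on miss, insert the line, evicting only
        within `domain`'s own partition (never another domain's lines)."""
        lines = self.ways[domain]
        if tag in lines:
            lines.remove(tag)
            lines.append(tag)   # refresh LRU position
            return True
        if len(lines) >= self.capacity[domain]:
            lines.pop(0)        # evict this domain's LRU line only
        lines.append(tag)
        return False

s = PartitionedSet({"victim": 2, "attacker": 2})
s.access("victim", 0x1)          # victim caches a line
for t in range(16):              # attacker thrashes, but only its own ways
    s.access("attacker", t)
print(s.access("victim", 0x1))   # True: the victim's line was never evicted
```

The trade-off is visible even in the toy: reserving ways for isolation shrinks each domain's effective capacity, which is exactly the performance-security dial such designs expose.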

    20. ASIM Fachtagung Simulation in Produktion und Logistik 2023


    Research Paper: Process Mining and Synthetic Health Data: Reflections and Lessons Learnt

    Analysing the treatment pathways in real-world health data can provide valuable insight for clinicians and decision-makers. However, the procedures for acquiring real-world data for research can be restrictive and time-consuming, and risk disclosing identifiable information. Synthetic data might enable representative analysis without direct access to sensitive data. In the first part of our paper, we propose an approach for grading synthetic data for process analysis based on its fidelity to relationships found in real-world data. In the second part, we apply our grading approach by assessing cancer patient pathways in a synthetic healthcare dataset (the Simulacrum, provided by the English National Cancer Registration and Analysis Service) using process mining. Visualisations of the patient pathways within the synthetic data appear plausible, showing relationships between events confirmed in the underlying non-synthetic data. Data quality issues are also present within the synthetic data, which reflect real-world problems and artefacts from the synthetic dataset’s creation. Process mining of synthetic data in healthcare is an emerging field with novel challenges. We conclude that researchers should be aware of the risks when extrapolating results produced from research on synthetic data to real-world scenarios, and should assess findings with analysts who are able to view the underlying data.
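One concrete way to measure a synthetic log's "fidelity to relationships found in real-world data" is to compare the directly-follows relations (pairs of activities that occur consecutively in some trace) of the two logs. The sketch below is our own illustration of that idea, not the paper's grading scheme; the Jaccard score and all names are assumptions.

```python
def directly_follows(traces):
    """Set of (a, b) pairs where activity b directly follows a in some trace."""
    return {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}

def fidelity_score(real_traces, synthetic_traces):
    """Jaccard similarity of the two logs' directly-follows relations:
    1.0 means identical relations, 0.0 means no overlap."""
    real = directly_follows(real_traces)
    synth = directly_follows(synthetic_traces)
    return len(real & synth) / len(real | synth)

real = [("diagnosis", "surgery", "discharge")]
synth = [("diagnosis", "surgery", "discharge"),
         ("diagnosis", "discharge")]  # one spurious relation
score = fidelity_score(real, synth)
print(round(score, 4))  # 0.6667
```

Relations present in the synthetic log but absent from the real one (like the spurious `("diagnosis", "discharge")` pair above) are exactly the kind of artefact a grading step would flag before any pathway analysis is extrapolated to the real data.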