
    The Viability and Potential Consequences of IoT-Based Ransomware

    With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested. As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, so that a ransom note could be reliably delivered to the victim. Next, the viability of using "bricking" as a method of ransom was evaluated, whereby devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion based ransomware was analysed. For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications were retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim. Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While the use of IoT ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attack techniques discovered in this PhD research.
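    The abstract notes that persistent, malicious filesystem modifications were central to the attacks and that potential countermeasures were identified. As a generic illustration of one such defensive idea (not a technique from the thesis), the sketch below records a baseline of file hashes and reports files that have changed since that baseline; the paths and command-line arguments are placeholders.

```python
import hashlib
import json
import os
import sys

def hash_tree(root):
    """Hash every regular file under `root` so the result can be compared
    against a known-good baseline recorded when the firmware was installed."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # skip unreadable device nodes and other special files
    return digests

if __name__ == "__main__":
    root, baseline_file = sys.argv[1], sys.argv[2]
    current = hash_tree(root)
    if os.path.exists(baseline_file):
        with open(baseline_file) as f:
            baseline = json.load(f)
        changed = [p for p, h in current.items() if baseline.get(p) != h]
        print("modified or new files since baseline:", changed or "none")
    else:
        with open(baseline_file, "w") as f:
            json.dump(current, f)
        print("baseline recorded")
```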

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and could benefit complex sectors that have only scarce data with which to predict business viability. To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Lessons from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which are organised into a risk taxonomy. Labour was the most commonly reported top challenge, so research was conducted into lean principles to improve productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and assessed the viability of the UK and Japan cases. An environmental impact assessment model was then developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably powered VPF can reduce carbon emissions compared to field-based agriculture when land-use change is considered. The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
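    As a loose illustration of the kind of probabilistic financial computation such a DSS framework enables, the Monte Carlo sketch below propagates uncertain inputs through a simple annual-margin calculation; every distribution and figure is a placeholder assumption, not data from the thesis or its case studies.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo samples

# Illustrative, hypothetical input distributions for a small VPF unit
yield_kg_per_m2_yr = rng.triangular(60, 90, 120, N)   # crop yield
price_per_kg       = rng.normal(6.0, 0.8, N)          # selling price (GBP/kg)
energy_kwh_per_kg  = rng.triangular(8, 10, 14, N)     # energy intensity
energy_price       = rng.normal(0.25, 0.05, N)        # GBP per kWh
labour_cost_per_m2 = rng.normal(55, 10, N)            # GBP per m2 per year

growing_area_m2 = 500
revenue = yield_kg_per_m2_yr * growing_area_m2 * price_per_kg
opex = (yield_kg_per_m2_yr * growing_area_m2 * energy_kwh_per_kg * energy_price
        + labour_cost_per_m2 * growing_area_m2)
annual_margin = revenue - opex

# Risk summary: probability of loss and the spread of outcomes
print(f"P(annual margin < 0)  = {np.mean(annual_margin < 0):.2%}")
print(f"5th / 95th percentile = {np.percentile(annual_margin, [5, 95]).round(0)}")
```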

    Sensors and Methods for Railway Signalling Equipment Monitoring

    Signalling upgrade projects installed in equipment rooms in the recent past have limited capability to monitor the performance of certain types of external circuits. Modifying the equipment rooms on the commissioned railway would be very expensive to implement and would be unacceptable in terms of the delays caused to passenger services by re-commissioning circuits after modification, as required by rail signalling standards. The use of magnetoresistive sensors to provide performance data on point circuit operation and point operation is investigated. The sensors are bench tested on their ability to measure current in a circuit in a non-intrusive manner. The effect of shielding on sensor performance is tested and found to be significant. With various levels of amplification, the sensors produce a linear response across a range of circuit gain. The output of the sensor circuit is demonstrated for various periods of interruption of conductor current. A three-axis accelerometer is mounted on a linear actuator to demonstrate the type of output expected from similar sensors mounted on a set of points. Measurements of current in point detection circuits, and of acceleration forces resulting from vibration of out-of-tolerance mechanical assemblies, can provide valuable information on performance and possible threats to the safe operation of equipment. The sensors appear capable of measuring the current in a conductor with a comparatively high degree of sensitivity, although development work is required on shielding the sensor from magnetic fields other than those being measured. The accelerometer work is at a demonstration level and requires further development. Future testing with accelerometers should be carried out at a facility where multiple point moves can be made, with the capability to introduce faults into the point mechanisms. Methods can then be developed for analysing the vibration signatures produced by the various faults.
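    To give a flavour of how the sensor-circuit output for interrupted conductor current might be post-processed, the sketch below thresholds the RMS envelope of a simulated 50 Hz current waveform to locate dropout periods; the sample rate, noise level, and detection threshold are assumptions for illustration, not figures from this work.

```python
import numpy as np

fs = 1000                      # assumed sample rate in Hz
t = np.arange(0, 5, 1 / fs)

# Simulated magnetoresistive sensor output: 50 Hz current with a dropout
current = np.sin(2 * np.pi * 50 * t)
current[(t > 2.0) & (t < 2.4)] = 0.0   # interruption between 2.0 s and 2.4 s
current += 0.02 * np.random.default_rng(0).normal(size=t.size)

# RMS envelope over a 100 ms sliding window, then threshold
window = int(0.1 * fs)
rms = np.sqrt(np.convolve(current**2, np.ones(window) / window, mode="same"))
interrupted = rms < 0.2                # assumed detection threshold

# Report contiguous interrupted spans
edges = np.diff(interrupted.astype(int))
starts, ends = np.where(edges == 1)[0] + 1, np.where(edges == -1)[0] + 1
for s, e in zip(starts, ends):
    print(f"current interrupted from {t[s]:.2f}s to {t[e]:.2f}s")
```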

    Translating erasure: Proposing auto-theory as a practice for artistic enquiry and analysis while comprehending personal grief

    Erasure as an artistic technique has developed in my moving image work after my father's passing. I export videos into sequences of thousands of images and erase the outlines of the targeted objects in each frame. This repetitive, low-conscious labour is a way to ease the agony and to grieve my father. Hours are compressed into thousands of frames, turning into a glimpse of illusion and leaving a ghostly emptiness on the images. Both its visual presentation and its making reflect the life events and encounters I've experienced in the UK and Taiwan in the past years. I consider that an artwork embodies interconnected relationships between one's personal impulses and artistic training. As an art student, I have found it challenging to describe such a creative process in conventional academic writing. Within a construct that inclines to present thoughts as reasonable and rational arguments, my personal experiences and the intensity of feeling seem out of place. Within an academic framework, how can I make an argument out of how I have developed erasure in my artwork to perform grief, the fading memories of a loved one, existential crisis and what lies in between? Through auto-theoretical approaches to writing and to making moving image work, this research aims to build a structure that can express both the intimate and the intellectual aspects of an art practice. The writing-up process interweaves the personal stories that motivate my artistic expression with art theories. Memories of my late father, my relationship with languages, and my lives between the UK and Taiwan meet with different artists' uses of erasure. As the conversations between introspection and theoretical analysis accumulate, my writing and moving image work unravel an art journey that encompasses the nuances and struggles I've experienced as an international student. Within the search for an ideal model to illustrate an art practice, this research further generates a profound understanding of memory, grief, loss, language, conflicted identities and cultural belonging.

    Addressing infrastructure challenges posed by the Harwich Formation through understanding its geological origins

    The variable deposits known to make up the sequence of the Harwich Formation in London have been the subject of ongoing uncertainty within the engineering industry. Current stratigraphical subdivisions do not account for the systematic recognition of individual members in unexposed ground, where recovered material is usually disturbed: fines are flushed out during the drilling process and loose materials are often lost or mixed with the surrounding layers. Most engineering problems associated with the Harwich Formation deposits stem from their unconsolidated nature and irregular cementation within layers. The consequent engineering hazards are commonly reflected in high permeability, raised groundwater pressures, ground settlements when the deposits are found near the surface, and poor stability when they are exposed during excavations or tunnelling operations. This frequently leads to sudden design changes or requires contingency measures during construction, all of which can result in damaged equipment, slow progress, and unforeseen costs. This research proposes a facies-based approach in which lithological facies were assigned based on reinterpretation of available borehole data from various ground investigations in London, supported by visual inspection of deposits in situ and a selection of laboratory testing including Particle Size Distribution, Optical and Scanning Electron Microscopy, and X-ray Diffraction analyses. Two ground models were developed as a result: first, a 3D geological model (MOVE model) of the stratigraphy found within the study area, which explores the influence of local structural processes controlling and affecting these sediments pre-, syn- and post-deposition; and second, a sequence stratigraphic model (Dionisos Flow model) unveiling the stratal geometries of facies at various stages of accretion. The models present a series of sediment distribution maps, localised 3D views and cross-sections that aim to provide a novel approach to assist the geotechnical industry in predicting the likely distribution of the Harwich Formation deposits, decreasing the engineering risks associated with this stratum.
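    As a very simplified illustration of how borehole Particle Size Distribution data can feed a facies-style classification, the sketch below labels placeholder samples by their dominant grain-size fraction; the thresholds, sample names, and labels are illustrative only and are not the facies scheme developed in this research.

```python
# Placeholder borehole samples: (gravel %, sand %, silt %, clay %)
samples = {
    "BH01-3.2m": (5, 70, 20, 5),
    "BH01-6.8m": (0, 15, 55, 30),
    "BH02-4.1m": (40, 45, 10, 5),
}

def texture_label(gravel, sand, silt, clay):
    """Crude textural label from grain-size fractions (illustrative thresholds)."""
    if gravel >= 30:
        return "gravelly"
    if sand >= max(silt, clay):
        return "sand-dominated"
    if silt >= clay:
        return "silt-dominated"
    return "clay-dominated"

for sample_id, fractions in samples.items():
    print(sample_id, "->", texture_label(*fractions))
```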

    Foundations for programming and implementing effect handlers

    First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling. This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers, as well as exploring the expressive power of effect handlers. The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. They each offer a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system complete with multi-user environment, time-sharing, and file I/O. The second strand studies continuation passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuations, which admit simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers. The third strand explores the expressiveness of effect handlers. First, I show that the deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, which provides a syntactic notion of expressiveness that affirms the existence of encodings between handlers but gives no information about the computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
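    The deep/shallow/parameterised distinction can be sketched informally outside the typed calculus developed in the thesis. The untyped Python sketch below represents computations as trees of operations and shows a deep handler as a fold (the handler is reapplied to the continuation), a shallow handler as a single case split, and a parameterised handler as a fold that threads a state value; it is a loose analogy, not the calculus or semantics of this work.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A computation either returns a value or performs an operation and continues.
@dataclass
class Return:
    value: Any

@dataclass
class Op:
    name: str
    arg: Any
    cont: Callable[[Any], Any]   # the rest of the computation

def deep_handle(comp, clauses, ret=lambda v: v):
    """Deep handler: a fold over the tree -- the handler is reapplied to the
    continuation, so every later operation is handled the same way."""
    if isinstance(comp, Return):
        return ret(comp.value)
    resume = lambda x: deep_handle(comp.cont(x), clauses, ret)
    return clauses[comp.name](comp.arg, resume)

def shallow_handle(comp, clauses, ret=lambda v: v):
    """Shallow handler: a single case split -- the raw continuation is handed
    to the clause, which decides how (or whether) to keep handling."""
    if isinstance(comp, Return):
        return ret(comp.value)
    return clauses[comp.name](comp.arg, comp.cont)

def param_handle(comp, clauses, state, ret=lambda v, s: v):
    """Parameterised handler: a deep handler with a state value threaded
    through the fold; resuming also supplies the next state."""
    if isinstance(comp, Return):
        return ret(comp.value, state)
    resume = lambda x, s: param_handle(comp.cont(x), clauses, s, ret)
    return clauses[comp.name](comp.arg, state, resume)

# Example computation: ask for two numbers and return their sum.
prog = Op("ask", None, lambda x: Op("ask", None, lambda y: Return(x + y)))

# Deep: every "ask" is answered with 21.
print(deep_handle(prog, {"ask": lambda _, resume: resume(21)}))               # 42

# Shallow: handle only the first "ask", then handle the rest explicitly.
print(shallow_handle(prog, {"ask": lambda _, k:
                            deep_handle(k(1), {"ask": lambda _, r: r(2)})}))  # 3

# Parameterised: answer with a counter threaded through the handler.
print(param_handle(prog, {"ask": lambda _, n, resume: resume(n, n + 1)}, 1))  # 3
```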

    Classification of Anomalies in Gastrointestinal Tract Using Deep Learning

    The automatic detection of diseases and anatomical landmarks in medical images is an important but challenging task that could support medical diagnosis, reduce the cost and time of investigational procedures, and improve health care systems all over the world. Recently, gastrointestinal (GI) tract disease diagnosis through endoscopic image classification has become an active research area in the biomedical field. Several GI tract disease classification methods based on image processing and machine learning techniques have been proposed by diverse research groups in the recent past. However, an effective and comprehensive deep ensemble neural network-based classification model with highly accurate classification results is not yet available in the literature. In this thesis, we review ways to use deep learning techniques for multi-disease computer-aided detection in the gastrointestinal tract and for classifying endoscopic images. We re-trained five state-of-the-art neural network architectures, VGG16, ResNet, MobileNet, Inception-v3, and Xception, on the Kvasir dataset to classify eight categories that include anatomical landmarks (pylorus, z-line, cecum), diseased states (esophagitis, ulcerative colitis, polyps), and medical procedures (dyed lifted polyps, dyed resection margins) in the gastrointestinal tract. Our models showed promising accuracy, a remarkable performance with respect to state-of-the-art approaches. The resulting accuracies achieved using VGG, ResNet, MobileNet, Inception-v3, and Xception were 98.3%, 92.3%, 97.6%, 90%, and 98.2%, respectively. The most accurate results were achieved when retraining the VGG16 and Xception networks, with accuracies reaching 98%, owing to their strong performance when trained on the ImageNet dataset and to internal structures that suit classification problems.
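    A minimal transfer-learning sketch of the kind of re-training described above is given below, assuming TensorFlow/Keras, an ImageNet-pretrained VGG16, and a hypothetical "kvasir/train" / "kvasir/val" folder layout; the image size, classification head, and hyperparameters are illustrative and are not the exact settings used in the thesis.

```python
import tensorflow as tf

IMG_SIZE, NUM_CLASSES = (224, 224), 8

# Load the dataset from an assumed directory layout (one folder per class)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "kvasir/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "kvasir/val", image_size=IMG_SIZE, batch_size=32)

# Apply VGG16's own preprocessing to the image batches
preprocess = tf.keras.applications.vgg16.preprocess_input
train_ds = train_ds.map(lambda x, y: (preprocess(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess(x), y))

# ImageNet-pretrained VGG16 backbone, frozen while the new head is trained
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.Input(shape=IMG_SIZE + (3,)),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```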

    Full stack development toward a trapped ion logical qubit

    Quantum error correction is a key step toward the construction of a large-scale quantum computer, preventing small infidelities in quantum gates from accumulating over the course of an algorithm. Detecting and correcting errors is achieved by using multiple physical qubits to form a smaller number of robust logical qubits. The physical implementation of a logical qubit therefore requires multiple qubits on which high-fidelity gates can be performed. This project aims to realize a logical qubit based on ions confined on a microfabricated surface trap. Each physical qubit will be a microwave dressed-state qubit based on 171Yb+ ions. Gates are intended to be realized through RF and microwave radiation in combination with magnetic field gradients. The project vertically integrates the software stack down to the hardware compilation layers in order to deliver, in the near future, a fully functional small device demonstrator. This thesis presents novel results on multiple layers of a full-stack quantum computer model. On the hardware level, a robust quantum gate is studied and ion displacement over the X-junction geometry is demonstrated. The experimental organisation is optimised through automation and compressed waveform data transmission. A new quantum assembly language dedicated purely to trapped-ion quantum computers is introduced. The demonstrator is aimed at testing implementations of quantum error correction codes while preparing for larger-scale iterations.
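    As a toy illustration of why encoding one logical qubit into several physical qubits suppresses errors, the classical sketch below simulates a 3-bit repetition code under independent bit-flip noise with majority-vote decoding; it is a didactic simplification and not the error correction code or control stack developed in this project.

```python
import random

def encode(logical_bit):
    """Toy 3-bit repetition code: one logical bit -> three physical bits."""
    return [logical_bit] * 3

def apply_noise(physical, p):
    """Independent bit-flip error on each physical bit with probability p."""
    return [b ^ (random.random() < p) for b in physical]

def decode(physical):
    """Majority vote recovers the logical bit if at most one bit flipped."""
    return int(sum(physical) >= 2)

def logical_error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        sent = random.randint(0, 1)
        if decode(apply_noise(encode(sent), p)) != sent:
            errors += 1
    return errors / trials

# For small p the logical error rate (~3p^2) is well below the physical rate p
for p in (0.01, 0.05, 0.1):
    print(f"physical error {p:.0%}  ->  logical error {logical_error_rate(p):.4%}")
```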

    Systems Methods for Analysis of Heterogeneous Glioblastoma Datasets Towards Elucidation of Inter-Tumoural Resistance Pathways and New Therapeutic Targets

    This PhD thesis describes an endeavour to compile the literature on the key molecular mechanisms of Glioblastoma into a directed network following Disease Maps standards, to analyse its topology, and to compare the results with quantitative analyses of multi-omics datasets in order to investigate Glioblastoma resistance mechanisms. The work also integrated the implementation of data management good practices and procedures.
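    A small sketch of the kind of directed-network topology analysis described above is given below using networkx; the nodes and edges are placeholder signalling interactions for illustration only and are not the curated Disease Maps network compiled in the thesis.

```python
import networkx as nx

# Illustrative directed network of placeholder signalling interactions
G = nx.DiGraph()
G.add_edges_from([
    ("EGFR", "PI3K"), ("PI3K", "AKT"), ("AKT", "MDM2"),
    ("MDM2", "TP53"), ("TP53", "CDKN1A"), ("EGFR", "RAS"),
    ("RAS", "MAPK"), ("MAPK", "CDKN1A"),
])

# Basic topology metrics that could be compared against omics-derived rankings
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("out-degree hubs:", sorted(G.out_degree(), key=lambda x: -x[1])[:3])
print("betweenness:", {n: round(c, 2)
                       for n, c in nx.betweenness_centrality(G).items()})
print("longest path length:", nx.dag_longest_path_length(G))
```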