The Viability and Potential Consequences of IoT-Based Ransomware
With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested.
As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, so that a ransom note could be reliably delivered to the victim. Next, the viability of "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed.
For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While IoT-based ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attack techniques discovered in this PhD research.
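The persistence findings above suggest a simple defender-side audit: enumerate which mounted filesystems on a Linux-based IoT device are both writable and persistent, since those are the locations where malicious modifications could survive a reboot. The sketch below is a hypothetical illustration, not code from the thesis; the volatile-filesystem list and the /proc/mounts sample are my own assumptions.

```python
# Hypothetical audit sketch: flag writable, persistent mounts on a
# Linux-based IoT device. The volatile-fs set is an assumed heuristic.
VOLATILE_FS = {"tmpfs", "ramfs", "proc", "sysfs", "devtmpfs", "overlay"}

def audit_mounts(mounts_text):
    """Parse /proc/mounts-style text; return writable persistent mounts."""
    risky = []
    for line in mounts_text.splitlines():
        dev, mountpoint, fstype, opts = line.split()[:4]
        writable = "rw" in opts.split(",")
        if writable and fstype not in VOLATILE_FS:
            risky.append((mountpoint, fstype))
    return risky

sample = """\
/dev/mtdblock3 /overlay jffs2 rw,noatime 0 0
tmpfs /tmp tmpfs rw,nosuid 0 0
/dev/root / squashfs ro,relatime 0 0"""
print(audit_mounts(sample))   # [('/overlay', 'jffs2')]
```

On many embedded devices the root filesystem is read-only squashfs, so a writable jffs2 or ext4 overlay partition is exactly where the thesis's persistence techniques would have to land.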
SOFIA and ALMA Investigate Magnetic Fields and Gas Structures in Massive Star Formation: The Case of the Masquerading Monster in BYF 73
We present SOFIA+ALMA continuum and spectral-line polarisation data on the
massive molecular cloud BYF 73, revealing important details about the magnetic
field morphology, gas structures, and energetics in this unusual massive star
formation laboratory. The 154m HAWC+ polarisation map finds a highly
organised magnetic field in the densest, inner 0.550.40 pc portion of
the cloud, compared to an unremarkable morphology in the cloud's outer layers.
The 3mm continuum ALMA polarisation data reveal several more structures in the
inner domain, including a pc-long, 500 M⊙ "Streamer" around the
central massive protostellar object MIR 2, with magnetic fields mostly parallel
to the east-west Streamer but oriented north-south across MIR 2. The magnetic
field orientation changes from mostly parallel to the column density structures
to mostly perpendicular, at thresholds N_H2 = 6.6×10^26 m^-2, n_H2 =
2.5×10^10 m^-3, and B = 42±7 nT. ALMA also mapped Goldreich-Kylafis polarisation in CO
across the cloud, which traces, in both total intensity and polarised flux, a
powerful bipolar outflow from MIR 2 that interacts strongly with the Streamer.
The magnetic field is also strongly aligned along the outflow direction;
energetically, it may dominate the outflow near MIR 2, comprising rare evidence
for a magnetocentrifugal origin to such outflows. A portion of the Streamer may
be in Keplerian rotation around MIR 2, implying a gravitating mass 1350±50
M⊙ for the protostar+disk+envelope; alternatively, these kinematics
can be explained by gas in free fall towards a 950±35 M⊙ object.
The high accretion rate onto MIR 2 apparently occurs through the Streamer/disk,
and could account for 33% of MIR 2's total luminosity via gravitational
energy release.
Comment: 33 pages, 32 figures, accepted by ApJ. Line-Integral Convolution
(LIC) images and movie versions of Figures 3b, 7, and 29 are available at
https://gemelli.spacescience.org/~pbarnes/research/champ/papers
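The gravitating-mass estimate quoted above follows from Keplerian rotation, M = v²r/G. As a rough illustration only (the rotation speed and radius below are assumed values chosen for the sketch, not the paper's measurements), the arithmetic can be checked directly:

```python
# Toy illustration of a Keplerian enclosed-mass estimate, M = v^2 r / G.
# The speed and radius are assumed values, not the paper's fit.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

def keplerian_mass(v_ms, r_m):
    """Enclosed mass (kg) implied by circular speed v at radius r."""
    return v_ms**2 * r_m / G

v = 7.5e3            # assumed rotation speed: 7.5 km/s
r = 0.1 * PC         # assumed radius: 0.1 pc
m_solar = keplerian_mass(v, r) / M_SUN
print(f"enclosed mass ~ {m_solar:.0f} Msun")   # ~1.3e3 Msun for these inputs
```

This shows the scale involved: km/s-level speeds at sub-parsec radii already imply enclosed masses of order a thousand solar masses.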
Full Resolution Deconvolution of Complex Faraday Spectra
Polarized synchrotron emission from multiple Faraday depths can be separated
by calculating the complex Fourier transform of the Stokes' parameters as a
function of the wavelength squared, known as Faraday Synthesis. As commonly
implemented, the transform introduces an additional term exp(2iφλ₀²), which
broadens the real and imaginary spectra, but not the amplitude spectrum. We use
idealized tests to investigate whether additional information can be recovered
with a clean process restoring beam set to the narrower width of the peak in
the real "full" resolution spectrum with λ₀ = 0. We find that this
choice makes no difference, except for the use of a smaller
restoring beam. With this smaller beam, the accuracy and phase stability are
unchanged for single Faraday components. However, using the smaller restoring
beam for multiple Faraday components we find a) better discrimination of the
components, b) significant reductions in blending of structures in tomography
images, and c) reduction of spurious features in the Faraday spectra and
tomography maps. We also discuss the limited accuracy of information on scales
comparable to the width of the amplitude spectrum peak, and note a clean-bias,
reducing the recovered amplitudes. We present examples using MeerKAT L-band
data. We also revisit the maximum width in Faraday depth to which surveys are
sensitive, and introduce a new variable: the width in Faraday depth for which the power
drops by a factor of 2. We find that most surveys cannot resolve continuous
Faraday distributions unless the narrower full restoring beam is used.
Comment: 17 pages, 23 figures, accepted for publication in MNRAS, 4 April,
202
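The Faraday Synthesis transform described above can be sketched numerically: form the complex polarisation as a function of λ², then sum it against exp(−2iφ(λ² − λ₀²)) over a grid of Faraday depths. The channel setup and the injected Faraday depth below are illustrative assumptions, not the survey's actual configuration.

```python
import numpy as np

# Minimal Faraday synthesis sketch: recover the Faraday depth of a single
# Faraday-thin component from complex polarisation sampled in lambda^2.
c = 2.998e8
freqs = np.linspace(0.9e9, 1.67e9, 256)           # assumed L-band channels (Hz)
lam2 = (c / freqs)**2                             # lambda^2 per channel
lam2_0 = lam2.mean()                              # reference lambda^2

phi_true = 40.0                                   # injected Faraday depth (rad/m^2)
P = np.exp(2j * phi_true * lam2)                  # P(lambda^2) for one component

phi = np.linspace(-200, 200, 801)                 # Faraday depth grid
F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - lam2_0))) for p in phi])

peak_phi = phi[np.argmax(np.abs(F))]
print("peak at phi =", peak_phi)                  # recovers ~ +40 rad/m^2
```

The amplitude spectrum |F(φ)| peaks at the injected depth regardless of the λ₀² choice; as the abstract notes, that choice instead shapes the real and imaginary spectra.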
Learning disentangled speech representations
A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, some methods will capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody.
The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstruction, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions.
In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single-best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
This thesis explores a variety of use-cases for disentangled representations including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically
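The re-assembly idea above (e.g. voice conversion by swapping the speaker factor while keeping the content factor) can be shown with a deliberately toy sketch. Real systems learn the encoders; here the "encoders" are fixed slices of a feature vector, purely to illustrate how disentangled factors recombine. All names and dimensions are assumptions for illustration.

```python
import numpy as np

# Toy disentanglement sketch: a "speech" vector is treated as a known
# concatenation [speaker | content]; voice conversion swaps one factor.
rng = np.random.default_rng(0)

D_SPK, D_CON = 4, 6                       # assumed embedding sizes
def encode_speaker(x): return x[:D_SPK]   # stand-in speaker encoder
def encode_content(x): return x[D_SPK:]   # stand-in content encoder
def decode(spk, con):  return np.concatenate([spk, con])

utt_a = rng.normal(size=D_SPK + D_CON)    # utterance from speaker A
utt_b = rng.normal(size=D_SPK + D_CON)    # utterance from speaker B

# Voice conversion: speaker B's identity carrying speaker A's content.
converted = decode(encode_speaker(utt_b), encode_content(utt_a))
assert np.allclose(encode_content(converted), encode_content(utt_a))
assert np.allclose(encode_speaker(converted), encode_speaker(utt_b))
```

The hard problem the thesis addresses is precisely that real speech does not arrive pre-factored like this: the encoders must be learned so that each factor lands in its own subspace.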
Metaphors of London fog, smoke and mist in Victorian and Edwardian Art and Literature
Julian Wolfreys has argued that after 1850 writers employed stock images of the city without allowing them to transform their texts. This thesis argues, on the contrary, that metaphorical uses of London fog were complex and subtle during the Victorian and Edwardian periods, at least until 1914. Fog represented, in particular, formlessness and the dissolution of boundaries. Examining the idea of fog in literature, verse, newspaper accounts and journal articles, as well as in the visual arts, as part of a common discourse about London and the state of its inhabitants, this thesis charts how the metaphorical appropriation of this idea changed over time. Four of Dickens's novels are used to track his use of fog as part of a discourse of the natural and unnatural in the individual and society, identifying it with London in progressively more negative terms. Visual representations of fog by Constable, Turner, Whistler, Monet, Markino, O'Connor, Roberts, Wyllie and Coburn showed an increasing readiness to engage with this discourse. Social tensions in the city in the 1880s were articulated in art as well as in fiction. Authors like Hay and Barr showed the destruction of London by its fog because of its inhabitants' supposed degeneracy. As the social threat receded, apocalyptic scenarios gave way to a more optimistic view in the work of Owen and others. Henry James used fog as a metaphorical representation of the boundaries of gendered behaviour in public, and the problems faced by women who crossed them. The dissertation also examines fog and individual transgression, in novels and short stories by Lowndes, Stevenson, Conan Doyle and Joseph Conrad. After 1914, fog was no more than a crude signifier of Victorian London in literature, film and, later, television, deployed as a cliché instead of the subtle metaphorical idea discussed in this thesis
Predictive Maintenance of Critical Equipment for Floating Liquefied Natural Gas Liquefaction Process
Meeting global energy demand is a massive challenge, especially given the drive towards more sustainable and cleaner energy. Natural gas is viewed as a bridge fuel to renewable energy, and LNG, a processed form of natural gas, is the fastest-growing and cleanest form of fossil fuel. Recently, an unprecedented increase in LNG demand has pushed its exploration and processing offshore as Floating LNG (FLNG). Offshore topside gas processing and liquefaction have been identified as one of the great challenges of FLNG. Maintaining topside liquefaction assets such as gas turbines is critical to the profitability, reliability and availability of the process facilities. Given the setbacks of the widely used reactive and time-based preventive maintenance approaches in meeting the reliability and availability requirements of oil and gas operators, this thesis presents a framework driven by AI-based learning approaches for predictive maintenance. The framework is aimed at leveraging the value of condition-based maintenance to minimise the failures and downtime of critical FLNG equipment (aeroderivative gas turbines).
In this study, gas turbine thermodynamics were introduced, along with factors affecting gas turbine modelling. Important considerations in modelling gas turbine systems, such as modelling objectives and methods, as well as approaches to modelling gas turbines, were investigated. These provide the basis and mathematical background for developing a simulated gas turbine model. The behaviour of a simple-cycle heavy-duty gas turbine (HDGT) was simulated using thermodynamic laws and operational data based on the Rowen model. A Simulink model was created from experimental data based on Rowen's model, aimed at exploring the transient behaviour of an industrial gas turbine. The results show the capability of the Simulink model in capturing the nonlinear dynamics of the gas turbine system, although its application to further condition monitoring studies is constrained by the lack of some suitable correlated features required by the model.
AI-based models were found to perform well in predicting gas turbine failures. These capabilities were investigated in this thesis and validated using experimental data obtained from a gas turbine engine facility. The dynamic behaviour of gas turbines changes when they are exposed to different fuels. Diagnostics-based AI models were developed to diagnose gas turbine engine failures associated with exposure to various fuel types. The capabilities of the Principal Component Analysis (PCA) technique were harnessed to reduce the dimensionality of the dataset and extract informative features for development of the diagnostic models.
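The PCA step described above can be sketched with a plain SVD-based implementation. The sensor matrix below is synthetic; in the thesis the rows would be gas turbine operating records, and the 95% variance cutoff is an assumed (but common) choice.

```python
import numpy as np

# Minimal PCA feature-reduction sketch via SVD: centre the data, take the
# top principal components covering 95% of the variance.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))                  # 500 samples, 12 sensor channels
X += rng.normal(size=(500, 1)) * np.ones(12)    # inject one shared latent factor

Xc = X - X.mean(axis=0)                         # centre each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance fraction per component

k = np.searchsorted(np.cumsum(explained), 0.95) + 1   # components for 95%
Z = Xc @ Vt[:k].T                               # reduced feature matrix
print("kept", k, "of 12 components; reduced shape:", Z.shape)
```

The reduced matrix Z then feeds the diagnostic classifier in place of the raw sensor channels.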
Signal processing-based techniques (time-domain, frequency-domain and time-frequency-domain) were also used as feature extraction tools; they added significant correlations to the dataset and influenced the prediction results obtained. Signal processing played a vital role in extracting good features for the diagnostic models when compared with PCA. The overall results obtained from both the PCA-based and signal processing-based models demonstrated the capabilities of neural network-based models in predicting gas turbine failures. Further, a deep learning-based LSTM model was developed, which extracts features from the time-series dataset directly and hence does not require any feature extraction tool. The LSTM model achieved the highest performance and prediction accuracy, compared to both the PCA-based and signal processing-based models.
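As a flavour of the time- and frequency-domain features mentioned above, the sketch below computes a few standard condition monitoring features on a synthetic signal. The feature set (RMS, crest factor, dominant frequency) is a common choice, not necessarily the thesis's exact set, and the signal itself is assumed for illustration.

```python
import numpy as np

# Sketch of time- and frequency-domain feature extraction on a synthetic
# vibration-like signal: a 120 Hz tone plus noise.
fs = 10_000                                    # assumed sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(2)
sig = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.normal(size=t.size)

rms = np.sqrt(np.mean(sig**2))                 # time domain: RMS level
peak = np.max(np.abs(sig))                     # time domain: peak amplitude
crest = peak / rms                             # crest factor (peakiness)

spec = np.abs(np.fft.rfft(sig))                # frequency domain
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
dominant = freqs[np.argmax(spec[1:]) + 1]      # dominant tone, skipping DC

print(f"rms={rms:.2f} crest={crest:.2f} dominant={dominant:.0f} Hz")
```

Rows of such features, one per recording window, are what the diagnostic models consume.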
In summary, it is concluded from this thesis that, although the gas turbine Simulink model could not be fully integrated into gas turbine condition monitoring studies, data-driven models have demonstrated strong potential and excellent performance in gas turbine CBM diagnostics. The models developed in this thesis can be used for design and manufacturing purposes on gas turbines applied to FLNG, especially for condition monitoring and fault detection of gas turbines. The results obtained provide valuable understanding and helpful guidance for researchers and practitioners seeking to implement robust predictive maintenance models that will enhance the reliability and availability of critical FLNG equipment.
Petroleum Technology Development Fund (PTDF), Nigeria
Digital asset management via distributed ledgers
Distributed ledgers rose to prominence with the advent of Bitcoin, the first provably secure protocol to solve consensus in an open-participation setting. Subsequently, active research and engineering efforts have proposed a multitude of applications and alternative designs, the most prominent being Proof-of-Stake (PoS). This thesis expands the scope of secure and efficient asset management over a distributed ledger around three axes: i) cryptography; ii) distributed systems; iii) game theory and economics. First, we analyze the security of various wallets. We start with a formal model of hardware wallets, followed by an analytical framework of PoS wallets, each outlining the unique properties of Proof-of-Work (PoW) and PoS respectively. The latter also provides a rigorous design to form collaborative participating entities, called stake pools. We then propose Conclave, a stake pool design which enables a group of parties to participate in a PoS system in a collaborative manner, without a central operator. Second, we focus on efficiency. Decentralized systems are aimed at thousands of users across the globe, so a rigorous design for minimizing memory and storage consumption is a prerequisite for scalability. To that end, we frame ledger maintenance as an optimization problem and design a multi-tier framework for designing wallets which ensure that updates increase the ledger's global state only to a minimal extent, while preserving the security guarantees outlined in the security analysis. Third, we explore incentive-compatibility and analyze blockchain systems from a micro- and a macroeconomic perspective. We enrich our cryptographic and systems results by analyzing the incentives of collective pools and designing a state-efficient Bitcoin fee function.
We then analyze the Nash dynamics of distributed ledgers, introducing a formal model that evaluates whether rational, utility-maximizing participants are disincentivized from exhibiting undesirable infractions, and highlighting the differences between PoW and PoS-based ledgers, both in a standalone setting and under external parameters, like market price fluctuations. We conclude by introducing a macroeconomic principle, cryptocurrency egalitarianism, and then describing two mechanisms for enabling taxation in blockchain-based currency systems
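The Nash-dynamics question above reduces to an expected-utility comparison: a rational participant deviates only if the infraction's expected payoff beats compliance. The toy model below is my own illustration of that comparison; all payoff numbers, the detection probability, and the penalty are assumed values, not parameters from the thesis.

```python
# Toy expected-utility check: does a rational participant prefer compliance
# or a detectable infraction? All numbers are illustrative assumptions.
def expected_utility(reward, infraction_gain=0.0, detect_prob=0.0, penalty=0.0):
    """Expected payoff: protocol reward plus any infraction gain,
    minus the expected penalty if the infraction is detected."""
    return reward + infraction_gain - detect_prob * penalty

compliant = expected_utility(reward=10.0)
deviant = expected_utility(reward=10.0, infraction_gain=3.0,
                           detect_prob=0.8, penalty=5.0)

# With an 80% detection chance and a slashing-style penalty of 5,
# deviation nets 13 - 4 = 9 < 10, so compliance is the better response.
print(compliant, deviant, compliant >= deviant)
```

The PoW/PoS contrast in the thesis enters through the penalty term: slashing a deposited stake gives PoS a direct handle on `penalty` that PoW lacks.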
Material Economies of South Yorkshire. The Organisation of Metal Production in Roman South Yorkshire.
This thesis aims to develop a model for the social organisation of ferrous and non-ferrous metal production in South Yorkshire during the Roman period. This characterisation of the organisation of metallurgical activities is achieved through a combined methodology that gathers data from grey literature and published literature, as well as chemical, visual and microstructural analysis of metallurgical debris. The metallurgical practices in the study area are primarily rural in nature. The results are interpreted through the lenses of Agency, Habitus, and the social construction of craft production. The movement of materials and people within the study area, together with local specialist practices, is central to the interpretation of regional metalworking practices. Furthermore, existing models of craft production are critiqued, and an alternative modelling process is suggested to characterise and understand the organisation of metal production in Roman South Yorkshire
Industry 4.0: product digital twins for remanufacturing decision-making
Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as being vital to the circular economy (CE) as it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product's life cycle.
The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model's architecture was derived using a bottom-up approach in which requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset's through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimation and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information. Existing IoT components provide rudimentary "smart" capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs to be further explored
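The planning step described above (a search algorithm consuming neural-network remaining-life estimates) can be sketched as a toy optimisation. Everything here is a hypothetical illustration: the component names, remaining-life scores, cost model, and the position-dependent degradation penalty are my own assumptions, not the thesis's data.

```python
from itertools import permutations

# Toy sketch of the DT planning step: search operation orderings for the
# cheapest remanufacturing plan. remaining_life stands in for the neural
# network's 0..1 health estimates; all numbers are assumed.
remaining_life = {"bearing": 0.2, "gear": 0.7, "shaft": 0.9}
base_cost = {"bearing": 50.0, "gear": 80.0, "shaft": 120.0}

def plan_cost(order):
    """Refurbishment cost; components processed later accrue a small
    extra degradation penalty (assumed 5% of base cost per step)."""
    return sum(base_cost[c] * (1 - remaining_life[c] + 0.05 * i)
               for i, c in enumerate(order))

best = min(permutations(remaining_life), key=plan_cost)
print(best, round(plan_cost(best), 1))
```

With only three components an exhaustive search suffices; the thesis's point is that such a planner only becomes useful once the DT supplies trustworthy through-life health data to drive it.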