15,184 research outputs found

    The Viability and Potential Consequences of IoT-Based Ransomware

    With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested. As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, such that a ransom note could be reliably delivered to a victim. Next, the viability of using "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed. For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion-based ransomware, several methods were created to extract information from data sources commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim. Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While the use of IoT ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attacking techniques discovered in this PhD research.
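
    The persistence techniques summarised above rely on malicious filesystem modifications surviving a reboot, which also points to one of the simpler countermeasures: integrity checking of persistent storage. The sketch below is only an illustrative example of that idea, not a technique taken from the thesis; the watched partition, baseline location, and file layout are assumptions.

```python
# Illustrative integrity-check sketch (hypothetical paths): detect unexpected
# filesystem modifications that persist across reboots on an IoT device.
import hashlib
import json
from pathlib import Path

BASELINE = Path("/var/lib/integrity/baseline.json")  # hypothetical baseline store
WATCHED = Path("/overlay")                            # hypothetical persistent partition

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root: Path) -> dict:
    """Hash every regular file below root."""
    return {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}

def check() -> list:
    """Compare the current filesystem state against the stored baseline."""
    baseline = json.loads(BASELINE.read_text())
    current = snapshot(WATCHED)
    added = [p for p in current if p not in baseline]
    changed = [p for p in current if p in baseline and current[p] != baseline[p]]
    removed = [p for p in baseline if p not in current]
    return added + changed + removed

if __name__ == "__main__":
    suspicious = check()
    if suspicious:
        print("Unexpected persistent changes:", *suspicious, sep="\n  ")
```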

    Cosmology with one galaxy? -- The ASTRID model and robustness

    Recent work has pointed out the potential existence of a tight relation between the cosmological parameter Ω_m, at fixed Ω_b, and the properties of individual galaxies in state-of-the-art cosmological hydrodynamic simulations. In this paper, we investigate whether such a relation also holds for galaxies from simulations run with a different code that makes use of distinct subgrid physics: Astrid. We find that, also in this case, neural networks are able to infer the value of Ω_m with ∼10% precision from the properties of individual galaxies, while accounting for astrophysics uncertainties as modeled in CAMELS. This tight relationship is present at all considered redshifts, z ≤ 3, and the stellar mass, the stellar metallicity, and the maximum circular velocity are among the most important galaxy properties behind the relation. In order to use this method with real galaxies, one needs to quantify its robustness: the accuracy of the model when tested on galaxies generated by codes different from the one used for training. We quantify the robustness of the models by testing them on galaxies from four different codes: IllustrisTNG, SIMBA, Astrid, and Magneticum. We show that the models perform well on a large fraction of the galaxies, but fail dramatically on a small fraction of them. Removing these outliers significantly improves the accuracy of the models across simulation codes. (Comment: 16 pages, 12 figures)
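
    As a rough illustration of the kind of inference described above, the sketch below trains a small neural-network regressor that maps per-galaxy properties (stellar mass, stellar metallicity, maximum circular velocity) to Ω_m. It is a minimal example under stated assumptions, not the paper's pipeline; the input file, column names, and network size are hypothetical.

```python
# Minimal sketch: infer Omega_m from individual-galaxy properties.
# Assumes a CAMELS-like table with one row per galaxy (hypothetical columns).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("astrid_galaxies.csv")  # hypothetical export of galaxy properties
features = ["stellar_mass", "stellar_metallicity", "vmax"]
X = np.log10(df[features] + 1e-12)       # log-scale the positive galaxy properties
y = df["omega_m"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rel_err = np.mean(np.abs(pred - y_test) / y_test)  # compare with the quoted ~10% precision
print(f"mean relative error: {rel_err:.2%}")
```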

    Intra-annual taxonomic and phenological drivers of spectral variance in grasslands

    According to the Spectral Variation Hypothesis (SVH), spectral variance has the potential to predict taxonomic composition in grasslands over time. However, in previous studies the relationship has been found to be unstable. We hypothesise that the diversity of phenological stages is also a driver of spectral variance and could act to confound the species signal. To test this concept, intra-annual repeat spectral and botanical sampling was performed at the quadrat scale at two grassland sites, one displaying high species diversity and the other low species diversity. Six botanical metrics were used, three taxonomy-based and three phenology-based. Using uni-temporal linear permutation models, we found that the SVH only held at the high-diversity site, and only for certain metrics and at particular time points. We tested the seasonal influence of the taxonomic and phenological metrics on spectral variance using linear mixed models. A significant interaction term between percent mature leaves and species diversity was found, with the most parsimonious model explaining 43% of the intra-annual change. These results indicate that the dominant canopy phenology stage is a confounding variable when examining the spectral variance–species diversity relationship. We emphasise the challenges that exist in tracking species- or phenology-based metrics in grasslands using spectral variance, but encourage further research that contextualises spectral variance data within seasonal plant development alongside other canopy structural and leaf traits.
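
    The linear mixed model described above can be sketched roughly as follows. This is an illustrative specification rather than the study's exact model; the data file, column names, and grouping variable are assumptions.

```python
# Hedged sketch of a linear mixed model relating spectral variance to species
# diversity, the proportion of mature leaves, and their interaction.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("quadrat_sampling.csv")  # hypothetical: one row per quadrat per visit

# Quadrat is treated as the random (grouping) factor across repeat visits.
model = smf.mixedlm(
    "spectral_variance ~ species_diversity * pct_mature_leaves",
    data=df,
    groups=df["quadrat_id"],
)
result = model.fit()
print(result.summary())
```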

    Preferentialism and the conditionality of trade agreements. An application of the gravity model

    Modern economic growth is driven by international trade, and the preferential trade agreement constitutes the primary fit-for-purpose mechanism of choice for establishing, facilitating, and governing its flows. However, too little attention has been afforded to the differences in content and conditionality associated with different trade agreements. This has led to an under-considered mischaracterisation of the design-flow relationship. Similarly, while the relationship between trade facilitation and trade is clear, the way trade facilitation affects other areas of economic activity, with respect to preferential trade agreements, has received considerably less attention. In light of an increasingly globalised and interdependent trading system, the interplay between trade facilitation and foreign direct investment is of particular importance. Accordingly, this thesis explores the bilateral trade and investment effects of specific conditionality sets, as established within Preferential Trade Agreements (PTAs).

    Chapter one utilises recent content-condition indexes for depth, flexibility, and constraints on flexibility, established by Dür et al. (2014) and Baccini et al. (2015), within a gravity framework to estimate the average treatment effect of trade agreement characteristics across bilateral trade relationships in the Association of Southeast Asian Nations (ASEAN) from 1948 to 2015. This chapter finds that the composition of a given ASEAN trade agreement's characteristic set has significantly determined the concomitant bilateral trade flows. Conditions determining the classification of a trade agreement's depth are positively associated with an increase in bilateral trade, representing the further removal of trade barriers and frictions facilitated by deeper trade agreements. Flexibility conditions, and constraints on flexibility conditions, are also identified as significant determinants of a given trade agreement's treatment effect on subsequent bilateral trade flows. Given the political nature of their inclusion (i.e., addressing short-term domestic discontent), this influence on trade flows is negative. These results highlight the longer implementation periods and time frames required for trade impediments to be removed in a market with higher domestic uncertainty.

    Chapter two explores the incorporation of non-trade issue (NTI) conditions in PTAs. Such conditions are increasing at both the intensive and extensive margins. There is a concern from developing nations that this growth of NTI inclusions serves as a way for high-income (HI) nations to dictate the trade agenda, such that developing nations are subject to 'principled protectionism'. There is evidence that NTI provisions are partly driven by protectionist motives, but the effect on trade flows remains largely undiscussed. Utilising the gravity model for trade, I test Lechner's (2016) comprehensive NTI dataset for 202 bilateral country pairs across a 32-year timeframe and find that, on average, NTIs are associated with an increase in bilateral trade. This boost can primarily be attributed to the market access that a PTA utilising NTIs facilitates. In addition, these results align theoretically with discussions on market harmonisation, shared values, and the erosion of artificial production advantages. Instead of inhibiting trade through burdensome costs, NTIs act to support a more stable production and trading environment, motivated by enhanced market access. Employing a novel classification to capture the power supremacy associated with shaping NTIs, this chapter highlights that the positive impact of NTIs is largely driven by the relationship between HI nations and middle-to-low-income (MTLI) counterparts.

    Chapter three employs the gravity model, theoretically augmented for foreign direct investment (FDI), to estimate the effects of trade facilitation conditions, utilising indexes established by Neufeld (2014) and the bilateral FDI data curated by UNCTAD (2014). The resulting dataset covers 104 countries over a period of 12 years (2001–2012) and contains 23,640 observations. The results highlight the bilateral-FDI-enhancing effects of trade facilitation conditions in the ASEAN context, aligning with the theoretical branch of the FDI-PTA literature that has outlined how the ratification of a trade agreement results in increased and positive economic prospects between partners (Medvedev, 2012), owing to the interrelation between trade and investment within an improving regulatory environment. The results align with the expectation that an enhanced trade facilitation landscape (one in which such formalities, procedures, information, and expectations around trade facilitation are conditioned for) incentivises and attracts FDI.
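
    A gravity specification of the kind discussed above is often estimated as a Poisson pseudo-maximum-likelihood (PPML) regression of bilateral flows on agreement-content indices plus exporter, importer, and year fixed effects. The sketch below is illustrative only; the thesis's exact estimator, controls, data file, and variable names are assumptions here.

```python
# Hedged PPML gravity sketch: trade flows on log distance and PTA content
# indices (depth, flexibility, constraints) with dyad-panel fixed effects.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("bilateral_trade_panel.csv")  # hypothetical dyad-year panel

model = smf.glm(
    "trade_flow ~ log_distance + depth_index + flexibility_index"
    " + constraint_index + C(exporter) + C(importer) + C(year)",
    data=df,
    family=sm.families.Poisson(),
)
result = model.fit(cov_type="HC1")  # robust standard errors
print(result.summary())
```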

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, methods will sometimes capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstruction, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counterfactual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
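
    As a toy illustration of the general idea, the sketch below factorises a mel-spectrogram sequence into a per-frame "content" code and a single utterance-level "speaker" code with a two-branch autoencoder. This is not a model from the thesis; the architecture, dimensions, and reconstruction-only objective are arbitrary assumptions.

```python
# Toy two-branch autoencoder: separate time-varying content from an
# utterance-level speaker embedding (illustrative only).
import torch
import torch.nn as nn

class DisentanglingAutoencoder(nn.Module):
    def __init__(self, n_mels=80, content_dim=32, speaker_dim=64):
        super().__init__()
        self.content_enc = nn.GRU(n_mels, content_dim, batch_first=True)
        self.speaker_enc = nn.GRU(n_mels, speaker_dim, batch_first=True)
        self.decoder = nn.GRU(content_dim + speaker_dim, n_mels, batch_first=True)

    def forward(self, mels):                       # mels: (batch, time, n_mels)
        content, _ = self.content_enc(mels)        # per-frame content code
        _, speaker = self.speaker_enc(mels)        # final hidden state as speaker code
        speaker = speaker[-1]                      # (batch, speaker_dim)
        speaker = speaker.unsqueeze(1).expand(-1, mels.size(1), -1)
        recon, _ = self.decoder(torch.cat([content, speaker], dim=-1))
        return recon, content, speaker

model = DisentanglingAutoencoder()
dummy = torch.randn(4, 200, 80)                    # 4 utterances, 200 frames each
recon, content, speaker = model(dummy)
loss = nn.functional.mse_loss(recon, dummy)        # reconstruction objective only
print(recon.shape, loss.item())
```

    In this toy setup, voice conversion would amount to decoding one utterance's content code together with another utterance's speaker code; in practice, additional objectives are needed to keep the two codes from leaking into one another.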

    Towards a more just refuge regime: quotas, markets and a fair share

    The international refugee regime is beset by two problems: responsibility for refuge falls disproportionately on a few states, and many of those owed refuge do not receive it. In this work, I explore remedies to these problems. One is a quota distribution, wherein responsibilities are distributed to states via allotment. Another is a marketized quota system, wherein states are free to buy and sell their allotments with others. I explore these in three parts. In Part 1, I develop the prime principles upon which a just regime is built and with which alternatives can be adjudicated. The first and most important principle – ‘Justice for Refugees’ – stipulates that a just regime provides refuge for all who have a basic interest in it. The second principle – ‘Justice for States’ – stipulates that a just distribution of refuge responsibilities among states is one that is capacity-considerate. In Part 2, I take up several vexing questions regarding the distribution of refuge responsibilities among states in a collective effort. First, what is a state’s ‘fair share’? The answer requires the determination of some logic – some metric – with which a distribution is determined. I argue that one popular method in the political theory literature – a GDP-based distribution – is normatively unsatisfactory. In its place, I posit several alternative metrics that are more attuned to the principles of justice but absent from the political theory literature: GDP adjusted for Purchasing Power Parity and the Human Development Index. I offer an exploration of both of these. Second, are states required to ‘take up the slack’ left by defaulting peers? Here, I argue that duties of help remain intact in cases of partial compliance among states in the refuge regime, but that political concerns may require that such duties be applied with caution. I submit that a market instrument offers one practical solution to this problem, as well as other advantages. In Part 3, I take aim at marketization and grapple with its many pitfalls: that marketization is commodifying, that it is corrupting, and that it offers little advantage in providing quality protection for refugees. In addition to these, I apply a framework of moral markets developed by Debra Satz. I argue that a refuge market may satisfy Justice Among States, but that it violates refugees’ welfare interest in remaining free of degrading and discriminatory treatment.
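
    To make the metric-based distribution concrete, the toy calculation below allocates a hypothetical total among three invented states in proportion to a capacity metric such as GDP adjusted for Purchasing Power Parity. The figures are invented purely for illustration and do not reflect the thesis's actual metrics or weights.

```python
# Toy quota calculation: shares proportional to a capacity metric
# (all numbers are invented for illustration).
gdp_ppp = {          # hypothetical GDP (PPP), in billions
    "StateA": 4200,
    "StateB": 1500,
    "StateC": 300,
}
total_refugees = 100_000

total_capacity = sum(gdp_ppp.values())
quotas = {state: round(total_refugees * capacity / total_capacity)
          for state, capacity in gdp_ppp.items()}
print(quotas)   # each state's share is proportional to its capacity metric
```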

    Underwater optical wireless communications in turbulent conditions: from simulation to experimentation

    Underwater optical wireless communication (UOWC) is a technology that aims to apply high-speed optical wireless communication (OWC) techniques to the underwater channel. UOWC has the potential to provide high-speed links over relatively short distances as part of a hybrid underwater network, along with radio frequency (RF) and underwater acoustic communication (UAC) technologies. However, there are some difficulties involved in developing a reliable UOWC link, namely the complexity of the channel. The main focus throughout this thesis is to develop a greater understanding of the effects of the UOWC channel, especially underwater turbulence. This understanding is developed from basic theory through to simulation and experimental studies in order to gain a holistic understanding of turbulence in the UOWC channel. This thesis first presents a method of modelling optical underwater turbulence through simulation that allows it to be examined in conjunction with absorption and scattering. In a stationary channel, this turbulence-induced scattering is shown to cause an increase in both spatial and temporal spreading at the receiver plane. Using the presented technique, it is also demonstrated that the relative impact of turbulence on a received signal is lower in a highly scattering channel, showing an in-built resilience of these channels. Received intensity distributions are presented, confirming that fluctuations in received power from this method follow the commonly used Log-Normal fading model. The impact of turbulence, as measured using this new modelling framework, on link performance, in terms of maximum achievable data rate and bit error rate, is equally investigated. Following that, experimental studies are presented comparing both the relative impact of turbulence-induced scattering on coherent and non-coherent light propagating through water and the relative impact of turbulence in different water conditions. It is shown that the scintillation index increases with increasing temperature inhomogeneity in the underwater channel. These results indicate that a light beam from a non-coherent source has a greater resilience to temperature-inhomogeneity-induced turbulence effects in an underwater channel. These results will help researchers simulate realistic channel conditions when modelling a light-emitting diode (LED) based intensity modulation with direct detection (IM/DD) UOWC link. Finally, a comparison of different modulation schemes in still and turbulent water conditions is presented. Using an underwater channel emulator, it is shown that pulse position modulation (PPM) and subcarrier intensity modulation (SIM) have an inherent resilience to turbulence-induced fading, with SIM achieving higher data rates under all conditions. The signal processing technique termed pair-wise coding (PWC) is applied to SIM in underwater optical wireless communications for the first time. The performance of PWC is compared with the state-of-the-art bit- and power-loading optimisation algorithm. Using PWC, a maximum data rate of 5.2 Gbps is achieved in still water conditions.
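
    As a small illustration of the fading statistics mentioned above, the sketch below estimates the scintillation index from simulated received-intensity samples and checks a log-normal fit. It is illustrative only; the thesis's simulation and measurement pipelines are more involved, and the fading parameter here is an assumption.

```python
# Hedged sketch: scintillation index and log-normal fit for simulated
# received-intensity fluctuations (illustrative parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic received intensity with log-normal fading (sigma is an assumption).
sigma = 0.3
intensity = np.exp(rng.normal(loc=0.0, scale=sigma, size=100_000))

# Scintillation index: normalised intensity variance, <I^2>/<I>^2 - 1.
si = np.mean(intensity**2) / np.mean(intensity) ** 2 - 1
print(f"scintillation index: {si:.4f}")

# Goodness of fit of the log-normal model to the simulated fluctuations.
shape, loc, scale = stats.lognorm.fit(intensity, floc=0)
ks = stats.kstest(intensity, "lognorm", args=(shape, loc, scale))
print(f"log-normal KS statistic: {ks.statistic:.4f}")
```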

    Towards an understanding of effortful fundraising experiences: using interpretative phenomenological analysis in fundraising research

    Physical-activity-oriented community fundraising has experienced exponential growth in popularity over the past 15 years. The aim of this study was to explore the value of effortful fundraising experiences from the point of view of participants, and to explore the impact that these experiences have on people’s lives. This study used an Interpretative Phenomenological Analysis (IPA) approach to interview 23 individuals, recognising the role of participants as proxy (non-professional) fundraisers for charitable organisations and the unique organisation–donor dynamic that this creates. It also brought together relevant psychological theory related to physical-activity fundraising experiences (through a narrative literature review) and used primary interview data to substantiate these theories. Effortful fundraising experiences are examined in detail to understand their significance to participants and how such experiences influence their connection with a charity or cause. This was done with an idiographic focus at first, before examining convergences and divergences across the sample. This study found that effortful fundraising experiences can have a profound positive impact upon community fundraisers in both the short and the long term. Additionally, it found that these experiences can be opportunities for charitable organisations to create lasting, meaningful relationships with participants and to foster mutually beneficial lifetime relationships with them. Further research is needed to test specific psychological theory in this context, including self-esteem theory, self-determination theory, and the martyrdom effect (among others).