
    The Viability and Potential Consequences of IoT-Based Ransomware

    With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested. As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, so that a ransom note could be reliably delivered to the victim. Next, the viability of using "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed. For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications were retained after the device was rebooted. When researching privacy-invasion-based ransomware, several methods were created to extract information from data sources that are commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim. Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While the use of IoT ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attacking techniques discovered in this PhD research.

    Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion

    We apply a global sensitivity method, the Hilbert-Schmidt independence criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field to identify the most appropriate parameters for reparameterization. Parameter selection remains a challenge in this context, as high-dimensional optimizations are prone to overfitting and take a long time, but selecting too few parameters leads to poor-quality force fields. We show that the HSIC correctly and quickly identifies the most sensitive parameters, and that optimizations done using a small number of sensitive parameters outperform those done using a higher-dimensional reasonable-user parameter selection. Optimizations using only sensitive parameters: 1) converge faster, 2) have loss values comparable to those found with the naive selection, 3) have similar accuracy in validation tests, and 4) do not suffer from problems of overfitting. We demonstrate that an HSIC global sensitivity analysis is a cheap pre-processing step for optimization that has both qualitative and quantitative benefits, and which can substantially simplify and speed up ReaxFF reparameterizations. Comment: author accepted manuscript.
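
    As a rough sketch of the ranking step described above, HSIC-based sensitivity can be estimated by sampling parameter sets, evaluating the loss for each, and scoring every parameter by its HSIC with the loss using centered RBF Gram matrices. The toy loss, sampling ranges and median-heuristic bandwidths below are illustrative assumptions, not the ReaxFF objective or parameter bounds from the paper.

```python
import numpy as np

def rbf_gram(x, sigma):
    """Centered RBF Gram matrix for a 1-D sample vector x."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def hsic(x, y):
    """Biased HSIC estimate between two 1-D samples (median-heuristic bandwidths)."""
    sx = np.median(np.abs(x[:, None] - x[None, :])) + 1e-12
    sy = np.median(np.abs(y[:, None] - y[None, :])) + 1e-12
    n = len(x)
    return np.trace(rbf_gram(x, sx) @ rbf_gram(y, sy)) / (n - 1) ** 2

rng = np.random.default_rng(0)
n_samples, n_params = 200, 10
theta = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))   # sampled parameter sets

# Toy loss: only parameters 0 and 3 matter (a stand-in for the force-field objective).
loss = (theta[:, 0] - 0.4) ** 2 + 0.5 * np.sin(3 * theta[:, 3]) + 0.01 * rng.normal(size=n_samples)

scores = np.array([hsic(theta[:, j], loss) for j in range(n_params)])
ranking = np.argsort(scores)[::-1]
print("parameters ranked by HSIC sensitivity:", ranking)
```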

    Likelihood Asymptotics in Nonregular Settings: A Review with Emphasis on the Likelihood Ratio

    This paper reviews the most common situations where one or more regularity conditions which underlie classical likelihood-based parametric inference fail. We identify three main classes of problems: boundary problems, indeterminate parameter problems -- which include non-identifiable parameters and singular information matrices -- and change-point problems. The review focuses on the large-sample properties of the likelihood ratio statistic. We emphasize analytical solutions and acknowledge software implementations where available. We furthermore give summary insight into the tools that can be used to derive the key results. Other approaches to hypothesis testing and connections to estimation are listed in the annotated bibliography of the Supplementary Material.
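
    As a minimal illustration of the boundary problems reviewed above: when testing a normal mean constrained to be non-negative, the likelihood ratio statistic under the null follows a 50:50 mixture of a point mass at zero and a chi-squared(1), rather than the usual chi-squared(1). The simulation below is a standard textbook example, not one of the paper's case studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 50, 20000

# H0: mu = 0 against H1: mu >= 0, unit-variance normal data.
# The constrained MLE is max(xbar, 0), so the LRT statistic is
# W = n * max(xbar, 0)**2, distributed as 0.5*delta_0 + 0.5*chi2(1) under H0.
xbar = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)
W = n * np.maximum(xbar, 0.0) ** 2

# Compare simulated tail probabilities with the chi-bar-squared mixture.
for c in (1.0, 2.71, 3.84):
    empirical = (W > c).mean()
    mixture = 0.5 * stats.chi2.sf(c, df=1)
    print(f"P(W > {c:4.2f}): simulated {empirical:.4f}, 0.5*chi2_1 tail {mixture:.4f}")
```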

    Demystifying inertial specifications: supporting the inclusion of grid-followers

    Inertia provision from converters is often separated into two categories according to the control approach. Inertia from grid-followers (GFLs) is deemed to be "synthetic" due to a slow response. In contrast, grid-forming (GFM) inertia is deemed to be "true", and more useful for frequency stability, due to its faster provision. This paper analyses the distinctions between GFM and GFL inertia by carrying out parametric sweeps of each approach at different operating conditions. The analysis aims to assist the ongoing efforts to quantify grid-stabilising phenomena, particularly the recent adaptation of the GB grid code to incorporate GFM converters. The optimal tuning configurations are identified, showing that GFLs can achieve fast inertial provision that can contain the grid frequency as effectively as GFM inertia on strong grids, despite the opposing consensus in the literature. The simulations also highlight the importance of voltage-source behaviours in determining the initial evolution of grid frequency, suggesting that these features should be considered more explicitly by system operators and that GFLs should not be excluded so readily. Neglecting GFL control could limit the assets available to support the grid and inhibit the rate at which the net-zero transition can occur.
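
    The effect of the speed of converter frequency support on frequency containment can be sketched with a single-bus, per-unit swing equation in which the converter's support is delivered through a first-order lag. This is only a stylised stand-in for the parametric sweeps described above: the inertia constant, damping, droop gain and lag time constants below are illustrative assumptions, not the paper's converter models or GB grid-code parameters.

```python
import numpy as np

def nadir(H, tau, dP=-0.1, R=0.05, D=1.0, dt=0.001, t_end=10.0):
    """Single-bus swing equation (per unit) with droop-based support delivered
    through a first-order lag of time constant tau (converter response speed)."""
    f = 0.0        # frequency deviation (pu)
    p = 0.0        # delivered converter power (pu)
    worst = 0.0
    for _ in range(int(t_end / dt)):
        p_target = -f / R                      # droop set-point
        p += dt * (p_target - p) / tau         # first-order delivery lag
        dfdt = (dP + p - D * f) / (2.0 * H)    # swing equation
        f += dt * dfdt
        worst = min(worst, f)
    return 50.0 * worst                        # nadir deviation in Hz on a 50 Hz base

for label, tau in [("fast (GFM-like) response", 0.05), ("slow (GFL-like) response", 0.5)]:
    print(f"{label}: frequency nadir deviation = {nadir(H=4.0, tau=tau):.3f} Hz")
```

    Faster delivery (smaller tau) yields a shallower nadir, which is the qualitative point at issue; the paper's contribution is showing under which tunings and grid strengths the GFL response can match the GFM one.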

    Advancing Model Pruning via Bi-level Optimization

    The deployment constraints in practical applications necessitate the pruning of large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket Hypothesis (LTH), pruning also has the potential of improving their generalization ability. At the core of LTH, iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding 'winning tickets'. Yet, the computation cost of IMP grows prohibitively as the targeted pruning ratio increases. To reduce the computation overhead, various efficient 'one-shot' pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP. This raises the question of how to close the gap between pruning accuracy and pruning efficiency. To tackle it, we pursue the algorithmic advancement of model pruning. Specifically, we formulate the pruning problem from a fresh and novel viewpoint, bi-level optimization (BLO). We show that the BLO interpretation provides a technically-grounded optimization base for an efficient implementation of the pruning-retraining learning paradigm used in IMP. We also show that the proposed bi-level optimization-oriented pruning method (termed BiP) is a special class of BLO problems with a bi-linear problem structure. By leveraging such bi-linearity, we theoretically show that BiP can be solved as easily as first-order optimization, thus inheriting the computation efficiency. Through extensive experiments on both structured and unstructured pruning with 5 model architectures and 4 data sets, we demonstrate that BiP can find better winning tickets than IMP in most cases, and is computationally as efficient as the one-shot pruning schemes, demonstrating a 2-7 times speedup over IMP for the same level of model accuracy and sparsity. Comment: Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022).
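
    The bi-level structure described above can be sketched as an alternating scheme: a lower-level step retrains the unmasked weights, and an upper-level step updates mask scores with the retrained weights held fixed. The PyTorch toy below (a single linear layer, straight-through hard thresholding of the scores, and the chosen learning rates) is a generic illustration of this pruning-retraining structure, not the BiP algorithm itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randn(256, 1)
layer = nn.Linear(20, 1, bias=False)
scores = torch.randn_like(layer.weight, requires_grad=True)   # mask scores (upper level)
sparsity = 0.5
opt_w = torch.optim.SGD(layer.parameters(), lr=0.05)
opt_m = torch.optim.SGD([scores], lr=0.05)

def hard_mask(s):
    """Keep the top (1 - sparsity) fraction of scores; straight-through gradient."""
    k = int(s.numel() * (1 - sparsity))
    thresh = s.flatten().topk(k).values.min()
    m = (s >= thresh).float()
    return m + (s - s.detach())   # forward value is the 0/1 mask, gradient flows to s

for step in range(200):
    # Lower level: retrain the weights under the current (fixed) mask.
    loss_w = ((X @ (layer.weight * hard_mask(scores).detach()).T - y) ** 2).mean()
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()

    # Upper level: update the mask scores with the retrained weights held fixed.
    loss_m = ((X @ (layer.weight.detach() * hard_mask(scores)).T - y) ** 2).mean()
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()

kept = (hard_mask(scores) > 0.5).sum().item()
print(f"kept {kept} of {scores.numel()} weights at {sparsity:.0%} sparsity")
```

    The abstract's point is that the bi-linear structure lets both levels be handled with first-order updates; the sketch simply alternates plain gradient steps to show the shape of the computation.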

    Qluster: An easy-to-implement generic workflow for robust clustering of health data

    The exploration of health data by clustering algorithms makes it possible to better describe the populations of interest by identifying the sub-profiles that compose them. This reinforces medical knowledge, whether about a disease or about a targeted real-life population. Nevertheless, contrary to so-called conventional biostatistical methods, for which numerous guidelines exist, the standardization of data science approaches in clinical research remains a little-discussed subject. This results in significant variability in the execution of data science projects, both in the algorithms used and in the reliability and credibility of the designed approach. Favouring a parsimonious and judicious choice of both algorithms and implementations at each stage, this article proposes Qluster, a practical workflow for performing clustering tasks. This workflow strikes a compromise between (1) genericity of application (e.g. usable on small or big data, on continuous, categorical or mixed variables, and on high-dimensional or low-dimensional databases), (2) ease of implementation (few packages, algorithms and parameters needed, ...), and (3) robustness (e.g. use of proven algorithms and robust packages, evaluation of the stability of clusters, management of noise and multicollinearity). The workflow can be easily automated and/or routinely applied to a wide range of clustering projects. It can be useful both for data scientists with little experience in the field, to make data clustering easier and more robust, and for more experienced data scientists who are looking for a straightforward and reliable solution to routinely perform preliminary data mining. A synthesis of the literature on data clustering, as well as the scientific rationale supporting the proposed workflow, is also provided. Finally, a detailed application of the workflow on a concrete use case is provided, along with a practical discussion for data scientists. An implementation on the Dataiku platform is available upon request to the authors.
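
    A minimal version of such a parsimonious pipeline might scale the data, choose the number of clusters by silhouette score, and check cluster stability with a bootstrap, as sketched below. The scikit-learn components (StandardScaler, KMeans, adjusted Rand index) and the synthetic data are generic assumptions for illustration, not the specific algorithms or the Dataiku implementation recommended by Qluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=500, centers=4, n_features=6, random_state=0)
X = StandardScaler().fit_transform(X)

# 1) Choose k by silhouette score over a small candidate range.
sil = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X))
       for k in range(2, 8)}
best_k = max(sil, key=sil.get)

# 2) Fit the final model and assess stability with a simple bootstrap:
#    refit on resampled data and compare labels on the original points.
ref = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(X)
stability = []
for b in range(20):
    idx = rng.integers(0, len(X), len(X))
    km_b = KMeans(n_clusters=best_k, n_init=10, random_state=b).fit(X[idx])
    stability.append(adjusted_rand_score(ref.labels_, km_b.predict(X)))

print(f"best k = {best_k}, mean bootstrap ARI = {np.mean(stability):.2f}")
```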

    What do new performance metrics, VeDBA and Dynamic yaw, tell us about energy-intensive activities in whale sharks?

    During oscillatory dives, whale sharks (Rhincodon typus) expend varying levels of energy in active ascent and passive descent. They are expected to minimise movement costs by travelling at optimum speed unless they have reason to move faster, for example during feeding or evasion of danger. A proxy for power, dynamic body acceleration (DBA) has previously been used to identify whale shark movement patterns, but has not yet been used to identify occasions where power is elevated above minimum requirements. Fifty-nine hours of biologging data from 13 juvenile whale sharks (Ningaloo Reef, Western Australia), including depth, body pitch angle, magnetometry and DBA, were analysed to investigate minimum power requirements for dives and identify events of elevated power. Dynamic yaw (the rate of change of heading), a new proxy for power, was introduced to determine its effectiveness compared to the already-established DBA. The relationship between pitch angle and each of these two proxies was investigated to determine which was stronger. Dynamic yaw produced a poor relationship with pitch angle compared to DBA, and thus DBA was selected as the focus proxy for the remainder of the study. DBA was used to produce a minimum power trend versus body pitch angle using a convex hull analysis, which allowed for the identification of a proxy for power utilisation above the minimum (PAM). Sixteen instances of PAM were identified in the 59 hours of data, all of which could be considered instances where energy minimisation is not prioritised, such as during feeding or avoidance. The PAM method was capable of identifying instances where energy minimisation is not prioritised, and therefore has future implications for investigations of location-specific behaviours in relation to feeding and anthropogenic disturbance.
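
    For context, the two proxies compared above can be computed from standard biologging channels: VeDBA is the vector norm of the dynamic (gravity-removed) acceleration, and dynamic yaw can be taken as the absolute rate of change of heading. The synthetic signals, sampling rate and smoothing window below are illustrative assumptions rather than the tag configuration used in the study.

```python
import numpy as np
import pandas as pd

fs = 20  # assumed accelerometer/magnetometer sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)

# Synthetic tri-axial acceleration (g) and heading (degrees) standing in for tag data.
acc = pd.DataFrame({
    "ax": 0.1 * np.sin(2 * np.pi * 0.5 * t),
    "ay": 0.1 * np.cos(2 * np.pi * 0.5 * t),
    "az": 1.0 + 0.05 * np.sin(2 * np.pi * 0.5 * t),
})
heading = pd.Series(20 * np.sin(2 * np.pi * 0.05 * t))

# VeDBA: remove the static (gravity) component with a running mean, then take the vector norm.
static = acc.rolling(window=2 * fs, center=True, min_periods=1).mean()
dynamic = acc - static
vedba = np.sqrt((dynamic ** 2).sum(axis=1))

# Dynamic yaw: absolute rate of change of heading (degrees per second).
dynamic_yaw = heading.diff().abs() * fs

print(f"mean VeDBA = {vedba.mean():.4f} g, mean dynamic yaw = {dynamic_yaw.mean():.2f} deg/s")
```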

    Countermeasures for the majority attack in blockchain distributed systems

    Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to the unique characteristics that make it ideal for recording, verifying and managing information about different transactions. Despite this, Blockchain faces several security problems, the 51% or majority attack being one of the most important. In this attack, one or more miners take control of at least 51% of the hashing power or computation in a network, so that a miner can arbitrarily manipulate and modify the information recorded in this technology. This work focused on designing and implementing strategies for detecting and mitigating majority attacks (51% attacks) in a Blockchain distributed system, based on characterising the behaviour of the miners. To achieve this, the hash rate / share of Bitcoin and Ethereum miners was analysed and evaluated, followed by the design and implementation of a consensus protocol to control the miners' computing power. Subsequently, Machine Learning models were explored and evaluated for detecting cryptojacking-type malicious software. (Doctoral thesis: Doctor en Ingeniería de Sistemas y Computación.)
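
    One building block of the detection strategy described above is monitoring each miner's share of recent blocks as an estimate of its hash-rate share and flagging pools that approach a majority. The pool names and the 40% alert threshold in the sketch below are purely illustrative; the consensus-protocol changes and the cryptojacking classifier from the thesis are not reproduced here.

```python
from collections import Counter

# Hypothetical miner/pool attribution of the most recent blocks.
recent_blocks = ["poolA"] * 46 + ["poolB"] * 30 + ["poolC"] * 24

ALERT_SHARE = 0.40   # warn well before a 51% majority is reached

def hash_rate_shares(blocks):
    """Estimate each miner's hash-rate share from its fraction of recent blocks."""
    counts = Counter(blocks)
    total = len(blocks)
    return {miner: n / total for miner, n in counts.items()}

shares = hash_rate_shares(recent_blocks)
for miner, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    flag = "ALERT: approaching majority" if share >= ALERT_SHARE else "ok"
    print(f"{miner}: {share:.0%} of recent blocks ({flag})")
```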

    On Monte Carlo methods for the Dirichlet process mixture model, and the selection of its precision parameter prior

    Two issues commonly faced by users of Dirichlet process mixture models are: 1) how to appropriately select a hyperprior for its precision parameter alpha, and 2) the typically slow mixing of the MCMC chain produced by conditional Gibbs samplers based on its stick-breaking representation, as opposed to marginal collapsed Gibbs samplers based on the Polya urn, which have smaller integrated autocorrelation times. In this thesis, we analyse the most common approaches to hyperprior selection for alpha, we identify their limitations, and we propose a new methodology to overcome them. To address slow mixing, we first revisit three label-switching Metropolis moves from the literature (Hastie et al., 2015; Papaspiliopoulos and Roberts, 2008), improve them, and introduce a fourth move. Secondly, we revisit two i.i.d. sequential importance samplers which operate in the collapsed space (Liu, 1996; S. N. MacEachern et al., 1999), and we develop a new sequential importance sampler for the stick-breaking parameters of Dirichlet process mixtures, which operates in the stick-breaking space and which has minimal integrated autocorrelation time. Thirdly, we introduce the i.i.d. transcoding algorithm which, conditional on a partition of the data, can infer which specific stick in the stick-breaking construction each observation originated from. We use it as a building block to develop the transcoding sampler, which removes the need for label-switching Metropolis moves in the conditional stick-breaking sampler, as it uses the better-performing marginal sampler (or any other sampler) to drive the MCMC chain and augments its exchangeable partition posterior with conditional i.i.d. stick-breaking parameter inferences after the fact, thereby inheriting its shorter autocorrelation times.
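
    For readers unfamiliar with the stick-breaking representation the thesis works in, the weights of a Dirichlet process with precision parameter alpha can be generated by repeatedly breaking off Beta(1, alpha) fractions of the remaining stick. The truncation level and the value of alpha below are arbitrary choices for illustration.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking construction of Dirichlet process weights."""
    betas = rng.beta(1.0, alpha, size=truncation)
    # w_k = beta_k * prod_{j<k} (1 - beta_j): the fraction broken off what remains.
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=2.0, truncation=50, rng=rng)
print(f"first five weights: {np.round(w[:5], 3)}, total mass kept: {w.sum():.3f}")
```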

    Statistical phase estimation and error mitigation on a superconducting quantum processor

    Quantum phase estimation (QPE) is a key quantum algorithm, which has been widely studied as a method to perform chemistry and solid-state calculations on future fault-tolerant quantum computers. Recently, several authors have proposed statistical alternatives to QPE that have benefits on early fault-tolerant devices, including shorter circuits and better suitability for error mitigation techniques. However, practical implementations of the algorithm on real quantum processors are lacking. In this paper we practically implement statistical phase estimation on Rigetti's superconducting processors. We specifically use the method of Lin and Tong [PRX Quantum 3, 010318 (2022)] using the improved Fourier approximation of Wan et al. [PRL 129, 030503 (2022)], and applying a variational compilation technique to reduce circuit depth. We then incorporate error mitigation strategies including zero-noise extrapolation and readout error mitigation with bit-flip averaging. We propose a simple method to estimate energies from the statistical phase estimation data, which is found to improve the accuracy in final energy estimates by one to two orders of magnitude with respect to prior theoretical bounds, reducing the cost to perform accurate phase estimation calculations. We apply these methods to chemistry problems for active spaces up to 4 electrons in 4 orbitals, including the application of a quantum embedding method, and use them to correctly estimate energies within chemical precision. Our work demonstrates that statistical phase estimation has a natural resilience to noise, particularly after mitigating coherent errors, and can achieve far higher accuracy than suggested by previous analysis, demonstrating its potential as a valuable quantum algorithm for early fault-tolerant devices. Comment: 24 pages, 13 figures.
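
    Of the mitigation strategies mentioned above, zero-noise extrapolation is the simplest to illustrate: measure an expectation value at several artificially amplified noise levels and extrapolate back to the zero-noise limit. The depolarizing toy model and the linear fit below are assumptions made for illustration, not the Rigetti experiment or the Lin and Tong estimator itself.

```python
import numpy as np

rng = np.random.default_rng(0)
exact_value = 0.80        # noiseless expectation value we are trying to recover
p_error = 0.04            # assumed per-unit depolarizing strength

def noisy_expectation(scale, shots=4000):
    """Toy model: depolarizing noise shrinks the signal by (1 - p)**scale, plus shot noise."""
    signal = exact_value * (1.0 - p_error) ** scale
    return signal + rng.normal(0.0, 1.0 / np.sqrt(shots))

# Measure at noise-scale factors 1, 2, 3 (e.g. via gate folding) and extrapolate to scale 0.
scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])
slope, intercept = np.polyfit(scales, values, deg=1)

print(f"raw (scale 1): {values[0]:.3f}, extrapolated to zero noise: {intercept:.3f}, exact: {exact_value:.3f}")
```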