The Viability and Potential Consequences of IoT-Based Ransomware
With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested.
As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, so that a ransom note could be reliably delivered to a victim. Next, the viability of using "bricking" as a method of ransom was evaluated, whereby devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed.
For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While the use of IoT ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attack techniques discovered in this PhD research.
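The persistence findings above hinge on locating storage that survives a reboot. A minimal Python sketch of one such check is below; the function name and probe logic are illustrative assumptions, not taken from the thesis, and real persistence additionally requires the mount to be flash-backed rather than RAM-backed:

```python
import os
import tempfile

def find_writable_mounts(candidates):
    """Return the candidate mount points where a file can be created,
    read back, and removed. This probes only writability, so tmpfs
    mounts (which do not survive a reboot) would also pass."""
    writable = []
    for path in candidates:
        probe = os.path.join(path, ".persist_probe")
        try:
            with open(probe, "w") as fh:
                fh.write("probe")
            with open(probe) as fh:
                ok = fh.read() == "probe"
            os.remove(probe)
            if ok:
                writable.append(path)
        except OSError:
            # Unwritable or nonexistent mount point: skip it.
            continue
    return writable

# Example: a temporary directory is writable; a nonexistent path is not.
with tempfile.TemporaryDirectory() as tmp:
    writable = find_writable_mounts([tmp, "/nonexistent/mount"])
```

A real survey would walk `/proc/mounts` and cross-check the filesystem type, but the probe-and-read-back step is the core of it.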
In-situ crack and keyhole pore detection in laser directed energy deposition through acoustic signal and deep learning
Cracks and keyhole pores are detrimental defects in alloys produced by laser
directed energy deposition (LDED). Laser-material interaction sound may hold
information about underlying complex physical events such as crack propagation
and pore formation. However, due to the noisy environment and intricate signal
content, acoustic-based monitoring in LDED has received little attention. This
paper proposes a novel acoustic-based in-situ defect detection strategy in
LDED. The key contribution of this study is to develop an in-situ acoustic
signal denoising, feature extraction, and sound classification pipeline that
incorporates convolutional neural networks (CNN) for online defect prediction.
Microscope images are used to identify the locations of cracks and keyhole
pores within a part. The defect locations are spatiotemporally registered with
the acoustic signal. Various acoustic features corresponding to defect-free
regions, cracks, and keyhole pores are extracted and analysed in time-domain,
frequency-domain, and time-frequency representations. The CNN model is trained
to predict defect occurrences using the Mel-Frequency Cepstral Coefficients
(MFCCs) of the laser-material interaction sound. The CNN model is compared to
various classic machine learning models trained on the denoised acoustic
dataset and raw acoustic dataset. The validation results show that the CNN
model trained on the denoised dataset outperforms others with the highest
overall accuracy (89%), keyhole pore prediction accuracy (93%), and AUC-ROC
score (98%). Furthermore, the trained CNN model can be deployed into an
in-house developed software platform for online quality monitoring. The
proposed strategy is the first study to use acoustic signals with deep learning
for in-situ defect detection in the LDED process.

Comment: 36 pages, 16 figures; accepted at the journal Additive Manufacturing
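As a rough illustration of the time-domain end of such a pipeline, the Python sketch below smooths a raw signal and computes two classic per-frame features (RMS energy and zero-crossing rate). The paper's actual pipeline uses denoising, MFCCs, and a CNN, so this is a simplified stand-in, not the authors' method:

```python
import math

def moving_average(signal, window=5):
    """Denoise by moving-average smoothing (a stand-in for the
    paper's denoising stage, which is not reproduced here)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def frame_features(signal, frame_len=256):
    """Split a 1-D acoustic signal into non-overlapping frames and
    compute (RMS energy, zero-crossing rate) for each frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        rms = math.sqrt(sum(x * x for x in frame) / frame_len)
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        feats.append((rms, zcr))
    return feats
```

In the paper these features feed a classifier; here they only show the shape of the feature-extraction stage.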
Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective
This paper introduces a comprehensive, multi-stage machine learning
methodology that effectively integrates information systems and artificial
intelligence to enhance decision-making processes within the domain of
operations research. The proposed framework adeptly addresses common
limitations of existing solutions, such as the neglect of data-driven
estimation for vital production parameters, exclusive generation of point
forecasts without considering model uncertainty, and lacking explanations
regarding the sources of such uncertainty. Our approach employs Quantile
Regression Forests for generating interval predictions, alongside both local
and global variants of SHapley Additive Explanations for the examined
predictive process monitoring problem. The practical applicability of the
proposed methodology is substantiated through a real-world production planning
case study, emphasizing the potential of prescriptive analytics in refining
decision-making procedures. This paper accentuates the imperative of addressing
these challenges to fully harness the extensive and rich data resources
accessible for well-informed decision-making.
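The interval-prediction idea can be illustrated with a small Python sketch: given the per-tree predictions of an ensemble for one input, take empirical quantiles to form a prediction interval. Quantile Regression Forests actually derive quantiles from weighted leaf distributions, so this nearest-rank version is a simplification, not the paper's method:

```python
def quantile_interval(tree_predictions, lower=0.05, upper=0.95):
    """Empirical (lower, upper) prediction interval from an ensemble's
    per-tree predictions for a single input."""
    preds = sorted(tree_predictions)
    n = len(preds)

    def q(p):
        # Nearest-rank empirical quantile, clamped to valid indices.
        return preds[min(n - 1, max(0, int(p * n)))]

    return q(lower), q(upper)

# Example: 100 trees whose predictions spread uniformly over 0..99
# give a wide interval, reflecting high model uncertainty.
lo, hi = quantile_interval(list(range(100)))
```

The width of the interval is exactly the "model uncertainty" the paper argues point forecasts discard.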
Corporate Social Responsibility: the institutionalization of ESG
Understanding the impact of Corporate Social Responsibility (CSR) on firm performance in industries reliant on technological innovation is a complex and perpetually evolving challenge. To investigate this topic thoroughly, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand as an essentially standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into the modern, quantified iteration of ESG has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature on the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social-impact mitigation. Overall, this work contributes to the literature by filling gaps in our understanding of the nature of the impact that ESG has on firm performance, particularly from a management perspective.
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning and has already outpaced
the best human-designed architectures on many tasks. In the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.
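A common baseline against which NAS algorithms are measured is random search over a discrete search space. A minimal Python sketch follows; the search space and scoring function are toy assumptions, not from the survey:

```python
import random

def random_search(search_space, evaluate, budget=10, seed=0):
    """Baseline NAS strategy: sample architectures uniformly from a
    discrete search space and keep the best within the evaluation
    budget. `search_space` maps each design decision to its choices."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(budget):
        arch = {key: rng.choice(opts) for key, opts in search_space.items()}
        score = evaluate(arch)  # in practice: train and validate
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Toy example: three decisions, scored by a stand-in for accuracy.
space = {"depth": [2, 4, 8], "width": [16, 32], "op": ["conv", "sep_conv"]}
arch, score = random_search(space, lambda a: a["depth"] + a["width"], budget=20)
```

More sophisticated NAS methods (evolutionary, gradient-based, predictor-based) differ mainly in how they propose the next `arch` to evaluate.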
Countermeasures for the majority attack in blockchain distributed systems
Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to unique characteristics that make it ideal for recording, verifying, and managing information about different transactions. Despite this, Blockchain faces various security problems, among which the 51% attack, or majority attack, is one of the most important. It consists of one or more miners taking control of at least 51% of the hash power or computation mined in a network, so that a miner can arbitrarily manipulate and modify the information recorded in this technology. This work focused on designing and implementing strategies for detecting and mitigating majority attacks (51% attacks) in a distributed Blockchain system, based on characterizing the behaviour of miners. To achieve this, the Hash Rate / Share of Bitcoin and Ethereum miners was analysed and evaluated, followed by the design and implementation of a consensus protocol to control the computing power of miners. Subsequently, Machine Learning models were explored and evaluated to detect Cryptojacking-type malicious software.

Doctorado — Doctor en Ingeniería de Sistemas y Computación
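The detection side of this idea can be sketched in a few lines of Python: flag any miner whose share of the observed total hash rate reaches the majority threshold. The function name and threshold handling are illustrative, not the dissertation's protocol:

```python
def majority_attack_risk(hash_rates, threshold=0.51):
    """Return {miner: share} for miners (or pools) whose share of the
    total network hash rate meets or exceeds the majority-attack
    threshold. `hash_rates` maps a miner id to its observed hash rate."""
    total = sum(hash_rates.values())
    if total == 0:
        return {}
    return {
        miner: rate / total
        for miner, rate in hash_rates.items()
        if rate / total >= threshold
    }

# A miner holding 60% of the hash rate would be flagged;
# three miners at 30/30/40 would not.
flagged = majority_attack_risk({"pool_a": 60, "pool_b": 40})
```

A consensus-level mitigation, as studied in the thesis, would then act on the flagged miners, e.g. by capping their accepted work.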
Inferring networks from time series: a neural approach
Network structures underlie the dynamics of many complex phenomena, from gene
regulation and foodwebs to power grids and social media. Yet, as they often
cannot be observed directly, their connectivities must be inferred from
observations of their emergent dynamics. In this work we present a powerful and
fast computational method to infer large network adjacency matrices from time
series data using a neural network. Using a neural network provides uncertainty
quantification on the prediction in a manner that reflects both the
non-convexity of the inference problem as well as the noise on the data. This
is useful since network inference problems are typically underdetermined, and a
feature that has hitherto been lacking from network inference methods. We
demonstrate our method's capabilities by inferring line failure locations in
the British power grid from observations of its response to a power cut. Since
the problem is underdetermined, many classical statistical tools (e.g.
regression) will not be straightforwardly applicable. Our method, in contrast,
provides probability densities on each edge, allowing the use of hypothesis
testing to make meaningful probabilistic statements about the location of the
power cut. We also demonstrate our method's ability to learn an entire cost
matrix for a non-linear model from a dataset of economic activity in Greater
London. Our method outperforms OLS regression on noisy data in terms of both
speed and prediction accuracy, and it scales more favourably than OLS, whose
cost is cubic in the problem size. Since
our technique is not specifically engineered for network inference, it
represents a general parameter estimation scheme that is applicable to any
parameter dimension.
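The kind of probabilistic statement described above can be sketched in Python: given samples of each edge's inferred weight, estimate the probability that the edge is present and flag edges that are very likely absent. Names and thresholds here are illustrative assumptions, not the paper's implementation:

```python
def edge_present_probability(samples, weight_threshold=0.5):
    """Fraction of posterior-style samples in which the inferred edge
    weight clears the threshold, i.e. the estimated probability that
    the edge is present in the network."""
    return sum(1 for s in samples if s > weight_threshold) / len(samples)

def likely_failed_edges(edge_samples, alpha=0.05):
    """Edges whose presence probability is at most alpha -- the kind of
    probabilistic statement used to locate a line failure."""
    return [
        edge for edge, samples in edge_samples.items()
        if edge_present_probability(samples) <= alpha
    ]

# Example: line_1's sampled weights are almost always near zero,
# so it is flagged as the likely failure location.
samples = {"line_1": [0.0] * 99 + [1.0], "line_2": [1.0] * 100}
failed = likely_failed_edges(samples)
```

Classical point estimates give only a single adjacency matrix; carrying the full sample set is what makes this hypothesis-testing view possible.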
Latent Dirichlet allocation method-based nowcasting approach for prediction of silver price
Silver is a metal that offers significant value to both investors and companies. The purpose of this study is to estimate the price of silver. The estimation incorporates the frequency of Google Trends searches for the words that affect the silver price, with the aim of obtaining a more accurate estimate. First, using the Latent Dirichlet Allocation (LDA) method, the keywords to be analysed in Google Trends were collected from various articles on the Internet. Mining data from Google Trends combined with the information obtained by LDA is the new approach this study took to predict the price of silver; no study has been found in the literature that adopts this approach to estimate the price of silver. The estimation was carried out with Random Forest Regression, Gaussian Process Regression, Support Vector Machine, Regression Trees, and Artificial Neural Network methods. In addition, ARIMA, one of the traditional methods widely used in time-series analysis, was used to benchmark the accuracy of the methodology. The best MSE was obtained as 0.000227131 ± 0.0000235205 by the Regression Trees method. This score indicates that estimating the price of "Silver" from Google Trends data using the LDA method is a valid technique.
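As a toy illustration of the final regression step, the sketch below fits a closed-form simple linear regression from one Google-Trends-style search-volume series to prices and scores it with MSE. The study's actual models are Regression Trees and the other learners listed above; this stand-in only shows the shape of the fit-and-score loop:

```python
def fit_simple_ols(x, y):
    """Closed-form simple linear regression y ~ a + b*x, where x is a
    single feature series (e.g. weekly search volume for one keyword)
    and y is the target price series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def mse(y_true, y_pred):
    """Mean squared error, the score the study reports."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Example: a perfectly linear relationship recovers slope 2, MSE 0.
a, b = fit_simple_ols([1, 2, 3, 4], [2, 4, 6, 8])
```

The study's pipeline would do this per LDA-derived keyword series, with nonlinear learners in place of the closed-form fit.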
Annals [...].
Pedometrics: innovation in the tropics; Legacy data: how to turn it useful?; Advances in soil sensing; Pedometric guidelines for systematic soil surveys. Online event. Coordinated by: Waldir de Carvalho Junior, Helena Saraiva Koenow Pinheiro, Ricardo Simão Diniz Dalmolin
Cultivating Agrobiodiversity in the U.S.: Barriers and Bridges at Multiple Scales
The diversity of crops grown in the United States (U.S.) is declining, causing agricultural landscapes to become more and more simplified. This trend is concerning for the loss of important plant, insect, and animal species, as well as the pollution and degradation of our environment. Through three separate but related studies, this dissertation addresses the need to increase the diversity of these agricultural landscapes in the U.S., particularly through diversifying the type and number of crops grown. The first study uses multiple, openly accessible datasets related to agricultural land use and policies to document and visualize change over recent decades. Through this, I show that U.S. agriculture has gradually become more specialized in the crops grown, crop production is heavily concentrated in certain areas, and crop diversity is continuing to decline. Meanwhile, federal agricultural policy, while having become more influential over how U.S. agriculture operates, incentivizes this specialization. The second study uses nonlinear statistical modeling to identify and compare social, political, and ecological factors that best predict crop diversity across nine regions in the U.S. Factors of climate, prior land use, and farm inputs best predict diversity across regions, but regions show key differences in how factors are important, indicating that patterns at the regional scale constrain and enable further diversification. Finally, the third study relied on interviews with farmers and key informants in southern Idaho’s Magic Valley – a cluster of eight counties that is known to be agriculturally diverse. Interviews gauge what farmers are currently doing to manage crop diversity (the present) and how they imagine alternative landscapes (the imaginary). 
We found that farmers in the Magic Valley manage current diversity mainly through cover cropping and diverse crop rotations, but daily struggles and political barriers make experimenting with and imagining alternative landscapes difficult and unlikely to occur. Together, these three studies provide an integrated view of how and why U.S. agricultural landscapes simplify or diversify, as well as the barriers and bridges along such pathways of diversification.
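Crop diversity of the kind tracked in the first two studies is commonly quantified with the Shannon diversity index over crop acreage shares. A small Python sketch follows; the dissertation does not specify which index it uses, so this choice is an assumption:

```python
import math

def shannon_diversity(acreage):
    """Shannon diversity index H = -sum(p_i * ln p_i), where p_i is
    each crop's share of total acreage. H is 0 for a monoculture and
    grows as acreage spreads evenly across more crops."""
    total = sum(acreage.values())
    return -sum(
        (a / total) * math.log(a / total)
        for a in acreage.values()
        if a > 0
    )

# Four crops with equal acreage give H = ln(4); a monoculture gives 0.
h_diverse = shannon_diversity({"corn": 25, "wheat": 25, "beans": 25, "hay": 25})
h_mono = shannon_diversity({"corn": 100})
```

Computed per county and year, such an index is one way to make the "simplification" trend in the first study concrete.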