The Viability and Potential Consequences of IoT-Based Ransomware
With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested.
As part of this work, various techniques that ransomware developers could use to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, such that a ransom note could be reliably delivered. Next, the viability of "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed.
For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While IoT-based ransomware is still very uncommon "in the wild", the techniques demonstrated in this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attacking techniques discovered in this PhD research.
Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference
We propose Conditional Adapter (CoDA), a parameter-efficient transfer
learning method that also improves inference efficiency. CoDA generalizes
beyond standard adapter approaches to enable a new way of balancing speed and
accuracy using conditional computation. Starting with an existing dense
pretrained model, CoDA adds sparse activation together with a small number of
new parameters and a light-weight training phase. Our experiments demonstrate
that the CoDA approach provides an unexpectedly efficient way to transfer
knowledge. Across a variety of language, vision, and speech tasks, CoDA
achieves a 2x to 8x inference speed-up compared to the state-of-the-art Adapter
approach with moderate to no accuracy loss and the same parameter efficiency.
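The conditional-computation idea can be sketched minimally: a learned router scores the tokens, and only the top-k highest-scoring tokens pass through the adapter bottleneck, while the rest are copied through unchanged. The sketch below is a simplified illustration under stated assumptions (hard top-k routing, ReLU bottleneck, invented names), not CoDA's actual parameterization, which learns a soft, differentiable router.

```python
import numpy as np

def conditional_adapter(tokens, w_router, W_down, W_up, k):
    """Illustrative sketch: route only the top-k tokens through an adapter.

    tokens: (n, d) activations; w_router: (d,) router weights;
    W_down: (d, r) and W_up: (r, d) bottleneck projections.
    All names are hypothetical, not CoDA's real parameters.
    """
    scores = tokens @ w_router                 # (n,) routing logits
    idx = np.argsort(scores)[-k:]              # the k selected tokens
    out = tokens.copy()                        # unselected tokens skip the adapter
    h = np.maximum(tokens[idx] @ W_down, 0.0)  # down-projection + ReLU
    out[idx] = tokens[idx] + h @ W_up          # residual adapter update
    return out

rng = np.random.default_rng(0)
n, d, r, k = 8, 16, 4, 2
x = rng.normal(size=(n, d))
y = conditional_adapter(x, rng.normal(size=d),
                        0.1 * rng.normal(size=(d, r)),
                        0.1 * rng.normal(size=(r, d)), k)
```

Because the bottleneck matmuls run on only k of n tokens, adapter compute shrinks roughly by a factor of n/k, which is the intuition behind speed-ups of the kind reported above.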
Technical Dimensions of Programming Systems
Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet, this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art.
In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too.
We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them.
We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions.
Our framework is derived from a qualitative analysis of past programming systems. We outline two concrete ways of using it. First, we show how it can be used to analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems.
Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable, and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and thus evaluable and comparable only on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
A Design Science Research Approach to Smart and Collaborative Urban Supply Networks
Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness.
A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence applications in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense.
Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.
On the Mechanism of Building Core Competencies: a Study of Chinese Multinational Port Enterprises
This study aims to explore how Chinese multinational port enterprises (MNPEs) build
their core competencies. Core competencies are firms’ special capabilities and sources
of sustainable competitive advantage (SCA) in the marketplace, and the concept has led
to extensive research and debates. However, few studies include inquiries about the
mechanisms of building core competencies in the context of Chinese MNPEs.
Accordingly, answers were sought to three research questions:
1. What are the core competencies of the Chinese MNPEs?
2. What are the mechanisms that the Chinese MNPEs use to build their core
competencies?
3. What are the paths that the Chinese MNPEs pursue to build their resources bases?
The study adopted a multiple-case study design, focusing on the mechanism of building
core competencies through the lens of the resource-based view (RBV). It purposively selected five leading Chinese MNPEs
and three industry associations as Case Companies.
The study revealed three main findings. First, it identified three generic core
competencies possessed by Case Companies, i.e., innovation in business models and
operations, utilisation of technologies, and acquisition of strategic resources. Second,
it developed the conceptual framework of the Mechanism of Building Core
Competencies (MBCC), which describes a process of change in a firm’s collective
learning about the effective and efficient utilisation of its resources in response to critical events.
Third, it proposed three paths to build core competencies, i.e., enhancing collective
learning, selecting sustainable processes, and building resource base.
The study contributes to the knowledge of core competencies and RBV in three ways:
(1) presenting three generic core competencies of the Chinese MNPEs, (2) proposing
a new conceptual framework to explain how Chinese MNPEs build their core
competencies, (3) suggesting a solid anchor point (MBCC) to explain the links among
resources, core competencies, and SCA. The findings set benchmarks for Chinese
logistics industry and provide guidelines for building core competencies.
AI-based Conversational Agents for Customer Service – A Study of Customer Service Representatives’ Perceptions Using TAM 2
This study aimed to identify the various factors that may influence customer service representatives’ perceptions of artificial intelligence (AI)-based conversational agents (CAs) for customer service. By analyzing 180 publications, a conceptual research model comprising ten factors was developed to identify these influencing factors. The study is grounded in the Technology Acceptance Model 2 (TAM 2). The research model was empirically evaluated with survey data from 128 participants. Our results show that the direct positive effect of subjective norm on customer service representatives’ perception of using AI-based CAs in customer service decreases with increasing experience. Moreover, our results reveal new insights regarding trust. The results of this study provide an overview of the predominant characteristics of the factors influencing customer service representatives’ perceptions of AI-based CAs for customer service.
Credibility of Cyber Threat Communication on Twitter – Expert Evaluation of Indicators for Automated Credibility Assessment
Computer Emergency Response Teams (CERTs) are experts responsible for managing cybersecurity incidents. To identify cyber threats, they consider a wide range of sources, from official vulnerability databases to public sources such as Twitter, which has an active cybersecurity community. Due to the high number of topic-related tweets per day, credibility assessment represents an immense effort in the daily work of CERTs. Although approaches for automated credibility assessment have been developed in previous research, these mainly take peripheral cues into account, even though users with domain expertise and a high level of personal involvement also assess content-related cues. We therefore conducted interviews with CERT members to re-evaluate known indicators for automated credibility assessment from an expert perspective. In doing so, we contribute valuable insights to the development of automated approaches for credibility assessment targeting users with high domain knowledge and personal involvement.
Antecedents of business intelligence system use
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Organisational reliance on information has become vital for organisational competitiveness. With increasing data volumes, Business Intelligence (BI) becomes a cornerstone of the decision-support system. However, employee resistance to using Business Intelligence Systems (BIS) is evident. This creates a problem for organisations in realising the benefits of BIS. It is thus important to study the enablers of sustained use of BIS amongst employees.
This thesis identifies existing theories that can be used to study BI system use. It integrates and extends technology use theories through a framework focusing on Business Intelligence System Use (BISU). Empirical research is then conducted in Kuwait’s telecom and banking industries through a close-ended, self-administered questionnaire using a five-point Likert scale. Responses were received from 211 BI users. The data was analysed using SmartPLS to study the convergent and discriminant validity and reliability. Partial least squares structural equation modelling (PLS-SEM) was used to study the direct and indirect relationships between constructs and answer the hypotheses. In addition to SmartPLS, SPSS was used for descriptive analysis.
The results indicated that the UTAUT (Unified Theory of Acceptance and Use of Technology) factors of performance expectancy, effort expectancy, and social influence positively impact BI system use. Voluntariness of use was found to positively moderate the relationship between social influence and BI system use. Furthermore, BI system quality positively impacts both performance expectancy and effort expectancy. The BI user’s self-efficacy also positively impacts effort expectancy. In addition, social influence was found to be positively influenced by organisational factors, namely top management support and information culture.
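A moderation effect of this kind can be illustrated in regression terms, where the moderator enters as an interaction term whose coefficient captures how the focal effect changes with the moderator. The sketch below uses ordinary least squares on synthetic data (all variable names and numbers are invented for illustration; the thesis itself used PLS-SEM in SmartPLS, not this OLS sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 211  # matches the study's sample size; the data here is synthetic
social_influence = rng.normal(size=n)
voluntariness = rng.normal(size=n)
# Simulated outcome with a built-in positive interaction effect (0.3)
bi_use = (0.4 * social_influence + 0.2 * voluntariness
          + 0.3 * social_influence * voluntariness
          + rng.normal(scale=0.5, size=n))

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), social_influence, voluntariness,
                     social_influence * voluntariness])
coef, *_ = np.linalg.lstsq(X, bi_use, rcond=None)
# coef[3] > 0 indicates positive moderation: social influence matters
# more when voluntariness of use is higher
```

The interaction coefficient recovered from the fit is what a moderation hypothesis tests; PLS-SEM reaches the analogous conclusion through latent-variable path estimates rather than raw OLS.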
The findings of this research contribute to the literature by determining and quantifying the factors that influence BISU through the lens of employee perspectives. This thesis also explains how employees’ object-based beliefs about BI affect their behavioural beliefs, which in turn impact BISU. Limitations of this research include the omission of UTAUT’s facilitating conditions and the limited variance of respondent demographics.
A Model for Automated Support of the Recognition, Extraction, Customization, and Reconstruction of Static Charts
Data charts are widely used in our daily lives, being present in regular media,
such as newspapers, magazines, web pages, books, and many others. A well-constructed
data chart leads to an intuitive understanding of its underlying data;
conversely, when a chart embodies poor design choices, a redesign
of the representation may be needed. However, in most cases, these
charts are shown as a static image, which means that the original data are not
usually available. Therefore, automatic methods could be applied to extract the
underlying data from the chart images to allow these changes. The task of
recognizing charts and extracting data from them is complex, largely due to the
variety of chart types and their visual characteristics.
Computer Vision techniques for image classification and object detection are
widely used for the problem of recognizing charts, but mostly on clean,
undistorted images. Most works in the literature ignore features of
real-world images that make this task harder, such as photographic
distortion, noise, and misalignment. Two computer vision techniques that can
assist here and have been little explored in this context are perspective
detection and correction. These methods transform a distorted, noisy chart
into a clean chart whose type is ready for data extraction or other uses.
Reconstructing a visualization is straightforward once its data are
available, but reconstructing it in its original context is complex.
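The perspective-correction step can be sketched as estimating a homography from the four detected chart corners to an axis-aligned canvas, then warping the image with that matrix. The corner coordinates below are hypothetical, and a real pipeline would typically use a library such as OpenCV rather than this minimal numpy version:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform for exactly 4 point correspondences.

    Solves for the 8 unknowns of the 3x3 homography H (h33 fixed at 1)
    that maps each (x, y) in src to the matching (u, v) in dst.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map one point through H in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical detected corners of a photographed, skewed chart
src = [(12, 8), (305, 30), (290, 215), (5, 190)]
# Target: an axis-aligned 320x240 canvas, clockwise from top-left
dst = [(0, 0), (320, 0), (320, 240), (0, 240)]
H = homography_from_points(src, dst)
```

Warping every pixel through H (e.g. with OpenCV's warpPerspective and an equivalent matrix) yields the deskewed chart that downstream classification and data-extraction steps expect.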
Using a Visualization Grammar for this scenario is a key component, as these
grammars usually have extensions for interaction, chart layers, and multiple
views without requiring extra development effort.
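As a concrete illustration of grammar-based reconstruction, the snippet below assembles a minimal Vega-Lite specification (Vega-Lite is one such visualization grammar) from a hypothetical extracted data table; the field names and values are invented, not taken from this work:

```python
import json

# Hypothetical data table recovered from a static bar-chart image
extracted = [{"category": "A", "value": 28},
             {"category": "B", "value": 55},
             {"category": "C", "value": 43}]

# A minimal declarative spec; a Vega-Lite renderer turns this back into
# a live chart, and grammar extensions (layers, interaction, alternative
# front ends) reuse the same description without extra development effort
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": extracted},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "value", "type": "quantitative"},
    },
}
```

Because the spec is plain data, it can be serialized with `json.dumps(spec)` and handed to any conforming renderer, which is what makes the same reconstruction reusable across the audio and AR use cases described below.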
This work presents a model for automated support of the custom recognition
and reconstruction of charts in images. The model automatically performs the
process steps, such as reverse engineering, turning a static chart back into its
data table for later reconstruction, while allowing the user to make modifications
in case of uncertainties. This work also features a model-based architecture
along with prototypes for various use cases. Validation is performed step by
step, with methods inspired by the literature. This work features three use
cases providing proof of concept and validation of the model.
The first use case features chart recognition methods focused on
documents in the real world, the second use case focuses on the vocalization of
charts, using a visualization grammar to reconstruct a chart in audio format,
and the third use case presents an Augmented Reality application that
recognizes and reconstructs charts in the same context (a piece of paper)
overlaying the new chart and interaction widgets. The results showed that with
slight changes, chart recognition and reconstruction methods are now ready for
real-world charts, when taking time, accuracy and precision into consideration.