The Viability and Potential Consequences of IoT-Based Ransomware
With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested.
As part of this work, various techniques that ransomware developers could use to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, so that a ransom note could be reliably delivered. Next, the viability of "bricking" as a method of ransom was evaluated, whereby devices are remotely disabled unless the victim pays the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed.
For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While IoT ransomware is still very uncommon "in the wild", the techniques demonstrated in this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified that can be used to limit the effectiveness of the attack techniques discovered in this PhD research.
A systematic literature review on information systems for disaster management and proposals for its future research agenda
Emergency management information systems (EMIS) are fundamental for responding to disasters effectively, since they provide and process emergency-related information. A literature stream has emerged that reflects the increased relevance of the wide array of information systems used in response to disasters. In addition, the discussion has broadened from systems used primarily within responder organizations to systems, such as social media, that are open to the general public. However, a systematic review of the EMIS literature stream is still missing. This literature review presents a timeline of EMIS research from 1990 to 2021. It shows which types of information systems scholars focused on and which disaster response functions those systems supported. It furthermore identifies challenges in EMIS research and proposes future research directions.
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers, as well as exploring the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. They each offer a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system complete with multi-user environment, time-sharing, and file I/O.
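The differences between the three handler forms can be made concrete with a small model. The following Python sketch is illustrative only: the thesis works in a typed core calculus, and every name here (Return, Op, handle_deep, and so on) is invented for exposition. Computations are explicit trees; a deep handler folds over the whole tree, a shallow handler case-splits on the first operation only, and a parameterised handler threads a state value through the fold.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Return:
        """Finished computation carrying a value."""
        value: Any

    @dataclass
    class Op:
        """Operation node: name, argument, and the rest of the
        computation as a resumption awaiting the operation's result."""
        name: str
        arg: Any
        cont: Callable[[Any], Any]

    def handle_deep(comp, ret, ops):
        """Deep handler: a fold. The resumption handed to a clause is
        itself wrapped in the handler, so the whole tree gets handled."""
        if isinstance(comp, Return):
            return ret(comp.value)
        resume = lambda x: handle_deep(comp.cont(x), ret, ops)
        return ops[comp.name](comp.arg, resume)

    def handle_shallow(comp, ret, ops):
        """Shallow handler: a case split. The clause receives the raw,
        unhandled remainder and decides how to keep handling it."""
        if isinstance(comp, Return):
            return ret(comp.value)
        return ops[comp.name](comp.arg, comp.cont)

    def handle_param(comp, ret, ops, state):
        """Parameterised handler: a deep handler threading a state value
        through the fold."""
        if isinstance(comp, Return):
            return ret(comp.value, state)
        resume = lambda x, s: handle_param(comp.cont(x), ret, ops, s)
        return ops[comp.name](comp.arg, state, resume)

    # Example: a get/put counter handled parametrically.
    prog = Op("get", None, lambda n: Op("put", n + 1, lambda _: Return(n)))
    assert handle_param(
        prog,
        ret=lambda v, s: (v, s),
        ops={"get": lambda _a, s, k: k(s, s),
             "put": lambda a, _s, k: k(None, a)},
        state=41,
    ) == (41, 42)

Note that in handle_shallow the clause gets the unhandled remainder of the computation, so the caller decides how (or whether) handling continues; that is precisely the extra freedom shallow handlers offer over deep ones.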
The second strand studies continuation-passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuation, which admits simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers.
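The tail-recursion issue and its fix can be seen in miniature in the following Python sketch (names invented; the thesis's translations target an untyped lambda calculus, not Python):

    # Curried: a stack of continuations built as nested closures. Applying
    # k places every inner call in non-tail position, so the native call
    # depth grows with the number of frames.
    def run_curried(frames, value):
        k = lambda v: v
        for f in frames:
            k = (lambda f, k: lambda v: f(k(v)))(f, k)
        return k(value)

    # Uncurried: the same stack as explicit data consumed in a loop, so
    # every step is properly tail-recursive (constant stack here).
    def run_uncurried(frames, value):
        for f in frames:
            value = f(value)
        return value

    frames = [lambda v: v + 1, lambda v: v * 2]
    assert run_curried(frames, 3) == run_uncurried(frames, 3) == 8

Generalised continuations push the same explicit-data idea one step further: the machine's continuation becomes, roughly, a stack of (pure continuation frames, handler) pairs, which is what lets a single representation support deep, shallow, and parameterised handlers simultaneously.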
The third strand explores the expressiveness of effect handlers. First, I show that the deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, which provides a syntactic notion of expressiveness that affirms the existence of encodings between handlers but provides no information about the computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
Quantitative Character and the Composite Account of Phenomenal Content
I advance an account of quantitative character, a species of phenomenal character that presents as an intensity (cf. a quality) and includes experience dimensions such as loudness, pain intensity, and visual pop-out. I employ psychological and neuroscientific evidence to demonstrate that quantitative characters are best explained by attentional processing, and hence that they do not represent external qualities. Nonetheless, the proposed account of quantitative character is conceived as a complement to the reductive intentionalist strategy toward qualitative states; I argue that an account of perceptual experience that combines a tracking account of qualitative character with my functionalist proposal of quantitative character permits replies to some notoriously difficult problems for tracking representationalism without sacrificing its chief virtues.
Further Details on Predicting IRT Difficulty
This supplementary material serves as the technical appendix of the paper "When AI Difficulty is Easy: The Explanatory Power of Predicting IRT Difficulty" (Martínez-Plumed et al. 2022), published in The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22). The following sections give detailed information about: 1) data gathering for benchmarks; 2) IRT properties and the methodology followed; 3) learning model configuration and hyperparameter settings; 4) differences between difficulty prediction and class prediction; 5) the deployment and results of alternative approaches for difficulty estimation; 6) specifics and results using a generic difficulty metric in different applications; and 7) extended IRT applications.
Martínez Plumed, F.; Castellano Falcón, D.; Monserrat Aranda, C.; Hernández Orallo, J. (2022). Further Details on Predicting IRT Difficulty. http://hdl.handle.net/10251/18133
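For orientation (standard IRT background, not content of the supplement itself): in the two-parameter logistic (2PL) model, the probability that a respondent of ability $\theta$ answers item $i$ correctly is

    P_i(\theta) = \frac{1}{1 + \exp\left(-a_i(\theta - b_i)\right)}

where $b_i$ is the item's difficulty (the ability level at which the success probability is one half) and $a_i$ is its discrimination. It is difficulty in this sense that the paper's models are trained to predict.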
'Inventions and adventures': the work of the Stevenson engineering firm in Scotland, c. 1830 - c. 1890
This thesis examines the work of the nineteenth-century Stevenson civil engineering firm to argue that civil engineering should be approached geographically, both because it takes place in and is shaped by particular spaces, and because the results of such work reshape space and the relationships between places. Geographers have extensively analysed the ways in which humans have worked to alter environments, but relatively little attention has been paid to engineering as a socially and geographically transformative process, to its technical questions, and to the engineering professionals whose work brought about such change. This thesis analyses engineers as social and technical agents of environmental change, rather than viewing their role as the simple implementation of directives developed elsewhere and by others. It combines insights from the history and historical geography of science, environmental history, and the history of technology to make a case for the relevance of an historical geography of engineering.
The thesis explores these issues through the work of the Stevenson family. The Stevensons were an Edinburgh-based and internationally renowned firm of engineers who specialised in the construction of coastal infrastructure. The start and end dates of the thesis indicate, broadly, the careers of David and Thomas Stevenson, who jointly managed the family firm under the name D. & T. Stevenson between 1850 and 1886. The empirical basis for this thesis draws upon the detailed analysis of the firm's archival records: technical publications, project reports, diaries, correspondence, maps, plans, and diagrams.
The work of the Stevensons (their engineering epistemologies, practices, and professional identities) is examined through four diverse projects undertaken by the firm in the nineteenth century. These projects are: the training of new engineers; the surveying and design of improvement works for the rivers Tay and Clyde; the implementation of a coastal sound-based fog signal network; and the failed attempt to expand Wick harbour through the construction of a breakwater. These projects highlight the range of activities undertaken by nineteenth-century engineers and illustrate the 'making' of engineers and the work they did by highlighting training and learning, surveying, maintenance, testing, evaluation, repair, and the explanation of failure. With reference to these projects, and by drawing upon relevant contextual material, the thesis examines the conceptualisation of geographical space and natural forces in engineering, the relationship between science and engineering, the nature of expertise and notions of engineering judgement, and the role of family, legacy, and reputation in securing professional credibility and status.
This approach challenges older historiographical traditions which portrayed engineers as individual geniuses. The thesis instead understands engineering to be a combination of specialist knowledge and tacit skill, and situates engineers within their social and institutional networks of power and authority. In pointing out that some engineering works failed, the thesis challenges the tendency in histories of engineering to focus on success. It makes the case for an historical geography of engineering as a way of understanding engineering as an activity, a status, and a set of processes which changed human-environment relations.
Effects of spatial and temporal heterogeneity on the genetic diversity of the alpine butterfly Parnassius smintheus
Genetic diversity represents a population's evolutionary potential, as well as its demographic and evolutionary history. Advances in DNA sequencing have allowed the development of new and potentially powerful methods to quantify this diversity. However, best practices for sampling populations and analyzing data with these methods are still being developed. Furthermore, while the effects of landscape on spatial patterns of genetic variation have received considerable attention, we have a poorer understanding of how genetic diversity changes as a result of temporal variation in environmental and demographic variables. Here, I take advantage of advances in DNA sequencing to investigate genetic diversity at single nucleotide polymorphisms (SNPs) across space and time in a model system, the butterfly Parnassius smintheus.
I used double digest restriction site associated DNA sequencing to genotype SNPs in P. smintheus from populations in Alberta, Canada. To develop recommendations for analyzing data, I tested the effect of varying the maximum amount of missing data (and therefore the number of SNPs) on common population genetic analyses. Most analyses were robust to varying amounts of missing data, except for population assignment tests where larger datasets (with more missing data) revealed higher-resolution population structure. I also examined the effect of sample size on the same set of analyses, finding that some (e.g., estimation of genetic differentiation) required as few as five individuals per population, while others (e.g., population assignment) required at least 15.
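The missing-data trade-off described above can be illustrated with a minimal Python sketch; the data layout (an individuals-by-SNPs genotype matrix) and all values here are invented, and the thesis's actual pipeline is not reproduced:

    import numpy as np

    # Hypothetical layout: genotypes coded as 0/1/2 copies of the
    # alternate allele, with np.nan marking missing calls.
    def filter_by_missingness(geno, max_missing):
        """Keep SNPs whose fraction of missing genotypes <= max_missing."""
        missing_fraction = np.isnan(geno).mean(axis=0)
        return geno[:, missing_fraction <= max_missing]

    rng = np.random.default_rng(0)
    geno = rng.choice([0.0, 1.0, 2.0, np.nan], size=(30, 5000),
                      p=[0.4, 0.3, 0.2, 0.1])

    # Stricter thresholds keep fewer SNPs; relaxing the threshold grows
    # the dataset, which mattered most for population assignment here.
    for t in (0.0, 0.05, 0.25, 0.5):
        print(f"max_missing={t}: {filter_by_missingness(geno, t).shape[1]} SNPs")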
I used the SNP dataset to investigate factors shaping patterns of genetic diversity at different spatial scales and across time. At a larger spatial scale but a single time point, both weather (snow depth and mean minimum temperatures) and land cover (the distance between meadow patches) predicted genetic diversity and differentiation. At a smaller spatial but longer temporal scale, I used a smaller SNP dataset to show that genetic diversity is lost over repeated demographic bottlenecks driven by winter weather, and subsequently recovered through gene flow. My work contributes to understanding how genetic diversity is shaped in natural populations, and points to the importance of both land cover and weather (and specifically, variability in weather) to this process.
From Nobel Prizes to Safety Risk Management: How to Identify Latent Failure Conditions in Risk Management Practices
The aim of this chapter is to introduce readers to the cognitive biases found in the railway transport planning and management domain. As the economics, business management, and risk management literatures note, cognitive biases in the planning of railway projects lead to cost overruns and to failures to achieve performance and safety objectives. Unbiased decision making is a core goal of systems engineering, encouraging careful consideration of stakeholder needs, design alternatives, and programmatic constraints and risks. However, systems engineering practice in railway transport planning and management pays little attention to human and organisational factors at the initial stages of planning, even though, for example, the driveability of European Rail Traffic Management System (ERTMS) trains emerges as a concern in real-time operations. There is therefore a case for studying cognitive biases in this domain. The System for Investigation of Railways (SIRI) Cybernetic Risk Model (2006, 2017) is a systems engineering response to an internal research brief by RSSB, a GB railway safety body. The SIRI Cybernetic Risk Model (2017), incorporating the 'heuristics and biases' approach, was published by the UK Transport Select Committee as written evidence in 2016 for its inquiry into railway safety. The validity of the SIRI Risk Model (a Swiss Cheese Model) is further illustrated through a 2019 historical survey of railway accidents and two recent RAIB investigations, of a track worker fatality and of a signalling-related near miss. The data and information in RAIB Reports 17/2019 and 11/2020 are supplemented by further research and by the author's own past accident analyses. The results of the study show that the Guide to Railway Investment Process (GRIP) (2019) has no provision for incorporating measures that address deficiencies raised by accident reports or safety analysis reports, because the RSSB (2014) Taking Safe Decisions framework does not account for all heuristics, and the biases they introduce, in the information used for taking decisions. As a consequence, the duty holder's investment process fails to meet the mandatory regulatory requirements of the Common Safety Method for Risk Assessment (CSM-RA) process. The results of the chapter's case studies remain the same despite the changes proposed in the Shapps-Williams reform plan (2021), as safety-related matters are not yet addressed by that plan. The author hopes that when the lessons learnt from these case studies are embedded in railway organisations, we may see improvements in railway planning and management practices: risk factors would be considered at the conceptual stage of projects, meeting the requirements of ISO 27500 (2016) for the human-centred organisation. National Investigation Bodies (NIBs) may also benefit.
Scalable software and models for large-scale extracellular recordings
The brain represents information about the world through the electrical activity of populations of neurons. By placing an electrode near a neuron that is firing (spiking), it is possible to detect the resulting extracellular action potential (EAP) that is transmitted down an axon to other neurons. In this way, it is possible to monitor the communication of a group of neurons to uncover how they encode and transmit information. As the number of recorded neurons continues to increase, however, so do the data processing and analysis challenges. It is crucial that scalable software and analysis tools are developed and made available to the neuroscience community to keep up with the large amounts of data that are already being gathered.
This thesis is composed of three pieces of work which I develop in order to better process and analyze large-scale extracellular recordings. My work spans all stages of extracellular analysis, from the processing of raw electrical recordings to the development of statistical models to reveal underlying structure in neural population activity.
In the first work, I focus on developing software to improve the comparison and adoption of different computational approaches for spike sorting. When analyzing neural recordings, most researchers are interested in the spiking activity of individual neurons, which must be extracted from the raw electrical traces through a process called spike sorting. Much development has been directed towards improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, I develop SpikeInterface, an open-source Python framework designed to unify preexisting spike sorting technologies into a single toolkit and to facilitate straightforward benchmarking of different approaches. With this framework, I demonstrate that modern, automated spike sorters have low agreement when analyzing the same dataset, i.e. they find different numbers of neurons with different activity profiles; this result holds true for a variety of simulated and real datasets. Also, I demonstrate that utilizing a consensus-based approach to spike sorting, where the outputs of multiple spike sorters are combined, can dramatically reduce the number of falsely detected neurons.
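In practice, the consensus workflow looks roughly like the following sketch, assuming a recent SpikeInterface release (module layout and function names have shifted across versions) and locally installed sorters; the sorter names below are examples only:

    import spikeinterface.full as si

    # Simulated recording with known ground truth, for illustration.
    recording, sorting_true = si.toy_example(num_units=10, duration=60, seed=0)

    # Run two independent spike sorters on the same recording.
    names = ["tridesclous", "herdingspikes"]
    sortings = [si.run_sorter(n, recording) for n in names]

    # Match units across sorters by the agreement of their spike trains.
    mcmp = si.compare_multiple_sorters(sorting_list=sortings, name_list=names)

    # Keep only units found by at least two sorters; this consensus step
    # is what reduces the number of falsely detected neurons.
    consensus = mcmp.get_agreement_sorting(minimum_agreement_count=2)
    print(consensus.get_unit_ids())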
In the second work, I focus on developing an unsupervised machine learning approach for determining the source location of individually detected spikes that are recorded by high-density microelectrode arrays. By localizing the source of individual spikes, my method is able to determine the approximate position of the recorded neurons in relation to the microelectrode array. To allow my model to work with large-scale datasets, I utilize deep neural networks, a family of machine learning algorithms that can be trained to approximate complicated functions in a scalable fashion. I evaluate my method on both simulated and real extracellular datasets, demonstrating that it is more accurate than other commonly used methods. Also, I show that location estimates for individual spikes can be utilized to improve the efficiency and accuracy of spike sorting. After training, my method allows for localization of one million spikes in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of massive extracellular datasets.
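For reference, the simplest of the commonly used localization methods mentioned above is a center-of-mass estimate, sketched here in Python with invented array shapes and data: each electrode's position is weighted by the peak spike amplitude it records.

    import numpy as np

    def center_of_mass(waveforms, channel_positions):
        """waveforms: (n_spikes, n_channels, n_samples) spike cutouts;
        channel_positions: (n_channels, 2) electrode coordinates."""
        amps = np.abs(waveforms).max(axis=2)        # peak amplitude per channel
        weights = amps / amps.sum(axis=1, keepdims=True)
        return weights @ channel_positions          # (n_spikes, 2) locations

    rng = np.random.default_rng(1)
    positions = np.stack(np.meshgrid(np.arange(4), np.arange(4)),
                         axis=-1).reshape(-1, 2) * 20.0   # 4x4 grid, 20 um pitch
    waveforms = rng.normal(size=(5, 16, 40))              # 5 spikes, 16 channels
    print(center_of_mass(waveforms, positions))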
In my third and final presented work, I focus on developing an unsupervised machine learning model that can uncover patterns of activity from neural populations associated with a behaviour being performed. Specifically, I introduce Targeted Neural Dynamical Modelling (TNDM), a statistical model that jointly models the neural activity and any external behavioural variables. TNDM decomposes neural dynamics (i.e. temporal activity patterns) into behaviourally relevant and behaviourally irrelevant dynamics; the behaviourally relevant dynamics constitute all activity patterns required to generate the behaviour of interest, while behaviourally irrelevant dynamics may be completely unrelated (e.g. other behavioural or brain states), or even related to behaviour execution (e.g. dynamics that are associated with behaviour generally but are not task specific). Again, I implement TNDM using a deep neural network to improve its scalability and expressivity. On synthetic data and on real recordings from the premotor (PMd) and primary motor (M1) cortices of a monkey performing a center-out reaching task, I show that TNDM is able to extract low-dimensional neural dynamics that are highly predictive of behaviour without sacrificing its fit to the neural data.
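The decomposition TNDM formalises can be sketched schematically in PyTorch; this is an illustrative toy with invented names and dimensions, not the published TNDM architecture. The latent state is split so that only the "relevant" block may drive the behaviour readout, while all latents drive the neural reconstruction.

    import torch
    import torch.nn as nn

    class RelevantIrrelevantModel(nn.Module):
        def __init__(self, n_neurons, n_behaviour, d_rel=4, d_irr=4):
            super().__init__()
            self.encoder = nn.GRU(n_neurons, d_rel + d_irr, batch_first=True)
            self.d_rel = d_rel
            self.decode_neural = nn.Linear(d_rel + d_irr, n_neurons)
            self.decode_behaviour = nn.Linear(d_rel, n_behaviour)

        def forward(self, spikes):                    # (batch, time, n_neurons)
            z, _ = self.encoder(spikes)               # latent dynamics
            z_rel = z[..., :self.d_rel]               # behaviourally relevant block
            rates = self.decode_neural(z)             # all latents explain neurons
            behaviour = self.decode_behaviour(z_rel)  # only z_rel explains behaviour
            return rates, behaviour

    model = RelevantIrrelevantModel(n_neurons=50, n_behaviour=2)
    rates, behaviour = model(torch.randn(8, 100, 50))
    # Training would combine a neural reconstruction loss on `rates` with a
    # behaviour prediction loss on `behaviour`, pushing behaviourally
    # relevant structure into the first latent block.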
- âŠ