Achieving network resiliency using sound theoretical and practical methods
Computer networks have revolutionized the life of every citizen in our modern interconnected society. The impact of networked systems spans every aspect of our lives, from financial transactions to healthcare and critical services, making these systems an attractive target for malicious entities that aim to make financial or political profit. Specifically, the past decade has witnessed an astounding increase in the number and complexity of sophisticated and targeted attacks, known as advanced persistent threats (APT). Those attacks led to a paradigm shift in the security and reliability communities' perspective on system design; researchers and government agencies accepted the inevitability of incidents and malicious attacks, and marshaled their efforts into the design of resilient systems.
Rather than focusing solely on preventing failures and attacks, resilient systems are able to maintain an acceptable level of operation in the presence of such incidents, and then recover gracefully into normal operation. Alongside prevention, resilient system design focuses on incident detection as well as timely response. Unfortunately, the resiliency efforts of research and industry experts have been hindered by an apparent schism between theory and practice, which allows attackers to maintain the upper hand. This lack of compatibility between the theory and practice of system design is attributed to the following challenges. First, theoreticians often make impractical and unjustifiable assumptions that allow for mathematical tractability while sacrificing accuracy. Second, the security and reliability communities often lack clear definitions of success criteria when comparing different system models and designs. Third, system designers often make implicit or unstated assumptions to favor practicality and ease of design. Finally, resilient systems are tested in private and isolated environments where validation and reproducibility of the results are not publicly accessible.
In this thesis, we set about showing that the proper synergy between theoretical analysis and practical design can enhance the resiliency of networked systems. We illustrate the benefits of this synergy by presenting resiliency approaches that target the inter- and intra-networking levels. At the inter-networking level, we present CPuzzle as a means to protect the Transmission Control Protocol (TCP) connection establishment channel from state-exhaustion distributed denial of service (DDoS) attacks. CPuzzle leverages client puzzles to limit the rate at which misbehaving users can establish TCP connections. We modeled the problem of determining the puzzle difficulty as a Stackelberg game and solved for the equilibrium strategy that balances the users' utilities against CPuzzle's resilience capabilities. Furthermore, to handle volumetric DDoS attacks, we extend CPuzzle and implement Midgard, a cooperative approach that involves end-users in the process of tolerating and neutralizing DDoS attacks. Midgard is a middlebox that resides at the edge of an Internet service provider's network and uses client puzzles at the IP level to allocate bandwidth to its users.
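As a rough illustration of the client-puzzle idea the abstract describes, the minimal sketch below uses a generic hash-based proof-of-work puzzle; the function names, puzzle construction, and the difficulty value are illustrative assumptions, not CPuzzle's actual design, in which the difficulty would instead be set by the Stackelberg equilibrium strategy.

```python
import hashlib
import itertools
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def issue_puzzle(difficulty: int):
    """Server side: hand out a fresh nonce together with the required difficulty."""
    return os.urandom(16), difficulty

def solve_puzzle(nonce: bytes, difficulty: int) -> int:
    """Client side: brute-force a counter whose hash clears the difficulty threshold."""
    for counter in itertools.count():
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return counter

def verify_solution(nonce: bytes, difficulty: int, counter: int) -> bool:
    """Server side: a single hash suffices to check the client's work."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

# Each extra difficulty bit roughly doubles the client's expected work, while
# verification stays constant: the asymmetry a puzzle-based defence relies on.
# The difficulty value below is arbitrary and chosen only so the demo runs fast.
nonce, d = issue_puzzle(difficulty=18)
answer = solve_puzzle(nonce, d)
assert verify_solution(nonce, d, answer)
```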
At the intra-networking level, we present sShield, a game-theoretic network response engine that manipulates a network's connectivity in response to an attacker who is moving laterally to compromise a high-value asset. To implement such decision-making algorithms, we leverage the recent advances in software-defined networking (SDN) to collect logs and security alerts about the network and implement response actions. However, the programmability offered by SDN comes with an increased chance for design-time bugs that can have drastic consequences on the reliability and security of a networked system. We therefore introduce BiFrost, an open-source tool that aims to verify safety and security properties about data-plane programs. BiFrost translates data-plane programs into functionally equivalent sequential circuits, and then uses well-established hardware reduction, abstraction, and verification techniques to establish correctness proofs about data-plane programs.
By focusing on those four key efforts, CPuzzle, Midgard, sShield, and BiFrost, we believe that this work illustrates the benefits that the synergy between theory and practice can bring into the world of resilient system design. This thesis is an attempt to pave the way for further cooperation and coordination between theoreticians and practitioners, in the hope of designing resilient networked systems.
DInSAR investigation in the Pärvie endglacial fault region, Lapland, Sweden
Northern Fennoscandia bears witness to the Pleistocene glaciation in the form of a series
of large faults that have been shown to have ruptured immediately after the retreat of
the ice sheet, about 9500 years ago. The largest one, known as the Pärvie fault, consists
of a 155 km long linear series of north–northeast-trending fault scarps that
stretch west of Kiruna, Lapland. End-glacial intra-plate faults of this extent are very
rare in the continental crust, and the Pärvie system represents one of the major fault
zone structures of this type in the world. Seismological evidence shows that there is
still noticeable seismic activity, roughly one event of magnitude 2 per year that can be
attributed to the fault. Nevertheless, assessing its state of activity is a difficult task due
to the extent and remoteness of the area. This study is aimed at the determination of
crustal motion around the Pärvie fault zone using the differential interferometric synthetic
aperture radar (DInSAR) technique, based on images acquired with the European
Space Agency (ESA) satellites European Remote Sensing (ERS) 1, ERS-2, and the
Environmental Satellite (ENVISAT). We present results achieved in terms of deformation
of the crystalline bedrock along different sectors of the fault where high levels of
coherence were obtained, even from image pairs several years apart. This finding does
not exclude deformation in other segments, as observing conditions are not always as
favourable in terms of data availability.
A simulational study of the indirect geometry neutron spectrometer, BIFROST at the European Spallation Source, from neutron source position to detector position
The European Spallation Source (ESS) is intended to become the most powerful
spallation neutron source in the world and the flagship of neutron science in
the upcoming decades. The exceptionally high neutron flux will provide unique
opportunities for scientific experiments, but also set high requirements for
the detectors. One of the most challenging aspects is the rate capability and
in particular the peak instantaneous rate capability, i.e. the number of
neutrons hitting the detector per channel or cm at the peak of the neutron
pulse. The primary purpose of this paper is to estimate the incident rates that
are anticipated for the BIFROST instrument planned for ESS, and also to
demonstrate the use of powerful simulation tools for the correct interpretation
of neutron transport in crystalline materials. A full simulation model of the
instrument from source to detector position, implemented with the use of
multiple simulation software packages, is presented. For a single detector tube,
instantaneous incident rates with a maximum of 1.7 GHz for a Bragg peak from a
single crystal and 0.3 MHz for a vanadium sample are found. This paper also
includes the first application of a new pyrolytic graphite model, and a
comparison of different simulation tools to highlight their strengths and
weaknesses.
DISPATCH: A Numerical Simulation Framework for the Exa-scale Era. I. Fundamentals
We introduce a high-performance simulation framework that permits the
semi-independent, task-based solution of sets of partial differential
equations, typically manifesting as updates to a collection of `patches' in
space-time. A hybrid MPI/OpenMP execution model is adopted, where work tasks
are controlled by a rank-local `dispatcher' which selects, from a set of tasks
generally much larger than the number of physical cores (or hardware threads),
tasks that are ready for updating. The definition of a task can vary, for
example, with some solving the equations of ideal magnetohydrodynamics (MHD),
others non-ideal MHD, radiative transfer, or particle motion, and yet others
applying particle-in-cell (PIC) methods. Tasks do not have to be grid-based,
while tasks that are, may use either Cartesian or orthogonal curvilinear
meshes. Patches may be stationary or moving. Mesh refinement can be static or
dynamic. A feature of decisive importance for the overall performance of the
framework is that time steps are determined and applied locally; this allows
potentially large reductions in the total number of updates required in cases
when the signal speed varies greatly across the computational domain, and
therefore a corresponding reduction in computing time. Another feature is a
load balancing algorithm that operates `locally' and aims to simultaneously
minimise load and communication imbalance. The framework generally relies on
already existing solvers, whose performance is augmented when run under the
framework, due to more efficient cache usage, vectorisation, local
time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI
scaling.
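To make the dispatcher idea described above concrete, here is a minimal single-process sketch of rank-local task selection combined with local time steps. The Task fields, names, and the readiness rule are illustrative assumptions for this toy, not the framework's actual hybrid MPI/OpenMP implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    """One patch-update task, ordered by its own local time (laggards run first)."""
    time: float                                    # current local time of the patch
    dt: float = field(compare=False)               # local time step, varies per patch
    name: str = field(compare=False, default="")
    neighbours: list = field(compare=False, default_factory=list)

    def ready(self) -> bool:
        # Local time stepping: a patch may advance only while it is not ahead of
        # the neighbours it needs boundary (guard-zone) data from.
        return all(self.time <= n.time for n in self.neighbours)

    def update(self) -> None:
        # Stand-in for one solver step (MHD, radiative transfer, PIC, ...).
        self.time += self.dt

def dispatch(tasks, t_end):
    """Rank-local dispatcher: repeatedly pick the most lagging ready task and advance it."""
    queue = list(tasks)
    heapq.heapify(queue)
    while queue:
        task = heapq.heappop(queue)
        if task.time >= t_end:           # this patch is finished; drop it
            continue
        if task.ready():
            task.update()
        heapq.heappush(queue, task)      # not-yet-ready tasks simply wait their turn

# Hypothetical two-patch example: the fine patch takes many small steps while the
# coarse one takes few, which is where local time stepping saves work.
a = Task(time=0.0, dt=0.1,  name="coarse-patch")
b = Task(time=0.0, dt=0.02, name="fine-patch")
a.neighbours, b.neighbours = [b], [a]
dispatch([a, b], t_end=1.0)
```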
The instrument suite of the European Spallation Source
An overview is provided of the 15 neutron beam instruments making up the initial instrument suite of the
European Spallation Source (ESS), which are being made available to the neutron user community. The ESS neutron
source consists of a high-power accelerator and target station, providing a unique long-pulse time structure
of slow neutrons. The design considerations behind the time structure, moderator geometry and instrument
layout are presented.
The 15-instrument suite consists of two small-angle instruments, two reflectometers, an imaging beamline,
two single-crystal diffractometers (one for macromolecular crystallography and one for magnetism), two powder
diffractometers, and an engineering diffractometer, as well as an array of five inelastic instruments comprising
two chopper spectrometers, an inverse-geometry single-crystal excitations spectrometer, an instrument for vibrational
spectroscopy and a high-resolution backscattering spectrometer. The conceptual design, performance
and scientific drivers of each of these instruments are described.
All of the instruments are designed to provide breakthrough new scientific capability, not currently
available at existing facilities, building on the inherent strengths of the ESS long-pulse neutron source of high
flux, flexible resolution and large bandwidth. Each of them is predicted to provide world-leading performance
at an accelerator power of 2 MW. This technical capability translates into a very broad range of scientific
capabilities. The composition of the instrument suite has been chosen to maximise the breadth and depth
of the scientific impact.
Don't Repeat Yourself: Seamless Execution and Analysis of Extensive Network Experiments
This paper presents MACI, the first bespoke framework for the management, the
scalable execution, and the interactive analysis of a large number of network
experiments. Driven by the desire to avoid repetitive implementation of just a
few scripts for the execution and analysis of experiments, MACI emerged as a
generic framework for network experiments that significantly increases
efficiency and ensures reproducibility. To this end, MACI incorporates and
integrates established simulators and analysis tools to foster rapid but
systematic network experiments.
We found MACI indispensable in all phases of the research and development
process of various communication systems, such as i) an extensive DASH video
streaming study, ii) the systematic development and improvement of Multipath
TCP schedulers, and iii) research on a distributed topology graph pattern
matching algorithm. With this work, we make MACI publicly available to the
research community to advance efficient and reproducible network experiments.
Applying Human Interaction Management Concepts to E-Mailing: A Visualized Conceptual Model
Electronic mail (e-mail) is one of the dominant IT applications used by knowledge workers and managers. Its key functionality remains the same despite being used for much more than simple messaging, such as document sharing and archiving. At the same time, e-mail is one of the main sources of information overload, which threatens the efficiency, effectiveness and health of knowledge workers. In this paper we address this problem by taking a human-driven and collaborative perspective going beyond traditional Business Process Management: we apply Human Interaction Management (HIM) concepts in constructing a conceptual model for the processing of e-mails. We subsequently use the model to build an exploratory prototype of an add-in for a popular e-mail client, hence making the model more tangible and understandable. The prototype shows the implementability of the model and can serve to gain more feedback on its validity, and inspire new ideas about its other uses, such as linking e-mail to collaborative workspaces.
Interferometric Methods
Future radio telescopes promise great advances in resolution and sensitivity. These
include the Square Kilometre Array (SKA), an instrument comprising two arrays in South Africa and Australia. Similarly, the next
generation Very Large Array (ngVLA) is being designed for construction in
North America. These arrays all promise exceptional advances in sensitivity,
angular resolution, and survey speed. The SKA and ngVLA are both specified to
have sensitivities at the level of Jy's. The SKA-Low instrument will consist
of a huge number of dipole antennas in Australia, which is pushing the bounds of
current FX correlator technology with O(N²) scaling, where N is the number of
number of antennas. The design proposals for these instruments include a dense
core of antennas, necessitating advances in imaging methods for these very
dense cores versus more traditionally sparse instruments.
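For background on where the quadratic scaling mentioned above comes from, a standard FX correlator must cross-multiply every antenna pair; the count below is the textbook baseline count, and the antenna number used in the example is purely illustrative, not a specification from this text.

```latex
% Number of independent antenna pairs (baselines) for N antennas:
N_{\mathrm{pairs}} \;=\; \binom{N}{2} \;=\; \frac{N(N-1)}{2} \;\sim\; \mathcal{O}(N^2).
% Illustrative count (arbitrary N, not a quoted specification):
N = 512:\qquad \binom{512}{2} = 130\,816 \ \text{pairs per channel per integration.}
```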
Another ambitious experiment is the Hydrogen Epoch of Reionisation Array (HERA) in
South Africa, which hopes to make the first direct detection of the Epoch of Reionisation
through the red-shifted H I signal, a signal far smaller than the thermal-like noise.
In this thesis, these problems are tackled by re-examining the underlying
principles of interferometry. The first working
example of a direct imaging correlator is presented, which allows images to be
formed directly from the voltages of each antenna in a dense array, without the
expensive cross-correlation operation that is typically required. A detailed discussion
is given of how standard steps in interferometric imaging differ in this new
scheme, including calibration. Additionally, the first wide-field direct imaging
correlator is presented, which allows the problems of non-coplanarity to be
dealt with for both sparse and dense arrays in a very efficient manner on modern GPU compute hardware. These are, to the best of the author's knowledge, the only working implementations of
a direct imaging correlator for generic arrays with no restrictions on the geometry of the
array or homogeneity of constituent receiver elements. These new approaches have been published
in the scientific literature as discussed in the Declaration.
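As a hedged, generic illustration of the direct-imaging principle (not the thesis implementation), the toy 1-D sketch below shows that averaging the squared Fourier transform of gridded antenna voltages reproduces the dirty image obtained from the conventional cross-correlation (FX) path; the array size, sample count, and source positions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_samples, n_pix = 8, 2000, 64

# Sky model: a few incoherent point sources on a grid of n_pix directions.
sky = np.zeros(n_pix)
sky[[5, 20, 40]] = [1.0, 0.5, 2.0]

# Antennas on distinct integer grid positions (units of the pixel spacing).
positions = rng.choice(n_pix, size=n_ant, replace=False)
phase = np.exp(2j * np.pi * np.outer(positions, np.arange(n_pix)) / n_pix)  # (n_ant, n_pix)

# Simulate voltages: each source emits complex Gaussian noise with variance sky[k].
amp = rng.normal(size=(n_samples, n_pix)) + 1j * rng.normal(size=(n_samples, n_pix))
amp *= np.sqrt(sky / 2)
voltages = amp @ phase.T                                    # (n_samples, n_ant)

# FX path: cross-correlate all antenna pairs (O(N^2)), then Fourier transform.
vis = (voltages.conj().T @ voltages) / n_samples            # (n_ant, n_ant)
fx_image = np.einsum("ak,ab,bk->k", phase, vis, phase.conj()).real

# Direct-imaging path: grid the voltages themselves, FFT, square, and average,
# skipping the explicit pairwise cross-correlation step.
grid = np.zeros((n_samples, n_pix), dtype=complex)
grid[:, positions] = voltages
direct_image = np.mean(np.abs(np.fft.fft(grid, axis=1)) ** 2, axis=0)

# Both paths give the same dirty image (peaks at pixels 5, 20 and 40).
assert np.allclose(direct_image, fx_image)
```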
Moving on from this, the closure phase bispectrum is presented as a way of uncovering
the cosmological Epoch of Reionisation signal from the H I line. This is done using the
HERA telescope, which consists of a dense core of parabolic antennas in a highly redundant layout.
A data reduction and processing pipeline for the HERA telescope is constructed and presented, for use with the
bispectrum. Initial results towards a cosmological limit are reported.
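For reference, the textbook definition of the visibility bispectrum on an antenna triad, and the standard argument for why its closure phase is immune to antenna-based gain phases, can be written as follows; this is general background rather than material taken from the thesis abstract.

```latex
% Bispectrum over an antenna triad (i, j, k) and its closure phase:
B_{ijk} \;=\; V_{ij}\, V_{jk}\, V_{ki},
\qquad
\phi_{ijk} \;=\; \arg B_{ijk} \;=\; \phi_{ij} + \phi_{jk} + \phi_{ki}.

% With antenna-based complex gains g_i, the measured visibilities are
% \tilde{V}_{ij} = g_i g_j^{*} V_{ij}, so the gain factors combine into
% g_i g_j^{*}\, g_j g_k^{*}\, g_k g_i^{*} = |g_i|^2 |g_j|^2 |g_k|^2,
% which is real and positive and therefore leaves the closure phase unchanged:
\arg \tilde{B}_{ijk} \;=\; \arg B_{ijk}.
```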
The HERA telescope relies on redundancy in its antenna elements for its calibration
and measurement strategy. The bispectrum, with its unique mathematical properties, in combination with forward modelling, is shown to be a
potent tool for probing departures from the assumed redundancy. It is shown, through
this method, that HERA
suffers from significant direction-dependent non-redundancies in the dataset used for our analysis,
which are extremely difficult to calibrate out.
Finally, the problem of wide-field imaging in next-generation arrays is tackled
through the development and implementation of a new wide-field imaging scheme.
This uses a new method of parallelising the problem of wide-field imaging, and is
intended for use with the very large datasets that will be produced by upcoming
instruments. Two schemes are introduced: w-towers, and Improved w-towers. The latter
generalises the former in combination with advances in optimal convolution theory
for the radio astronomy "gridding" problem. The theory behind this approach is
explored, and a high-performance implementation is presented for w-towers and
Improved w-stacking within Improved w-towers.
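As background for the non-coplanarity problem these schemes address, the standard wide-field measurement equation with the w-term is given below; this is the textbook form that w-stacking-style methods approximate, not an equation quoted from the abstract.

```latex
% Wide-field interferometric measurement equation with the w-term:
V(u, v, w) \;=\; \int\!\!\int
\frac{A(l, m)\, I(l, m)}{\sqrt{1 - l^2 - m^2}}\;
e^{-2\pi i \left[\, u l + v m + w \left(\sqrt{1 - l^2 - m^2} - 1\right) \right]}
\, dl\, dm .
% w-stacking-style schemes handle the w-dependent phase factor by processing
% visibilities in slabs of (nearly) constant w and correcting each slab in the
% image plane before summing.
```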