PERFORMANCE CHARACTERISATION OF IP NETWORKS
The initial rapid expansion of the Internet, in terms of complexity and number of hosts, was
followed by an increased interest in its overall parameters and the quality of service the network offers.
This growth has led, in the first instance, to extensive research in the area of network monitoring,
in order to better understand the characteristics of the current Internet. In parallel, studies were
conducted in the area of protocol performance modelling, aiming to estimate the performance of
various Internet applications.
A key goal of this research project was the analysis of current Internet traffic performance from a
dual perspective: monitoring and prediction. In order to achieve this, the study has three main
phases. It starts by describing the relationship between data transfer performance and network
conditions, a relationship that proves to be critical when studying application performance. The
next phase proposes a novel architecture for inferring network conditions and transfer parameters
using captured traffic analysis. The final phase describes a novel alternative to current TCP
(Transmission Control Protocol) models, which provides the relationship between network, data
transfer, and client characteristics on one side, and the resulting TCP performance on the other,
while accounting for the features of current Internet transfers.
The proposed inference analysis method for network and transfer parameters uses online non-intrusive
monitoring of captured traffic from a single point. This technique overcomes
limitations of prior approaches that are typically geared towards intrusive and/or dual-point
offline analysis. The method includes several novel aspects, such as TCP timestamp analysis,
which allows bottleneck bandwidth inference and more accurate receiver-based parameter
measurement, neither of which is possible using traditional acknowledgment-based inference. The
results of the traffic analysis determine the location of any degradations in network
conditions relative to the position of the monitoring point. The proposed monitoring framework
infers the performance parameters of the network paths transited by the analysed traffic,
subject to the position of the monitoring point, and it can be used as a starting point in pro-active
network management.
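The bottleneck bandwidth inference mentioned above can be illustrated with the classic packet-pair dispersion idea: two back-to-back packets leave the bottleneck link spaced by its serialisation delay. The sketch below shows only that generic calculation, not the thesis's actual TCP-timestamp algorithm, which is not detailed here:

```python
def bottleneck_bandwidth(pkt_size_bytes, ts_first, ts_second):
    """Packet-pair estimate: two back-to-back packets of equal size arrive
    spaced by the bottleneck's serialisation delay, so
    bandwidth ~= packet size / inter-arrival gap (bits per second)."""
    gap = ts_second - ts_first
    if gap <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return pkt_size_bytes * 8 / gap

# Two 1500-byte packets arriving 12 ms apart imply a ~1 Mbit/s bottleneck.
estimate = bottleneck_bandwidth(1500, 0.000, 0.012)
```

In practice such estimates are noisy, so a monitoring framework would aggregate many packet pairs per path before reporting a bandwidth figure.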
The TCP performance prediction model is based on the observation that current, potentially
unknown, TCP implementations, as well as connection characteristics, are too complex for a
mathematical model. The model proposed in this thesis uses an artificial intelligence-based
analysis method to establish the relationship between the parameters that influence the evolution
of the TCP transfers and the resulting performance of those transfers. Based on preliminary tests
of classification and function approximation algorithms, a neural network analysis approach was
preferred due to its prediction accuracy.
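The neural-network mapping from connection parameters to performance can be sketched as a one-hidden-layer forward pass. The feature choice (RTT, loss rate, transfer size) and all weights below are illustrative assumptions; the thesis's actual network topology and trained weights are not given here:

```python
import math

def mlp_predict(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network:
    hidden_j = tanh(sum_i w_hidden[j][i] * x_i + b_hidden[j])
    output   = sum_j w_out[j] * hidden_j + b_out"""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical untrained weights mapping (RTT s, loss rate, size MB)
# to a throughput estimate; real weights would come from training.
throughput = mlp_predict([0.05, 0.01, 1.0],
                         w_hidden=[[-1.0, -50.0, 0.1], [0.5, 10.0, -0.1]],
                         b_hidden=[0.0, 0.1],
                         w_out=[2.0, 1.5],
                         b_out=0.2)
```

A real predictor would train these weights by backpropagation on labelled transfer traces, which is where the accuracy advantage over closed-form TCP models comes from.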
Both the monitoring method and the prediction model are validated using a combination of
traffic traces, ranging from synthetic transfers and environments produced using a network
simulator/emulator, to real Internet traffic captured both through a script-based, controlled
client and through uncontrolled collection. The validation tests indicate that the proposed approaches
provide better accuracy in terms of inferring network conditions and predicting transfer
performance in comparison with previous methods. The non-intrusive analysis of the real
network traces provides comprehensive information on the current Internet characteristics,
indicating low-loss, low-delay, and high-bottleneck bandwidth conditions for the majority of the
studied paths.
Overall, this study provides a method for inferring the characteristics of Internet paths based on
traffic analysis, an efficient methodology for predicting TCP transfer performance, and a firm
basis for future research in the areas of traffic analysis and performance modelling.
Location based transmission using a neighbour aware-cross layer MAC for ad hoc networks
In a typical ad hoc network, mobile nodes share scarce bandwidth and have limited battery life, so optimising resource use and enhancing overall network performance is the ultimate aim in such networks. This paper proposes a new cross-layer MAC algorithm, Location Based Transmission using a Neighbour-Aware Cross-Layer MAC (LBT-NA Cross-Layer MAC), which aims, on the one hand, to reduce transmission power when communicating with the intended receiver by exchanging location information between nodes, and, on the other hand, to use new random backoff values based on the number of active neighbour nodes, unlike the standard IEEE 802.11 series, where a random backoff value is chosen from a fixed range of 0-31. The validation tests demonstrate that the proposed algorithm increases battery life, increases spatial reuse, and enhances network performance.
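The contrast between the two backoff schemes can be sketched as follows. The neighbour-based scaling rule (window = 2 × active neighbours, floor 1) is an assumption for illustration; the paper's exact formula is not given here:

```python
import random

# IEEE 802.11 DCF picks the initial backoff slot uniformly from a
# fixed 0-31 contention window.
def standard_backoff():
    return random.randint(0, 31)

# Neighbour-aware variant (sketch): scale the window with the number of
# active neighbours, so sparse neighbourhoods back off less on average.
# The scaling rule below is a hypothetical stand-in for the paper's rule.
def neighbour_aware_backoff(active_neighbours):
    window = max(1, 2 * active_neighbours)
    return random.randint(0, window)
```

With three active neighbours the sketch draws from 0-6 rather than 0-31, reducing idle slots in lightly loaded neighbourhoods while still randomising access.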
Data Driven Approaches to Cybersecurity Governance for Board Decision-Making -- A Systematic Review
Cybersecurity governance influences the quality of strategic decision-making
to ensure cyber risks are managed effectively. Boards of Directors (BoDs) are the
decision-makers held accountable for managing this risk; however, they lack
adequate and efficient information necessary for making such decisions. In
addition to the myriad of challenges they face, they are often insufficiently
versed in technology and cybersecurity terminology, or are not given the
right tools to support sound decisions for governing cybersecurity
effectively. A different approach is needed to ensure BoDs are clear on the
approach the business is taking to build a cyber resilient organization. This
systematic literature review investigates the existing risk measurement
instruments, cybersecurity metrics, and associated models for supporting BoDs.
We identified seven conceptual themes through literature analysis that form the
basis of this study's main contribution. The findings showed that, although
sophisticated cybersecurity tools exist and continue to develop, there is limited
information, in terms of metrics and models, to support Boards of Directors in
governing cybersecurity in a language they understand. The review also
provides some recommendations on theories and models that can be further
investigated to provide support to Boards of Directors.
A machine-learning approach to detect users' suspicious behaviour through the Facebook wall
Facebook represents the current de-facto choice for social media, changing
the nature of social relationships. The increasing amount of personal
information that runs through this platform publicly exposes user behaviour and
social trends, allowing aggregation of data through conventional intelligence
collection techniques such as OSINT (Open Source Intelligence). In this paper,
we propose a new method to detect and diagnose variations in overall Facebook
user psychology through Open Source Intelligence (OSINT) and machine learning
techniques. We aggregate the spectrum of user sentiments and views by
using N-Games charts, which exhibit noticeable variations over time, validated
through long-term collection. We postulate that the proposed approach can be
used by security organisations to understand and evaluate user psychology,
and then use that information to predict insider threats or prevent insider attacks.
Agent-based vs. Agent-less Sandbox for Dynamic Behavioral Analysis
Malicious software is detected and classified by either static analysis or dynamic analysis. In static analysis, malware samples are reverse engineered and analyzed so that signatures of the malware can be constructed. These techniques can be easily thwarted by polymorphic and metamorphic malware, obfuscation, and packing techniques, whereas in dynamic analysis malware samples are executed in a controlled environment using the sandboxing technique, in order to model the behavior of the malware. In this paper, we analyze Petya, SpyEye, VolatileCedar, PAFISH, etc. through agent-based and agent-less dynamic sandbox systems in order to investigate and benchmark their efficiency in advanced malware detection.
A blockchain secured pharmaceutical distribution system to fight counterfeiting
Counterfeiting of drugs has been a global concern for years. Considering the lack of transparency within the current pharmaceutical distribution system, research has shown that blockchain technology is a promising solution for an improved supply chain system. This study aims to explore the current solution proposals for distribution systems using blockchain technology. Based on a literature review of currently proposed solutions, it is identified that the secrecy of the data within the system and the nodes' reputation in decision making have not been considered. The proposed prototype uses a zero-knowledge proof protocol to ensure the integrity of the distributed data. It uses a Markov model to track each node's "reputation score" based on its interactions, in order to predict the reliability of the nodes in consensus decision making. Analysis of the prototype demonstrates a reliable method of decision making, which leads to overall improvements in the system's confidentiality, integrity, and availability. The results indicate that the decision protocol must be carefully considered in a reliable distribution system. It is recommended that pharmaceutical distribution systems adopt a relevant protocol to design their blockchain solutions. Continuous further research is required to increase performance and reliability within blockchain distribution systems.
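A Markov-style reputation score, where the next value depends only on the current score and the latest interaction outcome, can be sketched as below. The smoothing weight and eligibility threshold are assumptions for illustration; the paper's actual transition model is not given here:

```python
def update_reputation(score, outcome, alpha=0.2):
    """Markov-style update: the next reputation depends only on the current
    score and the latest interaction outcome (1.0 = honest, 0.0 = faulty).
    alpha is an assumed smoothing weight, not the paper's parameter."""
    return (1 - alpha) * score + alpha * outcome

def eligible_for_consensus(score, threshold=0.5):
    # Nodes below an assumed reputation threshold are excluded from voting.
    return score >= threshold
```

Under this rule a node that repeatedly behaves honestly converges towards a score of 1.0, while a single faulty interaction only nudges it downwards, which matches the intent of weighting consensus by observed behaviour.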
Medical Systems Data Security and Biometric Authentication in Public Cloud Servers
Advances in distributed computing and virtualization allowed cloud computing to establish itself as a popular data management and storage option for organizations. However, unclear safeguards and practices, as well as the evolution of legislation around privacy and data protection, contribute to data security being one of the main concerns in adopting this paradigm. Another important aspect hindering the absolute success of cloud computing is the ability to ensure the digital identity of users and protect the virtual environment through logical access controls while avoiding the compromise of its authentication mechanism or storage medium. Therefore, this paper proposes a system that addresses data security wherein unauthorized access to data stored in a public cloud is prevented by applying a fragmentation technique and a NoSQL database. Moreover, a system for managing and authenticating users with multimodal biometrics is also suggested, along with a mechanism to ensure the protection of biometric features. When compared with encryption, the proposed fragmentation method shows better latency performance, highlighting its strong potential use-case in environments with low-latency requirements such as healthcare IT infrastructure.
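The fragmentation idea, splitting a record across separate stores so that no single store holds the full plaintext, can be sketched as a naive even split. This is only an assumption-level illustration; the paper's actual fragmentation scheme and NoSQL layout are not given here:

```python
def fragment(record: bytes, n_fragments: int):
    """Naive sketch: split a record into n pieces destined for separate
    collections, so a breach of one store exposes only a partial record.
    The paper's real scheme is not specified here."""
    size = -(-len(record) // n_fragments)  # ceiling division
    return [record[i * size:(i + 1) * size] for i in range(n_fragments)]

def reassemble(fragments):
    # Only a client holding all fragments can rebuild the record.
    return b"".join(fragments)
```

Compared with encryption, such a split avoids per-request cipher operations, which is one plausible source of the latency advantage the abstract reports.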
White-box compression: Learning and exploiting compact table representations
We formulate a conceptual model for white-box compression, which represents the logical columns in tabular data as an openly defined function over some actually stored physical columns. Each block of data is thus accompanied by a header that describes this functional mapping. Because these compression functions are openly defined, database systems can exploit them during query optimization and execution, enabling e.g. better filter predicate pushdown. In addition, we show that white-box compression is able to identify a broad variety of new opportunities for compression, leading to much better compression factors. These opportunities are identified using an automatic learning process that learns the functions from the data. We provide a recursive pattern-driven algorithm for such learning. Finally, we demonstrate the effectiveness of white-box compression on a new benchmark we contribute hereby: the Public BI benchmark, which provides a rich set of real-world datasets. We believe our basic prototype for white-box compression opens the way for future research into transparent compressed data representations on the one hand, and database system architectures that can efficiently exploit these on the other, and should be seen as another step in the direction of data management systems that are self-learning and optimize themselves for the data they are deployed on.
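The core idea, a logical column expressed as an openly defined function over stored physical columns, can be sketched with a hypothetical example. The URL prefixes and two-column layout below are invented for illustration and are not from the paper:

```python
# Hypothetical white-box mapping: a logical URL column is stored as two
# physical columns (a small dictionary id and an integer), and the block
# header records the openly defined function that rebuilds logical values.
PREFIX_DICT = ["http://example.com/item/", "http://example.com/user/"]

def rebuild(prefix_id: int, number: int) -> str:
    """logical = PREFIX_DICT[prefix_id] + str(number)"""
    return PREFIX_DICT[prefix_id] + str(number)

physical = [(0, 17), (0, 18), (1, 3)]
logical = [rebuild(p, n) for p, n in physical]
```

Because the mapping is visible to the engine, a predicate like `url LIKE 'http://example.com/item/%'` can be pushed down to a cheap comparison on the integer dictionary column instead of rebuilding every string.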