Adaptive Traffic Fingerprinting for Darknet Threat Intelligence
Darknet technologies such as Tor have been used by various threat actors to organise illegal activities and exfiltrate data. As such, there is a case for organisations to block such traffic, or to try to identify when and for what purposes it is used. However, anonymity in cyberspace has always been a domain of conflicting interests: while it gives nefarious actors cover to mask their illegal activities, it is also a cornerstone of freedom of speech and privacy. We present a proof of concept for a novel algorithm that could form the fundamental pillar of a darknet-capable Cyber Threat Intelligence platform. The solution can reduce the anonymity of Tor users, and considers the existing visibility of network traffic before optionally initiating targeted or widespread BGP interception. In combination with server HTTP response manipulation, the algorithm reduces the candidate data set by eliminating client-side traffic that is unlikely to be responsible for the server-side connections of interest. Our test results show that MITM-manipulated server responses lead to the expected changes received by the Tor client. Using simulation data generated by Shadow, we show that the detection scheme is effective, with a false positive rate of 0.001, while sensitivity for detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating organisations willing to share their threat intelligence or cooperate during investigations.
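The candidate-elimination step described above can be illustrated with a minimal sketch. All names, sizes and thresholds here are assumptions for illustration, not the paper's actual algorithm: a known byte delta is injected into server responses via MITM, and any client whose observed traffic does not reflect that delta around each probe is eliminated from the candidate set.

```python
# Hypothetical sketch: eliminate client-side flows that cannot correspond to
# server-side connections of interest, based on an injected response-size delta.

def reduce_candidates(client_flows, probe_times, delta, tolerance=50):
    """Keep only clients whose observed download size changed by roughly
    `delta` bytes around each server-side probe (MITM-manipulated response).

    client_flows: {client_id: {probe_time: observed_size_change_in_bytes}}
    """
    survivors = set(client_flows)
    for t in probe_times:
        survivors = {
            c for c in survivors
            if abs(client_flows[c].get(t, 0) - delta) <= tolerance
        }
    return survivors

flows = {
    "client_a": {1: 1490, 2: 1510},   # tracks the injected +1500-byte padding
    "client_b": {1: 300, 2: 5000},    # unrelated traffic
}
print(reduce_candidates(flows, [1, 2], delta=1500))  # {'client_a'}
```

Each additional probe shrinks the surviving set, which is why repeated manipulations narrow the candidates responsible for the server-side connections.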
A survey of QoS-aware web service composition techniques
Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet the increasingly complex needs of users. The composition process has long handled services with disparate functionalities well; however, over the years the number of web services that exhibit similar functionality but varying Quality of Service (QoS) has increased significantly. As such, the problem becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, in some cases, minimized. This selection problem is NP-hard. In this paper, we discuss the concepts of web service composition and present a holistic review of the composition techniques proposed in the literature. Our review spans several publications in the field and can serve as a road map for future research
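The QoS selection problem the survey covers can be made concrete with a toy example (service names and QoS values are fabricated): for a sequential workflow, response times add up and availabilities multiply, and exhaustive search over one candidate per task shows why the problem explodes combinatorially.

```python
from itertools import product

# Illustrative sketch of QoS-aware selection: pick one candidate service per
# task so the aggregated QoS of the composite service is optimal.

def composite_qos(plan):
    response = sum(s["response"] for s in plan)      # sequential: times add
    availability = 1.0
    for s in plan:
        availability *= s["availability"]            # availabilities multiply
    return response, availability

def best_composition(tasks, max_response=100):
    best, best_avail = None, -1.0
    for plan in product(*tasks):   # exhaustive search: exponential in tasks
        response, avail = composite_qos(plan)
        if response <= max_response and avail > best_avail:
            best, best_avail = plan, avail
    return best

tasks = [
    [{"name": "s1a", "response": 40, "availability": 0.99},
     {"name": "s1b", "response": 20, "availability": 0.95}],
    [{"name": "s2a", "response": 70, "availability": 0.999},
     {"name": "s2b", "response": 50, "availability": 0.97}],
]
best = best_composition(tasks)
print([s["name"] for s in best])  # ['s1a', 's2b']
```

With n tasks and m candidates each there are m^n plans, which is why the surveyed techniques resort to heuristics and metaheuristics rather than exhaustive search.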
Proactive threat detection for connected cars using recursive Bayesian estimation
Upcoming disruptive technologies around autonomous driving of connected cars have not yet been matched with appropriate security-by-design principles, and lack approaches that incorporate proactive preventative measures in the wake of increased cyber-threats against such systems. In this paper, we introduce proactive anomaly detection to a use case of hijacked connected cars to improve cyber-resilience. First, we demonstrate the opportunity of behavioral profiling for connected cars from recent literature covering the related underpinning technologies. Then, we design and utilize a new data set for connected cars, influenced by the automatic dependent surveillance-broadcast (ADS-B) technology used in the aerospace industry, to facilitate data collection and sharing. Finally, we simulate the analysis of travel routes in real time to predict anomalies using predictive modeling. Simulations show the applicability of a recursive Bayesian estimation technique, namely the Kalman filter. By analyzing future state predictions based on previous behavior, cyber-threats can be addressed with a vastly increased time window for reaction when anomalies are encountered. We argue that detecting real-time deviations indicative of malicious intent with predictive profiling and behavioral algorithms can be more effective than the retrospective comparison of known-good/known-bad behavior. The quicker action can be taken while connected cars are under cyberattack, the more effective the engagement or interception of command and control will be
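The recursive Bayesian idea can be sketched as a one-dimensional Kalman filter over a vehicle's position along its route. The noise parameters, expected speed, measurements and anomaly threshold below are assumptions for illustration, not values from the paper: a large residual between the predicted and observed position flags a deviation (e.g. a hijacked route).

```python
# Illustrative sketch of recursive Bayesian estimation: a scalar Kalman filter
# tracking position along a known route at an assumed constant speed.

def kalman_step(x, p, z, q=1.0, r=4.0, velocity=10.0, dt=1.0):
    # Predict: the vehicle advances at its expected speed.
    x_pred = x + velocity * dt
    p_pred = p + q                       # process noise inflates uncertainty
    # Update: blend the prediction with the measurement z.
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    residual = abs(z - x_pred)           # innovation: prediction vs observation
    return x_new, p_new, residual

x, p = 0.0, 1.0
anomalies = []
for t, z in enumerate([10.2, 19.8, 30.5, 75.0]):  # last reading: route deviation
    x, p, residual = kalman_step(x, p, z)
    if residual > 15.0:                  # assumed anomaly threshold
        anomalies.append(t)
print(anomalies)  # [3]
```

Because the filter predicts the next state before the measurement arrives, the deviation is flagged as soon as the anomalous reading comes in, which is the source of the enlarged reaction window discussed above.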
Fruit fly optimization algorithm for network-aware web service composition in the cloud
Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service-oriented applications. Web services are central to the concept of SOC. Research into how web services can be composed to yield a QoS-optimal composite service has gathered significant attention. However, the number and spread of web services across cloud data centers has increased, thereby increasing the impact of the network on the composite service performance experienced by the user. Recent QoS-based web service composition techniques focus on optimizing web service QoS attributes such as cost, response time and execution time. In doing so, existing approaches do not separate the QoS of the network from web service QoS during service composition. In this paper, we propose a network-aware service composition approach which separates the QoS of the network from the QoS of web services in the cloud. Consequently, our approach searches for composite services that are not only QoS-optimal but also have optimal network QoS. The approach consists of a network model, which estimates the QoS of the network as network latency between services on the cloud, and a service composition technique based on the fruit fly optimization algorithm, which leverages the network model to search for low-latency compositions without compromising service QoS levels. The approach is discussed and evaluation results are presented. The results indicate that the proposed approach is competitive in finding QoS-optimal and low-latency solutions when compared to recent techniques
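The metaheuristic named above can be sketched in its generic continuous form. This is not the paper's composition algorithm: the objective below is a fabricated smooth stand-in for estimated composition latency, and the swarm parameters are assumptions. The sketch only shows the two FOA phases, smell-based random search around the swarm and vision-based movement to the best position found.

```python
import random

# Minimal fruit fly optimization (FOA) sketch minimizing a stand-in objective.

def latency(x):                       # assumed surrogate for composition latency
    return (x - 3.0) ** 2 + 2.0

def foa_minimise(n_flies=20, n_iters=100, seed=42):
    rng = random.Random(seed)
    swarm_x = rng.uniform(-10, 10)    # initial swarm location
    best_x, best_val = swarm_x, latency(swarm_x)
    for _ in range(n_iters):
        for _ in range(n_flies):
            # Osphresis (smell) phase: each fly searches randomly near the swarm.
            x = swarm_x + rng.uniform(-1, 1)
            val = latency(x)
            if val < best_val:
                best_x, best_val = x, val
        # Vision phase: the swarm relocates to the best position found so far.
        swarm_x = best_x
    return best_x, best_val

x, val = foa_minimise()
print(round(x, 2), round(val, 2))     # converges near x = 3, latency = 2
```

In the composition setting, each candidate position would encode a choice of concrete services per task, and the objective would combine the network model's latency estimate with the aggregated service QoS.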
AdPExT: designing a tool to assess information gleaned from browsers by online advertising platforms
The world of online advertising depends directly on collecting data about the online browsing habits of individuals to enable effective advertisement targeting and retargeting. However, these data collection practices can leak private data belonging to website visitors (end-users) without their knowledge. The growing privacy concern of end-users is amplified by a lack of trust and understanding of what data advertisement trackers are collecting and how they are using it. This paper presents an investigation to restore that trust or validate the concerns. We aim to facilitate the assessment of the actual end-user-related data being collected by advertising platforms (APs) by means of a critical discussion, as well as the development of a new tool, AdPExT (Advertising Parameter Extraction Tool), which can be used to extract third-party parameters at an individual key-value level. Furthermore, we conduct a survey covering mostly United Kingdom-based frequent internet users to gather the perceived sensitivity sentiment for various representative tracking parameters. End-users have a definite concern with regard to advertisement tracking of sensitive data by globally dominant platforms such as Facebook and Google
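The kind of key-value extraction described above can be sketched with the standard library. The tracker URL and parameter names below are fabricated examples, not AdPExT's actual implementation: a third-party request URL is split and its query string decoded into individual key-value pairs for assessment.

```python
from urllib.parse import urlsplit, parse_qsl

# Illustrative sketch: extract tracking parameters from a third-party
# advertising request as individual key-value pairs.

def extract_tracking_params(url):
    parts = urlsplit(url)
    return {
        "tracker_host": parts.hostname,
        "params": dict(parse_qsl(parts.query)),  # percent-decodes values
    }

req = "https://tracker.example/collect?uid=abc123&page=%2Fcheckout&ref=email"
record = extract_tracking_params(req)
print(record["tracker_host"], record["params"]["page"])  # tracker.example /checkout
```

Capturing pairs individually, rather than whole URLs, is what allows each parameter (a user identifier, a visited page, a referrer source) to be rated for perceived sensitivity on its own.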
Federated blockchain-based tracking and liability attribution framework for employees and cyber-physical objects in a smart workplace
The systematic integration of the Internet of Things (IoT) and Cyber-Physical Systems (CPS) into the supply chain to increase operational efficiency and quality has also introduced new complexities to the threat landscape. The myriad of sensors could increase data collection capabilities for businesses to facilitate process automation aided by Artificial Intelligence (AI), but without adopting an appropriate Security-by-Design framework, threat detection and response are destined to fail. The emerging concept of the Smart Workplace incorporates many CPS (e.g. robots and drones) that execute tasks alongside employees, both of which can be exploited as insider threats. We introduce and discuss forensic-readiness, liability attribution and the ability to track moving smart CPS objects to support modern Digital Forensics and Incident Response (DFIR) within a defence-in-depth strategy. We present a framework to facilitate the tracking of object behaviour within Smart Controlled Business Environments (SCBE) to support resilience by enabling proactive insider threat detection. Several components of the framework were piloted in a company to discuss a real-life case study and demonstrate anomaly detection and the emergence of behavioural patterns according to objects' movement in relation to their job role, workspace position and nearest entry or exit. The empirical data was collected from a Bluetooth-based proximity monitoring solution. Furthermore, a key strength of the framework is a federated Blockchain (BC) model to achieve forensic-readiness by establishing a digital Chain-of-Custody (CoC) and a collaborative environment for CPS to qualify as Digital Witnesses (DW) to support post-incident investigations
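The Chain-of-Custody property can be illustrated as a simple hash chain over tracking records; the record fields below are fabricated examples, and a federated blockchain would additionally replicate and validate such chains across collaborating parties, which this single-node sketch does not show.

```python
import hashlib
import json

# Illustrative sketch of a digital Chain-of-Custody: each tracking record
# commits to the hash of the previous one, so any tampering is detectable.

def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev_hash, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"prev": prev, "payload": rec["payload"]}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"object": "drone-07", "zone": "warehouse-entry", "t": 1})
add_record(chain, {"object": "drone-07", "zone": "loading-bay", "t": 2})
print(verify(chain))                    # True
chain[0]["payload"]["zone"] = "exit"    # tamper with an earlier record
print(verify(chain))                    # False
```

Because each hash covers the previous hash, rewriting any historical movement record invalidates every later link, which is what makes such records usable as evidence from a Digital Witness.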
Classification of colloquial Arabic tweets in real-time to detect high-risk floods
Twitter has eased real-time information flow for decision makers, and it is also a key enabler of open-source intelligence (OSINT). Tweet mining has recently been used in the context of incident response to estimate the location and damage caused by hurricanes and earthquakes. We aim to research the detection of a specific type of high-risk natural disaster that frequently occurs and causes casualties in the Arabian Peninsula, namely `floods'. We investigate how to achieve accurate classification of the short, informal (colloquial) Arabic text commonly used on Twitter, which is highly inconsistent and has received very little attention in this field. First, we provide a thorough technical demonstration consisting of the following stages: data collection (Twitter REST API), labelling, text pre-processing, data division and representation, and model training. The experiment was implemented in `R'. We then evaluate classifier performance via four experiments conducted to measure the impact of different stemming techniques on the following classifiers: SVM, J48, C5.0, NNET, NB and k-NN. The data set used consisted of 1434 tweets in total. Our findings show that the Support Vector Machine (SVM) was prominent in terms of accuracy (F1 = 0.933). Furthermore, applying McNemar's test shows that using SVM without stemming on colloquial Arabic is significantly better than using stemming techniques
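The classification step can be sketched with one of the evaluated model families. The paper's best performer was SVM and its experiments ran in R; shown here instead is a multinomial Naive Bayes (NB, also among the evaluated classifiers) because it fits in a few lines, and the training tweets are toy English placeholders, not the real colloquial Arabic data set.

```python
import math
from collections import Counter

# Minimal multinomial Naive Bayes sketch for flood/other tweet classification.
# Class priors are uniform here (balanced toy data), so they are omitted.

def train(docs):
    classes = {}
    for tokens, label in docs:
        classes.setdefault(label, Counter()).update(tokens)
    vocab = {w for counts in classes.values() for w in counts}
    return classes, vocab

def predict(classes, vocab, tokens):
    best, best_score = None, -math.inf
    for label, counts in classes.items():
        total = sum(counts.values())
        score = sum(
            math.log((counts[w] + 1) / (total + len(vocab)))  # Laplace smoothing
            for w in tokens
        )
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    (["heavy", "rain", "street", "flooded"], "flood"),
    (["flash", "flood", "warning", "valley"], "flood"),
    (["sunny", "football", "match", "tonight"], "other"),
    (["new", "cafe", "opened", "downtown"], "other"),
]
classes, vocab = train(docs)
print(predict(classes, vocab, ["street", "flooded", "rain"]))  # flood
```

The stemming comparison in the paper would sit just before this step, transforming each token list; the finding above suggests that for colloquial Arabic the raw, unstemmed tokens serve the classifier better.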
Network traffic analysis for threats detection in the Internet of Things
As the prevalence of the Internet of Things (IoT) continues to increase, cyber criminals are quick to exploit the security gaps that many devices are inherently designed with. Users cannot be expected to tackle this threat alone, and many current solutions for network monitoring are simply not accessible or can be difficult to implement for the average user, a gap that needs to be addressed. This article presents an effective signature-based solution to monitor, analyze, and detect potentially malicious traffic in IoT ecosystems within the typical home network environment, utilizing passive network sniffing techniques and a cloud application to monitor anomalous activity. The proposed solution focuses on two attack and propagation vectors leveraged by the infamous Mirai botnet, namely DNS and Telnet. Experimental evaluation demonstrates that the proposed solution can detect 98.35 percent of malicious DNS traffic and 99.33 percent of malicious Telnet traffic, for an overall detection accuracy of 98.84 percent
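The signature-based idea can be sketched as rule matching over parsed flow records. The blocklist domains, port set and flow format below are fabricated examples, not the article's actual signatures: DNS queries are checked against known command-and-control names, and TCP connections against the Telnet ports Mirai brute-forces.

```python
# Illustrative sketch of signature-based detection for the two Mirai vectors.

DNS_BLOCKLIST = {"cnc.badbotnet.example", "report.badbotnet.example"}
TELNET_PORTS = {23, 2323}   # default Telnet ports targeted by Mirai propagation

def classify_flow(flow):
    if flow["proto"] == "dns" and flow["qname"].lower() in DNS_BLOCKLIST:
        return "malicious-dns"
    if flow["proto"] == "tcp" and flow["dst_port"] in TELNET_PORTS:
        return "suspicious-telnet"
    return "benign"

flows = [
    {"proto": "dns", "qname": "CNC.badbotnet.example"},  # case-insensitive match
    {"proto": "tcp", "dst_port": 2323},
    {"proto": "dns", "qname": "example.org"},
]
print([classify_flow(f) for f in flows])
```

In the proposed solution, flows like these would be captured by passive sniffing on the home network and the verdicts forwarded to the cloud application for alerting.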
