
    Adaptive Traffic Fingerprinting for Darknet Threat Intelligence

    Darknet technology such as Tor has been used by various threat actors to organise illegal activities and data exfiltration. As such, there is a case for organisations to block such traffic, or to try to identify when it is used and for what purposes. However, anonymity in cyberspace has always been a domain of conflicting interests: while it gives nefarious actors enough power to masquerade their illegal activities, it is also the cornerstone of freedom of speech and privacy. We present a proof of concept for a novel algorithm that could form the fundamental pillar of a darknet-capable Cyber Threat Intelligence platform. The solution can reduce the anonymity of Tor users, and considers the existing visibility of network traffic before optionally initiating targeted or widespread BGP interception. In combination with server HTTP response manipulation, the algorithm attempts to reduce the candidate data set by eliminating client-side traffic that is most unlikely to be responsible for the server-side connections of interest. Our test results show that MITM-manipulated server responses lead to the expected changes received by the Tor client. Using simulation data generated by Shadow, we show that the detection scheme is effective, with a false positive rate of 0.001, while sensitivity in detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating organisations willing to share their threat intelligence or cooperate during investigations.
    Comment: 26 pages
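    The abstract does not spell out the elimination step, so the following is a minimal, hypothetical sketch of the general idea: if a MITM-manipulated server response has a known, distinctive size, only clients whose observed traffic bursts are consistent with that size remain candidates. The cell-size model, tolerance, and client records below are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only: assumes a simplified model in which a manipulated
# HTTP response of a known, distinctive size should appear (padded into Tor
# cell framing) in the traffic of the client actually talking to the server.

TOR_CELL = 512  # classic Tor cell payload size, used purely for illustration

def cells_needed(response_bytes: int) -> int:
    """Number of Tor cells a response of this size would occupy."""
    return -(-response_bytes // TOR_CELL)  # ceiling division

def reduce_candidates(candidates: dict[str, list[int]],
                      injected_size: int,
                      tolerance: int = 2) -> list[str]:
    """Keep only clients whose observed inbound bursts (in cells) are
    consistent with the MITM-injected response size."""
    expected = cells_needed(injected_size)
    survivors = []
    for client_id, burst_cells in candidates.items():
        if any(abs(b - expected) <= tolerance for b in burst_cells):
            survivors.append(client_id)
    return survivors

# Example: after injecting a 30 kB response, clients whose bursts never
# match the expected cell count are eliminated from the candidate set.
observed = {"client-a": [12, 59], "client-b": [59], "client-c": [7, 9]}
print(reduce_candidates(observed, injected_size=30_000))  # ['client-a', 'client-b']
```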

    A survey of QoS-aware web service composition techniques

    Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet the increasingly complex needs of users. The composition process copes well with services that offer disparate functionalities; however, over the years the number of web services that exhibit similar functionality but varying Quality of Service (QoS) has increased significantly. The problem therefore becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, in some cases, minimized; this selection constitutes an NP-hard problem. In this paper, we present a discussion of the concepts of web service composition and a holistic review of the service composition techniques currently proposed in the literature. Our review spans several publications in the field and can serve as a road map for future research.
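    As a concrete illustration of why QoS-aware selection is hard, the sketch below aggregates QoS attributes over a tiny composite service using conventions common in the literature (cost and latency add along a sequential workflow; availability multiplies); all candidate values are invented. Exhaustive enumeration works here only because the instance is tiny: the search space grows multiplicatively with each task's candidate set, which is why the general problem is NP-hard.

```python
# Hedged illustration of QoS aggregation over a composite service.
# Aggregation rules (sum for cost/latency, product for availability) follow
# common conventions in the literature; the candidate values are invented.
from itertools import product

# Each abstract task has several candidates: (cost, latency_ms, availability)
tasks = [
    [(5, 120, 0.99), (3, 200, 0.95)],                   # candidates for task 1
    [(8, 90, 0.999), (4, 150, 0.97), (6, 100, 0.98)],   # candidates for task 2
]

def aggregate(plan):
    cost = sum(s[0] for s in plan)
    latency = sum(s[1] for s in plan)   # sequential workflow assumed
    availability = 1.0
    for s in plan:
        availability *= s[2]
    return cost, latency, availability

# Exhaustive enumeration of every plan; here we pick the lowest-latency one.
best = min(product(*tasks), key=lambda plan: aggregate(plan)[1])
print(aggregate(best))  # (13, 210, ~0.989)
```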

    The Dark Web: Cyber-Security Intelligence Gathering Opportunities, Risks and Rewards

    We offer a partial articulation of the threats and opportunities posed by the so-called Dark Web (DW). We go on to propose a novel DW attack detection and prediction model. Signalling aspects are considered, wherein the DW is seen to comprise a low-cost signalling environment; this holds inherent dangers, as well as rewards, both for investigators and for those with criminal intent. Suspected DW perpetrators typically act entirely in their own self-interest (e.g. illicit financial gain, terrorism, propagation of extremist views, extreme forms of racism, pornography, and politics; so-called 'radicalisation'). DW investigators therefore need to be suitably risk-aware, so that the construction of a credible, legally admissible, robust evidence trail does not expose them to undue operational or legal risk.

    Resilient Machine Learning: Advancement, Barriers, and Opportunities in the Nuclear Industry

    The widespread adoption and success of Machine Learning (ML) technologies depend on thorough testing of their resilience and robustness to adversarial attacks. The testing should focus on both the model and the data. It is necessary to build robust and resilient systems that withstand disruptions and remain functional despite the actions of adversaries, particularly in the security-sensitive Nuclear Industry (NI), where the consequences can be fatal in terms of both human lives and assets. We analyse ML-based research that has investigated adversaries and defence strategies in the NI. We then present the progress in the adoption of ML techniques, identify use cases where adversaries can threaten ML-enabled systems, and finally chart the progress on building Resilient Machine Learning (rML) systems, focusing entirely on the NI domain.
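    As a toy illustration of the class of adversarial threat the survey covers, the sketch below applies a fast-gradient-sign-style perturbation to an invented linear classifier; it is not drawn from any of the surveyed NI systems, and the weights, input, and epsilon are made up for demonstration.

```python
# Minimal adversarial-perturbation demo against a toy linear classifier.
import numpy as np

w = np.array([0.8, -1.2, 0.5])   # invented model weights
b = 0.1
x = np.array([1.0, 0.3, -0.7])   # an invented "sensor reading" to classify

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # sigmoid score

# For a linear model the gradient of the score w.r.t. the input is
# proportional to w, so the FGSM-style perturbation is epsilon * sign(w).
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)  # push the score toward the opposite class

print(f"clean score:       {predict(x):.3f}")       # ~0.55
print(f"adversarial score: {predict(x_adv):.3f}")   # ~0.42, a small input shift flips confidence
```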

    AdPExT: designing a tool to assess information gleaned from browsers by online advertising platforms

    The world of online advertising depends directly on collecting data about the online browsing habits of individuals to enable effective advertisement targeting and retargeting. However, these data collection practices can leak private data belonging to website visitors (end-users) without their knowledge. The growing privacy concern of end-users is amplified by a lack of trust and understanding of what advertisement trackers are collecting and how they use the data. This paper presents an investigation intended either to restore that trust or to validate those concerns. We aim to facilitate the assessment of the actual end-user-related data being collected by advertising platforms (APs), by means of a critical discussion but also through the development of a new tool, AdPExT (Advertising Parameter Extraction Tool), which can be used to extract third-party parameter key-value pairs at an individual key-value level. Furthermore, we conduct a survey covering mostly United Kingdom-based frequent internet users to gather the perceived sensitivity sentiment for various representative tracking parameters. End-users have a definite concern with regard to advertisement tracking of sensitive data by globally dominant platforms such as Facebook and Google.
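    The core extraction step that AdPExT automates can be pictured with a short sketch: pulling individual key-value pairs out of a third-party tracker request URL. The URL and parameter names below are invented examples, not captured data, and the sketch does not reproduce the actual tool.

```python
# Hedged sketch of parameter extraction from a tracker request URL.
from urllib.parse import urlparse, parse_qsl

tracker_request = ("https://tracker.example/collect"
                   "?uid=abc123&ref=https%3A%2F%2Fshop.example%2Fbasket"
                   "&ev=page_view&res=1920x1080")

parsed = urlparse(tracker_request)
pairs = parse_qsl(parsed.query)  # decodes percent-encoding automatically

# Each pair can now be assessed individually for sensitivity, which is the
# level of granularity the paper's survey asks end-users about.
for key, value in pairs:
    print(f"{key!r} -> {value!r}")
```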

    Trustworthy Cross-Border Interoperable Identity System for Developing Countries

    Foundational identity systems (FIDS) have been used to optimise service delivery and inclusive economic growth in developing countries. As developing nations increasingly seek to use FIDS for the identification and authentication of identity (ID) holders, trustworthy interoperability will help to develop a cross-border dimension of e-Government. Despite this potential, there has not been any significant research on the interoperability of FIDS in the African identity ecosystem. There are several challenges to this: on the one hand, complex internal political dynamics have resulted in weak institutions, implying that FIDS could be exploited for political gain; on the other hand, citizens' or ID holders' trust in government is habitually low, in which case data security and privacy protection concerns become paramount. In the same vein, some FIDS are technology-locked, so interoperability remains largely ambiguous. There are also issues of cross-system compatibility, legislation, vendor-locked system design principles, and unclear regulatory provisions for data sharing. Fundamentally, interoperability is an essential prerequisite for e-Government services and underpins optimal service delivery in education, social security, and financial services, including gender and equality, as already demonstrated by the European Union. Furthermore, cohesive data exchange through an interoperable identity system will create an ecosystem of efficient data governance and the integration of cross-border FIDS. Consequently, this research identifies the challenges, opportunities, and requirements for cross-border interoperability in an African context. Our findings show that interoperability in the African identity ecosystem is vital to strengthen the seamless authentication and verification of ID holders for inclusive economic growth and to widen the dimensions of e-Government across the continent.
    Comment: 18 pages, 4 figures, In 2023 Trustworthy Digital Identity International Conference, Bengaluru, India

    Fruit fly optimization algorithm for network-aware web service composition in the cloud

    Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service-oriented applications, and web services are central to the concept of SOC. Research into how web services can be composed to yield a QoS-optimal composite service has gathered significant attention. However, the number and spread of web services across cloud data centers have increased, thereby increasing the impact of the network on the composite-service performance experienced by the user. Recent QoS-based web service composition techniques focus on optimizing web service QoS attributes such as cost, response time, and execution time. In doing so, existing approaches do not separate the QoS of the network from the QoS of web services during composition. In this paper, we propose a network-aware service composition approach that separates the QoS of the network from the QoS of web services in the cloud. Consequently, our approach searches for composite services that are not only QoS-optimal but also have optimal network QoS. The approach consists of a network model, which estimates the QoS of the network as the network latency between services on the cloud, and a service composition technique based on the fruit fly optimization algorithm, which leverages the network model to search for low-latency compositions without compromising service QoS levels. The approach is discussed and the results of its evaluation are presented; they indicate that the proposed approach is competitive in finding QoS-optimal and low-latency solutions when compared to recent techniques.
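    The paper's network model and QoS constraints are not reproduced here, but the following minimal sketch shows the shape of a fruit fly optimization (FOA) loop applied to an invented one-dimensional latency objective: flies search randomly around the current best location, and the swarm converges on whichever fly has the best "smell" (objective value).

```python
# Minimal fruit fly optimization sketch over an invented latency objective.
import random

def latency(x):
    # Stand-in objective: pretend composite latency is minimized at x = 3.
    return (x - 3.0) ** 2 + 10.0

def foa(iterations=100, swarm=20, step=1.0):
    best_x = random.uniform(-10, 10)   # initial swarm location
    best_smell = latency(best_x)
    for _ in range(iterations):
        # Osphresis phase: flies search randomly around the best location.
        candidates = [best_x + random.uniform(-step, step) for _ in range(swarm)]
        # Vision phase: the swarm moves to the fly with the best smell.
        x = min(candidates, key=latency)
        if latency(x) < best_smell:
            best_x, best_smell = x, latency(x)
    return best_x, best_smell

print(foa())  # approaches (3.0, 10.0)
```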

    Network traffic analysis for threats detection in the Internet of Things

    As the prevalence of the Internet of Things (IoT) continues to increase, cyber criminals are quick to exploit the security gaps that many devices are inherently designed with. Users cannot be expected to tackle this threat alone, and many current solutions for network monitoring are simply not accessible, or can be difficult to implement, for the average user; this is a gap that needs to be addressed. This article presents an effective signature-based solution to monitor, analyze, and detect potentially malicious traffic in IoT ecosystems in a typical home network environment, utilizing passive network sniffing techniques and a cloud application to monitor anomalous activity. The proposed solution focuses on two attack and propagation vectors leveraged by the infamous Mirai botnet, namely DNS and Telnet. Experimental evaluation demonstrates that the proposed solution can detect 98.35 percent of malicious DNS traffic and 99.33 percent of malicious Telnet traffic, for an overall detection accuracy of 98.84 percent.
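    A signature-based check of this kind can be pictured as simple rule matching over observed flows. The sketch below is a hedged stand-in: the indicator domains, ports, and flow records are invented, and a real deployment would sit on top of a passive sniffer rather than a static list.

```python
# Simplified signature matching in the spirit of the article's approach.
MIRAI_DOMAINS = {"cnc.example-botnet.com", "report.example-botnet.com"}  # hypothetical IoCs
TELNET_PORTS = {23, 2323}  # ports historically scanned by Mirai

def classify(flow: dict) -> str:
    """Label a flow record by matching it against known signatures."""
    if flow.get("proto") == "dns" and flow.get("qname") in MIRAI_DOMAINS:
        return "malicious-dns"
    if flow.get("proto") == "tcp" and flow.get("dport") in TELNET_PORTS:
        return "suspicious-telnet"
    return "benign"

flows = [
    {"proto": "dns", "qname": "cnc.example-botnet.com"},
    {"proto": "tcp", "dport": 2323},
    {"proto": "dns", "qname": "example.org"},
]
for f in flows:
    print(f, "->", classify(f))
```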

    Classification of colloquial Arabic tweets in real-time to detect high-risk floods

    Twitter has eased real-time information flow for decision makers, and it is also one of the key enablers of Open-source Intelligence (OSINT). Tweet mining has recently been used in the context of incident response to estimate the location and damage caused by hurricanes and earthquakes. We aim to research the detection of a specific type of high-risk natural disaster that frequently occurs and causes casualties in the Arabian Peninsula, namely 'floods'. We investigate how to achieve accurate classification of short, informal (colloquial) Arabic text of the kind typically used on Twitter, which is highly inconsistent and has received very little attention in this field. First, we provide a thorough technical demonstration consisting of the following stages: data collection (Twitter REST API), labelling, text pre-processing, data division and representation, and model training; our experiment was implemented in 'R'. We then evaluate classifier performance via four experiments conducted to measure the impact of different stemming techniques on the following classifiers: SVM, J48, C5.0, NNET, NB, and k-NN. The dataset used consisted of 1,434 tweets in total. Our findings show that the Support Vector Machine (SVM) was prominent in terms of accuracy (F1 = 0.933). Furthermore, applying McNemar's test shows that using SVM without stemming on colloquial Arabic is significantly better than using stemming techniques.
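    The authors implemented their pipeline in R; the sketch below re-creates the winning configuration (SVM without stemming) in Python with scikit-learn as a hedged illustration, using a handful of invented toy tweets in place of the 1,434-tweet dataset.

```python
# Hedged Python re-sketch of the paper's best configuration: SVM, no stemming.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = ["غرق الشارع بالمياه",        # "the street is flooded" (toy example)
          "طقس جميل اليوم",            # "nice weather today" (toy example)
          "سيول قوية في الوادي",       # "strong floods in the valley" (toy example)
          "مباراة كرة القدم الليلة"]   # "football match tonight" (toy example)
labels = ["flood", "other", "flood", "other"]

# No stemming step, mirroring the paper's finding that raw colloquial
# tokens outperformed stemmed input for this task.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(tweets, labels)

# Toy test: "floods in the street" shares tokens only with flood-labelled tweets.
print(model.predict(["سيول في الشارع"]))  # plausibly ['flood']
```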

    An end-process blockchain-based secure aggregation mechanism using federated machine learning

    Federated Learning (FL) is a distributed Deep Learning (DL) technique that creates a global model through the local training of multiple edge devices. It uses a central server for model communication and the aggregation of post-trained models: the server orchestrates the training process by sending each participating device an initial or pre-trained model for training, and, to achieve the learning objective, focused updates from the edge devices are sent back to the server for aggregation. While such an architecture and information flow can help preserve the privacy of participating devices' data, the strong dependence on the central server is a significant drawback of this framework. A central server is a potential single point of failure, and a malicious server may be able to reconstruct the original data, which could impact trust, transparency, fairness, privacy, and security. Decentralizing the FL process can address these issues: integrating a decentralized protocol such as Blockchain technology into Federated Learning helps to ensure secure aggregation. This paper proposes a Blockchain-based secure aggregation strategy for FL. The Blockchain is implemented as the channel of communication between the central server and the edge devices, providing a mechanism for masking device-local data so that aggregation is secure and a malicious server cannot compromise or reconstruct the training data. It enhances the scalability of the system, eliminates the threat of a single point of failure at the central server, reduces vulnerability in the system, and ensures secure and transparent communication. Furthermore, our framework utilizes a fault-tolerant server to assist in handling the dropouts and stragglers that can occur in federated environments. To reduce the training time, we implement a callback or end-process mechanism that fires once sufficient post-trained models have been returned for aggregation (i.e. once the threshold accuracy is achieved). This mechanism resynchronizes clients holding stale, outdated models, minimizes the wastage of resources, and increases the rate of convergence of the global model.
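    The masking idea behind secure aggregation can be illustrated with a toy pairwise-mask scheme: each pair of clients shares a random mask that one adds and the other subtracts, so individual updates are hidden while their sum is preserved. This sketch omits the paper's Blockchain transport, dropout handling, and threshold callback, and all values are invented.

```python
# Toy pairwise-masking sketch for secure aggregation of model updates.
import random

def pairwise_masks(n_clients: int, dim: int, seed: int = 42):
    """Build per-client masks whose sum across all clients is zero."""
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            shared = [rng.uniform(-1, 1) for _ in range(dim)]
            for d in range(dim):
                masks[i][d] += shared[d]   # client i adds the shared mask
                masks[j][d] -= shared[d]   # client j subtracts it: sums cancel
    return masks

updates = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]   # invented local model deltas
masks = pairwise_masks(len(updates), dim=2)
masked = [[u + m for u, m in zip(upd, msk)] for upd, msk in zip(updates, masks)]

# The aggregator only ever sees `masked`, yet the total equals the true sum.
aggregate = [sum(col) for col in zip(*masked)]
print(aggregate)  # ~[0.2, 0.5], matching the sum of the raw updates
```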