
    Adaptive Traffic Fingerprinting for Darknet Threat Intelligence

    Darknet technology such as Tor has been used by various threat actors for organising illegal activities and data exfiltration. As such, there is a case for organisations to block such traffic, or to try to identify when it is used and for what purposes. However, anonymity in cyberspace has always been a domain of conflicting interests: while it gives nefarious actors enough cover to mask their illegal activities, it is also the cornerstone of freedom of speech and privacy. We present a proof of concept for a novel algorithm that could form the fundamental pillar of a darknet-capable Cyber Threat Intelligence platform. The solution can reduce the anonymity of Tor users, and considers the existing visibility of network traffic before optionally initiating targeted or widespread BGP interception. In combination with server HTTP response manipulation, the algorithm prunes the candidate data set by eliminating client-side traffic that is most unlikely to be responsible for the server-side connections of interest. Our test results show that MITM-manipulated server responses lead to the expected changes being received by the Tor client. Using simulation data generated by Shadow, we show that the detection scheme is effective, with a false positive rate of 0.001, while the sensitivity in detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating organisations willing to share their threat intelligence or cooperate during investigations.
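
    As a purely illustrative sketch of the candidate-elimination idea described above: given the size perturbations a MITM component injects into server responses, discard client-side flows whose volume changes cannot track them. The data structures, names, and tolerance threshold below are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: eliminate client-side flows that cannot explain
# server-side connections of interest. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Flow:
    client_id: str
    # Bytes observed per interval while the server response was being
    # perturbed (e.g. padded) by the MITM component.
    bytes_per_interval: list

def matches_perturbation(flow: Flow, injected_deltas: list, tolerance: float = 0.2) -> bool:
    """True if the flow's volume changes track the injected response-size
    deltas closely enough for the client to remain a candidate."""
    if len(flow.bytes_per_interval) < len(injected_deltas) + 1:
        return False
    observed = [b - a for a, b in zip(flow.bytes_per_interval, flow.bytes_per_interval[1:])]
    for delta, obs in zip(injected_deltas, observed):
        if delta and abs(obs - delta) / abs(delta) > tolerance:
            return False
    return True

def reduce_candidates(flows, injected_deltas):
    # Keep only clients whose traffic could plausibly carry the manipulated
    # responses; everyone else is eliminated from the candidate set.
    return [f for f in flows if matches_perturbation(f, injected_deltas)]
```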

    A survey of QoS-aware web service composition techniques

    Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet increasingly complex user needs. The composition process has long been adequate for dealing with services that have disparate functionalities; however, over the years the number of web services that exhibit similar functionalities but varying Quality of Service (QoS) has significantly increased. The problem therefore becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, in some cases, minimized; this selection constitutes an NP-hard problem. In this paper, a discussion of the concepts of web service composition and a holistic review of current service composition techniques proposed in the literature are presented. Our review spans several publications in the field and can serve as a road map for future research.
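
    To make the selection problem concrete, the sketch below uses the standard aggregation rules from the QoS-composition literature (response time and cost add over a sequential workflow, availability multiplies) and scores candidate plans with a weighted utility. The weights and QoS figures are invented examples, and the exhaustive search shown is exactly what becomes intractable as the number of candidates grows.

```python
# Illustrative sketch of QoS aggregation and brute-force plan selection.
import math
from itertools import product

def aggregate(plan):
    """QoS of a sequential composite service: time and cost add,
    availability multiplies across the selected services."""
    return {
        "response_time": sum(s["response_time"] for s in plan),
        "cost": sum(s["cost"] for s in plan),
        "availability": math.prod(s["availability"] for s in plan),
    }

def utility(qos):
    # Negative weights penalise attributes to be minimised; values are examples.
    weights = {"response_time": -0.4, "cost": -0.3, "availability": 0.3}
    return sum(w * qos[k] for k, w in weights.items())

# Two candidate services per abstract task (toy numbers). Exhaustive search
# over all combinations is exponential in the number of tasks, which is why
# the surveyed techniques resort to heuristics and metaheuristics.
task_candidates = [
    [{"response_time": 120, "cost": 5, "availability": 0.99},
     {"response_time": 90,  "cost": 8, "availability": 0.97}],
    [{"response_time": 200, "cost": 2, "availability": 0.95},
     {"response_time": 150, "cost": 4, "availability": 0.99}],
]
best = max(product(*task_candidates), key=lambda plan: utility(aggregate(plan)))
print(aggregate(best))
```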

    The Dark Web: Cyber-Security Intelligence Gathering Opportunities, Risks and Rewards

    We offer a partial articulation of the threats and opportunities posed by the so-called Dark Web (DW). We go on to propose a novel DW attack detection and prediction model. Signalling aspects are considered, wherein the DW is seen to comprise a low-cost signalling environment. This holds inherent dangers as well as rewards, both for investigators and for those with criminal intent. Suspected DW perpetrators typically act entirely in their own self-interest (e.g. illicit financial gain, terrorism, propagation of extremist views, extreme forms of racism, pornography, and politics; so-called ‘radicalisation’). DW investigators therefore need to be suitably risk-aware, such that the construction of a credible, legally admissible, robust evidence trail does not expose them to undue operational or legal risk.

    Resilient Machine Learning: Advancement, Barriers, and Opportunities in the Nuclear Industry

    The widespread adoption and success of Machine Learning (ML) technologies depend on thorough testing of their resilience and robustness to adversarial attacks. The testing should focus on both the model and the data. It is necessary to build robust and resilient systems that withstand disruptions and remain functional despite the actions of adversaries, particularly in the security-sensitive Nuclear Industry (NI), where the consequences can be fatal in terms of both human lives and assets. We analyse ML-based research works that have investigated adversaries and defence strategies in the NI. We then present the progress in the adoption of ML techniques, identify use cases where adversaries can threaten ML-enabled systems, and finally chart the progress on building Resilient Machine Learning (rML) systems, focusing entirely on the NI domain.
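
    For a flavour of the adversarial attacks such testing must withstand, here is a minimal FGSM-style perturbation of a toy linear classifier. The model, data, and epsilon are illustrative assumptions, not drawn from the surveyed NI works.

```python
# Minimal FGSM sketch: perturb an input in the direction of the loss
# gradient to push a linear classifier toward misclassification.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # toy logistic-regression "sensor" model
x, y = rng.normal(size=8), 1.0          # a benign input with true label 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take a bounded step in the sign of the input gradient.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))  # pushed toward the wrong class
```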

    Trustworthy Cross-Border Interoperable Identity System for Developing Countries

    Foundational identity systems (FIDS) have been used to optimise service delivery and inclusive economic growth in developing countries. As developing nations increasingly seek to use FIDS for the identification and authentication of identity (ID) holders, trustworthy interoperability will help to develop a cross-border dimension of e-Government. Despite this potential, there has not been any significant research on the interoperability of FIDS in the African identity ecosystem. There are several challenges to this: on the one hand, complex internal political dynamics have resulted in weak institutions, implying that FIDS could be exploited for political gain; on the other hand, citizens' trust in government is habitually low, in which case data security and privacy protection concerns become paramount. In the same sense, some FIDS are technology-locked, so the path to interoperability is ambiguous. There are also issues of cross-system compatibility, legislation, vendor-locked system design principles, and unclear regulatory provisions for data sharing. Fundamentally, interoperability is an essential prerequisite for e-Government services and underpins optimal service delivery in education, social security, and financial services, including gender and equality, as already demonstrated by the European Union. Furthermore, cohesive data exchange through an interoperable identity system will create an ecosystem of efficient data governance and the integration of cross-border FIDS. Consequently, this research identifies the challenges, opportunities, and requirements for cross-border interoperability in an African context. Our findings show that interoperability in the African identity ecosystem is vital to strengthen the seamless authentication and verification of ID holders for inclusive economic growth and to widen the dimensions of e-Government across the continent.

    AdPExT: designing a tool to assess information gleaned from browsers by online advertising platforms

    The world of online advertising is directly dependent on collecting data about the online browsing habits of individuals to enable effective advertisement targeting and retargeting. However, these data collection practices can leak private data belonging to website visitors (end-users) without their knowledge. The growing privacy concern of end-users is amplified by a lack of trust and understanding of what data advertisement trackers are collecting and how they are using it. This paper presents an investigation to restore that trust or validate those concerns. We aim to facilitate the assessment of the actual end-user-related data being collected by advertising platforms (APs) by means of a critical discussion, but also through the development of a new tool, AdPExT (Advertising Parameter Extraction Tool), which can be used to extract third-party parameters at an individual key-value level. Furthermore, we conduct a survey, covering mostly United Kingdom-based frequent internet users, to gather the perceived sensitivity sentiment for various representative tracking parameters. End-users have a definite concern with regard to advertisement tracking of sensitive data by globally dominant platforms such as Facebook and Google.
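
    The core extraction step can be pictured as splitting a third-party tracking request into individual parameter key-value pairs; the sketch below does this for a raw request URL using only the Python standard library. It is a hedged approximation of the idea, not AdPExT itself, and the example URL is fabricated.

```python
# Illustrative extraction of per-parameter key-value pairs from a
# third-party tracking request URL.
from urllib.parse import urlsplit, parse_qsl

def extract_params(request_url: str):
    parts = urlsplit(request_url)
    # Report each pair individually so its sensitivity can be rated on its own.
    return [
        {"tracker": parts.netloc, "key": k, "value": v}
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
    ]

# Fabricated example request in the style of a tracking pixel.
example = "https://tracker.example.com/tr?id=123&ev=PageView&dl=https%3A%2F%2Fshop.example%2Fcart"
for pair in extract_params(example):
    print(pair)
```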

    Improving safety claims in digital health interventions using the digital health assessment method

    Objective: To establish a relationship between digital health interventions (DHIs) and the health system challenges (HSCs) defined by the World Health Organization, within the context of hazard identification (HazID) leading to safety claims; to improve the justification of safety of DHIs and provide a standardised approach to hazard assessment through common terminology, ontology, and simplification of safety claims; and to articulate the results as guidance for health strategy and regulatory/standards-based compliance. Methods: We categorise and analyse hazards using a qualitative HazID study. Although other methodologies are available for hazard assessment, this method exploits a synergy between the simplicity of the DHI's intended use and the interaction of a multidisciplinary team (technologists and health informaticians) as an influencing factor in the hazard analysis of the subject under assessment. We examine the identified hazards and associated failures to articulate the improvements in the quality of safety claims. Results: Applying the method provides the hazard assessment and helps generate the assurance case. The justification of safety is made and elicits confidence in the safety claim. Controls to hazards contribute to meeting the HSCs. Conclusions: This method of hazard assessment and analysis, and the use of the DHI and HSC ontologies, improves the justification of the safety claim and the articulation of evidence.
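
    One way to picture the ontology linkage the method relies on is a hazard log that traces each hazard to its controls and to the health system challenge those controls help meet. The structure and field names below are hypothetical, not the paper's ontology.

```python
# Hypothetical hazard-log structure linking hazards, controls, and HSCs.
from dataclasses import dataclass, field

@dataclass
class Hazard:
    identifier: str
    description: str
    failure_modes: list = field(default_factory=list)
    controls: list = field(default_factory=list)  # mitigations, each traceable to evidence
    hsc: str = ""                                 # WHO health system challenge addressed

log = [
    Hazard(
        identifier="HAZ-01",
        description="Stale dosage guidance shown to clinician",
        failure_modes=["content pipeline fails silently"],
        controls=["content age banner", "automated freshness check"],
        hsc="quality of care",
    )
]
# A safety claim would then cite each hazard's controls and their evidence.
```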

    Fruit fly optimization algorithm for network-aware web service composition in the cloud

    Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service-oriented applications. Web services are central to the concept of SOC. Research into how web services can be composed to yield a QoS-optimal composite service has gathered significant attention. However, the number and spread of web services across cloud data centers have increased, thereby increasing the impact of the network on the composite service performance experienced by the user. Recent QoS-based web service composition techniques focus on optimizing web service QoS attributes such as cost, response time, and execution time. In doing so, existing approaches do not separate the QoS of the network from web service QoS during service composition. In this paper, we propose a network-aware service composition approach which separates the QoS of the network from the QoS of web services in the Cloud. Consequently, our approach searches for composite services that are not only QoS-optimal but also have optimal network QoS. Our approach consists of a network model, which estimates the QoS of the network in the form of network latency between services on the cloud, and a service composition technique based on the fruit fly optimization algorithm, which leverages the network model to search for low-latency compositions without compromising service QoS levels. The approach is discussed and the results of its evaluation are presented. The results indicate that the proposed approach is competitive in finding QoS-optimal and low-latency solutions when compared to recent techniques.
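
    A compressed sketch of how a fruit-fly-style search can favour low-latency compositions follows: flies perturb the best-known plan (the "smell" phase) and the swarm relocates to the best-scoring candidate (the "vision" phase). The QoS and latency values, fitness weighting, and population sizes are illustrative assumptions rather than the paper's configuration.

```python
# Illustrative fruit-fly-style discrete search over candidate compositions.
import random

random.seed(1)
N_TASKS, N_CANDIDATES, FLIES, ITERATIONS = 4, 5, 20, 50

# Toy per-candidate service QoS scores and estimated network latencies (ms).
qos = [[random.uniform(0.5, 1.0) for _ in range(N_CANDIDATES)] for _ in range(N_TASKS)]
latency = [[random.uniform(10, 200) for _ in range(N_CANDIDATES)] for _ in range(N_TASKS)]

def fitness(plan):
    # Reward service QoS, penalise estimated network latency between services.
    q = sum(qos[t][c] for t, c in enumerate(plan))
    l = sum(latency[t][c] for t, c in enumerate(plan))
    return q - 0.005 * l

best = [random.randrange(N_CANDIDATES) for _ in range(N_TASKS)]
for _ in range(ITERATIONS):
    swarm = []
    for _ in range(FLIES):
        fly = best[:]                                             # start from the swarm location
        fly[random.randrange(N_TASKS)] = random.randrange(N_CANDIDATES)  # random "smell" step
        swarm.append(fly)
    candidate = max(swarm, key=fitness)                           # "vision": pick the best smell
    if fitness(candidate) > fitness(best):
        best = candidate                                          # swarm relocates

print("best plan:", best, "fitness:", round(fitness(best), 3))
```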

    Classification of colloquial Arabic tweets in real-time to detect high-risk floods

    Twitter has eased real-time information flow for decision makers, and it is also one of the key enablers of Open-Source Intelligence (OSINT). Tweet mining has recently been used in the context of incident response to estimate the location and damage caused by hurricanes and earthquakes. We research the detection of a specific type of high-risk natural disaster that frequently occurs and causes casualties in the Arabian Peninsula, namely ‘floods’, and investigate how to achieve accurate classification of the short, informal (colloquial) Arabic text commonly used on Twitter, which is highly inconsistent and has received very little attention in this field. First, we provide a thorough technical demonstration consisting of the following stages: data collection (Twitter REST API), labelling, text pre-processing, data division and representation, and model training, implemented in ‘R’ for our experiment. We then evaluate classifier performance via four experiments conducted to measure the impact of different stemming techniques on the following classifiers: SVM, J48, C5.0, NNET, NB, and k-NN. The dataset used consisted of 1434 tweets in total. Our findings show that the Support Vector Machine (SVM) was prominent in terms of accuracy (F1 = 0.933). Furthermore, applying McNemar's test shows that using SVM without stemming on colloquial Arabic is significantly better than using stemming techniques.
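
    The paper's pipeline was implemented in R; purely for illustration, a minimal Python equivalent of the same stages might look as follows, with placeholder tweets and labels, and character n-gram features standing in as one plausible way to classify colloquial text without stemming.

```python
# Hedged Python sketch of a tweet-classification pipeline: vectorise, train
# an SVM, predict. Data below is placeholder text, not the paper's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

tweets = ["أمطار غزيرة وسيول في الشارع", "جو جميل اليوم"]  # placeholder examples
labels = ["flood", "not_flood"]

model = make_pipeline(
    # Character n-grams avoid stemming entirely, in line with the finding
    # that SVM without stemming performed best on colloquial Arabic.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(tweets, labels)
print(model.predict(["سيول تقطع الطرق"]))
```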

    Network traffic analysis for threats detection in the Internet of Things

    As the prevalence of the Internet of Things (IoT) continues to increase, cyber criminals are quick to exploit the security gaps that many devices are inherently designed with. Users cannot be expected to tackle this threat alone, and many current solutions available for network monitoring are simply not accessible or can be difficult to implement for the average user, which is a gap that needs to be addressed. This article presents an effective signature-based solution to monitor, analyze, and detect potentially malicious traffic for IoT ecosystems in the typical home network environment by utilizing passive network sniffing techniques and a cloud application to monitor anomalous activity. The proposed solution focuses on two attack and propagation vectors leveraged by the infamous Mirai botnet, namely DNS and Telnet. Experimental evaluation demonstrates the proposed solution can detect 98.35 percent of malicious DNS traffic and 99.33 percent of Telnet traffic for an overall detection accuracy of 98.84 percent
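
    In the same spirit, a minimal signature-matching monitor for the two vectors named above could be sketched with scapy as follows. The blocklisted domain and the scan threshold are illustrative stand-ins, not the authors' ruleset, and a real deployment would need far richer indicators.

```python
# Illustrative passive monitor for two Mirai-style signatures:
# (1) DNS lookups of known C2 domains, (2) outbound Telnet scanning.
# Requires scapy and capture privileges; indicators below are fabricated.
from collections import Counter
from scapy.all import sniff, DNSQR, TCP, IP

DNS_BLOCKLIST = {b"cnc.example-botnet.com."}  # hypothetical C2 domain
telnet_attempts = Counter()

def inspect(pkt):
    # Signature 1: DNS query for a blocklisted command-and-control domain.
    if pkt.haslayer(DNSQR) and pkt[DNSQR].qname in DNS_BLOCKLIST:
        print(f"[alert] C2 DNS lookup from {pkt[IP].src}")
    # Signature 2: repeated outbound Telnet SYNs (Mirai propagation behaviour).
    elif pkt.haslayer(TCP) and pkt[TCP].dport == 23 and pkt[TCP].flags == "S":
        telnet_attempts[pkt[IP].src] += 1
        if telnet_attempts[pkt[IP].src] > 10:  # arbitrary example threshold
            print(f"[alert] Telnet scanning from {pkt[IP].src}")

sniff(filter="udp port 53 or tcp port 23", prn=inspect, store=False)
```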