
    Security Engineering of Patient-Centered Health Care Information Systems in Peer-to-Peer Environments: Systematic Review

    Background: Patient-centered health care information systems (PHSs) enable patients to take control and become knowledgeable about their own health, preferably in a secure environment. Current and emerging PHSs use either a centralized database, peer-to-peer (P2P) technology, or distributed ledger technology for PHS deployment. The evolving COVID-19 decentralized Bluetooth-based tracing systems are examples of disease-centric P2P PHSs. Although using P2P technology for the provision of PHSs can be flexible, scalable, resilient to a single point of failure, and inexpensive for patients, the use of health information on P2P networks poses major security issues as users must manage information security largely by themselves. Objective: This study aims to identify the inherent security issues for PHS deployment in P2P networks and how they can be overcome. In addition, this study reviews different P2P architectures and proposes a suitable architecture for P2P PHS deployment. Methods: A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines. Thematic analysis was used for data analysis. We searched the following databases: IEEE Digital Library, PubMed, Science Direct, ACM Digital Library, Scopus, and Semantic Scholar. The search was conducted on articles published between 2008 and 2020. The Common Vulnerability Scoring System was used as a guide for rating security issues. Results: Our findings are consolidated into 8 key security issues associated with PHS implementation and deployment on P2P networks and 7 factors promoting them. Moreover, we propose a suitable architecture for P2P PHSs and guidelines for the provision of PHSs while maintaining information security. Conclusions: Despite the clear advantages of P2P PHSs, the absence of centralized controls and inconsistent views of the network on some P2P systems have profound adverse impacts in terms of security. The security issues identified in this study need to be addressed to increase patients' intention to use PHSs on P2P networks by making them safe to use.
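The review rates the identified issues with the Common Vulnerability Scoring System. Purely as an illustrative aside (not taken from the paper), a minimal sketch of the CVSS v3.1 base-score arithmetic for the scope-unchanged case, assuming the standard published metric weights, looks like this:

```python
import math

# Minimal CVSS v3.1 base-score sketch (scope "Unchanged" only), for illustration.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability


def roundup(x: float) -> float:
    """Round up to one decimal place (simplified version of the spec's rounding)."""
    return math.ceil(x * 10) / 10


def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))


# Hypothetical example: a network-reachable, low-complexity flaw disclosing data on a P2P node.
print(base_score("N", "L", "N", "N", c="H", i="N", a="N"))  # -> 7.5
```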

    Detecting Encrypted Command and Control Channels Using Network Fingerprints

    The threat landscape of the Internet has evolved drastically into an environment where malware is increasingly developed by financially motivated cybercriminal groups who mirror legitimate businesses in their structure and processes. These groups develop sophisticated malware with the aim of transforming persistent control over large numbers of infected machines into profit. Recent developments have shown that malware authors seek to hide their Command and Control channels by implementing custom application layer protocols and using custom encryption algorithms. This technique effectively thwarts conventional pattern-based detection mechanisms. This thesis presents network fingerprints, a novel way of performing network-based detection of encrypted Command and Control channels. The goal of the work was to produce a proof of concept system that is able to generate accurate and reliable network signatures for this purpose. The thesis presents and explains the individual phases of an analysis pipeline that was built to process and analyze malware network traffic and to produce network fingerprint signatures. The analysis system was used to generate network fingerprints that were deployed to an intrusion detection system in real-world networks for a test period of 17 days. The experimental phase produced 71 true positive detections and 9 false positive detections, and therefore proved that the established technique is capable of performing detection of targeted encrypted Command and Control channels. Furthermore, the effects on the performance of the underlying intrusion detection system were measured. These results showed that network fingerprints induce an increase of 2-9% in packet loss and a small increase in the overall computational load of the intrusion detection system.
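The thesis defines its own fingerprint format; purely to illustrate the general idea (and not the author's actual signature scheme), an encryption-agnostic signature might key on observable flow features such as the sizes and directions of the first few packets, as in this hypothetical sketch:

```python
from dataclasses import dataclass

# Hypothetical "network fingerprint": expected (direction, size) pattern of the
# first packets of a flow, with a per-packet size tolerance. Illustrative only,
# not the signature format developed in the thesis.
@dataclass
class Fingerprint:
    name: str
    pattern: list[tuple[str, int]]   # e.g. [("out", 154), ("in", 230), ...]
    tolerance: int = 8               # allowed deviation in bytes

    def matches(self, flow: list[tuple[str, int]]) -> bool:
        if len(flow) < len(self.pattern):
            return False
        return all(
            d_obs == d_exp and abs(s_obs - s_exp) <= self.tolerance
            for (d_obs, s_obs), (d_exp, s_exp) in zip(flow, self.pattern)
        )


# Example: the first packets of an observed flow as (direction, payload size).
c2_beacon = Fingerprint("family-x-beacon", [("out", 154), ("in", 230), ("out", 410)])
observed = [("out", 150), ("in", 236), ("out", 412), ("in", 1024)]
print(c2_beacon.matches(observed))  # True: every packet is within tolerance
```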

    Federated Agentless Detection of Endpoints Using Behavioral and Characteristic Modeling

    During the past two decades, computer networks and security have evolved to the point that, even though we use the same TCP/IP stack, network traffic behaviors and security needs have changed significantly. To secure modern computer networks, complete and accurate data must be gathered in a structured manner pertaining to network and endpoint behavior. Security operations teams struggle to keep up with the ever-increasing number of devices and daily network attacks. The security of networks is often managed reactively rather than through proactive protection. Data collected at the backbone are becoming inadequate during security incidents. Incident response teams require data that is reliably attributed to each individual endpoint over time. With the current state of dissociated data collected from networks using different tools, it is challenging to correlate the necessary data to find the origin and propagation of attacks within the network. Critical indicators of compromise may go undetected due to the drawbacks of current data collection systems, leaving endpoints vulnerable to attack. The proliferation of distributed organizations demands distributed, federated security solutions. Without robust data collection systems that are capable of transcending architectural and computational challenges, it is becoming increasingly difficult to provide endpoint protection at scale. This research focuses on reliable agentless endpoint detection and traffic attribution in federated networks using behavioral and characteristic modeling for incident response
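As a rough, generic sketch of the agentless idea (not the dissertation's model), passively observed flow records can be summarised into per-endpoint behavioural features that stay attributable to a host over time; the field names below are placeholders:

```python
from collections import defaultdict

# Illustrative sketch: build per-endpoint behavioural feature vectors from
# passively collected flow records, with no agent installed on the endpoint.
def profile_endpoints(flows):
    """flows: iterable of dicts with keys src, dst, dst_port, bytes."""
    profiles = defaultdict(lambda: {"peers": set(), "ports": set(),
                                    "bytes_out": 0, "flows": 0})
    for f in flows:
        p = profiles[f["src"]]
        p["peers"].add(f["dst"])
        p["ports"].add(f["dst_port"])
        p["bytes_out"] += f["bytes"]
        p["flows"] += 1
    # Fixed-length feature vectors for later modelling and correlation.
    return {host: [len(p["peers"]), len(p["ports"]), p["bytes_out"], p["flows"]]
            for host, p in profiles.items()}


flows = [
    {"src": "10.0.0.5", "dst": "93.184.216.34", "dst_port": 443, "bytes": 5200},
    {"src": "10.0.0.5", "dst": "10.0.0.9",      "dst_port": 445, "bytes": 900},
]
print(profile_endpoints(flows))  # {'10.0.0.5': [2, 2, 6100, 2]}
```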

    Data-driven curation, learning and analysis for inferring evolving IoT botnets in the wild

    The insecurity of the Internet-of-Things (IoT) paradigm continues to wreak havoc in consumer and critical infrastructure realms. Several challenges impede addressing IoT security at large, including the lack of IoT-centric data that can be collected, analyzed and correlated, due to the highly heterogeneous nature of such devices and their widespread deployments in Internet-wide environments. To this end, this paper explores macroscopic, passive empirical data to shed light on this evolving threat phenomenon. This not only aims at classifying and inferring Internet-scale compromised IoT devices by solely observing such one-way network traffic, but also endeavors to uncover, track and report on orchestrated "in the wild" IoT botnets. Initially, to prepare for the effective utilization of such data, a novel probabilistic model is designed and developed to cleanse such traffic from noise samples (i.e., misconfiguration traffic). Subsequently, several shallow and deep learning models are evaluated to ultimately design and develop a multi-window convolution neural network trained on active and passive measurements to accurately identify compromised IoT devices. Consequently, to infer orchestrated and unsolicited activities that have been generated by well-coordinated IoT botnets, hierarchical agglomerative clustering is deployed by scrutinizing a set of innovative and efficient network feature sets. By analyzing 3.6 TB of recent darknet traffic, the proposed approach uncovers a momentous 440,000 compromised IoT devices and generates evidence-based artifacts related to 350 IoT botnets. While some of these detected botnets refer to previously documented campaigns such as the Hide and Seek, Hajime and Fbot, other events illustrate evolving threats such as those with cryptojacking capabilities and those targeting industrial control system communication and control services
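The campaign-inference step relies on hierarchical agglomerative clustering over per-source network features. A generic sketch of that clustering step only, using SciPy and placeholder features rather than the paper's engineered feature sets, might look like this:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rough sketch: cluster per-source feature vectors derived from one-way
# (darknet) traffic to surface groups of sources behaving alike. The feature
# columns here are illustrative placeholders, not the paper's feature set.
rng = np.random.default_rng(0)
# rows: scanning sources; columns: e.g. packet rate, #targeted ports,
# #destinations probed, mean inter-packet time (all normalised)
features = rng.random((200, 4))

Z = linkage(features, method="average", metric="euclidean")
labels = fcluster(Z, t=0.5, criterion="distance")   # cut the dendrogram at distance 0.5

print(f"{labels.max()} candidate campaigns among {len(features)} sources")
```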

    A framework for malicious host fingerprinting using distributed network sensors

    Numerous software agents exist and are responsible for the increasing volumes of malicious traffic observed on the Internet today. From a technical perspective, the existing techniques for monitoring malicious agents and traffic were not developed to allow for the interrogation of the source of malicious traffic. This interrogation or reconnaissance would be considered active analysis, as opposed to existing, mostly passive analysis. Unlike passive analysis, the active techniques are time-sensitive and their results become increasingly inaccurate as the time delta between observation and interrogation increases. In addition, some studies have shown that the geographic separation of hosts on the Internet has resulted in pockets of different malicious agents and traffic targeting victims. As such, it is important to perform any kind of data collection from various sources and across distributed IP address space. The data gathering and exposure capabilities of sensors such as honeypots and network telescopes were extended through the development of near-realtime Distributed Sensor Network modules that allowed for the near-realtime analysis of malicious traffic from distributed, heterogeneous monitoring sensors. In order to utilise the data exposed by the near-realtime Distributed Sensor Network modules, an Automated Reconnaissance Framework (AR-Framework) was created; this framework was tasked with active and passive information collection and analysis of data in near-realtime and was designed from an adapted Multi Sensor Data Fusion model. The hypothesis was made that if sufficiently different characteristics of a host could be identified, combined they could act as a unique fingerprint for that host, potentially allowing for the re-identification of that host even if its IP address had changed. To this end, the concept of Latency Based Multilateration was introduced, acting as an additional metric for remote host fingerprinting. The vast amount of information gathered by the AR-Framework required the development of visualisation tools which could illustrate this data in near-realtime and also provide various degrees of interaction to accommodate human interpretation of such data. Ultimately, the data collected through the application of the near-realtime Distributed Sensor Network and AR-Framework provided a unique perspective of a malicious host demographic, allowing new correlations to be drawn between attributes such as common open ports and operating systems, location, and the inferred intent of these malicious hosts. The results expand our current understanding of malicious hosts on the Internet and enable further research in the area
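The underlying intuition of Latency Based Multilateration can be sketched very roughly as follows: treat the vector of round-trip times measured from several distributed sensors as one more characteristic in a host fingerprint and compare such vectors by distance. The sensors, hosts and values below are entirely hypothetical and this is not the thesis's actual algorithm:

```python
import math

# Hypothetical RTTs (ms) to a host as measured from distributed sensors.
sensors = ["za-jhb", "de-fra", "us-nyc", "jp-tyo"]

known_host = {"za-jhb": 12.0, "de-fra": 162.0, "us-nyc": 201.0, "jp-tyo": 280.0}
candidate  = {"za-jhb": 13.1, "de-fra": 158.9, "us-nyc": 204.7, "jp-tyo": 276.0}


def latency_distance(a, b):
    """Euclidean distance between two latency vectors over the same sensors."""
    return math.sqrt(sum((a[s] - b[s]) ** 2 for s in sensors))


# A small distance suggests the candidate may be the known host behind a new IP;
# in practice this would be combined with other attributes (open ports, OS, banners).
print(f"distance = {latency_distance(known_host, candidate):.1f} ms")
```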

    Data Exfiltration: A Review of External Attack Vectors and Countermeasures

    Context: One of the main targets of cyber-attacks is data exfiltration, which is the leakage of sensitive or private data to an unauthorized entity. Data exfiltration can be perpetrated by an outsider or an insider of an organization. Given the increasing number of data exfiltration incidents, a large number of data exfiltration countermeasures have been developed. These countermeasures aim to detect, prevent, or investigate exfiltration of sensitive or private data. With the growing interest in data exfiltration, it is important to review data exfiltration attack vectors and countermeasures to support future research in this field. Objective: This paper is aimed at identifying and critically analysing data exfiltration attack vectors and countermeasures for reporting the state of the art and determining gaps for future research. Method: We have followed a structured process for selecting 108 papers from seven publication databases. The thematic analysis method has been applied to analyse the extracted data from the reviewed papers. Results: We have developed a classification of (1) data exfiltration attack vectors used by external attackers and (2) the countermeasures in the face of external attacks. We have mapped the countermeasures to attack vectors. Furthermore, we have explored the applicability of various countermeasures for different states of data (i.e., in use, in transit, or at rest). Conclusion: This review has revealed that (a) most of the state of the art is focussed on preventive and detective countermeasures, and significant research is required on developing investigative countermeasures that are equally important; (b) several data exfiltration countermeasures are not able to respond in real time, which indicates that research efforts need to be invested to enable them to respond in real time; (c) a number of data exfiltration countermeasures do not take privacy and ethical concerns into consideration, which may become an obstacle to their full adoption; (d) existing research is primarily focussed on protecting data in the ‘in use’ state, therefore future research needs to be directed towards securing data in the ‘at rest’ and ‘in transit’ states; and (e) there is no standard or framework for the evaluation of data exfiltration countermeasures. We assert the need for developing such an evaluation framework
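As a toy illustration of one detective countermeasure class that reviews of this kind cover, namely content inspection of data in transit, a pattern-based egress check could look like the sketch below. The patterns and the regex-based approach are generic examples, not a technique taken from the paper:

```python
import re

# Toy detective countermeasure for data "in transit": scan outbound payloads
# for patterns that resemble sensitive records. Patterns are illustrative only;
# real deployments must also handle encoding, encryption and the privacy issues
# the review raises.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def flag_exfiltration(payload: str) -> list[str]:
    """Return the names of all patterns found in an outbound payload."""
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]


print(flag_exfiltration("POST /upload card=4111 1111 1111 1111 user=a@example.com"))
# -> ['credit_card', 'email']
```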

    Using Context to Improve Network-based Exploit Kit Detection

    Today, our computers are routinely compromised while performing seemingly innocuous activities like reading articles on trusted websites (e.g., the NY Times). These compromises are perpetrated via complex interactions involving the advertising networks that monetize these sites. Web-based compromises such as exploit kits are similar to any other scam -- the attacker wants to lure an unsuspecting client into a trap to steal private information, or resources -- generating 10s of millions of dollars annually. Exploit kits are web-based services specifically designed to capitalize on vulnerabilities in unsuspecting client computers in order to install malware without a user's knowledge. Sadly, it only takes a single successful infection to ruin a user's financial life, or lead to corporate breaches that result in millions of dollars of expense and loss of customer trust. Exploit kits use a myriad of techniques to obfuscate each attack instance, making current network-based defenses such as signature-based network intrusion detection systems far less effective than in years past. Dynamic analysis or honeyclient analysis on these exploits plays a key role in identifying new attacks for signature generation, but provides no means of inspecting end-user traffic on the network to identify attacks in real time. As a result, defenses designed to stop such malfeasance often arrive too late or not at all resulting in high false positive and false negative (error) rates. In order to deal with these drawbacks, three new detection approaches are presented. To deal with the issue of a high number of errors, a new technique for detecting exploit kit interactions on a network is proposed. The technique capitalizes on the fact that an exploit kit leads its potential victim through a process of exploitation by forcing the browser to download multiple web resources from malicious servers. This process has an inherent structure that can be captured in HTTP traffic and used to significantly reduce error rates. The approach organizes HTTP traffic into tree-like data structures, and, using a scalable index of exploit kit traces as samples, models the detection process as a subtree similarity search problem. The technique is evaluated on 3,800 hours of web traffic on a large enterprise network, and results show that it reduces false positive rates by four orders of magnitude over current state-of-the-art approaches. While utilizing structure can vastly improve detection rates over current approaches, it does not go far enough in helping defenders detect new, previously unseen attacks. As a result, a new framework that applies dynamic honeyclient analysis directly on network traffic at scale is proposed. The framework captures and stores a configurable window of reassembled HTTP objects network wide, uses lightweight content rendering to establish the chain of requests leading up to a suspicious event, then serves the initial response content back to the honeyclient in an isolated network. The framework is evaluated on a diverse collection of exploit kits as they evolve over a 1 year period. The empirical evaluation suggests that the approach offers significant operational value, and a single honeyclient can support a campus deployment of thousands of users. While the above approaches attempt to detect exploit kits before they have a chance to infect the client, they cannot protect a client that has already been infected. 
The final technique detects signs of post-infection behavior by intrusions that abuse the domain name system (DNS) to make contact with an attacker. Contemporary detection approaches utilize the structure of a domain name and require hundreds of DNS messages to detect such malware. As a result, these detection mechanisms cannot detect malware in a timely manner and are susceptible to high error rates. The final technique, based on sequential hypothesis testing, uses the DNS message patterns of a subset of DNS traffic to detect malware in as little as four DNS messages, and with orders of magnitude reduction in error rates. The results of this work can make a significant operational impact on network security analysis, and open several exciting future directions for network security research.
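The last technique rests on sequential hypothesis testing over DNS message patterns. As a generic sketch of Wald's sequential probability ratio test applied to a per-host stream of binary "suspicious or not" DNS observations, with placeholder probabilities rather than the dissertation's model:

```python
import math

# Generic Wald SPRT sketch: decide "infected" vs "benign" from a stream of
# binary per-DNS-message observations (1 = the message looks suspicious).
# The probabilities below are assumed placeholders, not values from the work.
P_SUSP_BENIGN, P_SUSP_INFECTED = 0.05, 0.80
ALPHA, BETA = 0.01, 0.01                      # tolerated false positive / negative rates
UPPER = math.log((1 - BETA) / ALPHA)          # crossing it accepts "infected"
LOWER = math.log(BETA / (1 - ALPHA))          # crossing it accepts "benign"


def sprt(observations):
    """Return a decision and the number of DNS messages it took."""
    llr = 0.0
    for n, x in enumerate(observations, 1):
        p1 = P_SUSP_INFECTED if x else 1 - P_SUSP_INFECTED
        p0 = P_SUSP_BENIGN if x else 1 - P_SUSP_BENIGN
        llr += math.log(p1 / p0)
        if llr >= UPPER:
            return "infected", n
        if llr <= LOWER:
            return "benign", n
    return "undecided", len(observations)


print(sprt([1, 1, 1, 1]))  # a few suspicious messages already cross the threshold
```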

    Advances in Data Mining Knowledge Discovery and Applications

    Advances in Data Mining Knowledge Discovery and Applications aims to help data miners, researchers, scholars, and PhD students who wish to apply data mining techniques. The primary contribution of this book is to highlight frontier fields and implementations of knowledge discovery and data mining. At first glance it may seem that the same things are repeated; in general, however, the same approaches and techniques can help in different fields and areas of expertise. This book presents knowledge discovery and data mining applications in two different sections. As is well known, data mining covers the areas of statistics, machine learning, data management and databases, pattern recognition, artificial intelligence, and other fields. In this book, most of these areas are covered by different data mining applications. The eighteen chapters have been classified into two parts: Knowledge Discovery and Data Mining Applications