
    Software-implemented attack tolerance for critical information retrieval

    The fast-growing reliance of our daily life upon online information services often demands an appropriate level of privacy protection as well as highly available service provision. However, most existing solutions have attempted to address these problems separately. This thesis investigates and presents a solution that provides both privacy protection and fault tolerance for online information retrieval. A new approach to Attack-Tolerant Information Retrieval (ATIR) is developed based on an extension of existing theoretical results for Private Information Retrieval (PIR). ATIR uses replicated services to protect a user's privacy and to ensure service availability. In particular, ATIR can tolerate any collusion of up to t servers for privacy violation and up to ƒ faulty (either crashed or malicious) servers in a system with k replicated servers, provided that k ≥ t + ƒ + 1, where t ≥ 1 and ƒ ≤ t. In contrast to other related approaches, ATIR relies on neither enforced trust assumptions, such as the use of tamper-resistant hardware and trusted third parties, nor an increased number of replicated servers. While the best solution known so far requires k (≥ 3t + 1) replicated servers to cope with t malicious servers and any collusion of up to t servers with an O(n^*) communication complexity, ATIR uses fewer servers at a much improved communication cost of O(n^(1/2)), where n is the size of the database managed by a server. The majority of current PIR research remains at a theoretical level; this thesis provides both theoretical schemes and their practical implementations with good performance results. In a LAN environment, it takes well under half a second to use an ATIR service for computations over data sets of up to 1 MB. The performance of the ATIR systems remains at the same level even in the presence of server crashes and malicious attacks. Both analytical results and experimental evaluation show that ATIR offers an attractive and practical solution for ever-increasing online information applications.
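    As one illustration of the replication-based PIR that ATIR extends, the sketch below implements the classic two-server square-root scheme (the database is laid out as a √n x √n bit matrix; each query and each answer is √n bits, giving O(n^(1/2)) communication). This is a hedged example, not the thesis's own protocol: it tolerates a single curious server (t = 1) and, unlike ATIR, no crashed or malicious ones, and all names are illustrative.

```python
import secrets

def make_queries(s, col):
    """Two column-selection vectors whose XOR differs only in the wanted column."""
    q1 = [secrets.randbits(1) for _ in range(s)]  # uniformly random -> reveals nothing on its own
    q2 = q1.copy()
    q2[col] ^= 1                                  # flip only the target column
    return q1, q2

def server_answer(db, query):
    """Each server returns, per row, the XOR of the bits in the selected columns (s bits total)."""
    answers = []
    for row in db:
        bit = 0
        for b, selected in zip(row, query):
            if selected:
                bit ^= b
        answers.append(bit)
    return answers

def reconstruct(a1, a2, row):
    """XORing the two answers for the wanted row cancels everything except the wanted bit."""
    return a1[row] ^ a2[row]

# usage: n = 16 bits stored as a 4x4 matrix, retrieve the bit at (row=2, col=3)
db = [[1, 0, 1, 1],
      [0, 0, 1, 0],
      [1, 1, 0, 1],
      [0, 1, 1, 0]]
q1, q2 = make_queries(4, col=3)
assert reconstruct(server_answer(db, q1), server_answer(db, q2), row=2) == db[2][3]
```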

    Developing a Formal Navy Knowledge Management Process

    Prepared for: Chief of Naval Operations, N1. Organizational tacit and explicit knowledge are required for high performance, and it is imperative for such knowledge to be managed to ensure that it flows rapidly, reliably and energetically. The Navy N1 organization has yet to develop a formal process for knowledge management (KM). This places N1 in a position of competitive disadvantage, particularly as thousands of people change jobs every day, often taking their hard-earned job knowledge out the door with them and leaving their replacements with the need to learn such knowledge anew. Building upon initial efforts to engage with industry and conceptualize a Navy KM strategy, the research described in this study employs a combination of Congruence Model analysis, Knowledge Flow Theory, and qualitative methods to outline an approach for embedding a formal Navy KM process. This work involves surveying best tools and practices in the industry, government and nonprofit sectors, augmented by in-depth field research examining two specific Navy organizations in detail. Results are highly promising, and they serve to illuminate a path toward improving Navy knowledge flows as well as continued research along these lines. Chief of Naval Operations, N1. Approved for public release; distribution is unlimited.

    Cyber Threat Predictive Analytics for Improving Cyber Supply Chain Security

    Cyber Supply Chain (CSC) systems are complex, involving different sub-systems that perform various tasks. Securing the supply chain is challenging due to the inherent vulnerabilities and threats in any part of the system, which can be exploited at any point within the supply chain and can cause severe disruption to overall business continuity. It is therefore of paramount importance to understand and predict threats so that organizations can undertake the necessary control measures for supply chain security. Cyber Threat Intelligence (CTI) provides intelligence analysis to turn unknown threats into known ones using various properties, including threat actor skill and motivation, Tactics, Techniques, and Procedures (TTP), and Indicators of Compromise (IoC). This paper aims to analyse and predict threats in order to improve cyber supply chain security. We apply Cyber Threat Intelligence (CTI) with Machine Learning (ML) techniques to analyse and predict threats based on the CTI properties, which allows the inherent CSC vulnerabilities to be identified so that appropriate control actions can be undertaken for overall cybersecurity improvement. To demonstrate the applicability of our approach, CTI data is gathered and a number of ML algorithms, i.e., Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), and Decision Tree (DT), are used to develop predictive analytics using the Microsoft Malware Prediction dataset. The experiment considers attack and TTP as input parameters and vulnerabilities and Indicators of Compromise (IoC) as output parameters. The prediction results reveal that spyware/ransomware and spear phishing are the most predictable threats in the CSC. We also recommend relevant controls to tackle these threats and advocate using CTI data in ML-based predictive models for overall CSC cybersecurity improvement.
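    A minimal sketch of the kind of classifier comparison described above is given below, assuming scikit-learn and a toy stand-in feature matrix; the actual experiment uses CTI properties and the Microsoft Malware Prediction dataset, whose real fields and preprocessing are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# toy stand-in for CTI-derived features (attack type, TTP indicators, ...)
rng = np.random.default_rng(0)
X = rng.random((500, 8))            # 8 hypothetical CTI properties per observation
y = rng.integers(0, 2, 500)         # 1 = threat observed (e.g. ransomware / spear phishing)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# the four algorithm families named in the abstract
models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100),
    "DT": DecisionTreeClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```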

    Defense by attack: simulating attacks to promote strong organizational security policies

    Cybercrime is continuously growing due to the constant digitization of everyday activities, an effect that became even more noticeable after the world was hit by the COVID-19 pandemic: with more digital activity, cybercrime tends to increase as well. Adversary simulation as a testing tool is one of the most important instruments when evaluating an organization's security. Penetration tests are not enough, as attackers resort to many other methods such as social engineering and its techniques (phishing, impersonation, tailgating, etc.). Red teaming is introduced by simulating a full-scale attack with minimal restrictions. This dissertation describes an attempt to perform a red team assessment of the University of Aveiro in order to evaluate, test and improve the organization's security policies. However, due to legal and bureaucratic restrictions, related mostly to data protection policies and other privacy measures, the work was cut short at the planning stage of the red team exercise. The TIBER-EU Framework was also introduced, representing the state-of-the-art guidelines for red teaming in Europe. This framework was followed during the planning of the assessment, which allowed the author, acting as the emulated red team, to find a couple of flaws in the University's security by executing brief threat intelligence analysis sessions. Mestrado em Cibersegurança (Master's in Cybersecurity).

    Framework For Modeling Attacker Capabilities with Deception

    In this research we built a custom experimental range using open-source emulated honeypots and custom pure honeypots designed to detect or capture attacker activity. The focus is to test the effectiveness of a deception, i.e., its ability to evade detection, against attackers of varying skill levels. The range consists of three zones accessible via virtual private networking. The first zone houses varying configurations of open-source emulated honeypots, custom-built pure honeypots, and real SSH servers. The second zone acts as a point of presence for attackers. The third zone is for administration and monitoring. Using the range, both a control and a participant-based experiment were conducted. We conducted control experiments to baseline and empirically explore honeypot detectability among other systems through adversarial testing, executing a series of tests such as network service sweeps, enumeration scanning, and finally manual execution. We also selected participants of varying skill, each with unique tactics, techniques and procedures, to serve as cyber attackers against the experimental range and attempt to detect the honeypots. We have concluded the experiments and performed data analysis. We measure the anticipated threat by presenting the Attacker Bias Perception Profile model. Using this model, each participant is ranked based on their overall threat classification and impact, aligning the threat with the likelihood and impact of a honeypot being detected. The results indicate that the pure honeypots are significantly difficult to detect, while emulated honeypots fall into different categories depending on detection results and attacker skill. We developed a framework abstracting the deceptive process, the interaction with system elements, the use of intelligence, and the relationship with attackers. The framework is illustrated by our experimental case studies, covering the attacker actions, the effects on the system, and the impact on the success of the deception.
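    The Attacker Bias Perception Profile model itself is not specified in the abstract, so the sketch below only illustrates one conventional way to combine detection likelihood and impact into a ranking of attackers; the fields, scales and example scores are assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Attacker:
    name: str
    detection_likelihood: float  # 0..1, chance this attacker detects a honeypot
    impact: int                  # 1..5, consequence if the deception is exposed

def threat_score(a: Attacker) -> float:
    """Classic risk-style score: likelihood times impact."""
    return a.detection_likelihood * a.impact

# hypothetical participants of varying skill
participants = [
    Attacker("novice", 0.2, 2),
    Attacker("intermediate", 0.5, 3),
    Attacker("expert", 0.9, 5),
]
for a in sorted(participants, key=threat_score, reverse=True):
    print(f"{a.name}: {threat_score(a):.2f}")
```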

    Preserving the Quality of Architectural Tactics in Source Code

    In any complex software system, strong interdependencies exist between requirements and software architecture. Requirements drive architectural choices while also being constrained by the existing architecture and by what is economically feasible. This makes it advisable to concurrently specify the requirements, to devise and compare alternative architectural design solutions, and ultimately to make a series of design decisions in order to satisfy each of the quality concerns. Unfortunately, anecdotal evidence has shown that architectural knowledge tends to be tacit in nature, stored in the heads of people, and lost over time. Therefore, developers often lack comprehensive knowledge of underlying architectural design decisions and inadvertently degrade the quality of the architecture while performing maintenance activities. In practice, this problem can be addressed by preserving the relationships between the requirements, the architectural design decisions and their implementations in the source code, and then using this information to keep developers aware of critical architectural aspects of the code. This dissertation presents a novel approach that utilizes machine learning techniques to recover and preserve the relationships between architecturally significant requirements, architectural decisions and their realizations in the implemented code. Our approach for recovering architectural decisions includes two primary stages: training and classification. In the first stage, the classifier is trained using code snippets of different architectural decisions collected from various software systems. During this phase, the classifier learns the terms that developers typically use to implement each architectural decision. These "indicator terms" represent method names, variable names, comments, or the development APIs that developers inevitably use to implement various architectural decisions. A probabilistic weight is then computed for each potential indicator term with respect to each type of architectural decision. The weight estimates how strongly an indicator term represents a specific architectural tactic or decision. For example, a term such as "pulse" is highly representative of the heartbeat tactic but occurs infrequently in the authentication tactic. After learning the indicator terms, the classifier can compute the likelihood that any given source file implements a specific architectural decision. The classifier was evaluated through several different experiments, including classical cross-validation over code snippets of 50 open-source projects and on the entire source code of a large-scale software system. Results showed that the classifier can reliably recognize a wide range of architectural decisions. The technique introduced in this dissertation is used to develop the Archie tool suite. Archie is a plug-in for Eclipse and is designed to detect a wide range of architectural design decisions in the code and to protect them from potential degradation during maintenance activities. It has several features for performing change impact analysis of architectural concerns at both the code and design level and for proactively keeping developers informed of underlying architectural decisions during maintenance activities. Archie is at the technology-transfer stage at the US Department of Homeland Security, where it is used solely to detect and monitor security choices. Furthermore, this outcome has been integrated into the Department of Homeland Security's Software Assurance Market Place (SWAMP) to advance research and development of secure software systems.
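    As a rough illustration of the indicator-term idea described above, the sketch below weights terms per tactic from toy training snippets and then scores a source file against each tactic; the dissertation's actual probabilistic weighting and training corpus are not reproduced here, and the snippets and smoothing constants are assumptions.

```python
from collections import Counter
import math
import re

def tokenize(code: str):
    """Crude tokenizer over identifiers, comments and API names."""
    return re.findall(r"[a-zA-Z_]+", code.lower())

def train(snippets_by_tactic):
    """Weight each term by how strongly it indicates each tactic (log-frequency style)."""
    weights = {}
    for tactic, snippets in snippets_by_tactic.items():
        counts = Counter(t for s in snippets for t in tokenize(s))
        total = sum(counts.values())
        weights[tactic] = {term: math.log((c + 1) / (total + 1)) for term, c in counts.items()}
    return weights

def score(weights, source_file_text):
    """Likelihood-style score that a file implements each tactic (unseen terms get a floor)."""
    terms = tokenize(source_file_text)
    return {tactic: sum(w.get(t, math.log(1e-6)) for t in terms) for tactic, w in weights.items()}

# toy training data: one snippet per tactic
training = {
    "heartbeat": ["def send_pulse(): emit_heartbeat(interval)"],
    "authentication": ["def login(user, password): verify_credentials(user, password)"],
}
weights = train(training)
print(score(weights, "while alive: send_pulse(); sleep(interval)"))
```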

    STIXnet: entity and relation extraction from unstructured CTI reports

    The increased frequency of cyber attacks against organizations and their potentially devastating effects has raised awareness of the severity of these threats. In order to proactively harden their defences, organizations have started to invest in Cyber Threat Intelligence (CTI), the field of cybersecurity that deals with the collection, analysis and organization of intelligence on attackers and their techniques. By profiling the activity of a particular threat actor, and thus knowing the types of organizations it targets and the kinds of vulnerabilities it exploits, it is possible not only to mitigate its attacks but also to prevent them. Although the sharing of this type of intelligence is facilitated by several standards such as STIX (Structured Threat Information eXpression), most of the data still consists of reports written in natural language. This format can be highly time-consuming for CTI analysts, who may need to read an entire report and label entities and relations in order to generate an interconnected graph from which the intel can be extracted. In this thesis, done in collaboration with Leonardo S.p.A., we provide a modular and extensible system called STIXnet for the extraction of entities and relations from natural-language CTI reports. The tool is embedded in a larger platform developed by Leonardo, the Cyber Threat Intelligence System (CTIS), and therefore inherits some of its features, such as an extensible knowledge base which also acts as a database of the entities to extract. STIXnet uses techniques from Natural Language Processing (NLP), the branch of computer science that studies the ability of a computer program to process and analyze natural-language data; this field has recently been revolutionized by the increasing popularity of Machine Learning, which allows for more efficient algorithms and better results. After looking for known entities retrieved from the knowledge base, STIXnet analyzes the semantic structure of the sentences in order to extract possible new entities and predicts the Tactics, Techniques, and Procedures (TTPs) used by the attacker. Finally, an NLP model extracts relations between these entities and converts them to be compliant with the STIX 2.1 standard, thus generating an interconnected graph which can be exported and shared. STIXnet can also be constantly and automatically improved with feedback from a human analyst who, by highlighting false positives and false negatives in the processing of a report, can trigger a fine-tuning process that increases the tool's overall accuracy and precision. This framework can help defenders to know immediately, at a glance, all the gathered intelligence on a particular threat actor and thus deploy effective threat detection, perform attack simulations and strengthen their defenses; together with the Cyber Threat Intelligence System platform, organizations can always be one step ahead of the attacker and stay secure against Advanced Persistent Threats (APTs).
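    A minimal sketch of the final step described above, emitting STIX 2.1 objects for extracted entities and relations, is shown below. It assumes the open-source stix2 Python library, stubs the knowledge-base lookup with a hard-coded set of names, and uses a naive co-occurrence rule for relations; it is not the STIXnet implementation, and the report text and entity names are invented.

```python
import re
from stix2 import ThreatActor, Malware, Relationship, Bundle

report_text = "APT-Example deployed the WiperX malware against energy providers."

# stand-in for a knowledge-base lookup of known entity names
known_actors = {"APT-Example"}
known_malware = {"WiperX"}

actors = [ThreatActor(name=a) for a in known_actors if re.search(a, report_text)]
malware = [Malware(name=m, is_family=False) for m in known_malware if re.search(m, report_text)]

# naive relation extraction: any actor co-occurring with any malware "uses" it
relations = [Relationship(a, "uses", m) for a in actors for m in malware]

# bundle everything into a shareable STIX 2.1 graph
bundle = Bundle(*actors, *malware, *relations)
print(bundle.serialize(pretty=True))
```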

    Game Theory and Prescriptive Analytics for Naval Wargaming Battle Management Aids

    NPS NRP Technical Report. The Navy is taking advantage of advances in computational technologies and data analytic methods to automate and enhance tactical decisions and support warfighters in highly complex combat environments. Novel automated techniques offer opportunities to support the tactical warfighter through enhanced situational awareness, automated reasoning and problem-solving, and faster decision timelines. This study investigates how game theory and prescriptive analytics methods can be used to develop real-time wargaming capabilities that help warfighters explore and evaluate the possible consequences of different tactical courses of action (COAs) and thereby improve tactical missions. The study develops a conceptual design of a real-time tactical wargaming capability and explores data analytic methods, including game theory, prescriptive analytics, and artificial intelligence (AI), to evaluate their potential to support real-time wargaming. N2/N6 - Information Warfare. This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp Chief of Naval Operations (CNO). Approved for public release. Distribution is unlimited.
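    As a hedged illustration of the game-theoretic ingredient mentioned above, the sketch below computes an optimal mixed strategy over candidate friendly COAs against adversary COAs in a zero-sum payoff matrix, using the standard linear-programming formulation; the payoff values are invented and this is not the study's battle-management aid.

```python
import numpy as np
from scipy.optimize import linprog

# rows = friendly COAs, columns = adversary COAs, entries = mission payoff (invented)
payoff = np.array([[3.0, -1.0, 2.0],
                   [1.0,  2.0, 0.0],
                   [0.0,  1.0, 4.0]])
m, n = payoff.shape

# variables: x (probability of playing each friendly COA) and v (game value); maximize v
c = np.concatenate([np.zeros(m), [-1.0]])
A_ub = np.hstack([-payoff.T, np.ones((n, 1))])   # v - sum_i payoff[i][j] * x_i <= 0 for each adversary COA j
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)  # probabilities sum to 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("COA mix:", np.round(x, 3), "guaranteed expected payoff:", round(v, 3))
```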

    Improving Dependability of Networks with Penalty and Revocation Mechanisms

    Both malicious and non-malicious faults can dismantle computer networks. Thus, mitigating faults at various layers is essential to ensuring efficient and fair network resource utilization. In this thesis we take a step in this direction and study several ways to deal with faults by means of penalty and revocation mechanisms in networks that lack a centralized coordination point, either because of their scale or their design. Compromised nodes can pose a serious threat to infrastructure, end-hosts and services. Such malicious elements can undermine the availability and fairness of networked systems. To deal with such nodes, we design and analyze protocols enabling their removal from the network in a fast and secure way. We design these protocols for two different environments. In the first, we assume that there are multiple, but independent, trusted points in the network which coordinate the other nodes. In the second, we assume that all nodes play equal roles and thus need to cooperate to carry out common functionality. We analyze these solutions and discuss possible deployment scenarios. Next we turn our attention to wireless edge networks, where some nodes, without being malicious, can still behave unfairly. To deal with this situation, we propose several self-penalty mechanisms. We implement the proposed protocols on commodity hardware and conduct experiments in real-world environments. The analysis of data collected in several measurement rounds revealed improvements in terms of higher fairness and throughput. We corroborate the results with simulations and an analytic model. Finally, we discuss how to measure fairness in dynamic settings, where nodes can have heterogeneous resource demands.
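    The thesis's own fairness analysis is not reproduced in the abstract; as one standard way to quantify fairness in such settings, the sketch below computes Jain's fairness index over a sliding window of per-node throughput samples, so the score can track fairness as demands change over time. The sample values and window length are illustrative assumptions.

```python
def jain_index(throughputs):
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = maximally unfair."""
    n = len(throughputs)
    total = sum(throughputs)
    squares = sum(x * x for x in throughputs)
    return (total * total) / (n * squares) if squares else 1.0

def windowed_fairness(samples, window=3):
    """Fairness at each time step, computed over each node's average in the last `window` samples."""
    scores = []
    for t in range(len(samples)):
        window_samples = samples[max(0, t - window + 1): t + 1]
        per_node_avg = [sum(vals) / len(vals) for vals in zip(*window_samples)]
        scores.append(jain_index(per_node_avg))
    return scores

# usage: per-time-step throughput of three nodes (Mbit/s)
samples = [(10, 10, 10), (12, 9, 6), (15, 5, 2), (9, 9, 9)]
print([round(s, 3) for s in windowed_fairness(samples)])
```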

    Perceptions of Expert Practice by Active Licensed Registered Nurse Therapeutic Touch® Practitioners

    Therapeutic Touch® (TT) is a nursing modality, developed in 1972, with a long history of completed research; it is also one of the leading complementary and alternative medicine (CAM) therapies. A comprehensive review of the literature (over 350 studies) from the 1960s to 2015 demonstrated a gap in delineating clinical-practice expertise from the practitioner's point of view. This study examined the state of expert practice as envisioned by those who themselves qualified as experts in the discipline of TT. The study utilized a qualitative descriptive independent focus group methodology (Krueger, 1994, 2006; Krueger & Casey, 2001, 2009), a methodology that has become popular in nursing studies. A synchronous data-collection method was chosen to provide a unique environment supported by the university's online platform. Focus groups were used as a stand-alone and self-contained method to conduct the study (Hupcey, 2005; Morgan, 1997). The sample consisted of 12 expert registered nurse (RN) TT practitioners (TTPs), each with a minimum of three years of TT experience who had also attended a minimum of three TT workshops/courses, including advanced training in the discipline. The use of electronic media facilitated a sample drawn from three countries across two continents. Six very small, synchronous, online focus groups (Toner, 2009) were conducted to reach data saturation and the minimum sample size. Rich data were collected from these experienced practitioners. Parameters explored were the practitioners' descriptions of expert practice, their own expertise, how research impacted their practice, and the direction TT is headed in the future. Findings were supported by the expert-practice literature. Krieger's (2002) concept of transformation was especially apparent in the lives of many of the participants in this study: respondents described how TT had become an integral part of their lives and influenced them immeasurably. The importance of practice as one factor leading to expertise was very apparent among the participants, and many of the studies stress the need for practice in order to gain expertise in specialty practice. TT is a form of specialty practice by nurses, supported in a holistic framework and caring environment. Sharing, which includes mentorship, collaboration, and teaching, is an important part of an advanced-practice model and is apparent in the practice of these advanced TTPs. Expert practice includes the component of expert practice knowledge, which is a necessary prequel to the ability to share it with others; it is also a necessary component for providing leadership to others, conducting research in the field, and furthering one's own practice goals.