3,659 research outputs found

    An Argumentation-Based Reasoner to Assist Digital Investigation and Attribution of Cyber-Attacks

    Full text link
    We expect an increase in the frequency and severity of cyber-attacks, and with it a growing need for efficient security countermeasures. Attributing a cyber-attack helps in constructing efficient and targeted mitigating and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from a cyber-attack, our reasoner assists the analyst during the investigation by helping them analyze the evidence and identify who performed the attack. Furthermore, it suggests where to focus further analysis by hinting at missing evidence or new investigation paths to follow. ABR is the first automatic reasoner that can combine both technical and social evidence in the analysis of a cyber-attack, and that can also cope with incomplete and conflicting information. To illustrate how ABR can assist in the analysis and attribution of cyber-attacks, we use examples of cyber-attacks and their analyses as reported in publicly available reports and online literature. We do not mean to either agree or disagree with the analyses presented therein or to reach attribution conclusions.
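    The kind of defeasible reasoning ABR performs can be sketched in miniature: rules map collected evidence to attribution hypotheses, conflicts between firing rules are resolved by priority, and unmet rules yield hints about missing evidence. The rule and evidence names below are invented for illustration and are not taken from the actual ABR system.

```python
# Each rule: (hypothesis, set of required evidence, priority; higher wins).
# All names here are hypothetical.
RULES = [
    ("attacker_is_X", {"malware_language_X", "targets_align_with_X"}, 2),
    ("attacker_is_Y", {"ip_traces_to_Y"}, 1),  # weaker: IP traces are spoofable
]

def attribute(evidence):
    """Return the best-supported hypothesis and hints about missing evidence."""
    supported, hints = [], {}
    for conclusion, required, priority in RULES:
        missing = required - evidence
        if missing:
            hints[conclusion] = missing  # where to focus further analysis
        else:
            supported.append((priority, conclusion))
    best = max(supported)[1] if supported else None
    return best, hints

# Conflicting evidence: both rules fire; priority resolves the conflict.
best, hints = attribute({"ip_traces_to_Y", "malware_language_X",
                         "targets_align_with_X"})
print(best)  # attacker_is_X
```

    When only partial evidence is present, the returned hints play the role the abstract describes: pointing the analyst at the missing evidence that would let a stronger rule fire.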

    Applying Forensic Analysis Factors to Construct a Systems Dynamics Model for Failed Software Projects

    Get PDF
    Forensic analysis of failed software projects can aid in managerial understanding of the issues and challenges of delivering a successful project. The factors causing software project failure, and their interrelationships, are not well understood or researched with a strong forensic-analytic approach. Previous papers have not adequately explored how the dynamic interaction of multiple factors can lead to critical events that ultimately portend eventual failure. This paper proposes the development of a System Dynamics (SD) model that will represent the key factors, their dynamic interactions, and the influence of exogenous events in causing software project failure. Forensic data will be used as inputs to the SD model to assist managers in understanding the factor interactions, the importance of individual factor metrics, and the sequence of interactions in causing possible software project failure. Outcomes from the model will include a likelihood of software project failure, possible factor sequences leading to failure, and suggestions of remediation activities that might mitigate eventual failure.
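    The feedback structure such an SD model captures can be illustrated with a toy stock-and-flow simulation. The factors (schedule pressure, error rate, rework backlog) and all coefficients below are invented for illustration and are not taken from the proposed model.

```python
# Toy System Dynamics loop: schedule pressure raises the error rate, errors
# accumulate as rework, and the rework backlog feeds back into pressure.

def simulate(steps=20, dt=1.0, pressure=0.2):
    rework = 0.0
    history = []
    for _ in range(steps):
        error_rate = 0.1 + 0.5 * pressure           # errors grow with pressure
        rework += dt * (error_rate - 0.2 * rework)  # backlog grows, partly resolved
        pressure = min(1.0, 0.2 + 0.3 * rework)     # backlog raises pressure (capped)
        history.append((pressure, rework))
    return history

hist = simulate()
print(f"final pressure={hist[-1][0]:.2f}, rework={hist[-1][1]:.2f}")
```

    Even this toy loop shows the qualitative behaviour the paper is after: a reinforcing feedback that drives pressure toward its ceiling unless an exogenous intervention (e.g. reducing the backlog) breaks the cycle.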

    IMPLEMENTATION OF NETWORK ADDRESS TRANSLATION (NAT) USING KERIO CONTROL VERSION 7.4.1 AT THE BIOTECHNOLOGY RESEARCH CENTER – LIPI

    Get PDF
    As an institution engaged in research, the Biotechnology Research Center has an absolute need for up-to-date data and information. One way to meet this need is through information technology, in particular the internet. With the growing number of internet users at the Biotechnology Research Center, good network management is needed so that all users on the Local Area Network (LAN) can use the internet. However, the network management of the Indonesian Institute of Sciences (LIPI), under the supervision of the Joint Network Team (TGJ – LIPI), provides only one local IP segment, limited to 250 users, while the number of internet users at the Biotechnology Research Center already exceeds that figure and will continue to grow. To overcome this problem, the LAN at the Biotechnology Research Center uses the Network Address Translation (NAT) facility included in the Kerio Control software, version 7.4.1. This article is limited to the implementation and configuration of NAT using Kerio Control version 7.4.1. From the results, we conclude that the implementation of NAT lets all users on the Biotechnology Research Center LAN share a public IP address, so that they can connect to the internet properly.
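    The port-address translation that a gateway such as Kerio Control performs can be sketched as a simple mapping table: many private (IP, port) pairs are multiplexed onto one public IP by assigning each flow a distinct public port. The addresses below are illustrative examples, not the Center's actual configuration.

```python
# Minimal sketch of source NAT (masquerading) with one shared public IP.

PUBLIC_IP = "203.0.113.10"  # example address (TEST-NET-3 range)

class Nat:
    def __init__(self):
        self.table = {}      # (private_ip, private_port) -> public_port
        self.reverse = {}    # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """Translate an outgoing flow; reuse the mapping if it already exists."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, public_port):
        """Map a reply arriving at the public IP back to the LAN host."""
        return self.reverse.get(public_port)  # None if no mapping exists

nat = Nat()
src = nat.outbound("192.168.1.25", 51515)  # LAN host goes out via the public IP
print(src)                                 # ('203.0.113.10', 40000)
print(nat.inbound(40000))                  # ('192.168.1.25', 51515)
```

    This is why the 250-address local segment stops being a hard limit: the number of concurrent users is bounded by available public ports per public IP, not by the size of the local segment.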

    A systematic survey of online data mining technology intended for law enforcement

    Get PDF
    As an increasing amount of crime takes on a digital aspect, law enforcement bodies must tackle an online environment generating huge volumes of data. With manual inspections becoming increasingly infeasible, law enforcement bodies are optimising online investigations through data-mining technologies. Such technologies must be well designed and rigorously grounded, yet no survey of the online data-mining literature exists that examines their techniques, applications, and rigour. This article remedies this gap through a systematic mapping study describing online data-mining literature that visibly targets law enforcement applications, using evidence-based survey practices to produce a replicable analysis which can be methodologically examined for deficiencies.

    A supervised method for finding discriminant variables in the analysis of complex problems: case studies in Android security and source printer attribution

    Get PDF
    Advisors: Ricardo Dahab, Anderson de Rezende Rocha. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Solving a problem where many components interact and affect results simultaneously requires models that are not always treatable by traditional analytic methods. Although in many cases the result can be predicted with excellent accuracy through machine learning algorithms, interpreting the phenomenon requires an understanding of which variables matter most and how they contribute to the results. This dissertation presents an applied method in which the discriminant variables are identified through an iterative ranking process: in each iteration a classifier is trained and validated, the variables that contribute least to the result are discarded, and the impact of this reduction on the classification metrics is evaluated at each stage. Classification uses the Random Forest algorithm, and the discarding decision is based on its feature importance property. To validate the method, two works addressing complex systems of different natures were carried out, giving rise to the articles presented here. The first article analyzes the relations between malware and the operating system resources they request within an ecosystem of Android applications. For this study, data structured according to an ontology defined in the article (OntoPermEco) were captured from 4,570 applications (2,150 malware, 2,420 benign). The complex model produced a graph of about 55,000 nodes and 120,000 edges, which was transformed using the Bag of Graphs technique into per-application feature vectors with 8,950 elements. Using only the data available in the application's manifest, the work achieved 88% accuracy and 91% precision in predicting whether an application behaves maliciously, and the proposed method identified 24 features relevant to classifying and identifying malware families, corresponding to only 70 nodes of the entire ecosystem graph. The second article identifies regions in a printed document that contain information relevant to attributing the laser printer that printed it. Feature vectors were obtained by applying the Convolutional Texture Gradient Filter (CTGF) texture descriptor to images scanned at 600 DPI, over a dataset of 1,200 documents printed on ten printers; the discriminant-variable determination method achieved average accuracy and precision of 95.6% and 93.9%, respectively, in source printer attribution. After the source printer was assigned to each document, eight of the ten printers allowed the identification of discriminant variables univocally associated with each of them, making it possible to visualize in the document's image the regions of interest for expert analysis. The work in both articles accomplished the objective of reducing a complex system to an interpretable, streamlined model, demonstrating the effectiveness of the proposed method in the analysis of two problems in different areas (application security and digital forensics) with complex models and entirely different representation structures. Master's degree in Computer Science.
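    The iterative ranking procedure described above resembles recursive feature elimination and can be sketched with scikit-learn on synthetic data; the dataset, sizes, and parameters below are placeholders, not those of the dissertation.

```python
# Repeatedly train a Random Forest, drop the least important features using its
# feature_importances_ property, and track accuracy after each reduction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           random_state=0)
features = np.arange(X.shape[1])  # surviving feature indices
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

while len(features) > 5:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr[:, features], y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te[:, features]))
    print(f"{len(features)} features -> accuracy {acc:.3f}")
    # discard the 25% of remaining features with the lowest importance
    order = np.argsort(clf.feature_importances_)
    features = features[order[len(features) // 4:]]
```

    Watching accuracy as features are eliminated is what makes the result interpretable: the smallest feature set that preserves the metrics is the set of discriminant variables.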

    Review of human decision-making during computer security incident analysis

    Get PDF
    We review practical advice on decision-making during computer security incident response. The scope includes standards from the IETF, ISO, FIRST, and the US intelligence community. To focus on human decision-making, the scope is limited to the evidence collection, analysis, and reporting phases of response. The results indicate both strengths and gaps. A strength is the available advice on how to accomplish many specific tasks. However, there is little guidance on how to prioritize tasks in limited time or how to interpret, generalize, and convincingly report results. Future work should focus on these gaps in the explication and specification of decision-making during incident analysis.

    An Overview on the Generation and Detection of Synthetic and Manipulated Satellite Images

    Get PDF
    Due to the reduction of technological costs and the increase in satellite launches, satellite images are becoming more popular and easier to obtain. Besides serving benevolent purposes, satellite data can also be used for malicious reasons such as misinformation. As a matter of fact, satellite images can be easily manipulated using general image editing tools. Moreover, with the surge of Deep Neural Networks (DNNs) that can generate realistic synthetic imagery belonging to various domains, additional threats related to the diffusion of synthetically generated satellite images are emerging. In this paper, we review the State of the Art (SOTA) on the generation and manipulation of satellite images. In particular, we focus on both the generation of synthetic satellite imagery from scratch and the semantic manipulation of satellite images by means of image-transfer technologies, including the transformation of images obtained from one type of sensor to another. We also describe forensic detection techniques that have been researched so far to classify and detect synthetic image forgeries. While we focus mostly on forensic techniques explicitly tailored to the detection of AI-generated synthetic content, we also review some methods designed for general splicing detection, which can in principle also be used to spot AI-manipulated images.
    Comment: 25 pages, 17 figures, 5 tables, APSIPA 202

    On Identities in Modern Networks

    Get PDF
    Communicating parties inside computer networks use different kinds of identifiers. Some of these identifiers are stable, e.g., logins used to access a specific service; some are only temporary, e.g., dynamically assigned IP addresses. This paper tackles several challenges of lawful interception that have emerged in modern networks. The main contribution is a graph model that links identities learnt from various sources distributed in a network. The inferred identities result in the interception of more detailed data, in conformance with the issued court order. The approach deals with network address translation, short-lived identifiers, and the simultaneous usage of different identities. The approach was evaluated to be viable during real network testing, based on various means of learning the identities of users connected to a network.
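    The identity-linking graph can be sketched as follows: observations pair identifiers seen together (e.g. a DHCP lease ties a MAC address to an IP, a service login ties an IP to an account), and graph traversal then groups all identifiers that belong to one user. The identifier values and observation sources below are invented for illustration.

```python
from collections import defaultdict

# Pairs of identifiers observed together (hypothetical examples).
observations = [
    ("mac:aa:bb:cc", "ip:10.0.0.7"),   # DHCP lease
    ("ip:10.0.0.7", "login:alice"),    # service authentication
    ("login:alice", "ip:10.0.3.9"),    # same account, later address
    ("mac:11:22:33", "ip:10.0.0.8"),   # unrelated host
]

# Build an undirected adjacency structure over identifiers.
graph = defaultdict(set)
for a, b in observations:
    graph[a].add(b)
    graph[b].add(a)

def identity_of(identifier):
    """All identifiers transitively linked to the given one (graph traversal)."""
    seen, queue = {identifier}, [identifier]
    while queue:
        node = queue.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(identity_of("login:alice")))
```

    Note how the traversal links the stable login to both the short-lived IP addresses and the MAC address, which is what allows interception to follow a court-ordered identity across NAT and address reassignment.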

    Encrochat: The hacker with a warrant and fair trials?

    Get PDF
    This paper introduces the Encrochat operation as an example of the technological, cross-border, and cross-disciplinary complexity of one contemporary digital investigation. The use of encryption for large-scale criminal activity and organized crime requires law enforcement to act proactively to secure evidence, to rely on cross-border evidence exchange, and to use more efficient digital forensic techniques for decryption, data acquisition, and the analysis of voluminous evidence. The Encrochat investigation also poses the question whether the traditional fair trial principle can still ensure minimum state intrusion and uphold legitimacy in the new ubiquitous investigation process, where digital forensics methods and tools for hacking and data acquisition are used to identify and arrest thousands of suspects and collect evidence in real time during criminal activity. The operation is examined through the lens of the right to a fair trial, as codified in Art. 6 ECHR, in order to exemplify three challenging aspects. Firstly, in cross-border investigations there are no binding digital forensics standards in criminal proceedings, nor a forensic-report exchange policy that demands reliability and compliance with Art. 6 ECHR-based evidence rules. Secondly, the defense's position is not sufficiently addressed in current digital evidence legislation or in mutual trust-based instruments at the EU level. Finally, the judicial process lacks scalable procedures to scrutinize digital evidence processing and reliability, and is exposed to technological dependencies. The identified gaps and their practical impact require a novel approach to digital evidence governance.

    The Law of Attribution: Rules for Attributing the Source of a Cyber-Attack

    Get PDF
    State-sponsored cyber-attacks are on the rise and show no signs of abating. Despite the threats posed by these attacks, the states responsible frequently escape with impunity because of the difficulty of attributing cyber-attacks to their source. As a result, current scholarship has focused almost exclusively on overcoming the technological barriers to attribution.