
    Exploiting Generational Garbage Collection: Using Data Remnants to Improve Memory Analysis and Digital Forensics

    Malware authors employ sophisticated tools and infrastructure to undermine information security and steal data on a daily basis. When these attacks or their infrastructure are discovered, digital forensics attempts to reconstruct the events from evidence left on file systems, network drives, and system memory dumps. In the last several years, malware authors have been observed using Java managed runtimes to commit criminal theft [1, 2] and conduct espionage [3, 4, 5]. Fortunately for forensic analysts, the most prevalent versions of Java use generational garbage collection to improve runtime performance. The memory system allocates memory from a managed heap. When this heap is exhausted, the JVM sweeps over its partitions, reclaiming memory from dead objects. This memory is not sanitized or zeroed, so latent secrets and object data persist until they are overwritten. For example, socket and open-file recovery is possible even after the resources are closed and purged from the OS kernel's memory. This research measures the lifetime of such latent data and implements a Python framework that can be used to recover this object data. Latent secret lifetimes are experimentally measured using TLS keys in a Java application configured to be either very active or minimally active. The application also uses both raw Java sockets and Apache HTTPClient to determine whether a Java framework affects latent secret lifetimes. Depending on the heap size (512 MiB to 16 GiB), between 10% and 40% of the TLS keys are recoverable from the heap, which correlates directly with memory pressure. This research also exploits these properties to identify and recover evidence from the Java heap. The RecOOP framework locates all the loaded types, identifies the managed Java heaps, and scans for potential objects [6]. The framework then lifts these objects into Python, where they can be analyzed further. One key finding is that I/O streams for processes started from within Java remained in memory, and the data in these buffers could be used to infer which program was executed. Socket data could also be recovered even when the socket structures were missing from the OS kernel's memory.
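    RecOOP itself is not reproduced in this listing; purely as an illustration of the kind of heap-dump carving the abstract describes, the following Python sketch flags high-entropy 48-byte windows in a raw heap dump as candidate TLS master secrets. The file name, window size, alignment step and entropy threshold are assumptions made for the example, not details of the actual framework.

```python
"""Illustrative sketch only: scan a raw JVM heap dump for high-entropy
48-byte regions, a rough heuristic for candidate TLS master secrets.
This is NOT the RecOOP framework; the file name, window size, step and
threshold below are assumptions made for the example."""

import math
from collections import Counter

WINDOW = 48        # TLS 1.2 master secrets are 48 bytes long
STEP = 8           # JVM heap objects are typically 8-byte aligned
THRESHOLD = 5.0    # bits per byte; tune for the dump being analyzed

def entropy(buf: bytes) -> float:
    """Shannon entropy of a byte buffer, in bits per byte."""
    counts = Counter(buf)
    n = len(buf)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def candidate_secrets(dump_path: str):
    """Yield (offset, window) pairs whose entropy exceeds the threshold."""
    with open(dump_path, "rb") as f:
        data = f.read()
    for off in range(0, len(data) - WINDOW, STEP):
        chunk = data[off:off + WINDOW]
        if entropy(chunk) >= THRESHOLD:
            yield off, chunk

if __name__ == "__main__":
    # "java_heap.dump" is a hypothetical raw memory capture.
    for offset, chunk in candidate_secrets("java_heap.dump"):
        print(f"0x{offset:08x}  {chunk.hex()}")
```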

    Proactive cybersecurity tailoring through deception techniques

    Scientific dissertation submitted to obtain the degree of Master in Computer Science and Engineering (Engenharia Informática e de Computadores). A proactive approach to cybersecurity can supplement a reactive posture by helping businesses handle security incidents in the early phases of an attack. Organizations can actively protect against the inherent asymmetry of cyber warfare by using proactive techniques such as cyber deception. Cyber deception is the intentional deployment of misleading artifacts to build an infrastructure that allows real-time investigation of an attacker's patterns and approaches without compromising the organization's principal network. This method can reveal previously undiscovered vulnerabilities, known as zero-day vulnerabilities, without interfering with routine corporate activities. Furthermore, it enables enterprises to collect vital information about the attacker that would otherwise be difficult to obtain. However, putting such concepts into practice in real-world circumstances poses major problems. This study proposes an architecture for a deception system, culminating in an implementation that deploys and dynamically customizes a deception grid using Software-Defined Networking (SDN) and network virtualization techniques. The deception grid is a network of virtual assets whose topology and specifications are pre-planned to match a deception strategy. The system can trace and evaluate the attacker's activity by continuously monitoring the artifacts within the deception grid. Real-time refinement of the deception plan may require changes to the grid's topology and artifacts, which are made possible by SDN's dynamic modification capabilities. Organizations can maximize their deception capabilities by combining these processes with advanced cyber-attack detection and classification components. The effectiveness of the proposed solution is assessed using several use cases that demonstrate its utility.
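    The dissertation's implementation is not included in this listing; as a rough, hypothetical illustration of a pre-planned deception grid that is refined from monitored attacker activity, the sketch below models decoys and a simple refinement rule in Python. All names and the refinement heuristic are invented for the example; they are not the system described above, which drives real SDN and network virtualization mechanisms.

```python
"""Illustrative sketch only: a toy model of a deception grid whose
topology is refined from monitored attacker activity. The class names
and the refinement rule are assumptions for the example, not the
dissertation's actual implementation."""

from dataclasses import dataclass, field

@dataclass
class Decoy:
    name: str
    service: str          # e.g. "ssh", "http", "smb"
    hits: int = 0         # attacker interactions observed so far

@dataclass
class DeceptionGrid:
    decoys: dict[str, Decoy] = field(default_factory=dict)

    def deploy(self, name: str, service: str) -> None:
        # In a real system this would call SDN/virtualization APIs;
        # here we only record the intended topology.
        self.decoys[name] = Decoy(name, service)

    def record_hit(self, name: str) -> None:
        if name in self.decoys:
            self.decoys[name].hits += 1

    def refine(self) -> None:
        # Toy refinement rule: clone services the attacker probes,
        # retire decoys that attract no attention at all.
        for decoy in list(self.decoys.values()):
            if decoy.hits >= 3:
                self.deploy(f"{decoy.name}-clone{decoy.hits}", decoy.service)
            elif decoy.hits == 0:
                del self.decoys[decoy.name]

if __name__ == "__main__":
    grid = DeceptionGrid()
    grid.deploy("web-01", "http")
    grid.deploy("files-01", "smb")
    for _ in range(3):
        grid.record_hit("web-01")
    grid.refine()
    print(sorted(grid.decoys))  # files-01 retired, web-01 cloned
```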

    Signaling and Reciprocity: Robust Decentralized Information Flows in Social, Communication, and Computer Networks

    Complex networks exist for a number of purposes. The neural, metabolic and food networks ensure our survival, while the social, economic, transportation and communication networks allow us to prosper. Independently of the purpose and the particularities of its physical embodiment, one of a network's fundamental functions is the delivery of information from one part of the network to another. Gossip and diseases diffuse in social networks, electrochemical signals propagate in neural networks and data packets travel in the Internet. Engineering networks for robust information flows is a challenging task. First, the mechanism through which the network forms and changes its topology needs to be defined. Second, within a given topology, the information must be routed to the appropriate recipients. Third, both the network formation and the routing mechanisms need to be robust against a wide spectrum of failures and adversaries. Fourth, network formation, routing and failure recovery must operate under resource constraints, either intrinsic or extrinsic to the network. Finally, the autonomously operating parts of the network must be incentivized to contribute their resources to facilitate the information flows.

    This thesis tackles the above challenges within the context of several types of networks: 1) peer-to-peer overlays – computers interconnected over the Internet to form an overlay in which participants provide various services to one another; 2) mobile ad-hoc networks – mobile nodes distributed in physical space communicating wirelessly with the goal of delivering data from one part of the network to another; 3) file-sharing networks – networks whose participants interconnect over the Internet to exchange files; 4) social networks – humans disseminating and consuming information through the network of social relationships.

    The thesis makes several contributions. First, we propose a general algorithm which, given a set of nodes embedded in an arbitrary metric space, interconnects them into a network that efficiently routes information. We apply the algorithm to peer-to-peer overlays and experimentally demonstrate its high performance and scalability, as well as its resilience to continuous peer arrivals and departures. We then shift our focus to the reliability of routing in peer-to-peer overlays. Each overlay peer has limited resources, and when these are exhausted, overlay messages are delayed or lost. Existing solutions to this problem rely on message redundancy, which significantly increases the resource cost of fault-tolerance. We propose a bandwidth-efficient single-path Forward Feedback Protocol (FFP) for overlay message routing, in which successfully delivered messages are followed by a feedback signal that reinforces the routing paths. Internet testbed evaluation shows that FFP uses 2-5 times less network bandwidth than existing protocols relying on message redundancy, while achieving comparable fault-tolerance levels under a variety of failure scenarios.

    While the Forward Feedback Protocol is robust to message loss and delays, it is vulnerable to malicious message injection. We address this and other security problems by proposing Castor, a variant of FFP for mobile ad-hoc networks (MANETs). Castor uses the same general mechanism as FFP: each time a message is routed, the routing path is either reinforced or weakened by the feedback signal, depending on whether the routing succeeded or not. However, unlike FFP, Castor employs cryptographic mechanisms to ensure the integrity and authenticity of the messages. We compare Castor to four other MANET routing protocols. Despite its simplicity, Castor achieves up to 40% higher packet delivery rates than the other protocols and recovers at least twice as fast under a wide range of attack and failure scenarios.

    Both of our protocols, FFP and Castor, rely on simple signaling to improve routing robustness in peer-to-peer and mobile ad-hoc networks. Given the success of the signaling mechanism in shaping information flows in these two types of networks, we examine whether signaling plays a similarly crucial role in online social networks. We characterize the propagation of URLs in the social network of Twitter. The data analysis uncovers several statistical regularities in user activity, the social graph, the structure of URL cascades, and the communication and signaling dynamics. Based on these results, we propose a propagation model that accurately predicts which users are likely to mention which URLs. We outline a number of applications where modelling social network information flows would be crucial: content ranking and filtering, viral marketing and spam detection.

    Finally, we consider the problem of freeriding in peer-to-peer file-sharing applications, where users download data from others but never reciprocate by uploading. To address the problem, we propose a variant of the BitTorrent system in which two peers are only allowed to connect if their owners know one another in the real world. When users know which other users their BitTorrent client connects to, they are more likely to cooperate. The social network becomes the content distribution network, and the freeriding problem is solved by leveraging social norms and reciprocity to stabilize cooperation, rather than by relying on technological means. Our extensive simulations show that the social network topology is an efficient and scalable content distribution medium, while at the same time providing robustness to freeriding.
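    The FFP and Castor specifications are in the thesis itself; the sketch below only illustrates the reinforce/weaken feedback idea summarized above, using an exponentially weighted per-next-hop reliability estimate. The update rule, the 0.8 smoothing factor and the class interface are assumptions for the example, not the protocols' actual mechanisms.

```python
"""Illustrative sketch only: feedback-driven next-hop selection in the
spirit of FFP/Castor as summarized above. The exponentially weighted
reliability estimate and the 0.8 smoothing factor are assumptions for
the example, not the protocols' actual rules."""

import random

class FeedbackRouter:
    def __init__(self, neighbors, alpha=0.8):
        self.alpha = alpha
        # Optimistic start: every neighbor begins fully trusted.
        self.reliability = {n: 1.0 for n in neighbors}

    def choose_next_hop(self) -> str:
        # Prefer the neighbor with the best delivery history so far.
        return max(self.reliability, key=self.reliability.get)

    def feedback(self, neighbor: str, delivered: bool) -> None:
        # Reinforce the path on a delivery acknowledgement,
        # weaken it when no feedback signal arrives.
        old = self.reliability[neighbor]
        self.reliability[neighbor] = (
            self.alpha * old + (1 - self.alpha) * (1.0 if delivered else 0.0)
        )

if __name__ == "__main__":
    router = FeedbackRouter(["A", "B", "C"])
    for _ in range(20):
        hop = router.choose_next_hop()
        # Simulated network: neighbor "B" drops most messages.
        delivered = random.random() < (0.9 if hop != "B" else 0.2)
        router.feedback(hop, delivered)
    print(router.reliability)
```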

    Authentication and Data Protection under Strong Adversarial Model

    We are interested in addressing a series of existing and plausible threats to cybersecurity in which the adversary possesses unconventional attack capabilities. In our exploration, such unconventional capabilities include, but are not limited to, crowd-sourcing, physical/juridical coercion, substantial (but bounded) computational resources, and malicious insiders. Our studies show that unconventional adversaries can be counteracted with a special anchor of trust and/or a paradigm shift on a case-specific basis. Complementing cryptography, hardware security primitives are the last line of defense in the face of co-located (physical) and privileged (software) adversaries, and hence serve as the special trust anchor. Examples of hardware primitives are architecture-shipped features (e.g., in CPUs or chipsets), security chips or tokens, and certain features of peripheral/storage devices. We also propose paradigm shifts to be used in conjunction with hardware primitives, such as containing attacks instead of counteracting them, pretended compliance, and immunization instead of detection/prevention. In this thesis, we demonstrate how our philosophy is applied to cope with several exemplary scenarios of unconventional threats and elaborate on the prototype systems we have implemented. Specifically, Gracewipe is designed for stealthy and verifiable secure deletion of on-disk user secrets under coercion; Hypnoguard protects in-RAM data while a computer is in sleep (ACPI S3) against various memory/guessing attacks; Uvauth mitigates large-scale human-assisted guessing attacks by receiving all login attempts in an indistinguishable manner, i.e., correct credentials yield a legitimate session and incorrect ones a plausible fake session; Inuksuk is proposed to protect user files against ransomware or other authorized tampering, augmenting the hardware access control of self-encrypting drives with trusted execution to achieve data immunization. We have also extended the Gracewipe scenario to a network-based enterprise environment, aiming to address slightly different threats, e.g., malicious insiders. We believe the high-level methodology of these research topics can contribute to advancing security research under strong adversarial assumptions and to promoting software-hardware orchestration for protecting execution integrity.
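    None of the prototypes above are reproduced here; purely as a toy illustration of the "pretended compliance" theme, the sketch below selects a key based on the entered password, with a hidden deletion password that destroys the real key and then behaves like a decoy unlock. Real systems such as Gracewipe bind this to TPM-sealed keys and trusted execution; every name and detail in the sketch is an assumption for the example.

```python
"""Illustrative toy only: password-dependent key selection in the
spirit of 'pretended compliance' and coercion-resistant deletion
described above. Real designs such as Gracewipe rely on TPM-sealed
keys and trusted execution; the in-memory key store and password
roles here are assumptions for the example."""

import hashlib

class CoercionAwareVault:
    def __init__(self, real_pw: str, decoy_pw: str, kill_pw: str):
        self._digests = {
            self._h(real_pw): "real",
            self._h(decoy_pw): "decoy",
            self._h(kill_pw): "kill",
        }
        # Placeholder key material for the example.
        self._keys = {"real": b"\x11" * 32, "decoy": b"\x22" * 32}

    @staticmethod
    def _h(pw: str) -> str:
        return hashlib.sha256(pw.encode()).hexdigest()

    def unlock(self, pw: str) -> bytes | None:
        role = self._digests.get(self._h(pw))
        if role == "kill":
            # Coercion password: irreversibly discard the real key,
            # then behave exactly like a decoy unlock.
            self._keys.pop("real", None)
            role = "decoy"
        if role in ("real", "decoy"):
            # After deletion, even the real password falls back to
            # the decoy key, so compliance can be pretended.
            return self._keys.get(role, self._keys["decoy"])
        return None  # unknown password
```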

    Classification and Management of Computational Resources of Robotic Swarms and the Overcoming of their Constraints

    Swarm robotics is a relatively new and multidisciplinary research field with many potential applications (e.g., collective exploration or precision agriculture). Nevertheless, it has not been able to transition from the academic environment to the real world. While there are many potential reasons, one is that many swarm robots are designed to be relatively simple, which often results in reduced communication and computation capabilities. However, the investigation of such limitations has largely been overlooked. This thesis looks into one such constraint: the computational constraint of swarm robots (i.e., swarm robotics platforms). To this end, this work first proposes a computational index that quantifies computational resources. Based on this index, a quantitative study of 5273 devices shows that swarm robots provide fewer resources than many other robots or devices. In the next step, an operating system with a novel dual-execution model is proposed and shown to outperform two other robotic system software packages. Moreover, the results show that the choice of system software determines the computational overhead and, therefore, how many resources are available to the robotic software. As communication can be a key aspect of a robot's behaviour, this work also models, implements, and studies an optical communication system with a novel dynamic detector. The detector improves the quality of service by orders of magnitude (i.e., it makes the communication more reliable). In addition, this work investigates general communication properties, such as scalability and the effects of mobility, and provides recommendations for the use of such optical communication systems in swarm robotics. Finally, an approach is presented by which the computational constraints of individual robots can be overcome by distributing data and processing across multiple robots.
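    The abstract does not give the proposed index's definition; purely to illustrate what a composite computational index over a device catalogue could look like, the sketch below averages log-scaled ratios of CPU clock, core count and RAM against a reference device. The reference values, equal weighting and log scaling are assumptions for the example, not the index proposed in the thesis.

```python
"""Illustrative sketch only: one possible shape of a composite
'computational index' over a device catalogue. The reference values,
log scaling and equal weights are assumptions for the example and not
the index actually proposed in the thesis."""

import math

# Reference device used only to normalize the scale (assumed values).
REFERENCE = {"cpu_mhz": 1000.0, "cores": 1.0, "ram_mb": 512.0}

def computational_index(cpu_mhz: float, cores: int, ram_mb: float) -> float:
    """Average of log2 ratios against the reference device."""
    ratios = (
        cpu_mhz / REFERENCE["cpu_mhz"],
        cores / REFERENCE["cores"],
        ram_mb / REFERENCE["ram_mb"],
    )
    return sum(math.log2(r) for r in ratios) / len(ratios)

if __name__ == "__main__":
    # A microcontroller-class swarm robot vs. a desktop-class board
    # (hypothetical specification values).
    print(round(computational_index(cpu_mhz=168, cores=1, ram_mb=0.192), 2))
    print(round(computational_index(cpu_mhz=2400, cores=4, ram_mb=8192), 2))
```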

    The architecture of eReuse.org and the development of DeviceHub

    In this project, we develop a solution to manage devices, with the main objective of ensuring their reuse and final collection. The project is part of eReuse.org, a programme funded by the European Union with the mission of achieving a circular economy for electronics.

    Embedded System Design

    A unique feature of this open access textbook is that it provides a comprehensive introduction to the fundamental knowledge in embedded systems, with applications in cyber-physical systems and the Internet of Things. It starts with an introduction to the field and a survey of specification models and languages for embedded and cyber-physical systems. It provides a brief overview of hardware devices used for such systems and presents the essentials of system software for embedded systems, including real-time operating systems. The author also discusses evaluation and validation techniques for embedded systems and provides an overview of techniques for mapping applications to execution platforms, including multi-core platforms. Embedded systems have to operate under tight constraints and, hence, the book also contains a selected set of optimization techniques, including software optimization techniques. The book closes with a brief survey on testing. This fourth edition has been updated and revised to reflect new trends and technologies, such as the importance of cyber-physical systems (CPS) and the Internet of Things (IoT), the evolution from single-core to multi-core processors, and the increased importance of energy efficiency and thermal issues.
