
    An OS-agnostic approach to memory forensics

    The analysis of memory dumps presents unique challenges, as operating systems use a variety of (often undocumented) ways to represent data in memory. To solve this problem, forensics tools maintain collections of models that precisely describe the kernel data structures used by a handful of operating systems. However, these models cannot be generalized, and developing new models may require a very long and tedious reverse engineering effort for closed-source systems. In recent years, the tremendous increase in the number of IoT devices, smart-home appliances, and cloud-hosted VMs has resulted in a growing number of OSs that are not supported by current forensics tools. The way we have been doing memory forensics until today, based on handwritten models and rules, simply cannot keep pace with this variety of systems. To overcome this problem, in this paper we introduce the new concept of OS-agnostic memory forensics, based on techniques that can recover certain forensics information without any knowledge of the internals of the underlying OS. Our approach automatically identifies different types of data structures by using only their topological constraints and then supports two modes of investigation. In the first, it allows the analyst to traverse the recovered structures by starting from predetermined seeds, i.e., pieces of forensics-relevant information (such as a process name or an IP address) that an analyst knows a priori or that can be easily identified in the dump. Our experiments show that even a single seed can be sufficient to recover the entire list of processes and other important forensics data structures in dumps obtained from 14 different OSs, without any knowledge of the underlying kernels. In the second mode of operation, our system requires no seed but instead uses a set of heuristics to rank all memory data structures and present only the most ‘promising’ ones to the analyst. Even in this case, our experiments show that an analyst can use our approach to easily identify forensics-relevant structured information in a truly OS-agnostic scenario.
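
    As a concrete illustration of the seed-based mode of investigation, the following Python sketch shows how, once candidate structure offsets have been recovered from their topology, an analyst might walk a linked list in a raw dump starting from a known seed such as a process name. The offsets, the 64-bit little-endian layout, and the identity address mapping are illustrative assumptions, not the tool's actual implementation.

    ```python
    import struct

    POINTER_SIZE = 8      # assume a 64-bit little-endian dump
    NEXT_OFFSET = 0x10    # hypothetical offset of the forward link inside each node
    NAME_OFFSET = 0x28    # hypothetical offset of the name field inside each node

    def read_pointer(dump: bytes, offset: int) -> int:
        """Read a little-endian 64-bit pointer at a given offset in the dump."""
        return struct.unpack_from("<Q", dump, offset)[0]

    def walk_list_from_seed(dump: bytes, seed: bytes, max_nodes: int = 256):
        """Starting from a seed string (e.g. a known process name), walk the
        linked list whose topology was recovered earlier and yield node names.
        Virtual-to-physical address translation is omitted and an identity
        mapping is assumed, which real dumps would not satisfy."""
        hit = dump.find(seed)
        if hit < 0:
            return
        node = hit - NAME_OFFSET          # assumed start of the enclosing struct
        seen = set()
        for _ in range(max_nodes):
            if node in seen or node < 0 or node + NAME_OFFSET + 16 > len(dump):
                break
            seen.add(node)
            raw = dump[node + NAME_OFFSET:node + NAME_OFFSET + 16]
            yield hex(node), raw.split(b"\x00")[0].decode(errors="replace")
            # The link points at the next node's link field, so subtract its offset.
            node = read_pointer(dump, node + NEXT_OFFSET) - NEXT_OFFSET
    ```

    In practice the topological analysis would supply the offsets and the address translation automatically; they are hard-coded here only to show the traversal idea.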

    Confucius Queue Management: Be Fair But Not Too Fast

    When many users and unique applications share a congested edge link (e.g., a home network), everyone wants their own application to continue to perform well despite contention over network resources. Traditionally, network engineers have focused on fairness as the key objective, ensuring that competing applications are treated equitably by the switch, and hence have deployed fair queueing mechanisms. However, for many network workloads today, strict fairness is directly at odds with equitable application performance. Real-time streaming applications, such as videoconferencing, suffer the most when network performance is volatile (with delay spikes or sudden and dramatic drops in throughput). Unfortunately, "fair" queueing mechanisms lead to extremely volatile network behavior in the presence of bursty and multi-flow applications such as Web traffic. When a sudden burst of new data arrives, fair queueing algorithms rapidly shift resources away from incumbent flows, leading to severe stalls in real-time applications. In this paper, we present Confucius, the first practical queue management scheme to effectively balance fairness against volatility, providing performance outcomes that benefit all applications sharing the contended link. Confucius outperforms realistic queueing schemes by protecting real-time streaming flows from stalls when competing with more than 95% of websites. Importantly, Confucius does not assume the collaboration of end-hosts, nor does it require manual parameter tuning to achieve good performance.
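
    To make the fairness-versus-volatility trade-off concrete, the toy Python sketch below contrasts an instantaneous fair-share allocation with a smoothed allocation that converges to the fair share gradually, so an incumbent real-time flow is not starved the moment a burst of new flows arrives. The smoothing rule and all numbers are illustrative assumptions, not the Confucius algorithm itself.

    ```python
    LINK_CAPACITY = 20.0   # Mbps, hypothetical edge link

    def fair_share(num_flows: int) -> float:
        """Strict fair queueing: capacity is split equally the instant flows arrive."""
        return LINK_CAPACITY / max(num_flows, 1)

    def smoothed_share(prev_share: float, num_flows: int, alpha: float = 0.25) -> float:
        """Illustrative smoothing: move only a fraction of the way toward the
        fair share each interval, so an incumbent real-time flow loses bandwidth
        gradually instead of all at once when a burst of new flows appears."""
        return prev_share + alpha * (fair_share(num_flows) - prev_share)

    # One real-time flow has the link to itself; 9 web flows burst in at t = 2.
    share = LINK_CAPACITY
    for t in range(8):
        flows = 1 if t < 2 else 10
        strict = fair_share(flows)
        share = smoothed_share(share, flows)
        print(f"t={t}: strict fair share={strict:5.1f} Mbps, smoothed={share:5.1f} Mbps")
    ```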

    Design and implementation of embedded adaptive controller using ARM processor.

    This thesis is concerned with the development of embedded adaptive controllers for industrial applications. Many industrial processes present challenging control problems, such as high nonlinearity, time-varying dynamic behavior, and unpredictable external disturbances. Conventional controllers are too limited to successfully resolve these problems. Therefore, the adaptive control strategy, an advanced control theory, is applied to overcome the deficiencies of conventional controllers.
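
    The abstract does not describe the specific algorithm used; purely to illustrate what an adaptive controller adapts, the following minimal Python sketch implements the classic MIT-rule gain adaptation for a static plant, with all gains and step sizes assumed.

    ```python
    import math

    # Minimal MIT-rule gain adaptation: plant y = k_p * u with unknown gain k_p,
    # reference model y_m = k_m * u_c; the controller gain theta is adjusted so
    # that the plant tracks the model. All numeric values are assumed.
    k_p, k_m = 2.0, 1.0        # unknown plant gain, desired closed-loop gain
    gamma, dt = 0.5, 0.01      # adaptation rate and time step
    theta = 0.0                # adjustable feedforward gain, u = theta * u_c

    for step in range(2000):
        u_c = math.sin(0.02 * step)       # reference command
        y_m = k_m * u_c                   # reference-model output
        y = k_p * theta * u_c             # plant output under the current gain
        e = y - y_m                       # tracking error
        theta -= gamma * e * y_m * dt     # MIT rule: gradient step reducing e**2

    print(f"adapted gain theta = {theta:.3f} (ideal value k_m/k_p = {k_m / k_p:.3f})")
    ```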

    Design of Automation Environment for Analyzing Various IoT Malware

    With the increasing proliferation of IoT systems, their security has become very important to individuals and businesses. IoT malware has been increasing exponentially since the emergence of Mirai in 2016. Because IoT systems run in diverse environments, IoT malware likewise targets a variety of architectures and environments. Existing analysis systems provide no environment for dynamically analyzing IoT malware built for these different architectures, and building a separate environment to analyze each sample one by one is inefficient in terms of time and cost. The purpose of this paper is to address the problems and limitations of existing analysis systems and to provide an environment for analyzing large amounts of IoT malware. Using existing open-source analysis tools suited to various IoT malware together with QEMU, a virtualization software, we build the environment in which the malware actually runs and statically and dynamically analyze the libraries and system calls it invokes. We apply the analysis system to actually collected malware to verify that it can be analyzed and to derive statistics. Information on the malware's architecture, attack method, commands used, and access paths can be extracted, and this information can serve as a basis for malware detection or classification research. We also describe the advantages of the designed system compared to the most commonly used automated analysis tools and its improvements over existing limitations.
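
    As a rough sketch of the kind of automation described, the hypothetical Python snippet below runs a sample under QEMU user-mode emulation for its architecture and captures the emitted system-call trace via the -strace option. The architecture mapping, sample paths, and timeout are assumptions, and any such run belongs in an isolated, disposable environment.

    ```python
    import subprocess

    # Hypothetical mapping from a detected ELF architecture to a qemu-user binary.
    QEMU_FOR_ARCH = {"arm": "qemu-arm", "mips": "qemu-mips", "x86_64": "qemu-x86_64"}

    def trace_sample(sample_path: str, arch: str, timeout: int = 60) -> str:
        """Run one sample under QEMU user-mode emulation with -strace and return
        the captured system-call trace. Use only inside an isolated sandbox."""
        cmd = [QEMU_FOR_ARCH[arch], "-strace", sample_path]
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
            return proc.stderr                 # qemu -strace logs to stderr by default
        except subprocess.TimeoutExpired as exc:
            # Malware often never exits; keep whatever trace was collected.
            return exc.stderr.decode(errors="replace") if exc.stderr else ""

    # Hypothetical usage: collect traces for a batch of samples.
    # for path, arch in [("./samples/a.bin", "arm"), ("./samples/b.bin", "mips")]:
    #     print(path, trace_sample(path, arch)[:200])
    ```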

    XenITH: Xen in the Hand

    Usability and portability have been key commercial drivers for increasingly capable handheld devices, which have been enabled by advances in Moore’s Law as well as in wireless systems. The nature of such devices makes them extremely personal, and yet they offer an untapped resource for new forms of peer-to-peer and cooperative communications relaying. Taking advantage of such capabilities requires concurrent resource control of the handheld’s computational and communications capacities. Virtualization platforms, such as the Xen system, have opened the possibility of multiplexing a handheld device in useful and unobtrusive ways, as personal applications can be used while additional services such as decentralized communications are also in operation. The purpose of this project is to experimentally demonstrate the ability of modern smartphone units to support a programmable network environment. We validate the system with a series of measurement experiments that demonstrate concurrent use of two operating systems, each using computational and network resources, in two virtual machines. Moreover, we demonstrate an acceptable level of user performance while maintaining a MANET using a programmable network router.

    What the History of Linux Says About the Future of Cryptocurrencies

    Since Bitcoin’s meteoric rise, hundreds of cryptocurrencies that people now publicly trade have emerged. As such, the question naturally arises: how have cryptocurrencies evolved over time? Drawing on the theory of polycentric information commons and cryptocurrencies’ historical similarities with another popular information commons (namely, Linux), we make predictions regarding what cryptocurrencies may look like in the future. Specifically, we focus on four important historical similarities: 1) support from online hacker communities, 2) pursuit of freedom, 3) criticism about features and use, and 4) proliferation of forks. We then predict that: 1) cryptocurrencies will become more pragmatic rather than ideological, 2) cryptocurrencies will become more diverse in terms of not only the underlying technology but also the intended audience, and 3) the core technology behind cryptocurrencies, called blockchain, will be successfully used beyond cryptocurrencies.

    Formalization and Detection of Host-Based Code Injection Attacks in the Context of Malware

    A Host-Based Code Injection Attack (HBCIA) is a technique that malicious software utilizes in order to avoid detection or steal sensitive information. In a nutshell, this is a local attack where code is injected across process boundaries and executed in the context of a victim process. Malware employs HBCIAs on several operating systems, including Windows, Linux, and macOS. This thesis investigates the topic of HBCIAs in the context of malware. First, we conduct basic research on this topic. We formalize HBCIAs in the context of malware and show in several measurements, amongst others, the high prevalence of HBCIA-utilizing malware. Second, we present Bee Master, a platform-independent approach to dynamically detect HBCIAs. This approach applies the honeypot paradigm to operating system processes. Bee Master deploys fake processes as honeypots, which are attacked by malicious software. We show that Bee Master reliably detects HBCIAs on Windows and Linux. Third, we present Quincy, a machine-learning-based system to detect HBCIAs in post-mortem memory dumps. It utilizes up to 38 features, including memory region sparseness, memory region protection, and the occurrence of HBCIA-related strings. We evaluate Quincy against two contemporary detection systems, Malfind and Hollowfind. This evaluation shows that Quincy outperforms them both, increasing detection performance by more than eight percent.
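
    To illustrate the kind of features Quincy relies on, the following Python sketch computes two of the named features (memory region sparseness and the occurrence of injection-related strings) and feeds them to an off-the-shelf classifier. The feature definitions, the synthetic data, and the random-forest choice are assumptions for illustration, not the thesis’ exact implementation.

    ```python
    from sklearn.ensemble import RandomForestClassifier

    # Strings commonly associated with Windows code injection (illustrative list).
    INJECTION_STRINGS = [b"VirtualAllocEx", b"WriteProcessMemory", b"CreateRemoteThread"]

    def region_features(region: bytes) -> list:
        """Two illustrative features per memory region: sparseness (fraction of
        zero bytes) and the number of injection-related string occurrences."""
        sparseness = region.count(0) / max(len(region), 1)
        hits = sum(region.count(s) for s in INJECTION_STRINGS)
        return [sparseness, float(hits)]

    # Tiny synthetic training set standing in for labelled memory regions.
    benign = [bytes(4096), b"\x00" * 2048 + b"ordinary heap data" * 10]
    injected = [b"\x90" * 512 + b"WriteProcessMemory\x00CreateRemoteThread\x00",
                b"VirtualAllocEx\x00" * 4 + b"\xcc" * 256]
    X = [region_features(r) for r in benign + injected]
    y = [0] * len(benign) + [1] * len(injected)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Score an unseen (here also synthetic) region from a new memory dump.
    unseen = b"\x90" * 300 + b"WriteProcessMemory\x00CreateRemoteThread\x00"
    print("injected" if clf.predict([region_features(unseen)])[0] else "benign")
    ```

    A real detector would add the remaining features (e.g., memory region protection flags) and train on labelled dumps rather than synthetic buffers.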