    DeltaFS: Pursuing Zero Update Overhead via Metadata-Enabled Delta Compression for Log-structured File System on Mobile Devices

    Data compression has been widely adopted to relieve mobile devices of intensive write pressure. Delta compression is particularly promising for its high compression efficacy compared with conventional compression methods. However, it suffers from non-trivial system overheads incurred by delta maintenance and read penalties, which has prevented its adoption on mobile devices. To this end, this paper proposes DeltaFS, metadata-enabled delta compression on a log-structured file system for mobile devices, which achieves the utmost compression efficiency at zero hardware cost. DeltaFS exploits the out-of-place updating of the log-structured file system (LFS) to alleviate write amplification, the key bottleneck for delta compression implementations. Specifically, DeltaFS utilizes the inline area in file inodes for delta maintenance at zero hardware cost, and integrates an inline-area management strategy to improve the utilization of the constrained inline area. Moreover, a complementary delta maintenance strategy selectively maintains delta chunks in the main data area to break through the limitation of the constrained inline area. Experimental results show that DeltaFS reduces write traffic by up to 64.8% and improves I/O performance by up to 37.3%.
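
    As a concrete illustration of the mechanism, the following C sketch shows an update path of the kind the abstract describes, assuming an F2FS-style inline inode area. The sizes, the toy delta encoding, and every identifier are illustrative, not taken from the paper.

        /* Minimal sketch of a DeltaFS-style update decision; all names,
         * sizes, and the delta encoding are assumptions for illustration. */
        #include <stdint.h>
        #include <string.h>

        #define BLOCK_SIZE  4096
        #define INLINE_FREE 3400               /* assumed spare bytes in inode */

        struct inode {
            uint8_t  block[BLOCK_SIZE];        /* current block content */
            uint8_t  inline_area[INLINE_FREE]; /* delta chunks live here */
            uint16_t inline_used;
        };

        /* Toy delta: one (offset, length, new bytes) triple covering the span
         * of changed bytes; real schemes use richer copy/insert encodings. */
        static size_t make_delta(const uint8_t *oldb, const uint8_t *newb,
                                 uint8_t *out)
        {
            uint16_t lo = 0, hi = BLOCK_SIZE;
            while (lo < BLOCK_SIZE && oldb[lo] == newb[lo]) lo++;
            if (lo == BLOCK_SIZE) return 0;    /* blocks are identical */
            while (hi > lo && oldb[hi - 1] == newb[hi - 1]) hi--;
            uint16_t len = (uint16_t)(hi - lo);
            memcpy(out, &lo, 2);
            memcpy(out + 2, &len, 2);
            memcpy(out + 4, newb + lo, len);
            return 4 + (size_t)len;
        }

        void write_update(struct inode *ino, const uint8_t *newb)
        {
            uint8_t delta[BLOCK_SIZE + 4];
            size_t n = make_delta(ino->block, newb, delta);
            if (n == 0) return;                /* nothing changed */
            if (ino->inline_used + n <= INLINE_FREE) {
                /* Delta fits inline: the update costs zero extra block writes. */
                memcpy(ino->inline_area + ino->inline_used, delta, n);
                ino->inline_used = (uint16_t)(ino->inline_used + n);
            } else {
                /* Inline area exhausted: fall back to a normal out-of-place
                 * LFS block write and reset the delta chain. */
                memcpy(ino->block, newb, BLOCK_SIZE);
                ino->inline_used = 0;
            }
        }

    A read would fetch the base block and apply the accumulated inline deltas in order, which is where the read penalty mentioned above comes from.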

    Redesigning Transaction Processing Systems for Non-Volatile Memory

    Transaction processing systems are widely used because they enable users to manage their data efficiently. However, they suffer from a performance bottleneck due to the redundant I/O required to guarantee data consistency, and slow storage devices degrade performance further. Leveraging non-volatile memory is a promising way to relieve this bottleneck. However, since the I/O granularities of legacy storage devices and non-volatile memory differ, traditional transaction processing systems cannot fully exploit the performance of persistent memory. The goal of this dissertation is to fully exploit non-volatile memory to improve the performance of transaction processing systems, in which write amplification is identified as a key bottleneck. As a first approach, we redesigned transaction processing systems to minimize this redundant I/O. We present LS-MVBT, which integrates recovery information into the main database file to remove temporary recovery files; LS-MVBT also employs five optimizations to reduce the write traffic of a single fsync() call. We also exploit persistent memory to reduce the bottleneck of slow storage devices. Since traditional recovery methods target slow block devices, we develop byte-addressable differential logging, a user-level heap manager, and transaction-aware persistence to fully exploit persistent memory. To minimize the redundant I/O needed to guarantee data consistency, we present failure-atomic slotted paging with a persistent buffer cache. Redesigning the indexing structure is the second approach to fully exploiting non-volatile memory. Because the B+-tree was originally designed for block granularity, it generates excessive I/O traffic on persistent memory. To mitigate this traffic, we develop a cache-line-friendly B+-tree that aligns its node size to the cache line size, minimizing write traffic. Moreover, with hardware transactional memory it can update a single node atomically without any additional redundant I/O for consistency, and it adopts Failure-Atomic Shift and Failure-Atomic In-place Rebalancing to eliminate unnecessary I/O. Furthermore, we improved the persistent memory manager to use a traditional heap structure with a free list, instead of segregated lists, for small memory allocations, minimizing allocation overhead. Our performance evaluation shows that these redesigns, which account for the I/O granularity of non-volatile memory, efficiently reduce redundant I/O traffic and improve performance by a large margin.
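
    The cache-line-friendly B+-tree idea can be made concrete with a short C sketch of a failure-atomic single-node insert. The node layout, the field names, and the clflush/sfence sequence are assumptions in the spirit of the text, not the dissertation's actual code.

        /* One leaf node sized to exactly one cache line, updated with an
         * ordered flush/fence sequence; x86 persistence intrinsics assumed. */
        #include <stdint.h>
        #include <emmintrin.h>               /* _mm_clflush, _mm_sfence */

        #define CACHE_LINE 64

        typedef struct __attribute__((aligned(CACHE_LINE))) {
            uint32_t version;                /* bumped last: the commit point */
            uint32_t nkeys;
            uint32_t keys[7];
            uint32_t vals[7];
        } leaf_t;                            /* 4+4+28+28 = one 64-byte line */

        static void persist(void *p)
        {
            _mm_clflush(p);                  /* push the line toward NVM */
            _mm_sfence();                    /* order it before later stores */
        }

        /* Insert into a non-full leaf; the split path is not shown. */
        void leaf_insert(leaf_t *leaf, uint32_t key, uint32_t val)
        {
            uint32_t i = leaf->nkeys;
            leaf->keys[i] = key;             /* 1. write the new entry */
            leaf->vals[i] = val;
            persist(leaf);                   /* 2. entry durable before commit */
            leaf->nkeys = i + 1;             /* 3. aligned 4-byte stores are   */
            leaf->version++;                 /*    atomic: crash sees old/new  */
            persist(leaf);                   /* 4. make the commit durable */
        }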

    SQLite Optimization with Phase Change Memory for Mobile Applications

    Given its pervasive use in smart mobile platforms, there is a compelling need to optimize the performance of sluggish SQLite databases. Popular mobile applications such as messengers, email, and social network services rely on SQLite for their data management needs. These applications tend to execute relatively short transactions in autocommit mode for transactional consistency. This often has an adverse effect on the flash storage in mobile devices, because the small random updates cause high write amplification and high write latency. To address this problem, we propose a new optimization strategy, called per-page logging (PPL), for mobile data management, and have implemented its key functions in SQLite/PPL. The hardware component of SQLite/PPL includes phase change memory (PCM) with a byte-addressable, persistent memory abstraction. By capturing an update in a physiological log record and adding it to the page's PCM log sector, SQLite/PPL can replace a multitude of successive writes to the same logical page with much smaller, far more efficient log writes to PCM. We have observed that SQLite/PPL can potentially improve the performance of mobile applications by an order of magnitude while supporting transactional atomicity and durability.
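
    The following C sketch illustrates the per-page logging idea: small updates become physiological log records appended to a page's PCM log sector, and a full page write happens only when that sector fills. The layout, the sizes, and all names are assumptions, not SQLite/PPL's implementation.

        /* Each logical page pairs with a small PCM-resident log sector. */
        #include <stdint.h>
        #include <string.h>

        #define PAGE_SIZE  4096
        #define LOG_SECTOR 512

        typedef struct {
            uint16_t off;                 /* where in the page the bytes go */
            uint16_t len;
            uint8_t  data[];              /* the new bytes themselves */
        } log_rec_t;

        typedef struct {
            uint8_t  page[PAGE_SIZE];     /* logical page (flash-resident) */
            uint8_t  log[LOG_SECTOR];     /* its log sector (PCM-resident) */
            uint16_t log_used;
        } ppl_page_t;

        /* Replay the log into the page; this is the one point at which a
         * full page write to flash would actually happen. */
        static void ppl_replay(ppl_page_t *p)
        {
            uint16_t pos = 0;
            while (pos < p->log_used) {
                log_rec_t *r = (log_rec_t *)(p->log + pos);
                memcpy(p->page + r->off, r->data, r->len);
                /* advance by the record size, kept 2-byte aligned */
                pos = (uint16_t)((pos + sizeof(log_rec_t) + r->len + 1) & ~1u);
            }
            p->log_used = 0;
        }

        /* Apply a small update: a few bytes to PCM, not a 4 KB page write. */
        void ppl_update(ppl_page_t *p, uint16_t off, const void *src,
                        uint16_t len)
        {
            uint16_t need = (uint16_t)((sizeof(log_rec_t) + len + 1) & ~1u);
            if (p->log_used + need > LOG_SECTOR)
                ppl_replay(p);            /* log full: merge, then restart it */
            log_rec_t *r = (log_rec_t *)(p->log + p->log_used);
            r->off = off;
            r->len = len;
            memcpy(r->data, src, len);    /* byte-addressable persistent write */
            p->log_used = (uint16_t)(p->log_used + need);
        }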

    Doubleheader Logging: Eliminating Journal Write Overhead for Mobile DBMS

    Various transactional systems use out-of-place updates, such as logging or copy-on-write, to update data in a failure-atomic manner. Such out-of-place update methods double the I/O traffic due to back-up copies in the database layer and quadruple it due to file system journaling. In mobile systems, the transactions issued by mobile apps are known to be tiny and to run at low concurrency. For such transactions, legacy out-of-place update methods such as WAL are sub-optimal. In this work, we propose a crash-consistent in-place update logging method, doubleheader logging (DHL), for SQLite. DHL prevents the previous consistent record from being lost by performing a copy-on-write inside the database page and co-locating the metadata-only journal information within the page, at minimal cost to page utilization. DHL behaves almost as if journaling were disabled, incurring nearly no additional overhead in either I/O or computation. Our experimental results show that DHL outperforms other logging methods such as out-of-place update write-ahead logging (WAL) and the in-place update multi-version B-tree (MVBT).
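
    A minimal C sketch of the doubleheader idea, as we read it from the abstract: two headers share the page, and an update commits by rewriting only the stale one, so the previous consistent version is never lost. The layout and field names are assumptions, not DHL's actual format.

        /* A page carrying two co-located headers for in-place crash safety. */
        #include <stdint.h>
        #include <string.h>

        #define PAGE_SIZE 4096

        typedef struct {
            uint32_t seq;                /* higher seq wins at recovery */
            uint16_t rec_off;            /* offset of the live record copy */
            uint16_t rec_len;
            uint32_t crc;                /* would detect a torn header write */
        } header_t;

        typedef struct {
            header_t hdr[2];             /* the two co-located headers */
            uint8_t  body[PAGE_SIZE - 2 * sizeof(header_t)];
        } dh_page_t;

        void dh_update(dh_page_t *p, const void *rec, uint16_t len,
                       uint16_t free_off)
        {
            int live = (p->hdr[0].seq > p->hdr[1].seq) ? 0 : 1;
            /* 1. Copy-on-write the record into free space in the same page;
             *    the old copy and the old header stay intact. */
            memcpy(p->body + free_off, rec, len);
            /* 2. Commit by rewriting only the stale header. A crash before
             *    this point leaves the previous version fully recoverable. */
            header_t *h = &p->hdr[1 - live];
            h->seq     = p->hdr[live].seq + 1;
            h->rec_off = free_off;
            h->rec_len = len;
            h->crc     = 0;              /* placeholder; a real CRC goes here */
        }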

    An investigation into dynamical bandwidth management and bandwidth redistribution using a pool of cooperating interfacing gateways and a packet sniffer in mobile cloud computing

    Mobile communication devices are increasingly becoming an essential part of almost every aspect of our daily lives. However, compared to conventional computing devices such as laptops, notebooks, and personal computers, mobile devices still lack resources such as processing power, storage, and network bandwidth. Mobile Cloud Computing is intended to augment the capabilities of mobile devices by moving selected workloads away from resource-limited mobile devices to resource-rich servers hosted in the cloud. Services hosted in the cloud are accessed by mobile users on demand via the Internet, using standard thick or thin applications installed on their devices. Nowadays, users of mobile devices are no longer satisfied with best-effort service and demand QoS when accessing and using applications and services hosted in the cloud. The Internet, however, was originally designed to provide best-effort delivery of data packets, with no guarantee on packet delivery. Quality of Service has been implemented successfully in provider and private networks since the Internet Engineering Task Force introduced the Integrated Services and Differentiated Services models. These models have left their legacy, but they do not adequately address the Quality of Service needs of Mobile Cloud Computing, where users are mobile, traffic differentiation is required per user, device, or application, and packets are routed across several independently administered network domains. This study investigates QoS and bandwidth management in Mobile Cloud Computing and considers a scenario in which a virtual test-bed, made up of the GNS3 network emulator, a Cisco IOS image, the Wireshark packet sniffer, Solar-Putty, and a Firefox web-browser appliance, is set up on a laptop virtualized with VMware Workstation 15 Pro. The virtual test-bed is in turn connected to the real-world Internet via the host laptop's Ethernet network interface card. Several virtual Firefox appliances are set up as end-users and generate traffic by launching web applications such as video streaming, file download, and Internet browsing. The traffic generated by the end-users and the bandwidth used are measured, monitored, and tracked using a Wireshark packet sniffer installed on all interfacing gateways that connect the end-users to the cloud. Each gateway aggregates the demand of connected hosts and delivers Quality of Service to connected users based on the Quality of Service policies and mechanisms embedded in the gateway. Analysis of the results shows that a packet sniffer deployed at a suitable point in the network can identify, measure, and track traffic usage per user, device, or application in real time. The study has also demonstrated that, when deployed in the gateway connecting users to the cloud, it provides network-wide monitoring, and the traffic statistics it collects can be fed to other functional components of the gateway, where a dynamic bandwidth management scheme can instantaneously allocate and redistribute bandwidth to target users as they roam around the network from one location to another. This approach is, however, limited: ensuring end-to-end Quality of Service requires mechanisms and policies to be extended across all network layers along the traffic path between the user and the cloud, in order to guarantee consistent treatment of traffic.
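
    The feedback loop described above, sniffer-measured demand driving bandwidth redistribution at the gateway, can be sketched in a few lines of C. All names and numbers are illustrative.

        /* Per-user byte counts feed a proportional redistribution pass. */
        #include <stdio.h>

        #define NUSERS 3

        typedef struct {
            const char   *name;
            unsigned long bytes_seen;   /* demand measured from sniffed packets */
            double        share_bps;    /* bandwidth granted this round */
        } user_t;

        /* Split the link among users in proportion to measured demand. */
        static void redistribute(user_t *u, int n, double link_bps)
        {
            unsigned long total = 0;
            for (int i = 0; i < n; i++) total += u[i].bytes_seen;
            if (total == 0) return;
            for (int i = 0; i < n; i++)
                u[i].share_bps = link_bps * (double)u[i].bytes_seen
                                          / (double)total;
        }

        int main(void)
        {
            user_t users[NUSERS] = {
                { "video-stream",  8000000, 0 },  /* observed via the sniffer */
                { "file-download", 6000000, 0 },
                { "web-browsing",   500000, 0 },
            };
            redistribute(users, NUSERS, 100e6);   /* a 100 Mb/s uplink */
            for (int i = 0; i < NUSERS; i++)
                printf("%-14s %6.1f Mb/s\n",
                       users[i].name, users[i].share_bps / 1e6);
            return 0;
        }

    Rerunning the pass as counters change is what lets the allocation follow users as they roam between gateways.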

    Internet of Things From Hype to Reality

    The Internet of Things (IoT) has gained significant mindshare, not to mention attention, in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, communicate inexpensively, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.

    Building the Infrastructure for Cloud Security

    Computer science

    Towards understanding and mitigating attacks leveraging zero-day exploits

    Zero-day vulnerabilities are unknown and therefore unaddressed, with the result that attackers can exploit them to gain unauthorised system access. To understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, the exploits, and the attacks that make use of them. In recent years a number of leaks have published such attacks, using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how they are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the methods and processes used to exploit them. By moving beyond merely remedying the vulnerabilities to defences that can prevent or detect the actions taken by attackers, an information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example by compromising syslog servers or by descending to lower system rings to gain access. Defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, and to misdirect attribution by planting false artefacts for forensic analysis and by attacking from third-party information systems. They also analyse the methods of other attackers to learn new techniques; an example is the Umbrage project, in which malware is analysed to decide whether it should be re-implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. on a firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic), since these defences all have the potential to lead to the attacker's discovery; attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down, and learn from attackers. By employing such tactics, defenders increase both their chance of detecting attacks and the time available to react to them, even for attacks exploiting hitherto unknown vulnerabilities. To summarise the information presented in this thesis and show its practical importance, an examination is presented of the NSA's network intrusion of the SWIFT organisation; it shows that the firewalls were exploited with remote-code-execution zero-days, an approach with a striking parallel in the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid; by studying state actors, we can gain insight into what other actors with fewer resources may do in the future.
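
    One of the defensive techniques mentioned, the honeytoken, is simple enough to sketch in C: a decoy credential that no legitimate user holds, so any attempt to use it is by construction an intrusion signal. The names here are illustrative.

        /* A decoy account whose use always raises an alarm. */
        #include <stdio.h>
        #include <string.h>

        static const char *HONEYTOKEN_USER = "backup-admin";  /* decoy account */

        /* Returns nonzero only for a valid login; always denies the decoy. */
        int authenticate(const char *user, const char *pass)
        {
            (void)pass;
            if (strcmp(user, HONEYTOKEN_USER) == 0) {
                /* Raise the alarm out-of-band (e.g. via remote syslog), so an
                 * attacker who later gains the box cannot erase the trace. */
                fprintf(stderr, "ALERT: honeytoken '%s' was used\n", user);
                return 0;
            }
            /* ... normal credential verification would go here ... */
            return 0;
        }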