8 research outputs found

    Data remanence and digital forensic investigation for CUDA Graphics Processing Units

    This paper investigates the practicality of memory attacks on commercial Graphics Processing Units (GPUs). Recent advances in the performance and viability of GPUs for highly-parallelised data processing raise a number of security challenges. Unscrupulous software subsequently running on the same GPU, whether by the same user or by another user in a multi-user system, may be able to read the contents of GPU memory, which still holds data from previous program executions. In use-cases where the GPU is used to offload intensive parallel processing, such as pattern matching for an intrusion detection system, financial systems, or cryptographic algorithms, GPU memory may contain privileged data that would ordinarily be inaccessible to an unprivileged application running on the host computer. With GPUs potentially yielding access to confidential information, this paper builds on existing research in the field to investigate the practicality of extracting data from global, shared and texture memory and retrieving it for further analysis. These techniques are also implemented on various GPUs using three different Nvidia CUDA versions. A novel methodology for digital forensic examination of GPU memory for remanent data is then proposed, along with suggestions and considerations towards countermeasures and anti-forensic techniques.
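
    For illustration, the following hedged sketch (our own, not code from the paper) shows the simplest form of the global-memory remanence check described above: allocate device memory through the CUDA runtime without initialising it, copy it back to the host, and scan the dump for printable strings left behind by earlier GPU workloads. The buffer size and the string-length threshold are arbitrary illustrative choices.

```c
/* Hedged sketch of a global-memory remanence check; compile with nvcc.
 * cudaMalloc() does not guarantee zeroed pages, so a fresh allocation
 * may still hold data written by a previous GPU process. */
#include <cuda_runtime.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

#define DUMP_BYTES (64u * 1024u * 1024u)   /* sample 64 MiB of global memory */

int main(void)
{
    void *dev = NULL;
    unsigned char *host = malloc(DUMP_BYTES);
    if (!host) return 1;

    /* Allocate WITHOUT initialising and copy the raw contents back. */
    if (cudaMalloc(&dev, DUMP_BYTES) != cudaSuccess) return 1;
    if (cudaMemcpy(host, dev, DUMP_BYTES, cudaMemcpyDeviceToHost) != cudaSuccess) return 1;

    /* Report runs of 8+ printable characters as candidate remanent data. */
    size_t run = 0;
    for (size_t i = 0; i < DUMP_BYTES; i++) {
        if (isprint(host[i])) {
            run++;
            continue;
        }
        if (run >= 8)
            printf("offset 0x%zx: %.*s\n", i - run, (int)run,
                   (const char *)&host[i - run]);
        run = 0;
    }

    cudaFree(dev);
    free(host);
    return 0;
}
```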

    Analysis of GPU Memory Vulnerabilities

    Graphics processing units (GPUs) have become a widely used technology for various purposes. While their intended use is accelerating graphics rendering, their parallel computing capabilities have expanded their use into other areas: computer gaming, deep learning for artificial intelligence, and mining cryptocurrencies. Their rise in popularity led to research involving several security aspects, including this paper's focus, memory vulnerabilities. Research has documented many vulnerabilities, including GPUs not implementing address space layout randomization, not zeroing out memory after deallocation, and not initializing newly allocated memory. These vulnerabilities can lead to a victim's sensitive data being leaked to an attacker, a serious threat considering the uses of GPU computing presented. In this paper, we attempt to implement these vulnerabilities on an NVIDIA GPU to determine whether any advancements in memory architecture have been made since the documentation of such vulnerabilities. This work demonstrates that the lack of attention to security in early GPU development has since been addressed to a level appropriate for a computing component that numerous industries rely on.
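
    As an illustrative complement (again our own sketch, not the paper's code), the zero-on-free behaviour mentioned above can be probed within a single process: write a recognisable marker to a device buffer, free it, re-allocate, and check whether the marker survives. Whether the second allocation reuses the same physical pages is driver-dependent, so a negative result here is not conclusive; the marker value and buffer size are arbitrary.

```c
/* Hedged sketch: does device memory keep its contents across free/re-allocate?
 * Compile with nvcc. */
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_BYTES (16u * 1024u * 1024u)
#define MARKER    0xDEADBEEFu

int main(void)
{
    size_t words = BUF_BYTES / sizeof(unsigned int);
    unsigned int *host = malloc(BUF_BYTES);
    unsigned int *dev = NULL;
    if (!host) return 1;

    /* 1. Allocate, fill with the marker, then free. */
    cudaMalloc((void **)&dev, BUF_BYTES);
    for (size_t i = 0; i < words; i++) host[i] = MARKER;
    cudaMemcpy(dev, host, BUF_BYTES, cudaMemcpyHostToDevice);
    cudaFree(dev);

    /* 2. Re-allocate and read back; memory zeroed on free should hold no marker.
          Note: the driver may hand back different physical pages. */
    cudaMalloc((void **)&dev, BUF_BYTES);
    memset(host, 0, BUF_BYTES);
    cudaMemcpy(host, dev, BUF_BYTES, cudaMemcpyDeviceToHost);

    size_t hits = 0;
    for (size_t i = 0; i < words; i++)
        if (host[i] == MARKER) hits++;

    printf("%zu of %zu words still hold the marker after free/re-allocate\n",
           hits, words);

    cudaFree(dev);
    free(host);
    return 0;
}
```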

    Vulnerable GPU Memory Management: Towards Recovering Raw Data from GPU

    In this paper, we show that security threats arising from the existing GPU memory management strategy are overlooked, opening a back door for adversaries to freely break memory isolation: an adversary without any privilege on a computer can directly recover the raw memory data left by previous processes. More importantly, such attacks work not only on normal multi-user operating systems, but also on cloud computing platforms. To demonstrate the seriousness of such attacks, we recovered original data directly from GPU memory residues left by exited commodity applications, including Google Chrome, Adobe Reader, GIMP, and Matlab. The results show that, because of the vulnerable memory management strategy, all commodity applications in our experiments are affected.

    Stealing Webpages Rendered on Your Browser by Exploiting GPU Vulnerabilities


    Undermining User Privacy on Mobile Devices Using AI

    Over the past years, literature has shown that attacks exploiting the microarchitecture of modern processors pose a serious threat to the privacy of mobile phone users. This is because applications leave distinct footprints in the processor, which can be used by malware to infer user activities. In this work, we show that these inference attacks are considerably more practical when combined with advanced AI techniques. In particular, we focus on profiling the activity in the last-level cache (LLC) of ARM processors. We employ a simple Prime+Probe based monitoring technique to obtain cache traces, which we classify with Deep Learning methods including Convolutional Neural Networks. We demonstrate our approach on an off-the-shelf Android phone by launching a successful attack from an unprivileged, zero-permission App in well under a minute. The App thereby detects running applications with an accuracy of 98% and reveals opened websites and streaming videos by monitoring the LLC for at most 6 seconds. This is possible since Deep Learning compensates for measurement disturbances stemming from the inherently noisy LLC monitoring and unfavorable cache characteristics such as random line replacement policies. In summary, our results show that, thanks to advanced AI techniques, inference attacks are becoming alarmingly easy to implement and execute in practice. This once more calls for countermeasures that confine microarchitectural leakage and protect mobile phone applications, especially those valuing the privacy of their users.
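
    For context, a bare-bones Prime+Probe loop looks roughly like the sketch below (our illustration, not the paper's code). It glosses over the hard parts the paper addresses: constructing genuine eviction sets for the ARM LLC, obtaining a sufficiently precise timer from an unprivileged app, and classifying the resulting traces with a neural network. The line size, associativity, and stride values are assumptions.

```c
/* Simplified Prime+Probe sketch in plain C. clock_gettime() and a naive
 * fixed stride stand in for a precise timer and a real eviction set. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WAYS       16                 /* assumed LLC associativity              */
#define SET_STRIDE (64 * 1024)        /* assumed distance between addresses
                                         mapping to the same LLC set            */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
    /* Eviction buffer: WAYS lines that (by assumption) map to one LLC set. */
    volatile uint8_t *buf = malloc((size_t)WAYS * SET_STRIDE);
    if (!buf) return 1;

    for (int round = 0; round < 100; round++) {
        /* PRIME: fill the monitored cache set with our own lines. */
        for (int w = 0; w < WAYS; w++)
            buf[(size_t)w * SET_STRIDE] = 1;

        /* ... victim activity happens here and may evict some of our lines ... */

        /* PROBE: re-access and time; slower rounds indicate victim activity. */
        uint64_t t0 = now_ns();
        uint8_t sink = 0;
        for (int w = 0; w < WAYS; w++)
            sink ^= buf[(size_t)w * SET_STRIDE];
        uint64_t t1 = now_ns();

        printf("round %3d: probe time %6lu ns (sink=%u)\n",
               round, (unsigned long)(t1 - t0), sink);
    }

    free((void *)buf);
    return 0;
}
```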

    A Survey of Techniques for Improving Security of GPUs

    The graphics processing unit (GPU), although a powerful performance-booster, also has many security vulnerabilities. Due to these, the GPU can act as a safe haven for stealthy malware and the weakest 'link' in the security 'chain'. In this paper, we present a survey of techniques for analyzing and improving GPU security. We classify the works on key attributes to highlight their similarities and differences. Beyond informing users and researchers about GPU security techniques, this survey aims to increase their awareness of GPU security vulnerabilities and potential countermeasures.

    Availability of Datasets for Digital Forensics–And What is Missing

    This paper targets two main goals. First, we want to provide an overview of datasets that researchers can use and where to find them. Second, we want to stress the importance of sharing datasets to allow researchers to replicate results and improve the state of the art. To address the first goal, we analyzed 715 peer-reviewed research articles from 2010 to 2015 with focus and relevance to digital forensics to see what datasets are available, concentrating on three major aspects: (1) the origin of the dataset (e.g., real world vs. synthetic), (2) whether datasets were released by researchers, and (3) the types of datasets that exist. Additionally, we broadened our results to include the outcome of online searches. We also discuss what we think is missing. Overall, our results show that the majority of datasets are experiment-generated (56.4%), followed by real-world data (36.7%). On the other hand, 54.4% of the articles use existing datasets while the rest created their own; in the latter case, only 3.8% actually released their datasets. Finally, we conclude that there are many datasets available for use, but finding them can be challenging.