
    Digital forensic analysis of the private mode of browsers on Android

    The smartphone has become an essential electronic device in our daily lives. We carry our most precious and important data on it, from family videos of the last few years to credit card information so that we can pay with our phones. In addition, in recent years mobile devices have become the preferred device for surfing the web, already accounting for more than 50% of Internet traffic. As one of the devices we spend the most time with throughout the day, it is not surprising that we increasingly demand a higher level of privacy. One of the measures introduced to help us protect our data by isolating certain activities on the Internet is the private mode integrated into most modern browsers. Of course, this feature is not new and has been available on desktop platforms for more than a decade. Reviewing the literature, one can find several studies that test the correct functioning of private mode on the desktop. However, the number of studies conducted on mobile devices is remarkably small, and most of them perform their tests on emulators or virtual machines running obsolete versions of Android. Therefore, in this paper we apply the methodology presented in a previous work to Google Chrome, Brave, Mozilla Firefox, and Tor Browser running on a tablet with Android 13 and on two virtual devices created with Android Emulator. The results confirm that these browsers do not store information about browsing performed in private mode in the file system. However, the analysis of volatile memory made it possible to recover the username and password used to log in to a website, or the keywords typed into a search engine, even after the devices had been rebooted.
    This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria of the Xunta de Galicia (accreditation 2019-2022 ED431G-2019/04; reference competitive group 2022-2024, ED431C 2022/16) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported by the Ministry of Economy and Competitiveness, Government of Spain (Grant No. PID2019-104834 GB-I00). X. Fernández-Fuentes is supported by the Ministerio de Universidades, Spain, under the FPU national plan (FPU18/04605).
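
    As a rough illustration of the volatile-memory analysis described above, the following Python sketch scans a raw memory dump for known strings such as a username or a search keyword. The dump file name and the search terms are hypothetical, and the paper's actual acquisition and analysis tooling is not reproduced here.

        # Minimal sketch: search a raw memory dump for known strings that may
        # survive a private-mode session (e.g. a username or search keyword).
        # DUMP_PATH and TERMS are hypothetical placeholders.
        import mmap
        import re

        DUMP_PATH = "browser_process.dmp"
        TERMS = [b"secret_username", b"forensics test query"]

        def find_terms(path, terms, context=32):
            """Report each occurrence of a term with a few surrounding bytes."""
            hits = []
            with open(path, "rb") as f, \
                 mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
                for term in terms:
                    for match in re.finditer(re.escape(term), mem):
                        start = max(match.start() - context, 0)
                        end = min(match.end() + context, len(mem))
                        hits.append((term.decode(), match.start(), mem[start:end]))
            return hits

        if __name__ == "__main__":
            for term, offset, snippet in find_terms(DUMP_PATH, TERMS):
                print(f"{term!r} found at offset {offset:#x}: {snippet!r}")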

    Model Performance Prediction for Hyperparameter Optimization of Deep Learning Models Using High Performance Computing and Quantum Annealing

    Hyperparameter Optimization (HPO) of Deep Learning (DL)-based models tends to be a compute-resource-intensive process, as it usually requires training the target model with many different hyperparameter configurations. We show that integrating model performance prediction with early stopping methods holds great potential to speed up the HPO process of deep learning models. Moreover, we propose a novel algorithm called Swift-Hyperband that can use either classical or quantum Support Vector Regression (SVR) for performance prediction and benefit from distributed High Performance Computing (HPC) environments. This algorithm is tested not only for the Machine-Learned Particle Flow (MLPF) model used in High-Energy Physics (HEP), but also for a wider range of target models from domains such as computer vision and natural language processing. Swift-Hyperband is shown to find comparable (or better) hyperparameters while using fewer computational resources in all test cases.
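
    To illustrate the performance-prediction component, the sketch below uses a classical Support Vector Regression from scikit-learn to predict a run's final validation accuracy from the first few epochs of its learning curve; runs with low predictions could then be stopped early. The learning-curve values are hypothetical, and neither the quantum SVR nor the full Hyperband-style scheduling of Swift-Hyperband is reproduced.

        # Minimal sketch of the performance-prediction idea, assuming classical
        # SVR from scikit-learn; the curves and scores below are made up.
        import numpy as np
        from sklearn.svm import SVR

        N_SEEN = 5  # epochs observed before predicting the final score

        # Partial learning curves (validation accuracy over the first N_SEEN
        # epochs) and the final accuracy each completed run reached.
        partial_curves = np.array([
            [0.52, 0.61, 0.66, 0.69, 0.71],
            [0.40, 0.48, 0.55, 0.58, 0.60],
            [0.55, 0.63, 0.70, 0.74, 0.76],
            [0.35, 0.42, 0.47, 0.50, 0.52],
        ])
        final_scores = np.array([0.78, 0.66, 0.83, 0.57])

        # Fit on completed runs, then predict for a new configuration after
        # only N_SEEN epochs; low predictions are candidates for early stopping.
        predictor = SVR(kernel="rbf", C=10.0)
        predictor.fit(partial_curves, final_scores)

        new_curve = np.array([[0.50, 0.60, 0.67, 0.70, 0.72]])
        print("predicted final accuracy:", predictor.predict(new_curve)[0])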

    Towards Large Scale Environmental Data Processing with Apache Spark

    Currently available environmental datasets are either manually constructed by professionals or automatically generated from the observations provided by sensing devices. Usually, the former are modelled and recorded with traditional general-purpose relational technologies, whereas the latter require more specific scientific array formats and tools. Declarative data processing technologies are available for both relational and array data; however, the efficient declarative integrated processing of array and relational environmental data is a problem for which a satisfactory solution has not yet been provided. To address this, an integrated data processing language called MAPAL has been proposed. This paper provides a brief description of the design decisions and challenges, related to data storage and data processing, that arise during the ongoing implementation of MAPAL on top of the Apache Spark large-scale data processing framework.
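
    The following PySpark sketch illustrates the kind of integrated query MAPAL targets: relational station metadata joined with array-style sensor observations. The dataset names and schemas are hypothetical, and the sketch uses plain Spark rather than MAPAL itself.

        # Minimal PySpark sketch: relational metadata combined with array-style
        # observations in one declarative query. All data here is hypothetical.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("relational-plus-array").getOrCreate()

        # Relational side: station metadata maintained by professionals.
        stations = spark.createDataFrame(
            [("ST01", "river"), ("ST02", "coastal")],
            ["station_id", "environment"],
        )

        # Array side: per-station time series flattened into (station, index, value).
        readings = spark.createDataFrame(
            [("ST01", 0, 12.1), ("ST01", 1, 12.4),
             ("ST02", 0, 18.7), ("ST02", 1, 18.9)],
            ["station_id", "t", "temperature"],
        )

        # Integrated declarative query: average temperature per environment type.
        result = (readings.join(stations, "station_id")
                          .groupBy("environment")
                          .agg(F.avg("temperature").alias("avg_temperature")))
        result.show()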

    Cloud computing for climate modelling: evaluation, challenges and benefits

    Cloud computing is a mature technology that has already shown benefits for a wide range of academic research domains that, in turn, utilize a wide range of application design models. In this paper, we discuss the use of cloud computing as a tool to improve the range of resources available for climate science, presenting the evaluation of two different climate models. Each was customized in a different way to run in public cloud computing environments (hereafter cloud computing) provided by three different public vendors: Amazon, Google and Microsoft. The adaptations and procedures necessary to run the models in these environments are described. The computational performance and cost of each model within this new type of environment are discussed, and an assessment is given in qualitative terms. Finally, we discuss how cloud computing can be used for geoscientific modelling, including issues related to the allocation of resources by funding bodies. We also discuss problems related to computing security, reliability and scientific reproducibility.
    European Regional Development Fund | Ref. ED431C 2017/64-GRC. Ministerio de Economía y Competitividad | Ref. RYC-2013-1456.

    SparkBWA: Speeding Up the Alignment of High-Throughput DNA Sequencing Data.

    Next-generation sequencing (NGS) technologies have led to a huge amount of genomic data that need to be analyzed and interpreted. This fact has a huge impact on the DNA sequence alignment process, which nowadays requires the mapping of billions of small DNA sequences onto a reference genome. In this way, sequence alignment remains the most time-consuming stage in the sequence analysis workflow. To deal with this issue, state-of-the-art aligners take advantage of parallelization strategies. However, the existing solutions show limited scalability and have a complex implementation. In this work we introduce SparkBWA, a new tool that exploits the capabilities of a Big Data technology such as Spark to boost the performance of one of the most widely adopted aligners, the Burrows-Wheeler Aligner (BWA). The design of SparkBWA uses two independent software layers in such a way that no modifications to the original BWA source code are required, which ensures its compatibility with any BWA version (future or legacy). SparkBWA is evaluated in different scenarios showing notable results in terms of performance and scalability. A comparison to other parallel BWA-based aligners validates the benefits of our approach. Finally, an intuitive and flexible API is provided to NGS professionals in order to facilitate the acceptance and adoption of the new tool. The source code of the software described in this paper is publicly available at https://github.com/citiususc/SparkBWA, with a GPL3 license.
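
    The sketch below illustrates, in a much simplified form, the general layered idea: Spark distributes the reads while an unmodified bwa binary performs the alignment as an external process. Paths, the reference index and the record-grouping logic are hypothetical; the actual SparkBWA implementation at the repository above is considerably more complete.

        # Minimal PySpark sketch: distribute FASTQ reads with Spark and pipe
        # them through an unmodified bwa binary. All paths are placeholders.
        from pyspark import SparkContext

        sc = SparkContext(appName="sparkbwa-sketch")

        # Group every 4 FASTQ lines into one record, preserving line order,
        # so each RDD element is a complete read streamed to bwa's stdin.
        lines = sc.textFile("hdfs:///data/sample.fastq")
        records = (lines.zipWithIndex()
                        .map(lambda li: (li[1] // 4, (li[1] % 4, li[0])))
                        .groupByKey()
                        .map(lambda kv: "\n".join(t for _, t in sorted(kv[1]))))

        # Pipe each partition of reads through the unmodified aligner;
        # 'bwa mem' accepts '-' to read FASTQ from stdin and emits SAM lines.
        sam_lines = records.pipe("bwa mem /data/ref.fa -")
        sam_lines.saveAsTextFile("hdfs:///out/aligned_sam")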