9 research outputs found

    PPAndroid-Benchmarker: Benchmarking Privacy Protection Systems on Android Devices

    Mobile devices are ubiquitous in today's digital world. While people enjoy the convenience brought by mobile devices, it has been proven that many mobile apps leak personal information without user consent or even awareness. This can occur for many reasons, such as careless programming errors, developers' intention to collect private information, or infection of innocent apps by malware. Thus, the research community has proposed many methods and systems to detect and prevent privacy leakage on mobile devices. While it is clearly essential to evaluate the accuracy and effectiveness of privacy protection systems, we are not aware of any automated system that can benchmark the performance of privacy protection systems on Android devices. In this paper, we report PPAndroid-Benchmarker, the first system of this kind, which can fairly benchmark any privacy protection system dynamically (i.e., at runtime) or statically. PPAndroid-Benchmarker has been released as an open-source tool, and we believe that it will help the research community, developers, and even end users to analyze, improve, and choose privacy protection systems on Android devices. We applied PPAndroid-Benchmarker in dynamic mode to 165 Android apps with privacy protection features, selected from various app markets and the research community, and showed the effectiveness of the tool. We also illustrate two components of PPAndroid-Benchmarker at the design level: the Automatic Test Apps Generator, for benchmarking static-analysis-based tools, and the Reconfigurability Engine, which allows any instance of PPAndroid-Benchmarker to be reconfigured, including but not limited to adding and removing information sources and sinks. Furthermore, we give some insights into the current status of mobile privacy protection and leakage prevention in app markets based upon our analysis.
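    The abstract mentions that the Reconfigurability Engine lets users add and remove information sources and sinks. As a purely hypothetical sketch (the names, structure, and API below are illustrative assumptions, not taken from the paper or the released tool), such a reconfigurable source/sink registry might look like this:

    ```python
    # Hypothetical model of a reconfigurable leak-detection registry.
    # All identifiers here are illustrative; the actual PPAndroid-Benchmarker
    # design may differ substantially.
    SOURCES = {"IMEI", "LOCATION", "CONTACTS"}   # private-information sources
    SINKS = {"NETWORK", "SMS", "LOG"}            # channels data may leave through

    def add_source(name):
        """Register a new information source (the reconfiguration idea)."""
        SOURCES.add(name)

    def remove_sink(name):
        """Deregister a sink so flows into it are no longer flagged."""
        SINKS.discard(name)

    def is_potential_leak(flow):
        """A (source, sink) pair is flagged when both ends are registered."""
        src, dst = flow
        return src in SOURCES and dst in SINKS

    add_source("MAC_ADDRESS")
    print(is_potential_leak(("MAC_ADDRESS", "NETWORK")))  # True
    print(is_potential_leak(("IMEI", "BLUETOOTH")))       # False
    ```

    The point of the sketch is only that benchmarking coverage depends on which sources and sinks are configured, which is why reconfigurability matters for fair comparisons.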

    Communication-optimal parallel minimum spanning tree algorithms

    No full text
    Lower and upper bounds for finding a minimum spanning tree (MST) in a weighted undirected graph on the BSP model are presented. We provide the first non-trivial lower bounds on the communication volume required to solve the MST problem. Let p denote the number of processors, n the number of nodes of the input graph, and m the number of edges of the input graph. We show that in the worst case, a total of Ω(κ · min(m, pn)) bits need to be communicated in order to solve the MST problem, where κ is the number of bits required to represent a single edge weight. This implies that if each message communicates at most κ bits, any BSP algorithm for finding an MST requires communication time Ω(g · min(m/p, n)), where g is the gap parameter of the BSP model. In addition, we present two algorithms with communication requirements that match our lower bound in different situations. Both algorithms perform linear work for appropriate values of n, m, and p, and use a number of supersteps that is bounded for arbitrarily large input sizes. The first algorithm is simple but can employ at most m/n processors efficiently; hence, it should be applied in situations where the input graph is relatively dense. The second algorithm is a randomized algorithm that performs linear work with high probability, provided that m ≥ n · log p. (orig.) Available from TIB Hannover: RR 6673(59) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek (Germany, German)
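    As a rough numerical illustration (not from the paper; the sample parameter values below are arbitrary assumptions), the two stated lower bounds can be evaluated directly:

    ```python
    def comm_volume_lower_bound(kappa, m, p, n):
        # Total communication volume: Omega(kappa * min(m, p*n)) bits.
        return kappa * min(m, p * n)

    def comm_time_lower_bound(g, m, p, n):
        # BSP communication time when each message carries at most kappa bits:
        # Omega(g * min(m/p, n)).
        return g * min(m / p, n)

    # Dense-graph example: n = 1,000 nodes, m = 100,000 edges,
    # p = 16 processors, 32-bit edge weights, gap parameter g = 1.
    print(comm_volume_lower_bound(32, 100_000, 16, 1_000))  # 32 * 16,000 = 512000
    print(comm_time_lower_bound(1, 100_000, 16, 1_000))     # min(6250, 1000) = 1000
    ```

    Note how the min(m, pn) term caps the volume bound once the graph is dense enough that m exceeds pn, which matches the abstract's remark that the simple first algorithm suits relatively dense graphs.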

    Immersive augmented reality system for the training of pattern classification control with a myoelectric prosthesis

    No full text
    Abstract Background Hand amputation can have a truly debilitating impact on the life of the affected person. A multifunctional myoelectric prosthesis controlled using pattern classification can be used to restore some of the lost motor abilities. However, learning to control an advanced prosthesis can be a challenging task, but virtual and augmented reality (AR) provide means to create an engaging and motivating training. Methods In this study, we present a novel training framework that integrates virtual elements within a real scene (AR) while allowing the view from the first-person perspective. The framework was evaluated in 13 able-bodied subjects and a limb-deficient person, divided into intervention (IG) and control (CG) groups. The IG received training by performing a simulated clothespin task, and both groups conducted a pre- and posttest with a real prosthesis. When training with the AR, the subjects received visual feedback on the generated grasping force. The main outcome measure was the number of pins that were successfully transferred within 20 min (task duration), while the numbers of dropped and broken pins were also registered. The participants were asked to score the difficulty of the real task (posttest), fun factor and motivation, as well as the utility of the feedback. Results The performance (median/interquartile range) consistently increased during the training sessions (4/3 to 22/4). While the results were similar for the two groups in the pretest, the performance improved in the posttest only in IG. In addition, the subjects in IG transferred significantly more pins (28/10.5 versus 14.5/11), and dropped (1/2.5 versus 3.5/2) and broke (5/3.8 versus 14.5/9) significantly fewer pins in the posttest compared to CG. The participants in IG assigned (mean ± std) significantly lower scores to the difficulty compared to CG (5.2 ± 1.9 versus 7.1 ± 0.9), and they highly rated the fun factor (8.7 ± 1.3) and usefulness of feedback (8.5 ± 1.7).
Conclusion The results demonstrated that the proposed AR system allows for the transfer of skills from the simulated to the real task while providing a positive user experience. The present study demonstrates the effectiveness and flexibility of the proposed AR framework. Importantly, the developed system is open source and available for download and further development


    Performing tasks on synchronous restartable message-passing processors

    No full text
    We consider the problem of performing t tasks in a distributed system of p fault-prone processors. This problem, called DO-ALL herein, was introduced by Dwork, Halpern, and Waarts. Our work deals with a synchronous message-passing distributed system with processor stop-failures and restarts. We present two new algorithms based on a new aggressive coordination paradigm by which multiple coordinators may be active as the result of failures. The first algorithm is tolerant of f < p stop-failures and does not allow restarts. It has available processor steps (work) complexity S = O((t + p log p / log log p) log f) and message complexity M = O(t + p log p / log log p + fp). Unlike prior solutions, our algorithm uses redundant broadcasts when encountering failures and, for p = t and large f, it achieves better work complexity. This algorithm is used as the basis for another algorithm which tolerates stop-failures and restarts. This new algorithm is the first solution for the DO-ALL problem that efficiently deals with processor restarts. Its available processor steps complexity is S = O((t + p log p + f) · min{log p, log f}), and its message complexity is M = O(t + p log p + fp), where f is the total number of failures. (orig.) Available from TIB Hannover: RR 6673(61) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek (Germany, German)
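    To make the complexity expressions concrete, the bounds for the first (no-restart) algorithm can be evaluated numerically. This is only an illustrative sketch: the sample values of p, t, and f are arbitrary assumptions, and asymptotic O-bounds are computed here without hidden constants.

    ```python
    import math

    def work_bound(t, p, f):
        # First algorithm: S = O((t + p*log p / log log p) * log f),
        # evaluated with base-2 logs and constant factor 1.
        return (t + p * math.log2(p) / math.log2(math.log2(p))) * math.log2(f)

    def message_bound(t, p, f):
        # First algorithm: M = O(t + p*log p / log log p + f*p).
        return t + p * math.log2(p) / math.log2(math.log2(p)) + f * p

    # Sample parameters (arbitrary): p = 64 processors, t = 1024 tasks, f = 8 failures.
    p, t, f = 64, 1024, 8
    print(work_bound(t, p, f))
    print(message_bound(t, p, f))
    ```

    Plugging in values like these shows how the fp term comes to dominate the message complexity as the failure count grows, which is the regime where the restart-tolerant second algorithm becomes relevant.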