Collaborative verification of information flow for a high-assurance app store.
ABSTRACT Current app stores distribute some malware to unsuspecting users, even though the app approval process may be costly and time-consuming. High-integrity app stores must provide stronger guarantees that their apps are not malicious. We propose a verification model for use in such app stores to guarantee that apps are free of malicious information flows. In our model, the software vendor and the app store auditor collaborate: each does the tasks that are easy for them, reducing overall verification cost. The software vendor provides a behavioral specification of information flow (at a finer granularity than used by current app stores) and source code annotated with information-flow type qualifiers. A flow-sensitive, context-sensitive information-flow type system checks the type qualifiers in the source code and proves that only the information flows in the specification can occur at run time. The app store auditor uses the vendor-provided source code to manually verify declassifications. We have implemented the information-flow type system for Android apps written in Java, and we evaluated both its effectiveness at detecting information-flow violations and its usability in practice. In an adversarial Red Team evaluation, we analyzed 72 apps (576,000 LOC) for malware. The 57 Trojans among these apps had been written specifically to defeat a malware analysis such as ours. Nonetheless, our information-flow type system was effective: it detected 96% of the malware whose malicious behavior was related to information flow, and 82% of all malware. In addition to the adversarial evaluation, we evaluated the practicality of the collaborative model. The programmer annotation burden is low: 6 annotations per 100 LOC. Every sound analysis requires a human to review potential false alarms; in our experiments, this took 30 minutes per 1,000 LOC for an auditor unfamiliar with the app.
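The core idea of the abstract — labeling data sources and sinks and propagating labels through the program so that a secret value can never reach a public sink — can be illustrated with a toy flow-sensitive checker. This is a minimal sketch over a hypothetical three-address program with a two-element lattice {PUBLIC, SECRET}; the names and program encoding are illustrative assumptions, not the paper's actual type system or annotation syntax.

```python
PUBLIC, SECRET = "public", "secret"

def check(program, sources, sinks):
    """Propagate taint labels through assignments and flag any flow of
    SECRET data into a sink that only accepts PUBLIC data."""
    env = {}            # variable -> current taint label (flow-sensitive)
    violations = []
    for lineno, (op, var, args) in enumerate(program, 1):
        if op == "source":          # var := read from a labeled source
            env[var] = sources[args[0]]
        elif op == "assign":        # var := f(args); label is the join
            env[var] = SECRET if any(env.get(a) == SECRET for a in args) else PUBLIC
        elif op == "sink":          # send var to a labeled sink
            if env.get(var) == SECRET and sinks[args[0]] == PUBLIC:
                violations.append((lineno, var, args[0]))
    return violations

# Example: a device ID (secret) flows through a concatenation into the
# network (a public sink), which the checker reports as a violation.
prog = [
    ("source", "id",      ["DEVICE_ID"]),
    ("source", "msg",     ["USER_INPUT"]),
    ("assign", "payload", ["msg", "id"]),
    ("sink",   "payload", ["INTERNET"]),
]
print(check(prog, {"DEVICE_ID": SECRET, "USER_INPUT": PUBLIC},
            {"INTERNET": PUBLIC}))
# → [(4, 'payload', 'INTERNET')]
```

A real system additionally handles context sensitivity, implicit flows, and auditor-reviewed declassification points; the sketch only shows why propagating labels through assignments suffices to catch the direct leaks described in the abstract.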
Score-P: Scalable performance measurement infrastructure for parallel codes (v8.0)
The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications. Score-P offers the user a maximum of convenience by supporting a number of analysis tools: it currently works with CubeGUI, the Scalasca trace tools, Vampir, TAU, and Extra-P, and is open to other tools. Score-P ships with the new Open Trace Format Version 2 (OTF2), the Cube4 profiling format, and the Opari2 instrumenter. Score-P is available under the 3-clause BSD open-source license.
Score-P: Scalable performance measurement infrastructure for parallel codes (v8.3)
The instrumentation and measurement framework Score-P, together with the analysis tools built on top of its output formats, provides insight into massively parallel HPC applications, their communication, synchronization, I/O, and scaling behaviour, to pinpoint performance bottlenecks and their causes. Score-P is a highly scalable and easy-to-use tool suite for profiling (summarizing program execution) and event tracing (capturing events in chronological order) of HPC applications. The scorep instrumentation command adds instrumentation hooks into a user's application by either prepending or replacing the compile and link commands. C, C++, Fortran, and Python codes as well as contemporary HPC programming models (MPI, threading, GPUs, I/O) are supported. When an instrumented application runs, the instrumentation hooks deliver measurement event data to the measurement core. There, the events are augmented with high-accuracy timestamps and, potentially, hardware counters (a plugin API allows querying additional metric sources). The augmented events are then passed to one or both of the built-in event consumers, profiling and tracing (a plugin API allows the creation of additional event consumers), which finally provide output in the CUBE4 and OTF2 formats, respectively. Score-P is available under the 3-clause BSD open-source license.
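The event flow described above — hooks emit events, the core stamps them with a timestamp, then fans them out to the registered consumers (a profiler that summarizes and a tracer that records chronologically) — can be sketched in a few lines. All class and variable names here are illustrative; this is a toy model of the pipeline, not the Score-P plugin API.

```python
import time

class MeasurementCore:
    """Toy measurement core: timestamps each event and fans it out."""
    def __init__(self):
        self.consumers = []          # e.g. a profiler and a tracer

    def register(self, consumer):
        self.consumers.append(consumer)

    def emit(self, region, kind):    # called from an instrumentation hook
        event = {"region": region, "kind": kind, "t": time.perf_counter_ns()}
        for consumer in self.consumers:
            consumer(event)

core = MeasurementCore()
profile = {}                         # profiling: per-region enter counts
trace = []                           # tracing: chronological event record

def profiler(event):
    if event["kind"] == "enter":
        profile[event["region"]] = profile.get(event["region"], 0) + 1

core.register(profiler)
core.register(trace.append)

# Simulated instrumented run of two nested regions.
core.emit("main", "enter")
core.emit("solve", "enter")
core.emit("solve", "exit")
core.emit("main", "exit")
print(profile)      # → {'main': 1, 'solve': 1}
print(len(trace))   # → 4
```

In the real tool the profiling consumer writes CUBE4 summaries and the tracing consumer writes OTF2 event streams, but the fan-out structure is the same.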
Longitudinal Assessment of Resident Performance Using Entrustable Professional Activities
Question What is the progression of performance on entrustable professional activities (EPAs) throughout pediatric residency training and at graduation?
Findings This multisite cohort study of 1987 pediatric residents found that developmental growth curves can be established for EPAs. When growth curves were generated from the results of this study, at least 90% of trainees achieved the level of unsupervised practice at the end of residency for only 8 of the 17 EPAs studied.
Meaning This study suggests that gaps exist between observed practice readiness and the standards needed to produce physicians able to meet the health needs of the patient populations they serve, based on the general pediatrics EPAs.
Importance Entrustable professional activities (EPAs) are an emerging workplace-based, patient-oriented assessment approach with limited empirical evidence.
Objective To measure the development of pediatric trainees' clinical skills over time using EPA-based assessment data.
Design, Setting, and Participants Prospective cohort study of categorical pediatric residents over 3 academic years (2015-2016, 2016-2017, and 2017-2018), assessed on 17 American Board of Pediatrics EPAs. Residents in training at 23 pediatric residency programs in the Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network were included. Assessment was conducted by clinical competency committee members, who made summative assessment decisions regarding the level of supervision required for each resident on each EPA. Data were collected from May 2016 to November 2018 and analyzed from November to December 2018.
Interventions Longitudinal, prospective assessment using EPAs.
Main Outcomes and Measures Trajectories of supervision levels by EPA during residency training, and how often graduating residents were deemed ready for unsupervised practice in each EPA.
Results Across the 5 data collection cycles, 1987 residents from all 3 postgraduate years in 23 residency programs were assigned 25,503 supervision level reports for the 17 general pediatrics EPAs. The 4 EPAs that required the most supervision across training were EPA 14 (quality improvement) on the 5-level scale (estimated mean level at graduation, 3.7; 95% CI, 3.6-3.7) and EPAs 8 (transition to adult care; mean, 7.0; 95% CI, 7.0-7.1), 9 (behavioral and mental health; mean, 6.6; 95% CI, 6.5-6.6), and 10 (resuscitate and stabilize; mean, 6.9; 95% CI, 6.8-7.0) on the expanded 5-level scale. At the time of graduation (36 months), the percentage of trainees rated at a supervision level corresponding to "unsupervised practice" varied by EPA from 53% to 98%. If performance standards were set to require 90% of trainees to achieve the level of unsupervised practice, this standard would be met for only 8 of the 17 EPAs (although 89% met it for EPA 17, performing the common procedures of the general pediatrician).
Conclusions and Relevance This study presents initial evidence for empirically derived practice readiness and sets the stage for identifying curricular gaps that contribute to the discrepancy between observed practice readiness and the standards needed to produce physicians able to meet the health needs of the patient populations they serve. Future work should compare these findings with postgraduation outcomes data as a means of seeking validity evidence.
This cohort study measures the development of pediatric resident clinical skills using assessments based on entrustable professional activities