    Retention Characteristics And Policy As Suggested By California School Administrators And Teachers

    Problem. There have been no definite conclusions in the literature as to the benefit or harm of retaining students in grade. With California's legislative mandate of SB 813, school districts are now required to have policies in effect for the promotion or nonpromotion of students. This study reviewed retention characteristics currently used in retention policy, those mentioned in the literature, and the perceptions of administrators and teachers as to the value of these characteristics in retention decisions. A model retention policy was developed from the study. Purpose. The purpose of the study was to determine whether there were differences between teachers and administrators in their perceptions of the importance of specific characteristics used in retention policy. Based on the available research, a model policy suggesting guidelines for determining the retention of a student in grade was developed. Procedure. Questionnaires were sent to 93 California school districts; 93 administrators and 372 teachers were surveyed, and 305 questionnaires were returned. The survey results were analyzed to compare administrator and teacher responses on the importance of retention characteristics. Comparisons were also made among urban, rural, and suburban school districts. The chi-square statistic was used for all comparisons, with the .05 level of significance chosen for all inferential tests. Findings. Administrators and teachers consistently agreed on the five most common reasons that should be considered in a retention policy: academic achievement, teacher evaluation of student progress, emotional maturity, previous retention, and parental support for the recommendation to retain. Overall, there was no significant difference between teachers and administrators in their perceptions of the importance of individual retention characteristics; the items that did show significant differences were of low importance. There was also no significant difference between teachers and administrators by district type. Recommendations. This study should be replicated, since many teachers did not indicate their grade levels on the questionnaires. A study should be made to help clarify educational terms such as academic achievement and emotional maturity. Long-term studies should follow up students who have been retained to determine whether the retention was beneficial. A study should also be done to better determine the entry age of students and the effect entry age has on retention.
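The chi-square test of independence the study's procedure describes can be sketched as follows. This is a minimal illustrative implementation for a 2x2 contingency table (respondent group vs. rating of a characteristic); the counts in `main` are hypothetical, not the study's data.

```rust
// Chi-square test of independence on a 2x2 contingency table:
// rows = respondent group (administrators, teachers),
// cols = rating of a retention characteristic (important, not important).
fn chi_square(table: [[f64; 2]; 2]) -> f64 {
    let row = [table[0][0] + table[0][1], table[1][0] + table[1][1]];
    let col = [table[0][0] + table[1][0], table[0][1] + table[1][1]];
    let total = row[0] + row[1];
    let mut stat = 0.0;
    for i in 0..2 {
        for j in 0..2 {
            // Expected count under independence of group and rating.
            let expected = row[i] * col[j] / total;
            let d = table[i][j] - expected;
            stat += d * d / expected;
        }
    }
    stat
}

fn main() {
    // Hypothetical counts, illustration only (305 returned questionnaires).
    let table = [[50.0, 11.0], [180.0, 64.0]];
    let stat = chi_square(table);
    // Critical value for df = 1 at the .05 level is 3.841.
    println!("chi-square = {:.3}, significant = {}", stat, stat > 3.841);
}
```

With these counts the statistic is about 1.77, below the 3.841 critical value, matching the kind of "no significant difference" outcome the study reports.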

    Leaf: Modularity for Temporary Sharing in Separation Logic (Extended Version)

    In concurrent verification, separation logic provides a strong story for handling both resources that are owned exclusively and resources that are shared persistently (i.e., forever). However, the situation is more complicated for temporarily shared state, where state might be shared and then later reclaimed as exclusive. We believe that a framework for temporarily-shared state should meet two key goals not adequately met by existing techniques. One, it should allow and encourage users to verify new sharing strategies. Two, it should provide an abstraction where users manipulate shared state in a way agnostic to the means with which it is shared. We present Leaf, a library in the Iris separation logic which accomplishes both of these goals by introducing a novel operator, which we call guarding, that allows one proposition to represent a shared version of another. We demonstrate that Leaf meets these two goals through a modular case study: we verify a reader-writer lock that supports shared state, and a hash table built on top of it that uses shared state

    How to run POSIX apps in a minimal picoprocess

    We envision a future where Web, mobile, and desktop applications are delivered as isolated, complete software stacks to a minimal, secure client host. This shift imbues app vendors with full autonomy to maintain their apps' integrity. Achieving this goal requires shifting complex behavior out of the client platform and into the vendors' isolated apps. We ported rich, interactive POSIX apps, such as Gimp and Inkscape, to a spartan host platform. We describe this effort in sufficient detail to support reproducibility.

    Geppetto: Versatile Verifiable Computation

    Cloud computing sparked interest in Verifiable Computation protocols, which allow a weak client to securely outsource computations to remote parties. Recent work has dramatically reduced the client’s cost to verify the correctness of results, but the overhead to produce proofs largely remains impractical. Geppetto introduces complementary techniques for reducing prover overhead and increasing prover flexibility. With Multi-QAPs, Geppetto reduces the cost of sharing state between computations (e.g., for MapReduce) or within a single computation by up to two orders of magnitude. Via a careful instantiation of cryptographic primitives, Geppetto also brings down the cost of verifying outsourced cryptographic computations (e.g., verifiably computing on signed data); together with Geppetto’s notion of bounded proof bootstrapping, Geppetto improves on prior bootstrapped systems by five orders of magnitude, albeit at some cost in universality. Geppetto also supports qualitatively new properties like verifying the correct execution of proprietary (i.e., secret) algorithms. Finally, Geppetto’s use of energy-saving circuits brings the prover’s costs more in line with the program’s actual (rather than worst-case) execution time. Geppetto is implemented in a full-fledged, scalable compiler that consumes LLVM code generated from a variety of apps, as well as a large cryptographic library.

    Pinocchio: Nearly practical verifiable computation

    To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10ms: 5-7 orders of magnitude less than previous work; indeed Pinocchio is the first general-purpose system to demonstrate verification cheaper than native execution (for some apps). Pinocchio also reduces the worker's proof effort by an additional 19-60×. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at a negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.
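The setup / prove / verify workflow described above can be sketched as a hypothetical interface. All names here are illustrative stand-ins, not Pinocchio's actual API, and the "proof" is a toy value rather than a 288-byte cryptographic proof.

```rust
// Hypothetical verifiable-computation interface sketch (names invented
// for illustration; not Pinocchio's API, and no real cryptography).
struct EvaluationKey(u64);
struct VerificationKey(u64);
struct Proof(u64);

// Setup: the client derives keys describing the computation;
// this cost is proportional to evaluating the computation once.
fn key_gen(program_id: u64) -> (EvaluationKey, VerificationKey) {
    (EvaluationKey(program_id), VerificationKey(program_id))
}

// The worker evaluates the computation on an input and produces a
// constant-size proof binding the keys to the input/output pair.
fn compute(ek: &EvaluationKey, input: u64) -> (u64, Proof) {
    let output = input * input; // the outsourced computation
    (output, Proof(ek.0 ^ input ^ output)) // toy binding, not crypto
}

// Anyone holding the public verification key can check the proof.
fn verify(vk: &VerificationKey, input: u64, output: u64, proof: &Proof) -> bool {
    proof.0 == (vk.0 ^ input ^ output)
}

fn main() {
    let (ek, vk) = key_gen(42);
    let (out, proof) = compute(&ek, 7);
    assert!(verify(&vk, 7, out, &proof));
    assert!(!verify(&vk, 7, out + 1, &proof)); // tampered output rejected
    println!("output {} verified", out);
}
```

The shape is the point: setup once per computation, one proof per execution, and verification that anyone can run with the public key.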

    Verus: Verifying Rust Programs using Linear Ghost Types (extended version)

    The Rust programming language provides a powerful type system that checks linearity and borrowing, allowing code to safely manipulate memory without garbage collection and making Rust ideal for developing low-level, high-assurance systems. For such systems, formal verification can be useful to prove functional correctness properties beyond type safety. This paper presents Verus, an SMT-based tool for formally verifying Rust programs. With Verus, programmers express proofs and specifications using the Rust language, allowing proofs to take advantage of Rust's linear types and borrow checking. We show how this allows proofs to manipulate linearly typed permissions that let Rust code safely manipulate memory, pointers, and concurrent resources. Verus organizes proofs and specifications using a novel mode system that distinguishes specifications, which are not checked for linearity and borrowing, from executable code and proofs, which are checked for linearity and borrowing. We formalize Verus' linearity, borrowing, and modes in a small lambda calculus, for which we prove type safety and termination of specifications and proofs. We demonstrate Verus on a series of examples, including pointer-manipulating code (an xor-based doubly linked list), code with interior mutability, and concurrent code.
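The "linearly typed permissions" idea can be illustrated in plain Rust (this is not Verus syntax, and the names are hypothetical): a non-duplicable token must be presented to access a cell and is consumed when the cell is freed, so use-after-free becomes a type error rather than a runtime bug.

```rust
// Plain-Rust sketch of linear permissions (not Verus syntax).
struct Cell { value: i32 }
struct Perm { _cell_id: usize } // linear: neither Clone nor Copy

// Reading requires presenting the permission alongside the cell.
fn read(cell: &Cell, _perm: &Perm) -> i32 {
    cell.value
}

// Freeing takes both by value, consuming the permission with the cell.
fn free(cell: Cell, perm: Perm) -> i32 {
    drop(perm);
    cell.value
}

fn main() {
    let cell = Cell { value: 7 };
    let perm = Perm { _cell_id: 0 };
    assert_eq!(read(&cell, &perm), 7);
    let last = free(cell, perm);
    assert_eq!(last, 7);
    // read(&cell, &perm); // rejected: both moved into `free` above
    println!("last value before free: {}", last);
}
```

Verus generalizes this pattern so that such tokens are ghost state, erased at compile time, while the borrow checker still enforces their linear discipline in proofs.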

    Discerning the clinical relevance of biomarkers in early stage breast cancer

    Purpose. Prior data suggest that breast cancer patients accept significant toxicity for small benefit. It is unclear whether personalized estimations of risk or benefit likelihood that could be provided by biomarkers alter treatment decisions in the curative setting. Methods. A choice-based conjoint (CBC) survey was conducted in 417 HER2-negative breast cancer patients who received chemotherapy in the curative setting. The survey presented pairs of treatment choices derived from common taxane- and anthracycline-based regimens, varying in degree of benefit by risk of recurrence and in toxicity profile, including peripheral neuropathy (PN) and congestive heart failure (CHF). Hypothetical biomarkers shifting benefit and toxicity risk were modeled to determine whether this knowledge alters choice. Previously identified biomarkers were evaluated using this model. Results. Based on CBC analysis, a non-anthracycline regimen was the most preferred. Patients with prior PN had a similar preference for a taxane regimen as those who were PN naïve, but more dramatically shifted preference away from taxanes when PN was described as severe/irreversible. When modeled after hypothetical biomarkers, as the likelihood of PN increased, the preference for taxane-containing regimens decreased; similarly, as the likelihood of CHF increased, the preference for anthracycline regimens decreased. When evaluating validated biomarkers for PN and CHF, this knowledge did alter regimen preference. Conclusions. Patients faced with multi-faceted decisions consider personal experience and perceived risk of recurrent disease. Biomarkers providing information on likelihood of toxicity risk do influence treatment choices, and patients may accept reduced benefit when faced with higher risk of toxicity in the curative setting.