35,442 research outputs found
David G. Sansing Memorialized With Historical Marker
Plaque between Bondurant and Bishop halls pays tribute to iconic university historian
Using reversible computing to achieve fail-safety
This paper describes a fail-safe design approach that can be used to achieve a high level of fail-safety with conventional computing equipment that may contain design flaws. The method is based on the well-established concept of reversible computing. Conventional programs destroy information and hence cannot be reversed. However, it is easy to define a virtual machine that preserves sufficient intermediate information to permit reversal. Any program implemented on this virtual machine is inherently reversible. The integrity of a calculation can therefore be checked by reversing back from the output values and checking for the equivalence of intermediate values and original input values. By using different machine instructions on the forward and reverse paths, errors in any single instruction execution can be revealed. Random corruptions in data values are also detected. An assessment of the performance of the reversible computer design for a simple reactor trip application indicates that it runs about ten times slower than a conventional software implementation and requires about 20 kilobytes of additional storage. The trials also show a fail-safe bias of better than 99.998% for random data corruptions, and it is argued that failures due to systematic flaws could achieve similar levels of fail-safe bias. Potential extensions and applications of the technique are discussed.
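The idea of preserving otherwise-destroyed information and checking it on a reverse pass can be sketched as follows. This is a minimal illustration, not the paper's virtual machine: the function names and the choice of addition are assumptions. Each forward step records the operand it would otherwise lose; the reverse pass undoes each step with a *different* instruction (subtraction rather than addition, per the paper's forward/reverse-path diversity) and the calculation is accepted only if the original input state is recovered.

```python
def forward_add(acc, operand, trace):
    """Forward step acc := acc + operand, saving the operand so the
    step can later be reversed (conventional code would destroy it)."""
    trace.append(operand)
    return acc + operand


def reverse_add(acc, trace):
    """Undo one forward step using subtraction -- a different machine
    instruction, so a fault in either instruction is revealed."""
    return acc - trace.pop()


def checked_sum(values):
    """Sum `values` on the forward path, then reverse back and verify
    that the original starting state (0) is recovered before releasing
    the result."""
    trace = []
    acc = 0
    for v in values:
        acc = forward_add(acc, v, trace)
    result = acc
    for _ in values:
        acc = reverse_add(acc, trace)
    if acc != 0:
        # Mismatch between recovered and original state: fail safe.
        raise RuntimeError("fail-safe check tripped: state not recovered")
    return result
```

A corrupted intermediate value or a faulty add/subtract would leave the reversed state different from the original input, tripping the check instead of emitting a wrong answer.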
MC/DC based estimation and detection of residual faults in PLC logic networks
A logic coverage measure related to MC/DC testing is used to estimate residual faults. The residual fault prediction method is evaluated on an industrial PLC logic example. A randomized form of MC/DC testing is used to maximize coverage growth and fault-detection efficiency.
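The MC/DC criterion behind this coverage measure requires, for each condition in a decision, a pair of test vectors that differ only in that condition and flip the decision outcome. A small sketch (the example decision and function names are illustrative assumptions, not the industrial PLC network from the paper):

```python
from itertools import product


def decision(a, b, c):
    """Example relay-style decision, standing in for one PLC logic block."""
    return (a and b) or c


def mcdc_pairs(cond_index):
    """Enumerate input pairs that differ only in condition `cond_index`
    and change the decision outcome -- the MC/DC independence
    requirement for that condition (each unordered pair appears twice)."""
    pairs = []
    for vec in product([False, True], repeat=3):
        flipped = list(vec)
        flipped[cond_index] = not flipped[cond_index]
        if decision(*vec) != decision(*flipped):
            pairs.append((vec, tuple(flipped)))
    return pairs
```

A randomized MC/DC strategy, as described above, would draw random input vectors and keep those that complete such independence pairs, which tends to grow coverage faster than enumerating vectors in a fixed order.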
Techniques for determination of impact forces during walking and running in a zero-G environment
One of the deleterious adaptations to the microgravity conditions of space flight is the loss of bone mineral content. This loss appears to be at least partially attributable to the minimal skeletal axial loading concomitant with microgravity. The purpose of this study was to develop and fabricate the instruments and hardware necessary to quantify the vertical impact forces (Fz) imparted to users of the space shuttle passive treadmill during human locomotion in a three-dimensional zero-gravity environment. The shuttle treadmill was instrumented using a Kistler forceplate to measure vertical impact forces. To verify that the instruments and hardware were functional, they were tested both in the one-G environment and aboard the KC-135 reduced-gravity aircraft. The magnitudes of the impact loads generated in one-G on the shuttle treadmill for walking at 0.9 m/sec and running at 1.6 and 2.2 m/sec were 1.1, 1.7, and 1.7 G, respectively, compared with loads of 0.95, 1.2, and 1.5 G in the zero-G environment.
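The G figures quoted above are forceplate readings expressed as multiples of body weight. A minimal sketch of that conversion, assuming a raw Fz reading in newtons and a known body mass (calibration offsets and treadmill restraint-harness loads are ignored here):

```python
G = 9.81  # standard gravitational acceleration, m/s^2


def impact_in_g(fz_newtons, body_mass_kg):
    """Express a vertical forceplate reading Fz as a multiple of
    one-G body weight, the units used for the impact loads above."""
    return fz_newtons / (body_mass_kg * G)
```

For a 70 kg subject, a reading equal to one body weight (686.7 N) comes out as exactly 1.0 G.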
Overcoming non-determinism in testing smart devices: how to build models of device behaviour
Justification of smart instruments has become an important topic in the nuclear industry. In practice, however, the publicly available artefacts are often the only source of information about the device. Therefore, in many cases independent black-box testing may be the only way to increase confidence in the device. In this paper we provide a set of recommendations, which we consider to be best practice for performing black-box assessments. We present our method of testing smart instruments, in which we use the publicly available artefacts only. We present a test harness and describe a method of test automation. We focus on the analysis of test results, which is made particularly complex by the inherent non-determinism in the testing of analogue devices. In the paper we analyse the sources of non-determinism, which for instance may arise from inaccuracy in an analogue measurement made by the device when two alternative actions are possible. We propose three alternative ideas on how to build models of device behaviour which can cope with this kind of non-determinism. We compare and contrast these three solutions, and express our recommendations. Finally, we use a case study, in which a black-box assessment of two similar smart instruments is performed, to illustrate the differences between the solutions.
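The kind of non-determinism described above can be sketched as a tolerance-band test oracle. This is an illustrative assumption, not one of the paper's three models: when the analogue reading is within the device's measurement accuracy of the setpoint, either action is treated as acceptable, so a test is not failed merely because the device resolved the ambiguity differently from the model.

```python
def oracle(reading, setpoint, tolerance, observed_trip):
    """Oracle for a trip-style smart instrument with analogue input
    accuracy +/- tolerance.  Returns True if the observed behaviour is
    acceptable.  Names and structure are illustrative assumptions."""
    if reading > setpoint + tolerance:
        return observed_trip is True      # clearly above: must trip
    if reading < setpoint - tolerance:
        return observed_trip is False     # clearly below: must not trip
    return True  # ambiguous band: either action is acceptable
```

Tightening or widening the band trades off the risk of falsely failing a healthy device against the risk of masking a genuine fault, which is one axis along which alternative behaviour models can be compared.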
Letter to Philander Chase
G. W. Marriott tells Philander Chase that the Bishop of Salisbury wanted Chase to visit him, but the post was so late that Chase missed the time to meet entirely. Marriott anticipates that Chase's Cause will be well supported in Oxford, and reminds him that getting support in Canada is also important. The Dean of Westminster has agreed to support Chase. Marriott advises Chase to write to the Bishop of Salisbury as soon as he can, as the Bishop's support will be influential, especially in Ireland.https://digital.kenyon.edu/chase_letters/1106/thumbnail.jp
Worst Case Reliability Prediction Based on a Prior Estimate of Residual Defects
In this paper we extend an earlier worst-case bound reliability theory to derive a worst-case reliability function R(t), which gives the worst-case probability of surviving a further time t given an estimate of residual defects in the software N and a prior test time T. The earlier theory and its extension are presented, and the paper also considers the case where there is a low probability of any defect existing in the program. For the "fractional defect" case, there can be a high probability of surviving any subsequent time t. The implications of the theory are discussed and compared with alternative reliability models.
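A rough numerical sketch of the ingredients above, under stated assumptions: the earlier worst-case theory is taken to give a post-test failure-rate bound of N/(eT) for N residual defects after test time T, and survival over a further time t is then illustrated with a constant-rate exponential. Both the bound's form and the constant-rate simplification are assumptions for illustration; the paper's actual R(t) and its fractional-defect treatment are not reproduced here.

```python
import math


def worst_case_rate(n_defects, test_time):
    """Assumed worst-case post-test failure-rate bound N/(e*T)
    (illustrative form; see the paper for the derived function)."""
    return n_defects / (math.e * test_time)


def survival(n_defects, test_time, t):
    """Illustrative worst-case survival probability over a further
    time t, simplified to exp(-lambda_max * t) with a constant rate."""
    return math.exp(-worst_case_rate(n_defects, test_time) * t)
```

Even this crude sketch shows the qualitative behaviour discussed above: more prior testing (larger T) or fewer estimated defects (smaller N, including fractional values below 1) pushes the survival probability for any fixed t toward 1.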