New evidence on the green building rent and price premium
This paper investigates the effect of voluntary eco-certification on the rental and sale prices of US commercial office properties. Hedonic and logistic regressions are used to test whether there are rental and sale price premiums for LEED and Energy Star certified buildings. The results of the hedonic analysis suggest a rental premium of approximately 6% for LEED and Energy Star certification. A sale price premium of approximately 35% was found for 127 price observations involving LEED-rated buildings, and of 31% for 662 price observations involving Energy Star-rated buildings. When LEED-certified buildings are compared to samples of similar buildings identified by a binomial logistic regression, the existence of a rent and sale price premium is confirmed, albeit with differences in the magnitude of the premium. Overall, the results of this study confirm that LEED and Energy Star buildings exhibit higher rental rates and sale prices per square foot after controlling for a large number of location- and property-specific factors.
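As an illustration of the hedonic approach described above, the following is a minimal sketch, assuming synthetic data and illustrative variable names rather than the study's actual dataset or specification: the log of rent is regressed on certification dummies plus a control, and the dummy coefficients approximate the percentage premium.

import numpy as np

# Minimal hedonic regression sketch (illustrative, synthetic data):
# log(rent) = b0 + b1*leed + b2*estar + b3*size + error.
# The coefficient on a certification dummy approximates the premium
# (exp(b) - 1, roughly b for small values).
rng = np.random.default_rng(0)
n = 1000
leed = rng.integers(0, 2, n)      # 1 if LEED certified
estar = rng.integers(0, 2, n)     # 1 if Energy Star certified
size = rng.normal(11, 1, n)       # log building size (a control)
log_rent = 3.0 + 0.06 * leed + 0.06 * estar + 0.2 * size + rng.normal(0, 0.1, n)

# Design matrix with intercept, certification dummies, and the control.
X = np.column_stack([np.ones(n), leed, estar, size])
beta, *_ = np.linalg.lstsq(X, log_rent, rcond=None)
print(f"estimated premium: LEED {beta[1]:.1%}, Energy Star {beta[2]:.1%}")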
Software reliability through fault-avoidance and fault-tolerance
The use of back-to-back, or comparison, testing for regression testing or porting is examined. The efficiency and cost of the strategy are compared with those of manual and table-driven single-version testing. Some of the key parameters that influence the efficiency and cost of the approach are the failure-identification effort during single-version program testing, the extent of implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
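A minimal sketch of back-to-back testing in this spirit, assuming two hypothetical placeholder versions (reference_version and ported_version) rather than any program from the study: both versions run on the same random test data, and any disagreement is flagged as a failure to investigate, without needing an external oracle.

import random

def reference_version(x: float) -> float:
    return x * x

def ported_version(x: float) -> float:
    return x ** 2  # the port under test (happens to agree here)

def back_to_back(n_cases: int = 10_000, seed: int = 1) -> list[float]:
    """Run both versions on the same random inputs; collect disagreements."""
    rng = random.Random(seed)
    discrepancies = []
    for _ in range(n_cases):
        x = rng.uniform(-1e6, 1e6)
        if reference_version(x) != ported_version(x):
            discrepancies.append(x)  # a failure identified without an oracle
    return discrepancies

print(len(back_to_back()), "discrepancies found")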
Experiments in fault tolerant software reliability
Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Fewer than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e., a response span that varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using a more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g., coverage) been used.
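To make the tuple terminology concrete, here is a toy sketch with invented version outputs (not data from the experiment) of how back-to-back testing of k-tuples flags faults: a test case is flagged for a tuple when at least two members disagree, while identical-and-wrong answers shared by the whole tuple escape detection, which is why larger tuples catch more of the similar faults.

from itertools import combinations

# outputs[v][t] holds version v's answer on test case t (toy data).
outputs = {
    "v1": [1, 2, 3], "v2": [1, 2, 3], "v3": [1, 9, 3],  # v3 faulty on case 1
    "v4": [1, 2, 3], "v5": [1, 9, 3],                   # v5 shares v3's fault
}

def detected_by_tuples(outputs, k):
    """Return the (tuple, case) pairs where some k-tuple disagrees."""
    n_cases = len(next(iter(outputs.values())))
    detected = set()
    for tup in combinations(outputs, k):
        for t in range(n_cases):
            answers = {outputs[v][t] for v in tup}
            if len(answers) > 1:   # disagreement flags case t for this tuple
                detected.add((tup, t))
    return detected

# Pairs (2-tuples) catch the fault unless both members share it: the
# pair (v3, v5) agrees on the wrong answer for case 1 and stays silent.
print(len(detected_by_tuples(outputs, 2)), "flagged (tuple, case) pairs")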
Multiversion software reliability through fault-avoidance and fault-tolerance
In this project we have proposed to investigate a number of experimental and theoretical issues associated with the practical use of multi-version software in providing dependable software through fault-avoidance and fault-elimination, as well as run-time tolerance of software faults. In the period reported here we have been working on the following. We have continued collecting data on the relationships between software faults and reliability, and on the coverage provided by the testing process as measured by different metrics (including data-flow metrics). We continued work on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. We have continued studying back-to-back testing as an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. We have also been studying back-to-back testing as a tool for improving the software change process, including regression testing. We continued investigating existing fault-tolerance models and worked on the formulation of new ones. In particular, we have partly finished an evaluation of Consensus Voting in the presence of correlated failures, and are in the process of finishing an evaluation of the Consensus Recovery Block (CRB) under failure correlation. We find both approaches far superior to the commonly employed fixed-agreement-number voting (usually majority voting). We have also finished a cost analysis of the CRB approach.
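For readers unfamiliar with the distinction, the following is a minimal sketch (not the project's actual algorithms) of why consensus voting can outperform fixed-agreement-number majority voting: majority voting fails to decide when no output group exceeds half the versions, while consensus voting accepts the largest agreeing group.

from collections import Counter

def majority_vote(outputs):
    """Decide only if more than half the versions agree."""
    winner, count = Counter(outputs).most_common(1)[0]
    return winner if count > len(outputs) / 2 else None  # None: no decision

def consensus_vote(outputs):
    """Select the largest agreeing group (ties broken arbitrarily here)."""
    return Counter(outputs).most_common(1)[0][0]

outputs = ["a", "a", "b", "c", "d"]   # no strict majority among 5 versions
print(majority_vote(outputs))         # None -> system-level failure
print(consensus_vote(outputs))        # 'a'  -> plurality decision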
Software reliability through fault-avoidance and fault-tolerance
Twenty independently developed but functionally equivalent software versions were used to empirically investigate and compare some properties of N-version programming, Recovery Block, and Consensus Recovery Block, using the majority and consensus voting algorithms. These were also compared with another hybrid fault-tolerant scheme called Acceptance Voting, which uses dynamic versions of consensus and majority voting. Consensus voting provides adaptation of the voting strategy to varying component reliability, failure correlation, and output space characteristics. Since failure correlation among versions effectively reduces the cardinality of the space in which the voter makes decisions, consensus voting is usually preferable to simple majority voting in any fault-tolerant system. When versions have considerably different reliabilities, the version with the best reliability will perform better than any of the fault-tolerant techniques.
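A minimal sketch of the Consensus Recovery Block idea, with a hypothetical acceptance test and quorum rather than the experiment's actual components: the voter is tried first, and only if it cannot reach agreement are the version outputs submitted, in turn, to the acceptance test.

from collections import Counter

def consensus_recovery_block(outputs, acceptance_test, quorum=2):
    """Vote first; fall back to a recovery-block acceptance test."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count >= quorum:                 # voter reaches a decision
        return winner
    for candidate in outputs:           # recovery-block stage
        if acceptance_test(candidate):
            return candidate
    return None                         # system failure

accept = lambda y: isinstance(y, int) and y >= 0  # hypothetical check
print(consensus_recovery_block([4, 4, -1], accept))  # voter decides: 4
print(consensus_recovery_block([7, 3, -1], accept))  # acceptance test: 7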
An Experimental Approach to Comparing Trust in Pastoral and Non-Pastoral Australia
It is generally held that rural Australians are more cooperative in character than their urban counterparts. To explore one aspect of this notion, we conducted an experiment comparing trust and trustworthiness in a sample of Australian senior high school students that included students from both pastoral and non-pastoral backgrounds. While student behaviour is unlikely to mimic adult behaviour, any significant differences between pastoral and non-pastoral students would suggest that differences do exist between the social norms that guide pastoral and non-pastoral communities. We repeated our experiment at three different schools containing students from both pastoral and non-pastoral backgrounds, allowing us to draw comparisons. In total, 78 students participated. Our experiments were based on similar experiments that have been applied across a range of contexts internationally (the trust game, or investment game). We did not find evidence of differences between students with pastoral and non-pastoral backgrounds, either in the level of trust in others or in trustworthiness, though our methods probably have a bias towards this conclusion. Our results concurred with other studies in showing that social distance is an important determinant of the level of cooperation. Keywords: rural-urban relations, economic behaviour, culture, arid zones, semiarid zones, pastoral society.
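For context, the investment (trust) game referred to above follows a standard protocol; the sketch below uses illustrative parameters (a tripling multiplier is conventional, though the paper's exact parameters are not given here). The amount sent proxies trust, and the share returned proxies trustworthiness.

def trust_game(endowment, sent, returned, multiplier=3.0):
    """Payoffs in the standard investment game (illustrative parameters)."""
    assert 0 <= sent <= endowment
    pot = sent * multiplier          # the transfer is multiplied
    assert 0 <= returned <= pot
    sender_payoff = endowment - sent + returned
    responder_payoff = pot - returned
    return sender_payoff, responder_payoff

# Example: $10 endowment, $5 sent (trust), $7 of the $15 pot returned.
print(trust_game(10, 5, 7))  # -> (12, 8.0)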
Parallel algorithms for interactive manipulation of digital terrain models
Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This extends earlier work that demonstrated real-time performance of graphics functions on grids equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, the data is packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.
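A minimal sketch of the packing idea, assuming a cyclic (mod-P) distribution of grid points over a P x P array and illustrative sizes; the MPP's actual memory layout and Pascal routines may differ. Selecting a pan window then amounts to gathering one point per processing element from the packed storage, which is the data movement whose cost dominates.

import numpy as np

P = 4                                        # processor array is P x P
grid = np.arange(16 * 16).reshape(16, 16)    # terrain grid larger than array

# Pack: PE (i, j) holds every grid point whose coordinates are
# congruent to (i, j) mod P (a cyclic distribution across the array).
packed = grid.reshape(16 // P, P, 16 // P, P).transpose(1, 3, 0, 2)

def window_from_packed(packed, row, col):
    """Gather a P x P pan window, one point per PE, from packed storage."""
    P = packed.shape[0]
    out = np.empty((P, P), dtype=packed.dtype)
    for di in range(P):
        for dj in range(P):
            r, c = row + di, col + dj
            # Point (r, c) lives on PE (r mod P, c mod P) at local
            # address (r div P, c div P); moving it to position
            # (di, dj) models the inter-PE data movement.
            out[di, dj] = packed[r % P, c % P, r // P, c // P]
    return out

assert np.array_equal(window_from_packed(packed, 6, 2), grid[6:10, 2:6])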
Developing A Self-Directed Computer Training Program For El Camino College Faculty
This study arose from discussions with the deans of instruction of El Camino College, California, during which it appeared that there was a need to develop self-directed faculty training programs in the use of computers. There was little necessity to convince the computer-literate faculty that microcomputers should be used in their educational activities. However, it was difficult to make faculty who were not computer-literate see the usefulness of microcomputers in their work, or to convince them to take advantage of existing staff development programs in computer training. Developing self-directed programs in computer training and basic computer literacy seemed to be one answer to this problem. The purposes of this study were (1) to establish existing computer-literacy levels among faculty members at El Camino College; (2) to determine the principal obstacles to self-directed computer training, along with strategies designed to overcome these obstacles; and (3) to develop recommendations concerning the structure of self-directed computer training programs for faculty at El Camino College. Faculty members were surveyed to identify the perceived need for self-directed computer training programs and the willingness of various faculty groups to take formal versus self-directed computer training. Differences between math/science and humanities faculty were tested at the .05 level of significance. In both groups the percentage of non-computer-literate faculty was greater than fifty percent. In addition, the most preferred method of computer training for computer-literate college faculty was self-directed or self-taught training, preferred by seventy percent. A twenty-five-member survey group of computer-literate educators at El Camino College ranked the following obstacles as most important: (1) nonavailability of personal computers, (2) lack of troubleshooting assistance when needed, (3) lack of interest in computers, (4) lack of motivation/reward for learning to use computers, (5) inability to understand written directions for use of computers or software, (6) other obligations or demands on time, and (7) computer anxiety. A second survey then produced a list of top-ranked strategies needed to overcome each of the seven major obstacles to self-directed computer training at El Camino College identified by the survey group. Application of the computer-usage survey instrument of Appendix C to two groups of twenty-five randomly selected humanities and math/science faculty showed that there were significant differences in computer-literacy rates between the humanities group and the math/science group. Forty-eight percent of the math/science group were computer-literate, while only twenty percent of the humanities group were. Demographic data compiled during the survey showed that there were no obvious age differences in computer literacy. An extensive literature search substantiated most of the results obtained in this study. For example, researchers have reported that self-instruction is the largest source of computer training among faculty members who are computer-literate, and that self-directed computer training is the preferred means of training for most faculty members. The diffusion and implementation process for this research study took place at the local, state, and national levels.
At El Camino College, the diffusion process consisted of presenting final copies of the study results to (1) the President and Vice-President of Instruction; (2) the departmental deans of instruction; (3) staff development officers; and (4) the College Academic Senate. Interested individuals will receive a briefing based upon the findings of this study, with special emphasis on the recommended strategies needed to overcome the barriers to self-directed computer training. The implication for the improvement of educational practice arising from this Major Applied Research Project is that a long-standing problem in the computer-literacy training of faculty at El Camino College can be solved. It is expected that many of the results and recommendations concerning the structure of self-directed computer training obtained in this study will be applicable to other community colleges across the nation. Sample survey instruments and computer-usage questionnaires are presented in Appendices A, B, and C. Examples of faculty development computer-training programs are given in Appendices D and E. An extensive bibliography on the subject of self-directed computer training for college faculty is also included.
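As a check on the reported group difference, a two-proportion z-test on the stated rates (48% of 25 math/science faculty versus 20% of 25 humanities faculty, with counts inferred from the percentages) is sketched below; it comes out significant at the .05 level used in the study.

from math import sqrt, erf

x1, n1 = 12, 25                 # math/science computer-literate (48%)
x2, n2 = 5, 25                  # humanities computer-literate (20%)
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)       # pooled proportion under the null
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
print(f"z = {z:.2f}, p = {p_value:.3f}")   # approx. z = 2.09, p = .037 < .05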
Experiments in fault tolerant software reliability
The reliability of voting in a fault-tolerant software system with small output spaces was evaluated. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued, and the formulation of new models was initiated.
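A back-of-envelope sketch of why small output spaces stress a voter, assuming independent failures with wrong answers uniform over the incorrect outputs (an idealization, not the study's model): the smaller the output space, the more likely two failing versions agree on the same wrong answer and outvote the correct one.

# With m possible outputs, two independently failing versions produce
# the same wrong answer with probability 1 / (m - 1). For a binary
# output (m = 2) every coincident failure is identical-and-wrong.
for m in (2, 3, 10, 1000):
    print(f"output space {m:>4}: P(identical wrong | both fail) = {1 / (m - 1):.4f}")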