Study of fault-tolerant software technology
Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the implications of using fault-tolerant software in real-time aerospace applications for computer architecture and design, hardware, operating systems, and programming languages (including Ada). It concludes that fault-tolerant software has progressed beyond the pure research stage and finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to implement software fault tolerance effectively and efficiently.
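One of the classic constructs in the software fault-tolerance literature this survey covers is the recovery block: a primary routine runs first, an acceptance test checks its result, and alternate routines are tried in turn if the test fails. As a purely illustrative aid (in Python rather than Ada, and not taken from the paper), here is a minimal sketch of that pattern; the routines and acceptance test are invented for the example.

```python
# Minimal sketch of the recovery-block scheme: try each alternate in
# order, validate its output with an acceptance test, and fall back
# to the next routine if the test fails or an exception is raised.
# All routines and the acceptance test are illustrative only.

def recovery_block(alternates, acceptance_test, *args):
    """Run alternates in order until one passes the acceptance test."""
    for routine in alternates:
        try:
            result = routine(*args)
        except Exception:
            continue  # treat a crash like a failed acceptance test
        if acceptance_test(result, *args):
            return result
    raise RuntimeError("all alternates failed the acceptance test")

# Example: two square-root routines guarded by a residual check.
def primary(x):
    return x ** 0.5                          # fast path

def alternate(x):
    lo, hi = 0.0, max(1.0, x)                # slower bisection fallback
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < x else (lo, mid)
    return (lo + hi) / 2

def acceptance_test(r, x):
    return abs(r * r - x) < 1e-6             # residual must be small

print(recovery_block([primary, alternate], acceptance_test, 2.0))
```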
Automated Detection of Usage Errors in Non-Native English Writing
To identify inappropriate word combinations in a raw English corpus, we employ an unsupervised novelty detection algorithm based on one-class support vector machines (OC-SVMs) and extract sentences containing word sequences whose frequency of appearance is significantly low in native English writing. Combined with n-gram language models and document categorization techniques, the OC-SVM classifier assigns given sentences to one of two groups: sentences containing errors and those without errors. Accuracies are 79.30% with the bigram model, 86.63% with the trigram model, and 34.34% with the four-gram model.
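The paper's exact features, corpus, and parameters are not reproduced here, but the general shape of such a pipeline can be sketched with scikit-learn's OneClassSVM fit over bag-of-n-gram counts: the model is trained on native-English sentences only, and sentences scored as outliers are flagged as candidate usage errors. The training sentences, ngram_range, and nu below are placeholders, not the paper's settings.

```python
# Sketch of one-class novelty detection over n-gram features, in the
# spirit of the approach described above. Fit on native-English
# sentences only; sentences predicted as outliers are candidate errors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import OneClassSVM

native_sentences = [            # stand-in for a raw native-English corpus
    "she is interested in music",
    "he made a decision quickly",
    "they paid attention to the details",
]
test_sentences = [
    "he made a mistake",        # plausible native usage
    "he did a mistake",         # common learner error ("do a mistake")
]

# Bag-of-bigram counts as a crude proxy for n-gram frequency features.
vec = CountVectorizer(ngram_range=(2, 2), analyzer="word")
X_train = vec.fit_transform(native_sentences)

# nu bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(X_train.toarray())

X_test = vec.transform(test_sentences).toarray()
for sent, label in zip(test_sentences, clf.predict(X_test)):
    # predict() returns +1 for inliers, -1 for novelties (suspect usage)
    print(("OK " if label == 1 else "ERR"), sent)
```

With a realistic corpus the decision boundary separates frequent native n-gram patterns from rare ones; on this toy data the labels merely demonstrate the interface.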
Implementation of a production Ada project: The GRODY study
The use of the Ada language and design methodologies that encourage full use of its capabilities have a strong impact on all phases of the software development life cycle. At the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC), the Software Engineering Laboratory (SEL) conducted an experiment in the parallel development of two flight dynamics systems in FORTRAN and Ada. The differences observed during the implementation, unit testing, and integration phases of the two projects are described, and the lessons learned during the implementation phase of the Ada development are outlined. Included are recommendations for future Ada development projects.
Big data analytics: Computational intelligence techniques and application areas
Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and the economy, and discuss challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications to real-world smart city problems can be developed using these tools and techniques. We present a case study of intelligent transportation in the context of a smart city, and a novel data-modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation, and commercialization related to Big Data, its applications, and its deployment.
An experiment in software reliability
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software with ultra-reliability requirements, using n-version programming for error detection and repetitive-run modeling for failure and fault rate estimation. The results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred at significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults, which may provide one explanation for software reliability decay.
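The error-detection mechanism named here, n-version programming, runs independently developed implementations on the same inputs and votes on their outputs; dissent localizes a suspect version. The sketch below shows a simple majority voter; the three "versions" are trivial stand-ins invented for illustration, not the radar-tracking implementations from the experiment.

```python
# Toy majority voter in the style of n-version programming: run N
# independently written versions on the same input, accept the
# majority answer, and report dissenting versions as suspect.
from collections import Counter

def version_a(xs):
    return sum(xs) / len(xs)                 # straightforward mean

def version_b(xs):
    total = 0.0
    for x in xs:                             # loop-based mean
        total += x
    return total / len(xs)

def version_c(xs):
    return sum(xs) / (len(xs) - 1)           # seeded bug: wrong divisor

def vote(versions, xs, ndigits=9):
    # Round before comparing so benign float noise is not "dissent".
    outputs = [round(v(xs), ndigits) for v in versions]
    answer, count = Counter(outputs).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority among outputs: %r" % outputs)
    suspects = [v.__name__ for v, o in zip(versions, outputs) if o != answer]
    return answer, suspects

result, suspects = vote([version_a, version_b, version_c], [1.0, 2.0, 3.0])
print(result, "suspect versions:", suspects)   # flags version_c
```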
Recommended from our members
Articular human joint modelling
Copyright © Cambridge University Press 2009. The work reported in this paper encapsulates the theories and algorithms developed to drive the core analysis modules of software developed to model the musculoskeletal structure of anatomic joints. Its newly developed algorithms, based on local bone-surface and contact-geometry joint kinematics, distinguish the proposed modeller from currently available ones. Many modellers can model gross human body motion, but none offers the complete elements of joint modelling; in every case, joint modelling appears to be an extension of a core analysis capability that is musculoskeletal motion dynamics. An analysis framework focused on human joints would therefore have significant benefit and potential for use in many orthopaedic applications. The local mobility of joints has a significant influence in human motion analysis and in the understanding of joint loading, tissue behaviour, and contact forces. However, developing a bone-surface-based joint modeller raises a number of major problems, from tissue idealization to surface-geometry discretization and non-linear motion analysis. This paper presents the following:
(a) The physical deformation of biological tissues as linear or non-linear viscoelastic deformation, based on spring-dashpot elements.
(b) Linear dynamic multibody modelling, where the linear formulation is established for small motions and is particularly useful for calculating the equilibrium position of the joint. This model can also be used for finding small-motion behaviour or loading under static conditions, and has the potential to quantify joint laxity.
(c) Non-linear dynamic multibody modelling, where a non-matrix, algorithmic formulation is presented. The approach handles complex material and geometric nonlinearity easily.
(d) Shortest-path algorithms for calculating soft-tissue line-of-action geometries. The developed algorithms are based on calculating minimum ‘surface mass’ and ‘surface covariance’; an improved version of the ‘surface covariance’ algorithm is described as ‘residual covariance’. The resulting path is used to establish the direction of forces and moments acting on joints, information needed for the linear or non-linear treatment of the joint motion.
(e) The treatment of collision. In the virtual world, the difficulty in analysing bodies in motion arises from body interpenetrations. The collision algorithm proposed in the paper finds the shortest projected ray from one body to the other, where the projection of the body is determined by the resultant forces acting on it through soft-tissue connections under tension; this enables accurate calculation of the collision condition of non-convex objects. After initial collision detection, the analysis attaches special springs (stiffness only normal to the surfaces) at the ‘potentially colliding points’ and recalculates the motion of the bodies. The collision algorithm incorporates rotation as well as translation and continues until joint equilibrium is achieved.
Finally, the results obtained with the software are compared with experimental results obtained using cadaveric joints.
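Point (a) idealizes tissue as spring-dashpot networks; the simplest such element is the Kelvin-Voigt model, in which a spring and damper act in parallel so the force is F = k·x + c·ẋ. The sketch below integrates one element's creep response under a step load with explicit Euler; the stiffness, damping, and load values are arbitrary illustrations, not parameters from the paper.

```python
# Minimal Kelvin-Voigt (parallel spring-dashpot) element, the simplest
# linear viscoelastic tissue idealization of the kind named in (a):
#     F = k*x + c*dx/dt
# Under a constant load F0, displacement relaxes toward F0/k with time
# constant c/k. Parameter values are illustrative only.

def kelvin_voigt_creep(F0, k, c, dt=1e-3, t_end=1.0):
    """Explicit-Euler creep response of one element under a step load."""
    x, history = 0.0, []
    for i in range(int(t_end / dt)):
        dxdt = (F0 - k * x) / c     # rearranged from F0 = k*x + c*dx/dt
        x += dt * dxdt
        history.append((i * dt, x))
    return history

# Example: 10 N step load, k = 100 N/m, c = 20 N*s/m -> tau = 0.2 s,
# so x approaches the elastic limit F0/k = 0.1 m within a few tau.
trace = kelvin_voigt_creep(F0=10.0, k=100.0, c=20.0)
print("x(t=1s) =", round(trace[-1][1], 4), "m (limit 0.1 m)")
```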
Integrated analysis of error detection and recovery
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagation, which seriously degrades its fault-tolerance capability. Several detection models were developed that enable analysis of the effect of detection mechanisms on subsequent error-handling operations and on overall system reliability. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The error-recovery strategies employed depend on the detection mechanisms and the available redundancy. Several recovery methods, including rollback recovery, are considered, and the recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
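Rollback recovery, one of the recovery methods considered, periodically checkpoints process state and, when the detection mechanism flags an error, restores the last checkpoint and re-executes; the work redone between checkpoint and detection is the recovery overhead the analysis evaluates. The sketch below is a minimal illustration with a deliberately flaky step; the task, checkpoint interval, and injected fault are invented for the example.

```python
# Minimal checkpoint/rollback loop: snapshot state every few steps and,
# on a detected error, restore the last checkpoint and re-execute the
# lost steps. The re-executed steps are the recovery overhead.
import copy, random

random.seed(1)
CHECKPOINT_EVERY = 5

def step(state, i):
    if random.random() < 0.1:                  # injected transient fault
        raise RuntimeError("error detected at step %d" % i)
    state["total"] += i
    return state

state = {"total": 0}
checkpoint, ckpt_step = copy.deepcopy(state), 0
overhead = 0                                   # re-executed steps
i = 0
while i < 20:
    try:
        state = step(state, i)
        i += 1
        if i % CHECKPOINT_EVERY == 0:          # periodic checkpoint
            checkpoint, ckpt_step = copy.deepcopy(state), i
    except RuntimeError:
        overhead += i - ckpt_step              # work lost since checkpoint
        state, i = copy.deepcopy(checkpoint), ckpt_step  # roll back

print("total =", state["total"], "re-executed steps =", overhead)
```

Because the injected faults are transient, the final total is unaffected; only the overhead counter grows, which mirrors how recovery overhead indexes detection and reconfiguration quality in the analysis above.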