A probabilistic model checking approach to analysing reliability, availability, and maintainability of a single satellite system
Satellites now form a core component of space-based systems such as GPS and GLONASS, which provide location and timing information for a variety of uses. Such satellites are designed to operate in orbit and have lifetimes of 10 years or more. Reliability, availability and maintainability (RAM) analysis of these systems has been indispensable in the design phase of satellites in order to minimise failures or to increase mean time between failures (MTBF), and thus to plan maintainability strategies, optimise reliability and maximise availability. In this paper, we present formal modelling of a single satellite and logical specification of its reliability, availability and maintainability properties. The probabilistic model checker PRISM has been used to perform automated quantitative analyses of these properties.
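The RAM measures named in the abstract above can be illustrated with a minimal sketch. This is not the paper's PRISM model; it is a two-state (up/down) repairable Markov model, for which steady-state availability reduces to MTBF / (MTBF + MTTR). The MTBF and repair-time figures below are invented for illustration.

```python
# Hedged sketch: steady-state availability of a single repairable
# subsystem, modeled as a two-state continuous-time Markov chain (up/down).
# For this model, availability has the closed form MTBF / (MTBF + MTTR).

def steady_state_availability(mtbf_hours, mttr_hours):
    """Long-run fraction of time the system is in the 'up' state."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative values: MTBF of 10 years (87,600 h), mean recovery time of 72 h.
availability = steady_state_availability(87_600, 72)
print(round(availability, 5))  # → 0.99918
```

A full PRISM model would express the same quantity as a long-run reward property over a richer state space; the closed form above is the degenerate two-state case.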
Experimental analysis of computer system dependability
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
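Importance sampling, mentioned in the abstract above as an accelerator for Monte Carlo simulation, can be sketched as follows. This is an illustrative example, not one of the paper's case studies: we estimate a rare "failure" probability, P(Z > 4) for a standard normal Z, by sampling from a normal shifted to the rare region and reweighting each hit by the likelihood ratio of the two densities. All names and parameter values are invented for illustration.

```python
import math
import random

# Hedged sketch: importance sampling for a rare-event probability.
# Target: p = P(Z > 4) for Z ~ N(0, 1), roughly 3.17e-5. Naive Monte Carlo
# with 100,000 draws would see only ~3 hits; sampling from N(4, 1) makes
# the rare region common, and the likelihood ratio keeps the estimate unbiased.

def likelihood_ratio(z, shift):
    # Ratio of the N(0,1) density to the N(shift,1) density at z:
    # exp(-z^2/2) / exp(-(z-shift)^2/2) = exp(-z*shift + shift^2/2)
    return math.exp(-z * shift + shift ** 2 / 2)

def importance_estimate(threshold, shift, n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(shift, 1.0)        # draw from the shifted proposal
        if z > threshold:                # indicator of the rare event
            total += likelihood_ratio(z, shift)
    return total / n

est = importance_estimate(4.0, 4.0, 100_000)
# est is close to the true tail probability ~3.17e-5, with far lower
# variance than naive sampling at the same budget.
```

The variance reduction comes from choosing the proposal so that the rare event is no longer rare; the same idea underlies fast simulation of highly reliable systems, where failures are the rare events.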
Advanced information processing system for advanced launch system: Avionics architecture synthesis
The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real-time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS), being developed jointly by NASA and the Department of Defense to launch heavy payloads into low earth orbit at one tenth the cost (per pound of payload) of current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS-for-ALS architecture synthesis process, starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture, is described.
Security Analysis and Improvement Model for Web-based Applications
Today the web has become a major conduit for information. As the World Wide Web's popularity continues to increase, information security on the web has become an increasing concern. Web information security is related to availability, confidentiality, and data integrity. According to reports from http://www.securityfocus.com in May 2006, operating systems account for 9% of vulnerabilities, web-based software systems for 61%, and other applications for 30%.

In this dissertation, I present a security analysis model using the Markov Process Model. Risk analysis is conducted using the fuzzy logic method and information entropy theory. In a web-based application system, security risk depends chiefly on the current states of the software and hardware systems, and is independent of the web application system's past states. Therefore, web-based applications can be approximately modeled by the Markov Process Model. They can be conceptually expressed with the discrete states (web_client_good; web_server_good, web_server_vulnerable, web_server_attacked, web_server_security_failed; database_server_good, database_server_vulnerable, database_server_attacked, database_server_security_failed) as the state space of the Markov Chain. The vulnerable behavior and system response of web-based applications are analyzed in this dissertation. The analyses focus on functional availability-related aspects: the probability of reaching a particular security-failed state and the mean time to security failure of a system. A vulnerability risk index is classified into three levels as an indicator of the level of security (low level, high level, and failed level). An illustrative application example is provided.

As the second objective of this dissertation, I propose a security improvement model for web-based applications using GeoIP services with formal methods. In the security improvement model, web access is authenticated by role-based access control using user logins, remote IP addresses, and physical locations as subject credentials, combined with the requested objects and privilege modes. Access control algorithms are developed for subjects, objects, and access privileges. A secure implementation architecture is presented. In summary, this dissertation develops a security analysis and improvement model for web-based applications. Future work will address validation of the Markov Process Model when security data collection becomes easier, and evaluation of the security improvement model's performance.
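The two quantities the dissertation analyzes, the probability of reaching a security-failed state and the mean time to security failure, are standard absorbing-Markov-chain computations. As a hedged sketch (the transition probabilities and the reduced state set below are invented for illustration, not the dissertation's calibrated model), the mean time to the failed state solves the linear system (I - Q)t = 1 over the transient states:

```python
# Hedged sketch: mean time to the security-failed state in a small
# absorbing Markov chain over illustrative web-server security states.
# Q holds one-step probabilities among transient states; any probability
# mass missing from a row flows to the absorbing "failed" state.

STATES = ["good", "vulnerable", "attacked"]  # transient states
Q = [
    [0.90, 0.10, 0.00],  # good -> good / vulnerable / attacked
    [0.30, 0.50, 0.20],  # vulnerable: may be patched (-> good) or attacked
    [0.20, 0.00, 0.40],  # attacked: remaining 0.40 is absorbed into "failed"
]

def mean_time_to_failure(Q):
    """Solve (I - Q) t = 1 by Gaussian elimination with partial pivoting.
    t[i] is the expected number of steps from transient state i until
    absorption in the security-failed state."""
    n = len(Q)
    # Augmented matrix [I - Q | 1]
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):                          # forward elimination
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    t = [0.0] * n                                 # back substitution
    for i in reversed(range(n)):
        t[i] = (A[i][n] - sum(A[i][j] * t[j]
                              for j in range(i + 1, n))) / A[i][i]
    return dict(zip(STATES, t))

times = mean_time_to_failure(Q)
print(times)  # from "good": ≈47.5 steps; "vulnerable": ≈37.5; "attacked": ≈17.5
```

The probability of reaching a particular failed state generalizes the same way: replace the right-hand side vector of ones with the one-step absorption probabilities into that state.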
Evaluation of software dependability
It has been said that the term software engineering is an aspiration, not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.