
    Methods of fault tree analysis and their limits


    Markovian and stochastic differential equation based approaches to computer virus propagation dynamics and some models for survival distributions

    This dissertation is divided into two parts. The first part explores probabilistic modeling of the propagation of computer 'malware' (generally referred to as a 'virus') across a network of computers, and investigates the modeling improvements achieved by introducing a random latency period during which an infected computer in the network is unable to infect others. In the second part, two approaches for modeling life distributions in univariate and bivariate setups are developed.

    In Part I, homogeneous and non-homogeneous stochastic susceptible-exposed-infectious-recovered (SEIR) models are explored for the propagation of computer viruses over the Internet, borrowing ideas from mathematical epidemiology. Large computer networks such as the Internet have become essential in today's technological societies and even critical to the financial viability of national and global economies. However, the easy access and widespread use of the Internet make it a prime target for malicious activities, such as the introduction of computer viruses, which pose a major threat to large computer networks. Since an understanding of the underlying dynamics of their propagation is essential to efforts to control them, a fair amount of research attention has been devoted to modeling the propagation of computer viruses, from basic deterministic models based on ordinary differential equations (ODEs) through stochastic models of increasing realism. In the spirit of exploring more realistic probability models that seek to explain the time-dependent transient behavior of computer virus propagation by exploiting the essentially stochastic nature of contacts and communications among computers, the present study considers the suitability and use of the stochastic SEIR model of mathematical epidemiology in the context of computer virus propagation. We adapt the stochastic SEIR model to the study of computer virus prevalence by incorporating a latent period during which a computer is in an 'exposed state': the computer is infected but cannot yet infect other computers until the latency is over. The transition parameters of the SEIR model are estimated using real computer virus data. We develop maximum likelihood (MLE) and Bayesian estimators for the SEIR model parameters and apply them to the 'Code Red worm' data. Since network structure can be an important factor in virus propagation, multi-group stochastic SEIR models for the spreading of computer viruses in heterogeneous networks are explored next. For the multi-group stochastic SEIR model using the Markovian approach, maximum likelihood estimators for the model parameters of interest are derived. The method of least squares is used to estimate the parameters of interest in the multi-group stochastic SEIR-SDE model, based on stochastic differential equations. The models and methodologies are applied to the Code Red worm data. Simulations based on the different models proposed in this dissertation and on deterministic and stochastic models available in the literature are conducted and compared. Based on such comparisons, we conclude that (i) stochastic models using the SEIR framework appear to be clearly superior to previous models of computer virus propagation, even up to its saturation level, and (ii) there is no appreciable difference between homogeneous and heterogeneous (multi-group) models. The 'no difference' finding may, of course, be influenced by the criterion used to assign computers in the overall network to different groups. In our study, the grouping of computers in the total network into subgroups, or clusters, was based on their geographical location only, since no other grouping criterion was available in the Code Red worm data.

    Part II covers two approaches for modeling life distributions in univariate and bivariate setups. In the univariate case, a new partial order based on the idea of 'star-shaped functions' is introduced and explored. In the bivariate context, a class of models for joint lifetime distributions that extends the idea of univariate proportional hazards in a suitable way to the bivariate case is proposed. The expectation-maximization (EM) method is used to estimate the model parameters of interest. For the purpose of illustration, the bivariate proportional hazards model and the method of parameter estimation are applied to two real data sets.
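    As a rough illustration of the kind of stochastic SEIR dynamics studied in Part I, the sketch below simulates a homogeneous SEIR epidemic over a population of hosts with the Gillespie algorithm. The rate constants, population size and horizon are illustrative placeholders, not the estimates obtained from the Code Red worm data, and the sketch is not the dissertation's multi-group or SDE formulation.

```python
import numpy as np

# Minimal sketch of a homogeneous stochastic SEIR model for virus spread over
# N hosts, simulated exactly with the Gillespie algorithm. The rates below
# (contact rate beta, latency rate sigma, removal/patching rate gamma) are
# illustrative, not estimates from the dissertation.

def gillespie_seir(N=10_000, beta=0.8, sigma=0.5, gamma=0.2,
                   I0=10, t_max=60.0, seed=0):
    rng = np.random.default_rng(seed)
    S, E, I, R = N - I0, 0, I0, 0
    t, path = 0.0, [(0.0, S, E, I, R)]
    while t < t_max and (E + I) > 0:
        rates = np.array([beta * S * I / N,   # S -> E : infectious contact
                          sigma * E,          # E -> I : latency ends
                          gamma * I])         # I -> R : host patched/removed
        total = rates.sum()
        t += rng.exponential(1.0 / total)     # waiting time to next event
        event = rng.choice(3, p=rates / total)
        if event == 0:   S, E = S - 1, E + 1
        elif event == 1: E, I = E - 1, I + 1
        else:            I, R = I - 1, R + 1
        path.append((t, S, E, I, R))
    return np.array(path)

path = gillespie_seir()
print(path[-1])   # final time and compartment counts
```

    Replacing the single contact rate with group-specific rates and a mixing matrix between subgroups would give a multi-group variant in the spirit of the heterogeneous models described in the abstract.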

    Patents as Options: Some Estimates of the Value of Holding European Patent Stocks

    In many countries, holders of patents must pay an annual renewal fee in order to keep their patents in force. This paper uses data on the proportion of patents renewed and the renewal fees faced by post-World War II cohorts of patents in France, the United Kingdom, and Germany, in conjunction with a model of patent holders' renewal decisions, to estimate the returns earned from holding patents in these countries. Since patents are often applied for at an early stage in the innovation process, the model allows agents to be uncertain about the sequence of returns that will be earned if the patent is kept in force. Formally, then, the paper presents and solves a discrete-choice optimal stochastic control model, derives the implications of the model for aggregate behaviour, and then estimates the parameters of the model from aggregate data. The estimates enable a detailed description of the evolution of the distribution of returns earned from holding patents over their life spans, and calculations of both the annual returns earned from holding the patents still in force (the patent stocks) in the alternative countries and the distribution of the discounted value of returns earned from holding the patents in a cohort.
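    The renewal decision described above can be pictured as a small dynamic-programming exercise. The sketch below is a deliberately simplified stand-in: it replaces the paper's stochastic learning about returns with a deterministic geometric decay, and the fee schedule, discount factor, decay rate and horizon are invented for illustration.

```python
import numpy as np

# Hypothetical illustration of the renewal decision as a finite-horizon optimal
# stopping problem: each year the holder pays the renewal fee only if the value
# of keeping the patent in force exceeds the value of letting it lapse (zero).

T = 20                           # statutory patent life in years (illustrative)
fees = 50 * 1.3 ** np.arange(T)  # renewal fee schedule, rising with age
beta = 0.95                      # discount factor
delta = 0.85                     # annual decay of returns
returns_grid = np.linspace(0, 5000, 501)

# Value of holding a patent earning r in year t, by backward induction.
V = np.zeros((T + 1, returns_grid.size))
for t in range(T - 1, -1, -1):
    next_r = delta * returns_grid                      # deterministic decay here
    cont = np.interp(next_r, returns_grid, V[t + 1])   # continuation value
    V[t] = np.maximum(0.0, returns_grid - fees[t] + beta * cont)

# A patent is renewed in year t iff V[t] > 0 at its current return level; the
# implied cutoff return rises with age because fees rise and remaining life shrinks.
cutoffs = [returns_grid[np.argmax(V[t] > 0)] if (V[t] > 0).any() else np.inf
           for t in range(T)]
print(np.round(cutoffs, 1))
```

    The resulting age-specific cutoff returns are what allow observed renewal proportions, combined with the fee schedules, to identify the distribution of returns from holding patents.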

    A generalised semi-Markov reliability model.

    The thesis reviews the history and literature of reliability theory. The implicit assumptions of the basic reliability model are identified and their potential for generalisation investigated. A generalised model of reliability is constructed, in which components and systems can take any values in an ordered discrete or continuous state-space representing various levels of partial operation. For the discrete state-space case, the enumeration of suitable system structure functions is discussed, and related to the problem posed by Dedekind in 1897 on the cardinality of the free distributive lattice. Some numerical enumerations are evaluated, and several recursive bounds are derived. In the special case of the usual dichotomic reliability model, a new upper bound is shown to be superior to the best explicit and non-asymptotic upper bound previously derived. The relationship of structure functions to event networks is also examined. Some specific results for the state probabilities of components with small numbers of states are derived.

    Discrete and continuous examples of the generalised model of reliability are investigated, and properties of the model are derived. Various forms of independence between components are shown to be equivalent, but this equivalence does not completely generalise to the property of zero-covariance. Alternative forms of series and parallel connections are compared, together with the effects of replacement. Multiple time scales are incorporated into the formulation. The above generalised reliability model is subsequently specialised and extended so as to study the optimal tuning of partially operating components. Simple drift and catastrophic failure mechanisms are considered. Explicit and graphical solutions are derived, together with several bounds. The optimal retuning of such units is also studied and bounds are again obtained, together with some explicit solutions.
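    The link to Dedekind's 1897 problem mentioned above can be made concrete: in the usual two-state (dichotomic) model, structure functions are monotone Boolean functions of the component states, and counting all monotone Boolean functions of n variables is Dedekind's problem. The brute-force sketch below is a hypothetical illustration, not the thesis's recursive bounds; it reproduces the first few Dedekind numbers and is feasible only for very small n.

```python
from itertools import product

# Count monotone Boolean functions of n variables (Dedekind numbers) by brute
# force: generate every Boolean function on the 2**n state vectors and keep the
# ones that respect the componentwise order. Feasible only for n <= 4.

def dedekind(n: int) -> int:
    points = list(product((0, 1), repeat=n))           # all component state vectors
    leq_pairs = [(i, j) for i, x in enumerate(points)
                 for j, y in enumerate(points)
                 if all(a <= b for a, b in zip(x, y))]  # pairs with x <= y
    count = 0
    for bits in product((0, 1), repeat=len(points)):    # every Boolean function
        if all(bits[i] <= bits[j] for i, j in leq_pairs):
            count += 1                                   # monotone: keep it
    return count

print([dedekind(n) for n in range(4)])   # -> [2, 3, 6, 20]
```

    Coherent structure functions are a slightly smaller family, since they additionally exclude the two constant functions and require every component to be relevant, which is why the enumeration in the thesis is related to, rather than identical with, Dedekind's count.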

    Proceedings of the Eindhoven FASTAR Days 2004 : Eindhoven, The Netherlands, September 3-4, 2004

    The Eindhoven FASTAR Days (EFD) 2004 were organized by the Software Construction group of the Department of Mathematics and Computer Science at the Technische Universiteit Eindhoven. On September 3rd and 4th 2004, over thirty participants, hailing from the Czech Republic, Finland, France, The Netherlands, Poland and South Africa, gathered at the Department to attend the EFD. The EFD were organized in connection with the research on finite automata by the FASTAR Research Group, which is centered in Eindhoven and at the University of Pretoria, South Africa. FASTAR (Finite Automata Systems: Theoretical and Applied Research) is an international research group that aims to lead in all areas related to finite state systems. The work in FASTAR includes both core and applied parts of this field. The EFD therefore focused on the field of finite automata, with an emphasis on practical aspects and applications. Eighteen presentations, mostly on subjects within this field, were given by researchers as well as students from participating universities and industrial research facilities. This report contains the proceedings of the conference, in the form of papers for twelve of the presentations at the EFD. Most of them were initially reviewed and distributed as handouts during the EFD. After the EFD took place, the papers were revised for publication in these proceedings. We would like to thank the participants for their attendance and presentations, which made the EFD 2004 as successful as they were. Based on this success, it is our intention to make the EFD into a recurring event. Eindhoven, December 2004. Loek Cleophas and Bruce W. Watson

    Essays on statistical economics with applications to financial market instability, limit distribution of loss aversion, and harmonic probability weighting functions

    This dissertation comprises four essays. It develops statistical models of decision making in the presence of risk, with applications to economics and finance. The methodology draws upon economics, finance, psychology, mathematics and statistics. Each essay contributes to the literature either by introducing new theories and empirical predictions or by extending old ones with novel approaches. The first essay (Chapter II) includes, to the best of our knowledge, the first known limit distribution of the myopic loss aversion (MLA) index derived from micro-foundations of behavioural economics. That discovery predicts several new results. We prove that the MLA index is in the class of α-stable distributions. This striking prediction is upheld empirically with data from a published meta-study on loss aversion; published data on cross-country loss aversion indexes; and macroeconomic loss aversion index data for the US and South Africa. The latter results provide a contrast to Hofstede's cross-cultural uncertainty avoidance index for risk perception. We apply the theory to information-based asset pricing and show how the MLA index mimics information flows in credit risk models. We embed the MLA index in the pricing kernel of a behavioural consumption-based capital asset pricing model (B-CCAPM) and resolve the equity premium puzzle. Our theory predicts: (1) stochastic dominance of good states in the B-CCAPM Markov matrix induces excess volatility; and (2) a countercyclical fourfold pattern of risk attitudes.

    The second essay (Chapter III) introduces a probability model of "irrational exuberance" and financial market instability implied by index option prices. It is based on a behavioural empirical local Lyapunov exponent (BELLE) process we construct from micro-foundations of behavioural finance. It characterizes the stochastic stability of financial markets, with risk attitude factors in fixed-point neighbourhoods of the probability weighting functions implied by index option prices. It provides a robust early warning system for market crashes across different credit risk sources. We show how the model would have predicted the Great Recession of 2008. The BELLE process characterizes Minsky's financial instability hypothesis that financial markets transit from financial relations that make them stable to those that make them unstable.
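    The α-stable claim for the MLA index rests on the defining stability-under-aggregation property of that class. The sketch below illustrates that property with simulated data only; the stability index is an arbitrary placeholder and none of the numbers come from the essays.

```python
import numpy as np
from scipy.stats import levy_stable, ks_2samp

# Minimal sketch of the stability property behind the limit-distribution claim:
# if a quantity follows an alpha-stable law, then suitably rescaled sums of iid
# draws follow the same law. alpha below is illustrative; beta = 0 keeps the
# law symmetric so no location correction is needed.

alpha = 1.6
x = levy_stable.rvs(alpha, 0.0, size=(5000, 20), random_state=1)

# The sum of n iid stable(alpha) draws, rescaled by n**(1/alpha), is again
# stable(alpha); the Gaussian (alpha = 2) is the only stable law with finite variance.
n = x.shape[1]
rescaled_sums = x.sum(axis=1) / n ** (1 / alpha)
fresh_draws = levy_stable.rvs(alpha, 0.0, size=5000, random_state=2)

# Two-sample KS test: a large p-value means the rescaled sums are statistically
# indistinguishable from fresh draws of the same stable law.
print(ks_2samp(rescaled_sums, fresh_draws))
```

    In an empirical application one would first estimate the four stable parameters from the loss-aversion data (for example by maximum likelihood or quantile methods) before running such a consistency check.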

    Vol. 13, No. 2 (Full Issue)


    Low-cost and efficient fault detection and diagnosis schemes for modern cores

    Continuous improvements in transistor scaling, together with microarchitectural advances, have made possible the widespread adoption of high-performance processors across all market segments. However, the growing reliability threats induced by technology scaling and by the complexity of designs are challenging the production of cheap yet robust systems. Soft error trends are worrying, especially for combinational logic, and parity and ECC codes are becoming insufficient as combinational logic turns into the dominant source of soft errors. Furthermore, experts are warning about the need to also address intermittent and permanent faults during processor runtime, as increasing temperatures and device variations will accelerate inherent aging phenomena. These challenges especially threaten the commodity segments, which impose requirements that existing fault tolerance mechanisms cannot meet. Current techniques based on redundant execution were devised at a time when high penalties were accepted for the sake of high reliability levels. Novel lightweight techniques are therefore needed to enable fault protection in the mass market segments.

    The complexity of designs is also making post-silicon validation extremely expensive. Validation costs exceed design costs, and the number of discovered bugs is growing, both during validation and once products hit the market. Fault localization and diagnosis are the biggest bottlenecks, magnified by huge detection latencies, limited internal observability, and the costly server farms needed to generate test outputs.

    This thesis explores two directions to address some of the critical challenges introduced by unreliable technologies and by the limitations of current validation approaches. We first explore mechanisms for comprehensively detecting multiple sources of failures in modern processors during their lifetime (including transient, intermittent and permanent faults as well as design bugs). Our solutions embrace a paradigm where fault tolerance is built by exploiting high-level microarchitectural invariants that are reusable across designs, rather than relying on re-execution or ad-hoc block-level protection. To do so, we decompose the basic functionalities of processors into high-level tasks and propose three novel runtime verification solutions that, combined, enable global error detection: a computation/register dataflow checker, a memory dataflow checker, and a control flow checker. The techniques use the concept of end-to-end signatures and allow designers to adjust the fault coverage to their needs by trading off area, power and performance. Our fault injection studies reveal that our methods provide high coverage levels while causing significantly lower performance, power and area costs than existing techniques.

    This thesis then extends the applicability of the proposed error detection schemes to the validation phases. We present a fault localization and diagnosis solution for the memory dataflow that combines our error detection mechanism, a new low-cost logging mechanism and a diagnosis program. Selected internal activity is continuously traced and kept in a memory-resident log whose capacity can be expanded to suit validation needs. The solution can catch undiscovered bugs, reducing the dependence on simulation farms that compute golden outputs. Upon error detection, the diagnosis algorithm analyzes the log to automatically locate the bug and to determine its root cause.
Our evaluations show that very high localization coverage and diagnosis accuracy can be obtained at very low performance and area costs. The net result is a simplification of current debugging practices, which are extremely manual, time-consuming and cumbersome. Altogether, the integrated solutions proposed in this thesis enable the industry to deliver more reliable and correct processors as technology evolves into more complex designs and more vulnerable transistors.
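    To give a flavour of the signature-based runtime verification that the control flow checker builds on, the sketch below simulates a software analogue: each basic block carries a compile-time signature, executed blocks are folded into a running signature, and any transfer to an illegal successor is flagged. The control-flow graph, block names and CRC-based signatures are hypothetical; the thesis realizes the idea in hardware at the microarchitectural level.

```python
import zlib

# Illustrative sketch of signature-based control-flow checking (not the thesis's
# actual microarchitecture). A transient fault that diverts control to an
# illegal successor either trips the edge check or corrupts the end-to-end
# signature, and is therefore detected.

# Hypothetical control-flow graph: block -> set of legal successor blocks.
CFG = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}
BLOCK_SIG = {b: zlib.crc32(b.encode()) for b in CFG}   # per-block signatures

def run_with_checker(trace):
    """Fold the executed block trace into a running signature, checking each edge."""
    sig = 0
    prev = None
    for block in trace:
        if prev is not None and block not in CFG[prev]:
            raise RuntimeError(f"illegal control transfer {prev} -> {block}")
        sig = zlib.crc32(BLOCK_SIG[block].to_bytes(4, "little"), sig)
        prev = block
    return sig

golden = run_with_checker(["A", "B", "D"])      # precomputed end-to-end signature
assert golden == run_with_checker(["A", "B", "D"])
try:
    run_with_checker(["A", "D"])                # fault skips block B or C
except RuntimeError as e:
    print("detected:", e)
```

    The end-to-end character comes from comparing the final accumulated signature against a single expected value, so a corruption anywhere along the path is caught with very little per-block state.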