    Software systems for operation, control, and monitoring of the EBEX instrument

    We present the hardware and software systems implementing autonomous operation, distributed real-time monitoring, and control for the EBEX instrument. EBEX is a NASA-funded balloon-borne microwave polarimeter designed for a 14-day Antarctic flight that circumnavigates the pole. To meet its science goals the EBEX instrument autonomously executes several tasks in parallel: it collects attitude data and maintains pointing control in order to adhere to an observing schedule; tunes and operates up to 1920 TES bolometers and 120 SQUID amplifiers controlled by as many as 30 embedded computers; coordinates and dispatches jobs across an onboard computer network to manage this detector readout system; logs over 3 GiB/hour of science and housekeeping data to an onboard disk storage array; responds to a variety of commands and exogenous events; and downlinks multiple heterogeneous data streams representing a selected subset of the total logged data. Most of the systems implementing these functions were tested during a recent engineering flight of the payload and have proven to meet the target requirements. The EBEX ground segment couples uplink and downlink hardware to a client-server software stack, enabling real-time monitoring and command responsibility to be distributed across the public internet or other standard computer networks. Using the emerging dirfile standard as a uniform intermediate data format, a variety of front-end programs provide access to different components and views of the downlinked data products. This distributed architecture was demonstrated operating across multiple widely dispersed sites prior to and during the EBEX engineering flight.
    Comment: 11 pages; to appear in Proceedings of SPIE Astronomical Telescopes and Instrumentation 2010; adjusted metadata for arXiv submission.
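
    As an illustration of the distributed client-server monitoring architecture this abstract describes, the following is a minimal, hypothetical sketch of a telemetry relay that streams housekeeping frames to any number of remote monitoring clients over TCP. The port number, field names, frame cadence, and random stand-in values are assumptions for illustration, not the actual EBEX software interfaces.

        # Minimal sketch of a telemetry relay in the spirit of the ground segment
        # described above. All names (PORT, FIELDS, the JSON frame layout) are
        # hypothetical illustrations, not the instrument's actual interfaces.
        import json
        import random
        import socket
        import threading
        import time

        PORT = 9000                      # hypothetical relay port
        FIELDS = ["az", "el", "t_cryo"]  # hypothetical housekeeping fields

        def serve_client(conn):
            """Stream one JSON housekeeping frame per second to one client."""
            try:
                while True:
                    frame = {f: random.random() for f in FIELDS}  # stand-in data
                    conn.sendall((json.dumps(frame) + "\n").encode())
                    time.sleep(1.0)
            except OSError:
                conn.close()  # client disconnected; drop it

        def main():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen()
            while True:  # each monitoring client gets its own streaming thread
                conn, _ = srv.accept()
                threading.Thread(target=serve_client, args=(conn,), daemon=True).start()

        if __name__ == "__main__":
            main()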

    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network: an underlying network with only a single path connecting its various nodes would be debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even when the underlying network is richly connected and has multiple redundant paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent usage of multiple paths) and increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). Many emerging trends in networking signify that the Internet's future will be multipath, including the use of multipath technology in data center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as Multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely the control-plane problem of how to compute and select routes and the data-plane problem of how to split a flow across the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing, along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
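
    The data-plane question of how to split a flow across computed paths can be made concrete with a small sketch: a weighted, hash-based assignment (in the spirit of weighted ECMP) that keeps every packet of a flow on one path to avoid reordering. The path names, weights, and five-tuple values below are illustrative assumptions, not any particular router's implementation.

        # Minimal sketch of weighted, hash-based flow splitting: packets of the
        # same flow always hash to the same path, while flows in aggregate are
        # spread across paths in proportion to configured weights.
        import hashlib

        PATHS = ["path_a", "path_b", "path_c"]
        WEIGHTS = [4, 2, 1]  # e.g., proportional to measured path capacity

        # Lookup table in which each path appears as often as its weight.
        TABLE = [p for p, w in zip(PATHS, WEIGHTS) for _ in range(w)]

        def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
            """Hash the flow's five-tuple to a path, pinning the flow to it."""
            key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
            digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
            return TABLE[digest % len(TABLE)]

        print(pick_path("10.0.0.1", "10.0.0.2", 49152, 443))  # same path every call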

    The role of expert systems in federated distributed multi-database systems (Ince, Levent)

    A shared information system is a collection of computer systems interconnected by a communication network, with data repositories residing on each computer. These data repositories must somehow be integrated. The purpose of distributed and multi-database systems is to allow users to view collections of data repositories as if they were a single entity. Multidatabase systems, better known as heterogeneous multidatabase systems, are characterized by dissimilar data models, concurrency and optimization strategies, and access methods. Unlike in homogeneous systems, the databases that compose the global database can be based on different data models; it is not necessary that all participant databases use the same data model. Federated distributed database systems are a special case of multidatabase systems: they are completely autonomous and do not rely on a global data dictionary to process distributed queries. Processing distributed query requests in federated databases is very difficult, since there are multiple independent databases with their own rules for query optimization, deadlock detection, and concurrency. Expert systems can play a role in this type of environment by supplying a knowledge base that contains rules for data object conversion, rules for resolving naming conflicts, and rules for exchanging data.
    http://archive.org/details/theroleofexperts109459362
    Turkish Navy author. Approved for public release; distribution is unlimited.
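
    As a concrete, hypothetical illustration of such a knowledge base, the sketch below encodes rules for resolving naming conflicts and converting data objects when records from autonomous databases are rewritten into a global schema. The database names, field names, and conversion rules are invented for illustration and do not come from the thesis itself.

        # Minimal sketch of a rule base for a federated multi-database system:
        # naming rules map local field names onto a canonical global schema,
        # and conversion rules normalize the values.
        NAME_RULES = {  # (database, local field name) -> canonical global name
            ("inventory_db", "qty"): "quantity",
            ("sales_db", "amount"): "quantity",
        }
        CONVERSION_RULES = {  # canonical name -> value normalizer
            "quantity": lambda v: int(v),
        }

        def to_global(database, record):
            """Rewrite one local record into the global schema via the rules."""
            out = {}
            for local_name, value in record.items():
                name = NAME_RULES.get((database, local_name), local_name)
                convert = CONVERSION_RULES.get(name, lambda v: v)
                out[name] = convert(value)
            return out

        print(to_global("inventory_db", {"qty": "42"}))  # {'quantity': 42}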

    Markovian and stochastic differential equation based approaches to computer virus propagation dynamics and some models for survival distributions

    This dissertation is divided into two parts. The first part explores probabilistic modeling of the propagation of computer 'malware' (generally referred to as 'viruses') across a network of computers, and investigates modeling improvements achieved by introducing a random latency period during which an infected computer in the network is unable to infect others. In the second part, two approaches for modeling life distributions in univariate and bivariate setups are developed. In Part I, homogeneous and non-homogeneous stochastic susceptible-exposed-infectious-recovered (SEIR) models are explored for the propagation of computer viruses over the Internet, borrowing ideas from mathematical epidemiology. Large computer networks such as the Internet have become essential in today's technological societies and even critical to the financial viability of the national and global economy. However, the easy access and widespread use of the Internet make it a prime target for malicious activities, such as the introduction of computer viruses, which pose a major threat to large computer networks. Since an understanding of the underlying dynamics of their propagation is essential in efforts to control them, a fair amount of research attention has been devoted to modeling the propagation of computer viruses, starting from basic deterministic models with ordinary differential equations (ODEs) through stochastic models of increasing realism. In the spirit of exploring more realistic probability models that seek to explain the time-dependent transient behavior of computer virus propagation by exploiting the essentially stochastic nature of contacts and communications among computers, the present study introduces a new refinement in such efforts: the suitability and use of the stochastic SEIR model of mathematical epidemiology in the context of computer virus propagation. We adapt the stochastic SEIR model to the study of computer virus prevalence by incorporating the idea of a latent period during which a computer is in an 'exposed state', in the sense that the computer is infected but cannot yet infect other computers until the latency is over. The transition parameters of the SEIR model are estimated using real computer virus data. We develop maximum likelihood (MLE) and Bayesian estimators for the SEIR model parameters, and apply them to the 'Code Red' worm data. Since network structure can be an important factor in virus propagation, multi-group stochastic SEIR models for the spread of computer viruses in heterogeneous networks are explored next. For the multi-group stochastic SEIR model using the Markovian approach, the method of maximum likelihood estimation for the model parameters of interest is derived. The method of least squares is used to estimate the model parameters of interest in the multi-group stochastic SEIR-SDE model, based on stochastic differential equations. The models and methodologies are applied to the Code Red worm data. Simulations based on the different models proposed in this dissertation and on deterministic/stochastic models available in the literature are conducted and compared. Based on such comparisons, we conclude that (i) stochastic models using the SEIR framework appear to be markedly superior to previous models of computer virus propagation, even up to its saturation level, and (ii) there is no appreciable difference between homogeneous and heterogeneous (multi-group) models. The 'no difference' finding may, of course, be influenced by the criterion used to assign computers in the overall network to different groups; in our study, the grouping of computers in the total network into subgroups, or clusters, was based on their geographical location only, since no other grouping criterion was available in the Code Red worm data. Part II covers two approaches for modeling life distributions in univariate and bivariate setups. In the univariate case, a new partial order based on the idea of 'star-shaped functions' is introduced and explored. In the bivariate context, a class of models for joint lifetime distributions that extends the idea of univariate proportional hazards in a suitable way to the bivariate case is proposed. The expectation-maximization (EM) method is used to estimate the model parameters of interest. For the purpose of illustration, the bivariate proportional hazards model and the method of parameter estimation are applied to two real data sets.
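
    A minimal sketch of the kind of stochastic SEIR dynamics described above, simulated with the Gillespie (continuous-time Markov chain) algorithm: a susceptible machine becomes exposed through contact with an infectious one, becomes infectious when its latency ends, and is eventually removed. The rates beta, sigma, and gamma and the population size below are illustrative assumptions, not values estimated from the Code Red data.

        # Gillespie simulation of a stochastic SEIR model for virus propagation.
        # S -> E at rate beta*S*I/N (contact), E -> I at rate sigma*E (latency
        # ends), I -> R at rate gamma*I (cleanup/patching). Parameters are
        # illustrative, not fitted values.
        import random

        def seir_gillespie(n=1000, i0=1, beta=0.5, sigma=0.2, gamma=0.1, t_max=200.0):
            s, e, i, r, t = n - i0, 0, i0, 0, 0.0
            history = [(t, s, e, i, r)]
            while t < t_max and (e > 0 or i > 0):
                rates = [beta * s * i / n,  # S -> E
                         sigma * e,         # E -> I
                         gamma * i]         # I -> R
                total = sum(rates)
                if total == 0.0:
                    break
                t += random.expovariate(total)      # time to the next event
                u = random.random() * total         # pick which event fired
                for event, rate in enumerate(rates):
                    if u < rate:
                        break
                    u -= rate
                if event == 0:
                    s, e = s - 1, e + 1
                elif event == 1:
                    e, i = e - 1, i + 1
                else:
                    i, r = i - 1, r + 1
                history.append((t, s, e, i, r))
            return history

        final = seir_gillespie()[-1]
        print("t=%.1f  S=%d E=%d I=%d R=%d" % final)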