
    Local area network with fault-checking, priorities, and redundant backup

    This invention is a redundant, error-detecting and error-correcting local area networked computer system having a plurality of nodes, each including a network connector board within the node for connecting to an interfacing transceiver operably attached to a network cable. There is a first network cable disposed along a path to interconnect the nodes. The first network cable includes a plurality of first interfacing transceivers attached thereto. A second network cable is disposed in parallel with the first cable and, in like manner, includes a plurality of second interfacing transceivers attached thereto. There are a plurality of three-position switches, each having a signal input, three outputs for individual selective connection to the input, and a control input for receiving signals designating which of the outputs is to be connected to the signal input. Each of the switches includes means for designating a response address for responding to addressed signals appearing at the control input, and each of the switches further has its signal input connected to a respective one of the input/output lines from the nodes. Also, one of the three outputs is connected to a respective one of the plurality of first interfacing transceivers. There is master switch control means having an output connected to the control inputs of the plurality of three-position switches and an input for receiving directive signals, for outputting addressed switch-position signals to the three-position switches. There is also monitor and control computer means having a pair of network connector boards therein, connected to respective ones of the first interfacing transceivers and the second interfacing transceivers, and an output connected to the input of the master switch means. The monitor and control computer monitors the status of the networked computer system by sending messages to the nodes and receiving and verifying messages therefrom, and it sends control signals to the master switch to cause respective ones of the nodes to use a desired one of the first and second cables for transmitting and receiving messages, or to disconnect desired ones of the nodes from both cables.
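
    The switching behaviour described above can be pictured as a small control loop: a monitor computer probes every node over the primary cable, falls back to the redundant cable when a node stops answering, and isolates nodes that answer on neither cable. The Python sketch below only illustrates that idea; the names (MasterSwitch, echo_ok, SwitchPosition, and so on) are hypothetical and are not taken from the patent.

        # Hypothetical sketch of the monitor-and-switch-control loop; not the
        # patented hardware, just an illustration of its failover logic.
        from enum import Enum

        class SwitchPosition(Enum):
            CABLE_A = 1        # connect the node to the first network cable
            CABLE_B = 2        # connect the node to the redundant second cable
            DISCONNECTED = 3   # isolate the node from both cables

        class MasterSwitch:
            """Stand-in for the master switch control means."""
            def set_position(self, node_address, position):
                # In hardware this would emit an addressed switch-position signal
                # on the control line; here it is only a placeholder.
                print(f"node {node_address} -> {position.name}")

        def monitor_cycle(nodes, cable_a, cable_b, master):
            """One monitoring pass over all nodes, with failover and isolation."""
            for node in nodes:
                if cable_a.echo_ok(node):          # node answers on the primary cable
                    continue
                if cable_b.echo_ok(node):          # primary failed, secondary works
                    master.set_position(node.address, SwitchPosition.CABLE_B)
                else:                              # unreachable on both cables
                    master.set_position(node.address, SwitchPosition.DISCONNECTED)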

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.

    Tarmo: A Framework for Parallelized Bounded Model Checking

    This paper investigates approaches to parallelizing Bounded Model Checking (BMC) for shared-memory environments as well as for clusters of workstations. We present a generic framework for parallelized BMC named Tarmo. Our framework can be used with any incremental SAT encoding for BMC, but for the results in this paper we use only the current state-of-the-art encoding for full PLTL. Using this encoding allows us to check both safety and liveness properties, contrary to earlier work on distributing BMC that is limited to safety properties only. Despite our focus on BMC after it has been translated to SAT, existing distributed SAT solvers are not well suited for our application. This is because solving a BMC problem is not solving a set of independent SAT instances but rather involves solving multiple related SAT instances, encoded incrementally, where the satisfiability of each instance corresponds to the existence of a counterexample of a specific length. Our framework includes a generic architecture for a shared clause database that allows easy clause sharing between SAT solver threads solving various such instances. We present extensive experimental results obtained with multiple variants of our Tarmo implementation. Our shared-memory variants perform significantly better than conventional single-threaded approaches, a result many users can benefit from now that multi-core and multi-processor technology is widely available. Furthermore, we demonstrate that our framework can be deployed in a typical cluster of workstations, where several multi-core machines are connected by a network.
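
    The clause-sharing idea at the heart of such a framework can be sketched in a few lines: each worker thread solves the SAT instance for one bound k and publishes the clauses it learns to a thread-safe store that other workers may import before solving their own bounds. The sketch below is an illustration under that assumption only; the names SharedClauseDatabase and solve_bound are hypothetical and do not reflect Tarmo's actual architecture or API.

        # Illustrative only: a shared clause pool for several BMC worker threads,
        # not Tarmo's real implementation.
        import threading

        class SharedClauseDatabase:
            """Thread-safe pool of learned clauses shared between solver threads."""
            def __init__(self):
                self._clauses = set()
                self._lock = threading.Lock()

            def publish(self, clause):
                with self._lock:
                    self._clauses.add(clause)

            def snapshot(self):
                with self._lock:
                    return set(self._clauses)

        def solve_bound(k, db):
            """Hypothetical worker: solve the k-step unrolling, sharing clauses."""
            imported = db.snapshot()               # reuse clauses learned by other bounds
            # ... run an incremental SAT solver on the k-step BMC instance here ...
            db.publish(frozenset({-1, k + 1}))     # placeholder for a learned clause

        db = SharedClauseDatabase()
        workers = [threading.Thread(target=solve_bound, args=(k, db)) for k in range(1, 5)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()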

    Libraries in transition: evolving the information ecology of the Learning Commons: a sabbatical report

    This sabbatical report studied various models in order to determine best practices for the design, implementation, and service of the Learning Commons, a library service model which functionally and spatially integrates library services, information technology services, and media services to provide a continuum of services to the user.

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
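
    For contrast with the symbolic techniques discussed above, explicit state-space exploration is essentially breadth-first reachability over the model's transition relation. The sketch below uses a toy transition function (next_states is an invented stand-in, not taken from the paper) to show why the explicit approach breaks down once the reachable set no longer fits in ordinary data structures: every state must be stored and expanded individually.

        # Minimal sketch of explicit state-space exploration (breadth-first
        # reachability); next_states is a toy stand-in for a real transition relation.
        from collections import deque

        def next_states(state):
            """Hypothetical transition relation of a small discrete-state model."""
            return [(state + 1) % 100, (state * 3) % 100]

        def explore(initial):
            """Return the set of states reachable from the initial state."""
            reachable = {initial}
            frontier = deque([initial])
            while frontier:                      # classic BFS over the state graph
                s = frontier.popleft()
                for t in next_states(s):
                    if t not in reachable:       # each state is stored and expanded once
                        reachable.add(t)
                        frontier.append(t)
            return reachable

        print(len(explore(0)))                   # e.g. count the states reachable from 0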

    Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

    Public Libraries and the Internet 2006

    Examines the capability of public libraries to provide and sustain public-access Internet services and resources that meet community needs, including serving as the first choice for content, resources, services, and technology infrastructure.

    Taking A Stand: The Effects Of Standing Desks On Task Performance And Engagement

    Time spent sitting is associated with negative health outcomes, motivating some individuals to adopt standing desk workstations. This study represents the first investigation of the effects of standing desk use on reading comprehension and creativity. In a counterbalanced, within-subjects design, 96 participants completed reading comprehension and creativity tasks while both sitting and standing. Participants self-reported their mood during the tasks and also responded to measures of expended effort and task difficulty. In addition, participants indicated whether they expected that they would perform better on work-relevant tasks while sitting or standing. Despite participants’ beliefs that they would perform worse on most tasks while standing, body position did not affect reading comprehension or creativity performance, nor did it affect perceptions of effort or difficulty. Mood was also unaffected by position, with a few exceptions: Participants exhibited greater task engagement (i.e., interest, enthusiasm, and alertness) and less comfort while standing rather than sitting. In sum, performance and psychological experience as related to task completion were nearly entirely uninfluenced by acute (~30-min) standing desk use.

    Virtualizing Monitoring and Control Systems: First Operational Experience and Future Applications

    Virtualization is a technology that allows emulating a complete computer platform. Its potential uses range from consolidating hardware, to running several different operating systems in parallel on one computer, to preserving the operability of heritage software. GSOC has been investigating the possibilities of virtualization for some time. Aside from the usual approach of virtualizing the central servers for administrative and consolidation reasons, the possibilities and advantages of virtualizing control room clients were explored. While virtualization is moving mainstream in other industries, the space community is cautious about applying this technique to mission-critical monitoring and control systems. This paper illustrates three virtualization steps that are underway at GSOC and presents the experiences gained.