    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.

    Developing the human-computer interface for Space Station Freedom

    For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will interact with multi-monitor workstations where working with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. The experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.

    A novel isolator-based system promotes viability of human embryos during laboratory processing

    In vitro fertilisation (IVF) and related technologies are arguably the most challenging of all cell culture applications. The starting material is a single cell from which one aims to produce an embryo capable of establishing a pregnancy eventually leading to a live birth. Laboratory processing during IVF treatment requires open manipulations of gametes and embryos, which typically involves exposure to ambient conditions. To reduce the risk of cellular stress, we have developed a totally enclosed system of interlinked isolator-based workstations designed to maintain oocytes and embryos in a physiological environment throughout the IVF process. Comparison of clinical and laboratory data before and after the introduction of the new system revealed that significantly more embryos developed to the blastocyst stage in the enclosed isolator-based system than in conventional open-fronted laminar flow hoods. Moreover, blastocysts produced in the isolator-based system contained significantly more cells and their development was accelerated. Consistent with this, the introduction of the enclosed system was accompanied by a significant increase in the clinical pregnancy rate and in the proportion of embryos implanting following transfer to the uterus. The data indicate that protection from ambient conditions promotes improved development of human embryos. Importantly, we found that it was entirely feasible to conduct all IVF-related procedures in the isolator-based workstations.

    Tarmo: A Framework for Parallelized Bounded Model Checking

    This paper investigates approaches to parallelizing Bounded Model Checking (BMC) for shared-memory environments as well as for clusters of workstations. We present a generic framework for parallelized BMC named Tarmo. Our framework can be used with any incremental SAT encoding for BMC, but for the results in this paper we use only the current state-of-the-art encoding for full PLTL. Using this encoding allows us to check both safety and liveness properties, in contrast to earlier work on distributing BMC that is limited to safety properties only. Despite our focus on BMC after it has been translated to SAT, existing distributed SAT solvers are not well suited for our application: solving a BMC problem is not solving a set of independent SAT instances, but rather involves solving multiple related SAT instances, encoded incrementally, where the satisfiability of each instance corresponds to the existence of a counterexample of a specific length. Our framework includes a generic architecture for a shared clause database that allows easy clause sharing between SAT solver threads solving such instances. We present extensive experimental results obtained with multiple variants of our Tarmo implementation. Our shared-memory variants perform significantly better than conventional single-threaded approaches, a result many users can benefit from now that multi-core and multi-processor technology is widely available. Furthermore, we demonstrate that our framework can be deployed in a typical cluster of workstations, where several multi-core machines are connected by a network.
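
    The shared clause database at the heart of this architecture can be pictured as a thread-safe store that every solver thread publishes newly learned clauses to and periodically imports from. The Python sketch below illustrates that idea only; all names are illustrative assumptions, and the dummy solve step stands in for the incremental SAT call that Tarmo's actual implementation performs.

```python
# A minimal sketch (assumed names, not Tarmo's actual code) of the shared
# clause database idea: one solver thread per BMC instance, each instance
# asking for a counterexample of a different length k, with learned clauses
# exchanged through a thread-safe shared store.

import threading

class SharedClauseDatabase:
    """Thread-safe store through which solver threads exchange learned clauses."""

    def __init__(self):
        self._lock = threading.Lock()
        self._clauses = []

    def publish(self, clause):
        # Export a clause learned locally so other threads can import it.
        with self._lock:
            self._clauses.append(clause)

    def fetch_since(self, cursor):
        # Return clauses added since the caller last looked, plus a new cursor.
        with self._lock:
            return self._clauses[cursor:], len(self._clauses)

def solve_bound(k, db, results):
    """Toy stand-in for one incremental SAT call on the instance of bound k."""
    imported, _ = db.fetch_since(0)       # import peers' clauses (a real solver
                                          # must check they are sound for bound k)
    # ... run the incremental SAT solver on the formula unrolled to depth k ...
    db.publish(("clause-learned-at", k))  # share a (dummy) learned clause
    results[k] = "no counterexample"      # i.e. this instance was unsatisfiable

if __name__ == "__main__":
    db, results = SharedClauseDatabase(), {}
    threads = [threading.Thread(target=solve_bound, args=(k, db, results))
               for k in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)
```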

    The impact of active workstations on workplace productivity and performance: a systematic review

    Active workstations have been recommended for reducing sedentary behavior in the workplace. It is important to understand whether the use of these workstations has an impact on worker productivity. The aim of this systematic review was to examine the effect of active workstations on workplace productivity and performance. A total of 3303 articles were initially identified by a systematic search, and seven articles met the eligibility criteria for inclusion. A quality appraisal was conducted to assess risk of bias, confounding, internal and external validity, and reporting. Most of the studies reported cognitive performance rather than productivity. Five studies assessed cognitive performance during use of an active workstation, usually in a single session. Sit-stand desks had no detrimental effect on performance; however, some studies with treadmill and cycling workstations identified potential decreases in performance. Many of the studies lacked the statistical power required to detect significant effects. Three studies assessed workplace productivity after prolonged use of an active workstation, for between 12 and 52 weeks. These studies reported no significant effect on productivity. Active workstations do not appear to decrease workplace performance.

    Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)

    BACKGROUND: BLAST is one of the most common and useful tools for genetic research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy-to-use, fault-tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. RESULTS: W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LANs). W.ND-BLAST provides intuitive graphical user interfaces (GUIs) for BLAST database creation, BLAST execution, BLAST output evaluation, and BLAST result exportation. The software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job which took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower-performance-class machines. Finally, comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components provide exportation of BLAST hits to text files, annotated FASTA files, tables, or association files. CONCLUSION: W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses. The install package for W.ND-BLAST is freely downloadable from . With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum.
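
    The distribution model described is essentially a master that splits the query set into chunks and farms them out to worker nodes, reassigning chunks when a node fails. The Python sketch below only illustrates that master/worker pattern with fault recovery; the real toolkit is a Windows .NET application, and all names here are illustrative assumptions rather than its API.

```python
# Minimal sketch of the master/worker pattern behind a distributed BLAST run.
# The real W.ND-BLAST is a Windows .NET application; the names and structure
# here are illustrative assumptions, not its actual implementation.

import queue
import threading

def run_blast_chunk(chunk):
    # Stand-in for invoking BLAST on one slice of the query sequences.
    return [f"hits for {seq}" for seq in chunk]

def worker(tasks, results, failed):
    # Each (simulated) worker node pulls chunks until the task queue drains.
    while True:
        try:
            chunk_id, chunk = tasks.get_nowait()
        except queue.Empty:
            return
        try:
            results[chunk_id] = run_blast_chunk(chunk)
        except Exception:
            # Fault recovery: a chunk lost to a failed node can be reassigned.
            failed.put((chunk_id, chunk))
        finally:
            tasks.task_done()

if __name__ == "__main__":
    queries = [f"seq{i}" for i in range(100)]
    chunks = [queries[i:i + 10] for i in range(0, len(queries), 10)]

    tasks, failed, results = queue.Queue(), queue.Queue(), {}
    for chunk_id, chunk in enumerate(chunks):
        tasks.put((chunk_id, chunk))

    # One thread per (simulated) worker node on the LAN.
    nodes = [threading.Thread(target=worker, args=(tasks, results, failed))
             for _ in range(12)]
    for t in nodes:
        t.start()
    for t in nodes:
        t.join()
    print(f"{len(results)} of {len(chunks)} chunks completed")
```

    For scale, the reported figures (662.68 minutes on one machine versus 44.97 minutes on 17 mixed-class nodes) correspond to a speedup of roughly 14.7x.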

    Exploring heterogeneity of unreliable machines for p2p backup

    P2P architecture is a viable option for enterprise backup. In contrast to dedicated backup servers, nowadays the standard solution, making backups directly on an organization's workstations should be cheaper (as existing hardware is used), more efficient (as there is no single bottleneck server), and more reliable (as the machines are geographically dispersed). We present the architecture of a p2p backup system that uses pairwise replication contracts between a data owner and a replicator. In contrast to standard p2p storage systems that use a DHT directly, the contracts allow our system to optimize replica placement according to a specific optimization strategy, and thus to take advantage of the heterogeneity of the machines and the network. Such optimization is particularly appealing in the context of backup: replicas can be geographically dispersed, the load sent over the network can be minimized, or the optimization goal can be to minimize the backup/restore time. However, managing the contracts, keeping them consistent, and adjusting them in response to a dynamically changing environment is challenging. We built a scientific prototype and ran experiments on 150 workstations in the university's computer laboratories and, separately, on 50 PlanetLab nodes. We found that the main factor affecting the quality of the system is the availability of the machines. Yet our main conclusion is that it is possible to build an efficient and reliable backup system on highly unreliable machines (our computers had just 13% average availability).
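
    A pairwise contract can be modelled as a simple (owner, replicator) record, with placement chosen by whatever strategy the system is optimizing for. The Python sketch below shows one plausible strategy, favouring high availability and geographic dispersion; it is an assumed illustration, not the authors' prototype or its actual placement algorithm.

```python
# Illustrative sketch (assumed names and logic, not the authors' prototype)
# of pairwise replication contracts with availability- and location-aware
# replica placement, one possible optimization strategy among those mentioned.

from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    owner: str        # machine whose data is backed up
    replicator: str   # machine that agrees to store the replica

@dataclass
class Machine:
    name: str
    availability: float   # fraction of time the machine is online, e.g. 0.13
    site: str             # coarse location, used for geographic dispersion

def place_replicas(owner, peers, k):
    """Greedily pick k replicators, preferring high availability and new sites."""
    candidates = sorted((m for m in peers if m.name != owner.name),
                        key=lambda m: m.availability, reverse=True)
    chosen, used_sites = [], {owner.site}
    # First pass: at most one replica per new site, for geographic dispersion.
    for m in candidates:
        if len(chosen) < k and m.site not in used_sites:
            chosen.append(m)
            used_sites.add(m.site)
    # Second pass: fill remaining slots with the most available machines left.
    for m in candidates:
        if len(chosen) < k and m not in chosen:
            chosen.append(m)
    return [Contract(owner.name, m.name) for m in chosen]

if __name__ == "__main__":
    peers = [Machine("lab-a1", 0.13, "lab-a"), Machine("lab-a9", 0.80, "lab-a"),
             Machine("lab-b7", 0.40, "lab-b"), Machine("lab-c2", 0.25, "lab-c")]
    print(place_replicas(peers[0], peers, 2))
```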