
    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.
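    The "high statistics" requirement can be made quantitative with the standard Poisson counting argument (a textbook relation, not a result of this review): the relative statistical uncertainty on a sample of N recorded events is

    \[
    \frac{\sigma_N}{N} = \frac{1}{\sqrt{N}},
    \]

    so halving the statistical error requires quadrupling the number of collisions recorded and processed; a 1% measurement already needs on the order of 10^4 events surviving all selection cuts, which is what drives the data volumes described above.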

    Fault-tolerant control for Scalable Distributed Data Structures

    Scalable Distributed Data Structures (SDDS) can be applied to multicomputers. Multicomputers were developed in response to market demand for scalable and dependable yet inexpensive systems. An SDDS consists of two components dynamically spread across a multicomputer: the records belonging to a file and a mechanism controlling record placement in the file. Methods of making the records of a file more or less fault-tolerant have already been given. Methods of making the record-placement mechanism fault-tolerant have not yet been studied, although this seems more important for system dependability than record fault tolerance: faults in control may cause an application to crash, while faults in record data at most cause invalid computations. In this paper a fault-tolerant control for SDDS is presented, based on the Job Comparison Technique combined with TMR (Triple Modular Redundancy). The time overhead due to the introduced redundancy is also estimated.
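    For concreteness, the sketch below illustrates how TMR with majority voting masks a single faulty replica of a control computation. It is a minimal illustration of the general technique under assumed names: locate_bucket and its modular placement rule are hypothetical stand-ins, not the paper's actual SDDS addressing mechanism.

        from collections import Counter

        def locate_bucket(key: int, n_buckets: int) -> int:
            # Hypothetical record-placement rule mapping a key to a bucket.
            return key % n_buckets

        def tmr_locate(key: int, n_buckets: int) -> int:
            # Run three redundant copies of the control computation
            # (in a real system these would execute on independent nodes)
            # and majority-vote the results, masking one faulty replica.
            results = [locate_bucket(key, n_buckets) for _ in range(3)]
            winner, votes = Counter(results).most_common(1)[0]
            if votes < 2:
                raise RuntimeError("no majority: more than one replica faulty")
            return winner

        # Example: all replicas agree, so the vote returns bucket 42 % 8 == 2.
        print(tmr_locate(42, 8))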

    The Borexino detector at the Laboratori Nazionali del Gran Sasso

    Borexino, a large-volume detector for low-energy neutrino spectroscopy, is currently running underground at the Laboratori Nazionali del Gran Sasso, Italy. The main goal of the experiment is the real-time measurement of sub-MeV solar neutrinos, and particularly of the monoenergetic (862 keV) Be7 electron-capture neutrinos, via neutrino-electron scattering in an ultra-pure liquid scintillator. This paper is mostly devoted to the description of the detector structure, the photomultipliers, the electronics, and the trigger and calibration systems. The real performance of the detector, which always meets, and sometimes exceeds, design expectations, is also shown. Some important aspects of the Borexino project, i.e. the fluid-handling plants, the purification techniques, and the filling procedures, are not covered in this paper and are, or will be, published elsewhere (see Introduction and Bibliography).
    Comment: 37 pages, 43 figures, to be submitted to NI
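    For orientation, the energy scale of the Be7 signal follows from standard two-body kinematics of neutrino-electron elastic scattering (a textbook relation, not specific to this paper): the maximum kinetic energy transferred to the electron is

    \[
    T_{\max} = \frac{2E_\nu^2}{m_e c^2 + 2E_\nu}
             = \frac{2\,(862\ \mathrm{keV})^2}{511\ \mathrm{keV} + 2\,(862\ \mathrm{keV})}
             \approx 665\ \mathrm{keV},
    \]

    so the monoenergetic 862 keV neutrinos appear in the scintillator as a recoil-electron spectrum with a Compton-like edge near 665 keV.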

    Full Issue Vol. 33 No. 3


    Human-Centeredness: A Paradigm Shift Invoked by the Emerging Cyberspaces

    Against the background of growing cyberspaces, I am exploring here some consequences for understanding digital technologies along three paths. The first path begins by noting how the increasing efficiency of computation has given birth to a new kind of artifact, the interface. Its characteristics require us to abandon naturalistic conceptions of technology and call, instead, for an effort to understand their users' diverse understandings, to a second-order understanding of technology. My second path starts from the curious fact that 17th Century Enlightenment ideas still permeate our celebrations of what the new technologies do while blinding us to the coordination of human activities they cause on an unprecedented scale. This leads us to a new image of human beings as dialogical constituents of networks. My last path begins with interfaces, with what is left in cyberspaces after data, algorithms, and networks have taken up their places, and leads us to languaging as a window to a second-order understanding of others, as a community's way of co-ordinating co-ordination (of technology), and as our opportunity to redirect our creative attention towards keeping technology human-centered.

    Networking high-end CAD systems based on PC/MS-DOS platforms

    The concept of "today's technology" has been dropped: everything is now either obsolete or experimental. Yesterday's technology is appealing only because it is tried-and-true and prices are reduced for clearance. Tomorrow's technology is exciting, somewhat expensive, and not well tested. In the field of architecture, where most firms are medium or small with limited resources, the high initial cost of a CAD installation was generally impossible to meet not many years ago. From spreadsheets and CAD graphics to network file systems and distributed database management, the basic systems and application tools have matured to the point that the possibilities are now limited mainly by how creatively architects can apply them. CAD systems on the market today are not so different from the systems of the mid-70s, except that they have gone from hardware costing a hundred thousand dollars to PC-based systems costing under ten thousand dollars. Choices of hardware and software for CAD systems undergo continual changes in power and efficiency. There will come a point where upgrading creates a deficiency rather than an augmentation of capability, efficiency, and overall function. This becomes a major problem for the prospective buyer.