18,192 research outputs found

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available, and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications, or at least specific types of applications, in mind. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design

    Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.

    A National Collaboratory to Advance the Science of High Temperature Plasma Physics for Magnetic Fusion


    Using visual analytics to develop situation awareness in astrophysics

    We present a novel collaborative visual analytics application for cognitively overloaded users in the astrophysics domain. The system was developed for scientists who need to analyze heterogeneous, complex data under time pressure, and make predictions and time-critical decisions rapidly and correctly under a constant influx of changing data. The Sunfall Data Taking system utilizes several novel visualization and analysis techniques to enable a team of geographically distributed domain specialists to effectively and remotely maneuver a custom-built instrument under challenging operational conditions. Sunfall Data Taking has been in production use for two years by a major international astrophysics collaboration (the largest data volume supernova search currently in operation), and has substantially improved the operational efficiency of its users. We describe the system design process by an interdisciplinary team, the system architecture, and the results of an informal usability evaluation of the production system by domain experts in the context of Endsley's three levels of situation awareness.

    Collaborative Human-Computer Interaction with Big Wall Displays - BigWallHCI 2013 3rd JRC ECML Crisis Management Technology Workshop

    The 3rd JRC ECML Crisis Management Technology Workshop on Human-Computer Interaction with Big Wall Displays in Situation Rooms and Monitoring Centres was co-organised by the European Commission Joint Research Centre and the University of Applied Sciences St. Pölten, Austria. It took place in the European Crisis Management Laboratory (ECML) of the JRC in Ispra, Italy, from 18 to 19 April 2013. 40 participants from stakeholders in the EC, civil protection bodies, academia, and industry attended the workshop. The hardware of large display areas has, on the one hand, been mature for many years and is, on the other hand, changing rapidly and improving constantly. These fast-paced developments promise impressive new setups with respect to, e.g., pixel density or touch interaction. On the software side there are two components with room for improvement: (1) the software provided by the display manufacturers to operate their video walls (source selection, windowing system, layout control), and (2) dedicated ICT systems developed for the specific needs of crisis management practitioners and monitoring centre operators. While industry is already starting to focus more on the collaborative aspects of its operating software, the customised and tailored ICT applications needed are still missing, unsatisfactory, or very expensive, since they have to be developed from scratch many times over. The main challenges identified for enhancing big wall display systems in crisis management and situation monitoring contexts include: (1) interaction: overcoming static layouts and/or passive information consumption; (2) participatory design and development: software needs to meet users' needs; (3) development and/or application of information visualisation and visual analytics principles to support the transition from data to information to knowledge; and (4) information overload: proper methods for attention management, automatic interpretation, incident detection, and alarm triggering are needed to deal with the ever-growing amount of data to be analysed. JRC.G.2 - Global security and crisis management

    ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical (columnar) data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze these data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. Central to these analysis tools are the histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF, or in bitmap formats like JPG or GIF. The result can also be stored into ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which will take care of optimally distributing the work over the available resources in a transparent way.
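    The "vertical data storage" mentioned above means each TTree branch (variable) is laid out contiguously, so an analysis that reads one variable need not touch the others. A minimal sketch of that columnar idea in plain C++ (deliberately not using the ROOT library itself, so it stands alone; `EventColumns` and its members are hypothetical names for illustration):

    ```cpp
    #include <cstdio>
    #include <vector>

    // Simplified illustration of columnar ("vertical") storage as used by
    // ROOT's TTree: each branch (column) is stored contiguously, so a scan
    // over one variable reads only that column's data.
    struct EventColumns {
        std::vector<double> energy;   // one branch
        std::vector<int>    nTracks;  // another branch

        void fill(double e, int n) {  // append one event (row)
            energy.push_back(e);
            nTracks.push_back(n);
        }
    };

    int main() {
        EventColumns tree;
        tree.fill(12.5, 3);
        tree.fill(47.1, 8);
        tree.fill(5.9, 1);

        // Column-wise scan: compute the mean energy without touching nTracks.
        double sum = 0.0;
        for (double e : tree.energy) sum += e;
        std::printf("mean energy = %.2f\n", sum / tree.energy.size());
        return 0;
    }
    ```

    In actual ROOT code the same pattern would use `TTree::Branch` to register the columns and `TTree::Fill` per event; the sketch only shows why a column-per-variable layout makes single-variable statistics cheap on very large data sets.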